The risks of assessing only what students know and can do (essay)

A central tenet of the student learning outcomes "movement" is that higher education institutions must articulate a specific set of skills, traits and/or dispositions that all of their students will learn before graduation. Then, through legitimate means of measurement, institutions must assess and publicize the degree to which their students make gains on each of these outcomes.

Although many institutions have yet to implement this concept fully (especially regarding the thorough assessment of institutional outcomes), this idea is more than just a suggestion. Each of the regional accrediting bodies now requires institutions to identify specific learning outcomes and demonstrate evidence of outcomes assessment as a standard of practice.

This approach to educational design seems at the very least reasonable. All students, regardless of major, need a certain set of skills and aptitudes (things like critical thinking, collaborative leadership, intercultural competence) to succeed in life as they take on additional professional responsibilities, embark (by choice or by circumstance) on a new career, or address a daunting civic or personal challenge. In light of the educational mission our institutions espouse, committing ourselves to a set of learning outcomes for all students seems like what we should have been doing all along.

Yet too often the outcomes that institutions select to represent the full scope of their educational mission, and the way that those institutions choose to assess gains on those outcomes, unwittingly limit their ability to fulfill the mission they espouse. For when institutions narrow their educational vision to a discrete set of skills and dispositions that can be presented, performed or produced at the end of an undergraduate assembly line, they often do so at the expense of their own broader vision that would cultivate in students a self-sustaining approach to learning. What we measure dictates the focus of our efforts to improve.

As such, it’s easy to imagine a scenario in which the educational structure that currently produces majors and minors in content areas is simply replaced by one that produces majors and minors in some newly chosen learning outcomes. Instead of redesigning the college learning experience to alter the lifetime trajectory of an individual, we allow the whole to be nothing more than the sum of the parts -- because all we have done is swap one collection of parts for another. Although there may be value in establishing and implementing a threshold of competence for a bachelor’s degree (for which a major serves a legitimate purpose), limiting ourselves to this framework fails to account for the deeply held belief that a college experience should approach learning as a process -- one that is cumulative, iterative, multidimensional and, most importantly, self-sustaining long beyond graduation.

The disconnect between our conception of a college education as a process and our tendency to track learning as a finite set of productions (outcomes) is particularly apparent in the way that we assess our students' development as lifelong learners. Typically, we measure this construct with a pre-test and a post-test that track learning gains between the ages of 18 and 22 -- hardly a lifetime (the fact that a few institutions gather data from alumni five and 10 years after graduation doesn't invalidate the larger point).

Under these conditions, trying to claim empirically that (1) an individual has developed and maintained a perpetual interest in learning throughout life, and that (2) this lifelong approach is directly attributable to one's undergraduate education probably borders on the delusional. The complexity of life even under the most mundane of circumstances makes such a hypothesis deeply suspect. Yet we all know of students who experienced college as a process through which they found a direction that excited them and a momentum that carried them down a purposeful path extending far beyond commencement.

I am by no means suggesting that institutions should abandon assessing learning gains on a given set of outcomes. On the contrary, we should expect no less of ourselves than substantial growth in all of our students as a result of our efforts. Designed appropriately, a well-organized sequence of outcomes assessment snapshots can provide information vital to tracking student learning over time and potentially increasing institutional effectiveness. However, because the very act of learning occurs (as the seminal developmental psychologist Lev Vygotsky would describe it) in a state of perpetual social interaction, taking stock of the degree to which we foster a robust learning process is at least as important as taking snapshots of learning outcomes if we hope to gather information that helps us improve.

If you think that assessing learning outcomes effectively is difficult, then assessing the quality of the learning process ought to send chills down even the most skilled assessment coordinator's spine. Defining and measuring the nature of process requires a very different conception of assessment -- and, for that matter, a substantially more complex understanding of learning outcomes.

Instead of merely measuring what is already in the rearview mirror (i.e., whatever has already been acquired), assessing the college experience as a process requires a look at the road ahead, emphasizing the connection between what has already occurred and what is yet to come. In other words, assessment of the learning that results from a given experience would include the degree to which a student is prepared or "primed" to make the most of a future learning experience (either one that is intentionally designed to follow immediately, or one that is likely to occur somewhere down the road). Ultimately, this approach would substantially improve our ability to determine the degree to which we are preparing students to approach life in a way that is thoughtful, proactively adaptable, and even nimble in the face of both unforeseen opportunity and sudden disappointment.

Of course, this idea runs counter to the way that we typically organize our students' postsecondary educational experience. For if we are going to track the degree to which a given experience "primes" students for subsequent experiences -- especially subsequent experiences that occur during college -- then the educational experience can't be so loosely constructed that the number of potential variations in the order of a student's experiences virtually equals the number of students enrolled at our institution.

This doesn't mean that we return to the days in which every student took the same courses at the same time in the same order. But it does require an increased level of collective commitment to the intentional design of the student experience -- a commitment to student-centered learning that will likely come at the expense of individual instructors' and administrators' preferences for which courses they teach or programs they lead, and when those might be offered.

The other serious challenge is operationalizing a concept of assessment that attempts to directly measure an individual's preparation to make the most of a subsequent educational experience. But if we want to demonstrate the degree to which a college experience is more than just a collection of gains on disparate outcomes -- whether those outcomes are somehow connected or entirely independent of each other -- then we have to expand our approach to include process as well as product.

Only then can we actually demonstrate that the whole is greater than the sum of the parts, that in fact the educational process is the glue that fuses those disparate parts into a greater -- and qualitatively distinct -- whole.

Mark Salisbury is director of institutional research and assessment at Augustana College, in Illinois. He blogs at Delicious Ambiguity.


Students, faculty sign pledge for college completion

Students are asking faculty members to pledge to create a culture of completion.

Seven state coalition pushes for more information about military credit recommendations

Seven states partner up to ensure that student veterans earn college credit for service, while also calling for help from ACE and the Pentagon.

Colleges should end outdated policies that don't put students first (essay)

When institutions and organizations begin to identify with processes instead of intended outcomes, they become vulnerable. They lose sight of their real missions and, when faced with challenges or disruptive innovation, often struggle to survive. 

Eastman Kodak, once the dominant brand in photography, identified too closely with the chemical processes it used and failed to recognize that its overarching mission was photography rather than film and film processing. Swiss watch manufacturers likewise identified too closely with the mechanical workings of their watches and lost market share to companies that understood that the real mission was the production of reliable and wearable instruments to tell time. If railroads had viewed their mission as transportation of people and goods rather than moving trains on tracks, we might have some different brand names on airplanes and vehicles today.  

In retrospect, it seems that the decisions made by these industries defied common sense. Although the leaders were experienced and capable, they were blinded by tradition, and they confused established processes with the real mission of their enterprises.

Higher education today identifies closely with its processes. In open-access public institutions, we recruit, admit and enroll students; assess them for college readiness; place or advise those who are not adequately prepared into remedial classes; give others access to a bewildering variety of course options, often without adequate orientation and advising; provide instruction, often in a passive lecture format; offer services to those who seek and find their way to them; grade students on how well they can navigate our systems and how well they perform on assignments and tests; and issue degrees and certificates based upon the number of credits the students accumulate in required and elective courses. 

We need to fund our institutions, so we concentrate on enrollment targets and make sure classroom seats are filled in accordance with regulations that specify when we count our students for revenue purposes.

At the same time that American higher education is so focused on and protective of its processes, it is also facing both significant challenges and potentially disruptive innovation. Challenges include responding to calls from federal and state policy makers for higher education to increase completion rates and to keep costs down; finding more effective ways to help students who are unprepared for college to become successful students; making college information more accessible and processes more transparent for prospective students and their parents; explaining new college rating systems and public score cards; coordinating across institutional boundaries to help an increasingly mobile student population transfer more seamlessly and successfully from one institution to another and graduate; dealing with the threat to shift from peer-based institutional accreditation to a federal system of quality assurance; and responding to new funding systems that are based upon institutional performance.

Potentially disruptive innovations include the increasing use of social media such as YouTube and other open education resources (OER) for learning, the advent of massive open online courses (MOOCs), the quick access to information made possible by advances in technology, and the potential for a shift from the Carnegie unit to documented competencies as the primary way to measure student progression.

One of today’s most significant challenges to higher education is the increased focus on student success. In response to calls and sometimes financial incentives from policy makers -- and with the assistance provided by major foundations -- colleges and universities are shifting their focus from student access and opportunity to student access and success. Higher education associations have committed themselves to helping institutions improve college completion rates. The terminology used is that we are shifting from an “access agenda” to a “success agenda” or a “completion agenda.” 

This identification with outcomes is positive, but it raises concerns about both loss of access to higher education for those students who are less likely to succeed, and the potential for decreased academic rigor. The real mission of higher education is student learning; degrees and certificates must be the institution’s certification of identified student learning outcomes rather than just accumulated credits.

Faculty and academic administrators, perhaps working with appropriate representatives from business and industry, need to identify the learner competencies that should be developed by the curriculum. The curriculum should be designed or modified to ensure that those competencies are appropriately addressed. Students should be challenged to rise to the high expectations required to master the identified competencies and should be provided the support they need to become successful. Finally, learners should be assessed in order to ensure that a degree or certificate is a certification of acquired competencies. 

What would we do differently if, rather than identifying with our processes, we identified with our overarching mission -- student learning? When viewed through the lens of student learning, many of the processes that we currently rely upon and the decisions we make (or fail to make) seem to defy common sense. The institution itself controls some of these policies and practices; others are policies (or the lack of policies) between and among educational institutions; and some are the result of state or federal legislation.

A prime example of a detrimental institutional process is late registration, the practice of allowing students to register after orientation activities -- and often after classes have begun. Can we really expect students to be successful if they enter a class after it is under way? Research consistently shows that students who register late are at a significant disadvantage and, most often, fail or drop out.

Yet many institutions continue this practice, perhaps in the belief that they are providing opportunity -- but it is opportunity that most often leads to discouragement and failure. Some institutional leaders may worry about the potential negative impact on budgets of not having seats filled. However, the enrollment consequences of eliminating late registration have almost always been temporary or negligible.

Sometimes institutional policies are developed in isolation and create unintended roadblocks for students. When I assumed the presidency of Palomar College, the college had a policy that students could not repeat a course in which they received a passing grade (C or above). But another policy prohibited students who had not received a grade of B or higher in the highest-level developmental writing class from progressing to freshman composition. Students who passed the developmental class with a grade of C were out of luck and had to transfer to another institution if they were to proceed with their education. The English faculty likely wanted only the best-performing students from developmental writing in their freshman composition classes, but this same objective could be accomplished by raising the standards for a C grade in the developmental writing class.

Higher education institutions rely on their faculty and staff to accomplish their missions, so it is important for everyone to understand the mission in the same way. A faculty member I once met told me that he was proud of the high rate of failure in his classes. He believed that it demonstrated both the rigor of his classes and his excellence as a teacher. That makes as much sense as measuring the excellence of medical doctors by the percentage of their patients who die. Everyone at the institution has a role in promoting student learning, and everyone needs to understand that the job is to inspire students and help them to be successful rather than to sort out those who have challenges.

"The mission of higher education is student learning, and all of or policies, procedures, and practices must be aligned with that mission if our institutions are to remain relevant."

It is important for faculty and staff to enjoy their work, to feel valued by trustees, administrators, peers, and students -- and for them to feel free to innovate and secure in their employment. As important as our people are to accomplishing our mission, their special interests are not the mission. Periodic discussions about revising general education requirements are often influenced by faculty biases about the importance of their disciplines or even by concerns about job security rather than what students need to learn as part of a degree or certificate program. Before these discussions begin, ground rules should be established so that the determinations are based upon desired skills and knowledge of graduates.

Too often, students leave high school unprepared for college, and they almost always face barriers when transferring from one higher education institution to another. The only solution to these problems is for educators to agree on expectations and learning outcome standards. However, institutional autonomy and sometimes prejudice act as barriers to faculty dialogue across institutional boundaries. It is rare for community college faculty and administrators to interact with their colleagues in high schools -- and interaction between university and community college faculty is just as rare. 

Why should we be surprised when students leaving high school are often not ready to succeed in college or when the transition between community college and university is not as seamless as it should be for students? If we are serious about increasing the rates of success for students, educators will need to come together to begin important discussions about standards for curriculums and expectations for students.

Despite the best intentions of legislators, government policies often force the focus of institutions away from the mission of student learning. In California, legislation requires community colleges to spend at least 50 percent of their revenue on classroom faculty. Librarians, counselors, student advisers, and financial aid officers are "on the other side of the Fifty Percent Law." The ratio of students to advisers or counselors is most often greater than a thousand to one. Research clearly demonstrates that investments in student guidance pay off in increased student learning and success. And despite the fact that community college students are the most financially disadvantaged students in higher education, they are less likely to receive the financial aid they deserve. Yet the Fifty Percent Law severely limits what local college faculty and academic administrators can do on their campuses to meet the needs of students in these areas. Clearly, this law is a barrier to increasing student learning and success. Perhaps state legislators and the faculty unions that lobby them do not trust local trustees and administrators to spend resources appropriately, but this law, in its current form, defies common sense if our mission is student learning.

At the federal level, systems of accountability that track only students who are first-time, full-time freshmen to an institution do not make sense in an era when college students are more mobile than ever and in an environment in which most community college students attend part-time.  A few years ago, I met with a group of presidents of historically black universities and encouraged them to work with community colleges to increase the number of students who transfer to their institutions.  The presidents told me that doing so could lower their measured student success rates because transfers are not first-time freshmen, and the presidents were not willing to take that risk. Fortunately, officials in the U.S. Department of Education are aware of this issue and are working to correct data systems. 

There are many other examples of policies and procedures that seem senseless when viewed through the lens of student learning rather than cherished processes and tradition, just as it seems silly that Eastman Kodak did not recognize that its business was photography or that the Swiss watch manufacturers did not understand that their business was to manufacture accurate and affordable wristwatches. 

American higher education today is increasingly criticized for rising costs and low completion rates. Higher education costs have risen at an even faster rate than those of health care; student indebtedness has skyrocketed to nearly $1 trillion; and the United States has fallen to 16th in the world in college completion. In addition, new technologies and innovations may soon threaten established practices.

Challenging the status quo and confronting those with special interests that are not aligned with the mission of higher education can be risky for both elected officials and educational leaders. But given the challenges that we face today, “muddling through” brings even greater risks. Every decision that is made and every policy that is proposed must be data-informed, and policy makers and leaders need the courage to ask how the changes will affect student learning, student success, and college costs. Existing policies and practices should be examined with the same questions in mind. Faculty and staff need to be free of restraining practices so they can experiment with strategies to engage students and to help them to learn.

Colleges and universities are too important for educators to deny the challenges and demands of today and too important for policy makers to pass laws because of pressure from special interests or based on their recollection of what college used to be. Decisions cannot be based on past practices when the world is changing so rapidly. The mission of higher education is student learning, and all of our policies, procedures and practices must be aligned with that mission if our institutions are to remain relevant.  

George R. Boggs is the president and CEO emeritus of the American Association of Community Colleges. He is a clinical professor for the Roueche Graduate Center at National American University.

Essay on how President Obama's rating system should work

After a month of speculation, President Obama unveiled his plan to "shake up" higher education last week. As promised, the proposal contained some highly controversial elements, none greater than an announcement that the U.S. Department of Education will begin to rate colleges and universities in 2015 and tie financial aid to those results three years later. The announcement prompted typical clichéd Beltway commentary from the higher education industry about how "the devil is in the details" and the need to avoid "unintended consequences," which should rightly be translated as: "We are not going to object outright now, while everyone's watching, but instead will nitpick it to death later."

But the ratings threat is more substantive than past announcements to put colleges “on notice,” if for no other reason than it is something the department can do without Congressional approval. Though it cannot actually tie aid received directly to these ratings without lawmakers (and the threat to do so would occur after Obama leaves office), the department can send a powerful message both to the higher education community and consumers nationwide by publishing these ratings.

Ratings systems, however, are no easy matter and require lots of choices in their methodologies. With that in mind, here are a few recommendations for how the ratings should work. 

Ratings aren’t rankings.

Colleges have actually rated themselves in various forms for more than a century. The Association of American Universities is an exclusive club of top research universities that formed in 1900. The more in-depth Carnegie classifications, which group institutions based upon their focus and the types of credentials they award, have been around since the early 1970s. Though they may not be identified as such by most people, these are forms of ratings -- recognitions of the distinctions between universities by mission and other factors.

A federal rating system should be constructed similarly. There's no reason to bother with ordinal rankings like those of U.S. News & World Report, because distinguishing among a few top colleges is less important than sorting out those that really are worse than others. Groupings that are narrow enough to recognize differences but sufficiently broad to represent a meaningful sample are the way to go. The department could even consider letting colleges choose their initial groupings, as some already do for the data feedback reports the department produces through the Integrated Postsecondary Education Data System (IPEDS).

It’s easier to find the bottom tail of the distribution than the middle or top.

There are around 7,000 colleges in this country. Some are fantastic world leaders. Others are unmitigated disasters that should probably be shut down. But the vast majority fall somewhere in between. Sorting out the middle part is probably the hardest element of a ratings system — how do you discern within averageness?

We probably shouldn't. A ratings system should sort out the worst of the worst by setting minimum performance standards on a few clear measures. It would clearly demonstrate that there is some degree of results so bad that it merits being rated poorly. This standard could be excessively, laughably low, like a 10 percent graduation rate. Identifying the worst of the worst would be a huge step forward from what we do now. An ambitious ratings system could do the same thing on the top end using different indicators, setting very high bars that only a tiny handful of colleges would reach, but that's much harder to get right.
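
To make this concrete, here is a minimal sketch, in Python, of what such a floor-based screen could look like. It is a hypothetical illustration, not a proposed federal methodology: the metric names, cutoffs and colleges are all invented, apart from the 10 percent graduation-rate example above.

    # A floor-based screen: a college lands in the lowest rating if it fails
    # ANY minimum standard. All names and numbers are hypothetical.
    FLOORS = {
        "graduation_rate": 0.10,      # the deliberately low example bar above
        "loan_repayment_rate": 0.25,  # assumed second measure, for illustration
    }

    def in_lowest_tier(college: dict) -> bool:
        """Return True if the college fails any minimum performance standard."""
        return any(college[metric] < floor for metric, floor in FLOORS.items())

    colleges = [
        {"name": "College A", "graduation_rate": 0.55, "loan_repayment_rate": 0.80},
        {"name": "College B", "graduation_rate": 0.08, "loan_repayment_rate": 0.40},
    ]

    for c in colleges:
        print(c["name"], "->", "lowest tier" if in_lowest_tier(c) else "passes the floors")

Note that nothing here orders the colleges that pass: the screen identifies the bottom tail and leaves the middle unsorted, which is the point.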

Don’t let calls for the “right” data be an obstructionist tactic.

Hours after the President's speech, representatives of the higher education lobby stated the administration's ratings "have an obligation to perfect data." It's a reasonable requirement that a rating system not be based only on flawed measures, like holding colleges accountable just for the completion of first-time, full-time students. But the call for perfect data is a smokescreen for intransigence by setting a nearly unattainable bar. Even worse, the very people calling for this standard are the same ones representing the institutions that will be the biggest roadblock to obtaining information fulfilling this requirement. Having data demands come from those holding the data hostage creates a perfect opportunity for future vetoes in the name of making the perfect the enemy of the good. It's also a tried and true tactic from One Dupont Circle. Look at graduation rates, where the higher education lobby is happy to put out reports critiquing their accuracy after getting Congress to enact provisions that banned the creation of better numbers during the last Higher Education Act reauthorization.

To be sure, the Obama administration has an obligation to engage in an open dialogue with willing partners to make a good faith effort at getting the best data possible for its ratings. Some of this will happen anyway thanks to improvements to the department’s IPEDS database. But if colleges are not serious about being partners in the ratings and refuse to contribute the data needed, they should not then turn around and complain about the results.

Stick with real numbers that reflect policy goals.

Input-adjusted metrics are a wonk’s dream. Controlling for factors and running regressions get us all excited. But they’re also useless from a policy implementation standpoint. Complex figures that account for every last difference in institutions will contextualize away all meaningful information until all that remains is a homogenous jumble where everyone looks the same. Controlling for socioeconomic conditions also runs the risk of just inculcating low expectations for students based upon their existing results. Not to mention any modeling choices in an input-adjusted system will add another dimension of criticism to the firestorm that will already surround the measures chosen.

That does not mean context should be ignored. There are just better ways to handle it. First and foremost is basing ratings on performance relative to peers. Well-crafted peer comparisons can accomplish largely the same thing as input adjustment, since institutions would be facing similar circumstances, but still rely on straightforward figures. Second, unintended consequences should be addressed by measuring them with additional metrics and clear goals. For example, afraid that focusing on a college's completion rate will discourage enrolling low-income students or unfairly penalize institutions that serve large numbers of such students? The ratings should give institutions credit for the socioeconomic diversity of their student body, require a minimum percentage of Pell students, and break out the completion rate by family income. Doing so not only provides a backstop against gaming, it also lays out clearer expectations to guide colleges' behavior -- something the U.S. News rankings experience has shown that colleges clearly know how to do with less useful measures like alumni giving (sorry, Brown, for holding you back on that one).
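
As a sketch of the peer-comparison idea, the fragment below expresses a college's raw completion rate as its standing within a peer group rather than regression-adjusting it away. The peer group and rates are invented for illustration; a real system might draw peers from the Carnegie classifications mentioned earlier.

    from statistics import mean, stdev

    # Peer-relative comparison: straightforward figures, judged against
    # institutions facing similar circumstances. All data are hypothetical.
    peer_completion = {
        "College A": 0.42,
        "College B": 0.47,
        "College C": 0.55,
        "College D": 0.38,
        "College E": 0.51,
    }

    mu = mean(peer_completion.values())
    sigma = stdev(peer_completion.values())

    for name, rate in peer_completion.items():
        z = (rate - mu) / sigma  # standing vs. peers, in standard deviations
        print(f"{name}: completion {rate:.0%}, {z:+.2f} SD relative to peers")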

Mix factors a college can directly control with ones it cannot.

Institutions have an incentive to improve on measures included in a rating system. But some subset of colleges will also try to evade or “game” the measure. This is particularly true if it’s something under their control — look at the use of forbearances or deferments to avoid sanctions under the cohort default rate. No system will ever be able to fully root out gaming and loopholes, but one way to adjust for them is by complementing measures under a college’s control with ones that are not. For example, concerns about sacrificing academic quality to increase graduation rates could be partially offset by adding a focus on graduates’ earnings or some other post-completion behavior that is not under the college’s control. Institutions will certainly object to being held accountable for things they cannot directly influence. But basing the uncontrollable elements on relative instead of absolute performance should further ameliorate this concern.

Focus on outputs but don’t forget inputs.

Results matter. An institution that cannot graduate its students or avoid saddling them with large loan debts they cannot repay upon completion is not succeeding. But a sole focus on outputs could encourage an institution to avoid serving the neediest students as a way of improving its metrics and undermine the access goals that are an important part of federal education policy.

To account for this, a ratings system should include a few targeted input metrics that reflect larger policy goals like socioeconomic diversity or first-generation college students. Giving colleges “credit” in the ratings for serving the students we care most about will provide at least some check against potential gaming. Even better, some metrics should have a threshold a school has to reach to avoid automatic classification into the lowest rating.

Put it together.

A good ratings system is both consistent and iterative. It keeps the core pieces the same year to year but isn't too arrogant to include new items and tweak ones that aren't working. These recommendations offer a place to start. Group the schools sensibly -- maybe even rely on existing classifications like those done by Carnegie. The ratings should establish minimum performance thresholds on the metrics we think are most indicative of an unsuccessful institution -- things like completion rates, success with student loans, time to degree, etc. They should consist of outcomes metrics that reflect institutions' missions -- such as transfer success for two-year schools, licensure and placement for vocational offerings, and earnings, completion and employment for four-year colleges and universities. But they should also have separate metrics to acknowledge policy challenges we care about -- success in serving Pell students, the ability to get remedial students college-ready, socioeconomic diversity, etc. -- to discourage creaming. The result should be something that reflects values and policy challenges, acknowledges attempts to find workarounds, and refrains from dissolving into wonkiness and theoretical considerations that are divorced from reality.


Ben Miller is a senior policy analyst in the New America Foundation's education policy program, where he provides research and analysis on policies related to postsecondary education. Previously, Miller was a senior policy advisor in the Office of Planning, Evaluation, and Policy Development in the U.S. Department of Education.

For-profit Kaplan branches out with learning science projects

Kaplan, which includes Pearson-like ed-tech offerings as well as for-profit degree programs, won't miss a beat as The Washington Post moves on.

Competency-based education puts efficiency before learning (essay)

When concerns about the quality of education swept the nation in the 1990s, test results were said to promise a reliable measure of instructional effectiveness. They offered a way to make comparisons across teachers, schools and students, all while assuring good value for Americans’ tax or tuition dollars. Faith in data, long built into U.S. educational practices, now came to support the ideal of schooling as a fair, honest, and well-managed service. The costs to Americans of public or private education would now need to be justified by those doing the educating.

Unfortunately, that justification, like any economic calculation, started from presumptions about what is worth paying for, and increased public spending on poorer communities was not on the table. The weaker performances of under-resourced urban or rural schools called forth not more public funding but less under No Child Left Behind. However precise its format and consistent its application, measurement in this instance served entirely subjective ideas about public good, and old race, class and geographic differentials were reproduced.

That standards-based heart of No Child Left Behind beats on in current advocacy for outcomes as the main drivers of educational design and evaluation. New metrics such as President Obama’s “College Scorecard” have helped make the idea of a measurable educational “return on investment” meaningful to schools and to students and their families. And this strong emphasis on the free market as a means of quality assurance in teaching and learning continues to spread.

For example, in “competency-based learning,” the organization of higher education shifts from the familiar credit hour system to one based on assessments of student mastery of skills and content. This means that familiar units such as courses, or classroom and contact hours, may disappear altogether in some programs. It also means that students pay for credentials not on the basis of certain numbers or types of instructional activities undertaken in a degree program, but on the basis of their own educational achievements.

A kind of industrial model of efficiency and market competition emerges in competency-based education. Advocates for this shift point to lowered tuition costs as classroom time, faculty wages and other institutional expenditures are reduced (the same savings often used to justify the use of MOOCs). And Lumina Foundation’s Jamie Merisotis predicts gains in quality control because colleges and students will undertake measurement of “what is learned” rather than “what is taught.” Federal officials also firmly endorsed competency-based college programs earlier this year by declaring them eligible for Title IV financial aid.

But learning is poorly served by such supposed efficiencies. There is a fundamental inequity in the character of competency-based education as a kind of scrimping: the "saving" of money, supposedly in the interest of affordability and inclusion, in actuality achieves only social demarcation. Those students with the least money to spend on college will not be walking away with the same product as their more affluent fellow enrollees, uplifting rhetoric notwithstanding. Budget versions of education, like budget versions of surgery or car repairs, are no bargain. In such outcomes-focused college curriculums, stripped of "unnecessary" instruction, open-ended liberal learning is easily deemed wasteful. And so much for the profoundly energizing (and developmentally crucial) experience of encountering messy, uncertain arguments -- of experiencing cognition without identifiable outcomes. The distance will grow between the student who can afford traditional university instruction and the one who needs to save money.

We should be careful not to presume that those who teach in competency-based programs are necessarily weaker or less committed instructors. Yet, if a pre-set body of skills, identifiable upon graduation, is what demarcates one program from another in this kind of higher education, bringing revenue and market share to a school, in whose interest is an inventive classroom experience, or one that leads to diverse intellectual experiences for different students? What faculty member will take pedagogical risks or welcome the challenging student?

There's an important echo here, I think, with recently renewed interest in K-12 classroom tracking. Proponents of that practice interviewed by The New York Times point out how such tracking matches the level, speed and style of teaching more closely to divergent student needs than can any single, unified classroom. It sounds like an inclusive reform. But both trends threaten a kind of separate but equal educational system, reasserting group identities even as they claim to customize education. They do so through projections of how best to distribute resources in our society, and also through more subtle projections of student abilities and the assertion that such abilities may be predicted.

Both propose tiered education on the presumption that underachievement and differentials in life opportunities are not something we can try to prevent. Tracking and competency-based education both assert that solutions to missing or poorly executed education involve reshaping student experiences, not expanding resources. That’s a very different ideology than the one that fueled compensatory programs of the 1970s. Those initiatives managed to accommodate diverse learning styles and paces while also bolstering educational provisions for disadvantaged communities.

Competency-based education, for its part, engages in some extraordinarily selective definitions of efficiency and inclusion. The results-based model of higher education supposedly weds quality control to flexibility; some competency-based programs give equal credit for students’ classroom, online, life-experience and video-, book- or game-based learning. Those students who are shown through assessment to have pertinent skills are credentialed, however those skills were obtained; they need not pay for “unneeded credits.” For federal supporters of this scheme and approving think-tank voices, standards in each subject will reliably determine what is worth knowing and what learning counts. They also assure that the “consumer” will be well-served throughout.

Let’s think about this. A conflict of interest certainly resides in a system whereby educational providers measure learning outcomes in their own institutions. But to be fair, that conflict can afflict any instructional effort, whether good performance promises a school more revenue, more public funding or simply greater prestige. Competency-based education, however, seems systematically to deny criticality about its own operations. It uses only its own terms to judge its success. That’s troubling. If educational standards are conflated with the instruments of industry, we should not be surprised to encounter the self-serving methods of industrial quality control. Here, as in a profitable factory, the system claims a basis in economies and managerial oversight, the supposedly no-lose technics of mass-production. But industry standards invariably best serve their creators.

The multi-tiered and modular have certainly long been the American educational way. The new instructional models simply extend older beliefs in natural distributions of talent and diligence, in inborn differentials of cognition and character. Calling such schooling “diverse,” “flexible,” or “customer focused” will not make it democratic.

In outcomes-focused education, I see strong support for the idea that each individual who enters the classroom, aged 5, 15 or 25, is one with predetermined potential, with an identifiable niche on the ladder of aptitude that will match with a certain amount and kind of instruction. High or low, that ascription of talent is more than merely a subjective judgment; it is an iniquitous one. The customized learning experiences currently being praised proceed from the idea that an individual can be known by such categories and then placed in an appropriate position in a classroom or curriculum -- and, ultimately, on the employment ladder. These so-called innovations don't promise enriched learning and expanded opportunity, but outward-rippling discrimination.


Amy Slaton is a professor of history in the department of history and politics at Drexel University.

Voluntary performance measures from Gates-backed group

Diverse group of 18 institutions, with Gates's backing, releases new set of metrics to measure colleges' performance and return on investment.

Higher ed discovers competency, again (essay)

Every spring, it seems, higher education finds something attractive in the flower pollen. This year, it is the discovery of competence as superior to course credits, and an embrace of that notion in ways suitable to the age and its digital environments. This may be all well and good for the enterprise, as long as we acknowledge its history and key relationships over many springs.

Alverno offered authentic competency-based degrees in the 1970s (as did a few others at the periphery of our then institutional universe), and, for those who noticed, started teaching us what assessing competence means. Competence vaulted over credits in the 1984 higher education follow-up to "A Nation at Risk," blandly titled "Involvement in Learning." In fact, 9 of the 27 recommendations in that federal document addressed competence and assessment (though the parameters of the assessments recommended were fuzzy). Nonetheless, "Involvement" gave birth to the "assessment movement" in higher education, and, for a few years, some were hopeful that faculty and administrators would take advantage of the connections between their regular assignments and underlying student behaviors in such a way as to improve those connections in one direction, improve their effects on instruction in another, and provide evidence of impact to overseers public and private. There were buds on the trees.

But the buds did not fully blossom. Throughout the 1990s, “assessment” became mired in scores of restricted response examinations, mostly produced by external parties, and, with those examinations, “value added” effect size metrics that had little to do with competence and even less impact on the academic lives of students. The hands of faculty -- and their connecting stitching of instruction, learning objectives, and evidence -- largely disappeared. The educati took over; and when another spring wind brought in business models of TQM and CQI and Deming Awards, assessment got hijacked, for a time, by corporate approaches to organizational improvement which, for better or worse, nudged more than a few higher education institutions to behave in corporate ways.

Then cometh technology, and in four forms:

First, as a byproduct of the dot-com era, the rise of industry and vendor IT certifications. We witnessed the births of at least 400 of these, ranging from the high-volume Microsoft Certified Systems Engineer to documentation awards by the International Web Masters Association and the industrywide CompTIA. It was not only a parallel postsecondary universe, but one without borders, and based in organizations that didn't pretend to be institutions of higher education. Over 2 million certifications (read carefully: I did not call them "certificates") by such organizations had been issued worldwide by 2001, and, no doubt, some multiple of that number since. No one ever kept records as to how many individuals this number represented, where they were located, or anything about their previous levels of education. Credits were a foreign commodity in this universe: demonstrated competence was everything. Examinations delivered by third parties (I flunked 3 of them in the course of writing an analysis of this phenomenon) documented experience, and an application process run by the vendor determined who was anointed.

No one knows whether institutions of higher education recognized these achievements, because no one ever asked.  The only question we knew how to ask was whether credit was granted for different IT competencies, and, if so, how much. Neither governments nor foundations were interested. The IT certification universe was primarily a corporate phenomenon, marked in minor ways, and forgotten.

Second, the overlapping expansion of online course and partial-course delivery by traditional institutions of higher education (IHEs). This was once known as "distance education," delivered by a combination of television and written mail-in assignments, administered typically by divisions on the periphery of most IHEs. Only when computer network systems moved into large or multicampus institutions could portions of courses be broadly accessed, but principally by resident or on-site students. Broadband and wireless access in the mid-1990s broke the fence of residency, though in some disciplines more than others. Some chemistry labs, case study analyses, cost accounting problems, and computer programming simulations could be delivered online. These were partial deliveries in that they constituted those slices of courses that could be technologically encapsulated and accessed at the student's discretion. "Distance education" was no longer the exclusive purview of continuing education or extension divisions: it was everywhere.

Were the criteria for documenting acceptable student performance expressed as “competencies,” with threshold performance levels? Some were; most were not. They were pieces of course completion, and with completion, the standard award of credits and grades. They came to constitute the basis for more elaborated “hybrid” courses, and what is now called “blended” delivery.

Third, the rise of the for-profit, online providers of full degree programs. If we could do pieces of courses online, why not whole courses? Why not whole degree programs -- and sell them? Take a syllabus and digitize its contents, mix in some digital quizzes and final exams (maintain a rotating library of both). Acquire enough syllabuses, and you have a degree. But not in every field, of course. You aren't going to get a B.S. in physics online -- or biology, agricultural science, chemistry, engineering of any kind, art, or music (pieces, yes; whole degrees, no).

But business, education, IT, accounting, finance, marketing, health care administration, and psychology? No problem! Add online advisers, e-mail exchanges both with the instructor and among small groups of students labeled a "section," and the enterprise begins to resemble a full operation. The growing market of space-and-time mobile adults makes it easy to avoid questions about high school preparation and SAT scores. A lot of self-pacing and flexibility for those space-time mobile students. Adding a few optional hybrid courses means leasing some brick-and-mortar space, but that is not a burden. Make sure a majority of the faculty who write the content that gets translated into courseware hold Ph.D.s or other appropriate terminal degrees, obtain provisional accreditation, market and enroll, start awarding paper, become fully accredited and, with that, gain Title IV eligibility for enrollees, and ... voila! But degree criteria were still expressed in terms of courses and credits.

Fourth, the MOOCs, a natural extension of combinations of the above. “Distance education” for whoever wants it and whenever they want it; lecture sets, except this time principally by the “greats,” delivered almost exclusively from elite universities, big audiences, no borders (like IT certifications), and standard quizzes and tests -- if you wish to document your own learning, regardless of whether credit would ever be granted by anybody. You get what you came for -- a classic lecture series. Think about what’s missing here: papers, labs, fieldwork, exhibits, performances. In other words, the assignments through which students demonstrate competency are absent because they cannot be implemented or managed for crowds of 30,000, let alone 100,000 -- unless, of course, the framework organization (not a university) limits attendees (and some have) to a relatively elite circle.

Everyone will learn something, no doubt, whether or not they finish the course. The courses offered are of a limited range, and dependent on the interests (teaching as well as themes of research) of the “greats” or the rumblings of state legislators to include a constricted set of “gateways” so as to relieve enrollment pressures. These are signature portraits, and as the model expands to other countries and in other languages, we’ll see more signatures. But signatures cannot be used as proxies for competencies, any more than other courses can be used that way. There is nothing wrong with them otherwise. They serve the equivalent of all those kids who used to sit on the floor of the former Borders on Saturdays, reading for the Java2 platform exam.

This time, though, we sit on the floor for the insights of a great mind or for a basic understanding of derivatives and integrals. If this is what learners and legislators want, fine! But let's be clear: there are no competencies here. And since degrees are not at issue, there are no summative comprehensive judgments of competence, either.

The Discontents

Obviously missing across all of the technologies, culminating in the current fad for MOOCs, are the mass of faculty, including all our adjuncts, hence potential within-course assignments linked to student-centered learning behaviors that demand and can document competencies of different ranges.  Missing, too: within-institutional collaboration, connections, and control.  However a MOOC twists and turns, those advocating formal credit relationships with the host organizations of such entities are handing over both instruction and its assessment to third parties -- and sometimes fourth parties. There is no organic set of interactions we can describe as teaching-and-learning-and-judgment-and-learning again-and teaching again-and judging again.  At the bottom line, there are, at best, very few people on the teaching and judging side. Ah, technology!  It leaves us no choice but to talk about credits.

And then there is that word on every 2013 lip of higher education, “competence.” Just about everyone in our garden uses the word as a default, but nobody can tell you what it is. In both academic and non-academic discourse, “competence” seems to mean everything and hence nothing. We have cognitive, social, performance, specialized, procedural, motivational, and emotional competencies. We have one piled on top of another in the social science literature, and variation upon variation in the psychological literature.

OECD ran a four-year project to sort through the thickets of economic, social, civil, emotional, and functional competencies. The related literature is not very rewarding, but OECD was not wrong in its effort: what we mean and want by way of competence is not an idle topic. Life, of course, is not higher education, and one’s negotiation of life in its infinite variety of feeling and manifestation does not constitute the set of criteria on which degrees are awarded. Our timeline is more constrained, and our variables closer at hand.  So what are all the enthusiasts claiming for the “competence base” of online degrees or pieces, such as MOOCs, that may become part of competence-based degrees (whatever that may mean)?  And is there any place that one can find a true example?

We are not talking about simple invocations of tools such as language (just about everyone uses language) and “technology” (the billion people buried in iPhones or tweeting certainly are doing that, and have little trouble figuring out the mechanics and reach of the next app).        

Neither are the competencies required for the award of credentials those of becoming an adult.  We don’t teach “growing up.”  At best, higher education institutions may facilitate, but that doesn’t happen online, where authentic personal interactions (hence major contributors to growing up) are limited to e-mails, occasional videos, and some social media.  Control in online environments is exercised by whoever designed the interaction software, and one doesn’t grow up with third-party control.

At the core of the conundrum is the level of abstraction with which we define a competence. For students, current and prospective, that level either locks or unlocks understanding of what they are expected to do to earn a credential.  For faculty, that level either locks or unlocks the connection between what they teach or facilitate and their assignments.  Both connections get lost at high levels of abstraction, e.g., “critical thinking” or “teamwork,” that we read in putative statements of higher education outcomes that wind up as vacuous wishlists.  Tell us, instead, what students do when they “think critically,” what they do in “teamwork,” and perhaps we can unlock the gate using verbs and verb phrases such as “differentiate,” “reformulate,” “prioritize,” and “evaluate” for the former, and “negotiate,” “exchange,” and “contribute” for the latter.  Students understand such verbs; they don’t understand blah.

How “Competence” in Higher Education Should Be Read

How will we know it if we see it?  One clue will be statements describing documented execution of either related cognitive tasks or related cognitive-psychomotor tasks. To the extent to which these related statements are not discipline-specific (though they may be illustrated in the context of disciplines and fields) they are generic competencies.  To the extent to which these related statements are discipline- or field-specific, they are contextual competencies.  In educational contexts, the former are benchmarks for the award of credentials, the latter are benchmarks for the award of credentials in a particular field.  All such statements should be grounded in such active verbs as assemble, retrieve, differentiate, aggregate, create, design, adapt, calibrate, and evaluate. These language markers allow current and prospective students to understand what they will actually do. These action verbs lead directly and logically to assignments that would elicit student behaviors that allow faculty to judge whether competencies have been achieved.  Such verbs address both cognitive and psychomotor activities, hence offer a universe that addresses both generic performance benchmarks for degrees and subject-specific benchmarks in both occupationally-oriented and traditional arts and sciences fields.

Competencies are not wishlists: they are learned, enhanced, expanded; they mark empirical performance, and a competency statement either directly -- or at a slant -- posits a documented execution. Competencies are not "abilities," either. In American educational discourse, "ability" should be a red-flag word (it invokes both unseemly sides of genetics and contentious Bell curves), and, at best, indicates only abstract potential, not actualization. One doesn't know a student has the "ability" or "capacity" to do something until the student actually does it, and the "it" of the action is the core of competence.

What pieces of the various definitions of competence fit in a higher education setting where summative judgments are levied on individuals’ qualifications for degrees?

  • the unit of analysis is the individual student;
  • the time frame for the award of degrees is sometimes long and often uneven;
  • the actions and proof of a specific competence can be multiple and take place in a variety of contexts over that long and uneven time frame;
  • cognitive and/or psychomotor prerequisites of action and application are seen and defined in actions and applications, and not in theories, speculations, or goals;
  • the key to improving any configuration of competencies lies in feedback, clarification questions, and guidance, i.e., multiple information exchange;
  • there is a background hum of intentionality in a student’s motivation and disposition to prove competence; faculty do not teach motivation, intentionality, and disposition -- these qualities emerge in the environment of a formal enterprise dedicated to the generation and distribution of knowledge and skills; they are in the air you breathe in institutions of higher education;
  • competencies can be described in clusters, then described again in more discrete learning outcome statements;
  • the competencies we ascribe to students in higher education are exercised and documented only in the context of discipline-based knowledge and skills, hence in courses or learning experiences conducted or authorized by academic units;
  • that is, the Kantian maxim applies: forms without intuitions are empty; we can describe the form, the generic competence, without reference to field-specific knowledge, but the competence is only observed and documented in field-specific contexts;
  • the Kantian maxim works in the other direction, too: intuitions without forms are blind, i.e., if we think about it carefully, we don’t walk into a laboratory and simply learn the sequence of proper titration processes, nor are the lab specifications simply assigned. Rather, there is an underlying set of cognitive forms for that sequence -- planning, selection, timing, observation, recording, abstracting -- that, together, constitute the prerequisite competencies that allow the student to enact the Kantian maxim.

When Technology and Competence Intersect

How does all this interact with current technological environments? First, acknowledge that institutions, independent sponsors, vendors, and students will use the going technologies in the normal course of their work in higher education. That’s a given, and, in a society, economy, and culture that surrounds our daily life with such technologies, students know how to use them long before they enter higher education. They are like musical instruments, yes, in that it takes practice to use them sufficiently well, but unless you are writing code or designing Web navigation systems, there’s a cap on what “sufficiently well” means, and, abetted by peer interactions, most students hit that cap fairly easily.

Second, there are a limited number of contexts in which competencies can be demonstrated online. For example, laboratory science simulations can’t get to the stages at which smell or texture comes into play (try benzene, characterized as an aromatic compound for a good reason); studio art is limited in terms of texture and materials; and in agricultural science, plants do not grow in simulations so that you can measure them for firmness. Culinary arts? When was the last time you tasted a Beef Wellington online? Forget it!

Third, if improvement of competency involves a process of multiple information exchange, with the student contributing clarification questions, there are few forms of technological communication that allow for this flexibility, with all its customary pauses and tones. Students cannot be assisted in the course of assignments that take place beyond the broadband classroom, e.g., ethnographic field work. Those students who have attained a high degree of autonomy might be at home in a digital environment and can fill in the ellipses; most students are not in that position, and require conversation and consultation in the flesh. And since when did an online restricted-response exam provide more than a feedback system that explains why your incorrect answer was incorrect? You may not understand two of the four explanations, and there is no further loop to help you out other than sending you back to a basal level that lies far outside the exam.

All of that is part of the limited universe of assessment and assignments in digital environments, and hence part of the disconnect between what is assumed to be taught, what is learned, and whether underlying competencies are elicited, judged, and linked.  People do all these jobs; circuits don’t.

So much for what we should see. But what do we see? Not much. Not from the MOOC business; not from the online providers of full degree programs; not from most traditional institutions of higher education. Pretend you are a prospective student, go online to your sample of these sources, and see if you can find any competency statements -- let alone any that tell you precisely what you are going to do in order to earn a degree. You are more likely to see course lists, offerings, credit blocks, and sequences as proxies for competence. You are more likely to read dead-end mush nouns such as “awareness,” “appreciation,” and the champion mush of them all -- “critical thinking.” None of these are operational cognitive or psychomotor tasks. None of these indicate the nature of the execution that will document your attainment. The recitations, if and when you find them, fall like snow, obliterating all meaningful actions and distinctions.

So Where Do We Turn in Higher Education?

There’s only one document I know of that can get us halfway there, and it is more an iterative process than a document -- a process that will take a decade to reach a modicum of satisfaction. Departing from both customary practice and language, the Degree Qualifications Profile (DQP) was set in iterative motion by the Lumina Foundation in early 2011; in the interests of full disclosure, I was one of its four authors. What does it do? What did we have in mind? And how does it address the frailties of both technology and the language of competence?

Its purposes are to provide an alternative to the metric-driven “accountability” statements of institutions of higher education, and to clarify what degrees mean using statements of specific generic competencies. Its roots are in what other countries call “qualification frameworks,” as well as in a discipline-specific cousin called tuning (in operation in 60 countries, including five state systems in the U.S.). The first-edition DQP includes 19 competencies at the associate level, 24 for the bachelor’s, and 15 for the master’s -- all irrespective of field. The competencies are organized in five archipelagos of knowledge, intellectual skills, and applications, with the challenge level ratcheting up from one degree to the next. They are summative learning statements, describing the documented execution of cognitive tasks -- not credits and GPAs -- as conditions for the award of degrees. The documented execution can take place at any time in a student’s degree-level career, principally through assignments embedded in course-based instruction (though that does not exclude challenge examinations or other non-course-based assessments). However course-based the documentation might be, the DQP is a degree-level statement, and courses cannot be used as proxies for what it specifies. Competencies as expressed here, after all, can be demonstrated in multiple courses.

The DQP is neither set in stone nor sung in one key. Don’t like the phrasing of a competency task? Change it! Think another archipelago of criteria should be included? Add it! Does the DQP miss competencies organic to the mission of your institution and others like it? Tell the writers, and you will see those issues addressed in the next edition, due out by the end of 2013.

For example, the writers know that the document needs a stronger account of the relation between discipline-based and generic degree requirements, so you will see more of tuning (Lumina's effort to work with faculty to define discipline-based knowledge and skills) in the second edition. They also know that the DQP needs a more muscular account of the relation between forms of documentation (assignments), competencies, and learning outcomes -- one that accounts for current and future technologies, as well as for potential systems of record-keeping (if credits figure here at all, they belong only in the back office, as engines of finance for the folks with the green eyeshades).

All of this -- and more -- comes from the feedback of the 200 institutions currently exploring the DQP, and testifies to what “iteration” can accomplish. This is not a short-term task, nor is it one that can be passed to corporate consultants or test developers outside the academy. I would not be surprised if, after a decade of work, we saw 50 or 60 analogous but distinct applications of the DQP living in the public environment and, as is appropriate to the U.S., outside of any government umbrella. That sure beats what we have now -- something of a zero, scrambled even more by MOOCs.

It has been a long road from the competence-based visions of the 1970s, but unraveling the discontents will help us see its end. We know that technologies and delivery systems will change again. That, in itself, argues for the stability of a competence-referenced set of criteria for the award of at least three levels of degrees. Some of the surface features of the DQP will change, too, but its underlying assumptions, postulates, and language will not. Its grounding in continuing forms of human learning behavior guarantees that reference point. All the more reason to stand firm with it.

Cliff Adelman is a senior associate at the Institute for Higher Education Policy.
