Assessment

The risks of assessing only what students know and can do (essay)

A central tenet of the student learning outcomes "movement" is that higher education institutions must articulate a specific set of skills, traits and/or dispositions that all of their students will learn before graduation. Then, through legitimate means of measurement, institutions must assess and publicize the degree to which their students make gains on each of these outcomes.

Although many institutions have yet to implement this concept fully (especially regarding the thorough assessment of institutional outcomes), this idea is more than just a suggestion. Each of the regional accrediting bodies now requires institutions to identify specific learning outcomes and demonstrate evidence of outcomes assessment as a standard of practice.

This approach to educational design seems at the very least reasonable. All students, regardless of major, need a certain set of skills and aptitudes (things like critical thinking, collaborative leadership, intercultural competence) to succeed in life as they take on additional professional responsibilities, embark (by choice or by circumstance) on a new career, or address a daunting civic or personal challenge. In light of the educational mission our institutions espouse, committing ourselves to a set of learning outcomes for all students seems like what we should have been doing all along.

Yet too often the outcomes that institutions select to represent the full scope of their educational mission, and the way that those institutions choose to assess gains on those outcomes, unwittingly limit their ability to fulfill the mission they espouse. For when institutions narrow their educational vision to a discrete set of skills and dispositions that can be presented, performed or produced at the end of an undergraduate assembly line, they often do so at the expense of their own broader vision that would cultivate in students a self-sustaining approach to learning. What we measure dictates the focus of our efforts to improve.

As such, it’s easy to imagine a scenario in which the educational structure that currently produces majors and minors in content areas is simply replaced by one that produces majors and minors in some newly chosen learning outcomes. Instead of redesigning the college learning experience to alter the lifetime trajectory of an individual, we allow the whole to be nothing more than the sum of the parts -- because all we have done is swap one collection of parts for another. Although there may be value in establishing and implementing a threshold of competence for a bachelor’s degree (for which a major serves a legitimate purpose), limiting ourselves to this framework fails to account for the deeply held belief that a college experience should approach learning as a process -- one that is cumulative, iterative, multidimensional and, most importantly, self-sustaining long beyond graduation.

The disconnect between our conception of a college education as a process and our tendency to track learning as a finite set of productions (outcomes) is particularly apparent in the way that we assess our students' development as lifelong learners. Typically, we measure this construct with a pre-test and a post-test that track learning gains between the ages of 18 and 22 -- hardly a lifetime (the fact that a few institutions gather data from alumni 5 and 10 years after graduation doesn't invalidate the larger point).

Under these conditions, trying to claim empirically that (1) an individual has developed and maintained a perpetual interest in learning throughout their life, and that (2) this lifelong approach is directly attributable to their undergraduate education probably borders on the delusional. The complexity of life even under the most mundane of circumstances makes such a hypothesis deeply suspect. Yet we all know of students who experienced college as a process through which they found a direction that excited them and a momentum that carried them down a purposeful path that extended far beyond commencement.

I am by no means suggesting that institutions should abandon assessing learning gains on a given set of outcomes. On the contrary, we should expect no less of ourselves than substantial growth in all of our students as a result of our efforts. Designed appropriately, a well-organized sequence of outcomes assessment snapshots can provide information vital to tracking student learning over time and potentially increasing institutional effectiveness. However, because the very act of learning occurs (as the seminal developmental psychologist Lev Vygotsky would describe it) in a state of perpetual social interaction, taking stock of the degree to which we foster a robust learning process is at least as important as taking snapshots of learning outcomes if we hope to gather information that helps us improve.

If you think that assessing learning outcomes effectively is difficult, then assessing the quality of the learning process ought to send chills down even the most skilled assessment coordinator's spine. Defining and measuring the nature of process requires a very different conception of assessment -- and, for that matter, a substantially more complex understanding of learning outcomes.

Instead of merely measuring what is already in the rearview mirror (i.e., whatever has already been acquired), assessing the college experience as a process requires a look at the road ahead, emphasizing the connection between what has already occurred and what is yet to come. In other words, assessment of the learning that results from a given experience would include the degree to which a student is prepared or “primed” to make the most of a future learning experience (either one that is intentionally designed to follow immediately, or one that is likely to occur somewhere down the road). Ultimately, this approach would substantially improve our ability to determine the degree to which we are preparing students to approach life in a way that is thoughtful, proactively adaptable, and even nimble in the face of both unforeseen opportunity and sudden disappointment.

Of course, this idea runs counter to the way that we typically organize our students' postsecondary educational experience. For if we are going to track the degree to which a given experience “primes” students for subsequent experiences -- especially subsequent experiences that occur during college -- then the educational experience can't be so loosely constructed that the number of potential variations in the order of a student's experiences virtually equals the number of students enrolled at our institution.

This doesn't mean that we return to the days in which every student took the same courses at the same time in the same order. But it does require an increased level of collective commitment to the intentional design of the student experience -- a commitment to student-centered learning that will likely come at the expense of individual instructors' or administrators' preferences for which courses they teach or programs they lead, and when those courses and programs are offered.

The other serious challenge is operationalizing a concept of assessment that attempts to directly measure an individual's preparation to make the most of a subsequent educational experience. But if we want to demonstrate the degree to which a college experience is more than just a collection of gains on disparate outcomes -- whether those outcomes are somehow connected or entirely independent of each other -- then we have to expand our approach to include process as well as product.

Only then can we actually demonstrate that the whole is greater than the sum of the parts, that in fact the educational process is the glue that fuses those disparate parts into a greater -- and qualitatively distinct -- whole.

Mark Salisbury is director of institutional research and assessment at Augustana College, in Illinois. He blogs at Delicious Ambiguity.


Performance funding isn't perfect, but a recent study shortchanges it (essay)

A recent research paper published by the Wisconsin Center for the Advancement of Postsecondary Education and reported on by Inside Higher Ed criticized states' efforts to fund higher education based in part on outcomes, in addition to enrollment. The authors, David Tandberg and Nicholas Hillman, hoped to provide a "cautionary tale" for those looking to performance funding as a "quick fix."

While we agree that performance-based funding is not the only mechanism for driving change, we certainly do not need impulsive conclusions that ignore positive results and financial context. With serious problems plaguing American higher education, accompanied by equally serious efforts across the country to address them, it is disheartening to see a flawed piece of research mischaracterize the work on finance reform and potentially set back one important effort, among many, to improve student success in postsecondary education.

As two individuals who have studied performance funding in depth, we know that performance funding is a piece of the puzzle that can provide an intuitive, effective incentive for adopting best practices for student success and encourage others to do so. Our perspective is based on the logical belief that tying some funding dollars to results will provide an incentive to pursue those results. This approach should not be dismissed in one fell swoop. 

We are dismayed that the authors were willing to assert an authoritative conclusion from such simplistic research. The study compares outcomes of states "where the policy was in force" to those where it was not -- as if "performance funding" were a monolithic policy everywhere it has been adopted.

The authors failed to differentiate among states in terms of when performance funding was implemented, how much money is at stake, whether performance funds are "add ins" or part of base funding formulas, the metrics used to define and measure "performance," and the extent to which "stop loss" provisions have limited actual change in allocations. These are critical design issues that vary widely and that have evolved dramatically over the 20-year period the authors used to decide if "the policy was in force" or not.

Treating this diverse array of unique approaches as one policy ignores the thoughtful work that educators and policy makers are currently engaged in to learn from past mistakes and to improve the design of performance funding systems. Even a well-designed study would probably fail to reveal positive impacts yet, as states are only now trying out new and better approaches -- certainly not the "rush" to adopting a "quick fix" that the authors assert. It could just as easily be argued that more traditional funding models actually harm institutions trying to make difficult and necessary changes in the best interest of students and their success (see here and here).

The simplistic approach is exacerbated by two other design problems. First, we find errors in the map indicating the status of performance funding. Texas, for example, has only recently implemented (passed in spring 2013) a performance funding model for its community colleges; it has yet to affect any budget allocations. The recommended four-year model was not passed. Washington has a small performance funding program for its two-year colleges but none for its universities. Yet the map shows both states with performance funding operational for both two-year and four-year sectors.

Second, the only outcome examined by the authors was degree completions, as it "is the only measure that is common among all states currently using performance funding." While that may be convenient for running a regression analysis, it ignores current thinking about appropriate metrics that honor different institutional missions and provide useful information to drive institutional improvement. The authors make passing reference to different measures at the end of the article but make no effort to incorporate any realism or complexity into their statistical model.

On an apparent mission to discredit performance funding, the authors showed a surprising lack of curiosity about their own findings. They found eight states where performance funding had a positive, significant effect on degree production, but rather than examine why that might be, they took apparent comfort in the finding that there were "far more examples" of performance funding failing the significance tests.

"While it may be worthwhile to examine the program features of those states where performance funding had a positive impact on degree completions," they write, "the overall story of our state results serves as a cautionary tale." Mission accomplished.

In their conclusion they assert that performance funding lacks "a compelling theory of action" to explain how and why it might change institutional behaviors.

We strongly disagree. The theory of action behind performance funding is simple: financial incentives shape behaviors. Anyone doubting the conceptual soundness of performance funding is, in effect, doubting that people respond to fiscal incentives. The indisputable evidence that incentives matter in higher education is the overwhelming priority and attention that postsecondary faculty and staff have placed, over the years, on increasing enrollments and meeting enrollment targets under enrollment-driven budgets.

The logic of performance funding is simply that adding incentives for specified outcomes would encourage individuals to redirect a portion of that priority and attention to achieving those outcomes. To accept this logic is to affirm the potential of performance funding to change institutional behaviors and student outcomes. It is not to defend any and all versions of performance funding that have been implemented, many of which have been poorly designed. And it is not to criticize the daily efforts of faculty and staff, who are committed to student success but cannot be faulted for doing what matters to maintain budgets.

Surely there are other means -- and more powerful means -- to achieve state and national goals of improving student success, as the authors assert. But just as surely it makes sense to align state investments with the student success outcomes that we all seek.
 

Nancy Shulock is executive director of the Institute for Higher Education Leadership & Policy at California State University at Sacramento, and Martha Snyder is senior associate at HCM Strategists.


A campus official assesses how zombie students are faring (essay)

TO: Senior Administrative Staff

FROM: Institutional Research

RE: Student Engagement among Zombie Students

In an effort to better understand differences among student subgroups, the institutional leadership requested an analysis of engagement levels among Zombie students.

Analysis of institutional data indicates that students who self-report as Zombies also report statistically significant lower levels of engagement across a wide range of important student experiences. Many of these lower levels of engagement on specific student experience items are also negative predictors of Zombie student satisfaction.

Zombie students report lower levels of participation in class discussion despite higher satisfaction with faculty feedback. Further investigation found that these students often find it difficult to raise their hands above their heads in response to the instructor's questions.

Zombie students also report that their co-curricular experiences had less impact on their understanding of how they relate to others. Additional analysis of focus group transcripts suggests a broad lack of self-awareness.

Zombie students indicate that they have fewer serious conversations with students who differ by race, ethnicity, socioeconomic status, or social values. Instead, Zombie students seem to congregate and rarely extend themselves out of their comfort zone.

Interestingly, our first- to second-year retention rate for Zombie students is 100 percent, despite high reports of tardiness and absences. Yet our six-year graduation rate is 0 percent. While some have expressed concern over these conflicting data points, the Commencement Committee has suggested that the graduation ceremony is long enough already without having Zombie students shuffling aimlessly across the stage.

Finally, Zombie students report an increased level of one-on-one student/faculty interaction outside of class. However, we found no correlation between the substantial drop in the number of evening faculty from last year (108) to this year (52) and the number of Zombie students enrolled in night courses. Strangely, the Zombie students in these courses did indicate an unusually high level of satisfaction with the institution’s meal plan.

Mark Salisbury is director of institutional research and assessment at Augustana College, in Illinois. He blogs at Delicious Ambiguity, where a version of this essay first appeared.


Students, faculty sign pledge for college completion

Students are asking faculty members to pledge to create a culture of completion.

Institutional Effectiveness and Assessment

Date: Mon, 11/04/2013
Location: 1330 Eisenhower Pl, Ann Arbor 48108, United States

Obama's ratings system may be difficult to pull off (essay)

On his education bus tour, President Obama is urging, among other suggestions, a new rating system to ensure that more families are able to afford higher education. I think we can all (well, almost all of us) agree that the rising costs of a bachelor’s degree need to be constrained, and we must find ways that facilitate middle- and lower-income students entering and graduating from college. The value proposition matters, and “debt without diploma” is unacceptable.

What is vastly harder to agree upon is how to address the problem, rather than just wringing our hands over it -- which we have been doing for far too long.

Let's start with the president's idea of rating colleges based on graduation rates and prospective earnings, among other variables. To be sure, given the president's reference to U.S. News rankings in his speech today at the University at Buffalo, one wonders whether "ratings" are similar to or different from rankings -- apart from using different variables.


On the surface, these two data points may seem easy to calculate. And advising families on how to compare and contrast college offers seems wise. But devising a quality rating system will require deep insight into how the world of higher education actually works – on the ground, in the trenches. As the president noted, Secretary Duncan needs to garner suggestions from a wide range of educational constituencies.

Here’s why.

First, we know that more-elite institutions that serve Pell-eligible students have higher graduation rates than open-access institutions that enroll Pell-eligible students. What accounts for this disparity is subject to debate, but arguably, part of the answer is that the richer institutions "cream skim" and only take the "best" among the low-income students.

For example, students who are selected to be Posse Scholars graduate from college (largely highly selective institutions) at a rate of 90 percent -- which is stunningly good. But, it is worth remembering that the 640 Posse Scholars enrolling each year are selected from approximately 15,000 applicants.

This means that elite institutions, absent some adjustment, would rank higher than non-elite institutions on graduation rates without any explanation as to why that is occurring.  And the lower graduation rate of less-elite institutions may be at least partially explained by the lack of preparedness of their students. For some students and their colleges, a graduation rate of 40 percent is success, not failure.

Second, if we only calculate graduation rates of true first-year, full-time cohorts, we will be missing the mark in terms of who is actually enrolling in college today. Students with previous credits, transfer students, returning adults, part-time students and veterans would not be counted in the calculation, although at least some of these data points will be included as IPEDS data are improved over time.

Third, earnings are certainly occupation-based. Graduates who become teachers, nurses and police officers earn less than those who are employed by investment banks or hedge funds. Clearly, success in higher education cannot be measured based on earnings alone.

Yes, college graduates should not be underemployed or employed in fields that do not take advantage of their education.  But how we calculate “sufficient” earnings is critically important, and more earnings are not necessarily better for the public good.

Finally, there is a built-in assumption that students and their parents will pay attention to and use the ratings effectively.  Experience suggests otherwise.  Despite transparency in the realm of consumer protection, consumers still make irrational and unwise choices, as behavioral economists have noted. 

Indeed, as scholars point out, consumption decision-making is often based on non-economic determiners. And we already have early evidence that the current scorecard has not worked as expected -- despite best efforts to share its availability. Moreover, the income-based repayment program -- also publicized -- has not had the expected uptake among students who could benefit from it, as the president himself noted. We need to make disclosure "smart." We also need to focus on how to engage families in conversations about money. And educational institutions need to see that their obligations to advise students about loan repayment extend beyond graduation, particularly since initial payments often commence six months post-degree.

So if we proceed with graduation rates and earnings as indicators, we need to be cautious in terms of how we calculate both and be aware that even the best ratings may not help the very audiences we seek to persuade. 

Indeed, possible key users of the rating system are high school guidance counselors. But, as a recent report from Public Agenda notes, this group of professionals is struggling to counsel students for college effectively. Thus, their caseloads may make their uptake of any new ratings problematic, absent major changes in their education and training.

As a supplement or alternative to the president's suggestions, I think we would be wise to make changes where the "default" position benefits students and their families. So, as one example, what about enacting legislation, through an amendment to the Bankruptcy Code, that enables students and parents to discharge burdensome private and public loans through bankruptcy?

A recent study by the Center for American Progress suggested the dischargeability of select public and private loans (with a robust definition of what constitutes nondischargeable qualified student loans). The Consumer Financial Protection Bureau and the Department of Education issued a report in 2012 suggesting reconsideration of the nondischargeability of private student loans.

To anticipate the suggestion that easing bankruptcy’s discharge will create a moral hazard, my experience over 30 years of working with debtors and consumer finance suggests that this common concern is not supported by the evidence. 

The availability of bankruptcy and the opportunity for dischargeability of specified debt have not led to a wave of abusive bankruptcy filings. As I have always said, most people do not wake up in the morning and say, "Yippee. I get to file bankruptcy today, having failed at America's rags-to-riches dream."

Surely the president has latched onto an issue that matters -- a college education for the betterment of individuals and their families and society at large. This is because, at the end of the day, we need an educated citizenry to preserve our democracy. The real issue is how we make that accurate idea a reality. As with most difficult issues, the devil remains in the details.

Karen Gross is president of Southern Vermont College. She served as a senior policy adviser to the U.S. Department of Education during 2012 and is now a consultant to the department. The views presented here are her own and do not represent the position of the government, including the Department of Education.

Ireland International Conference on Education (IICE-2013)

Date: Mon, 10/21/2013 to Wed, 10/23/2013
Location: Bewleys Hotel Ballsbridge, Merrion Road, Dublin, Ireland

Embracing Accelerated Evolution and Redefining Viability

Date: Sun, 10/13/2013 to Wed, 10/16/2013
Location: 1330 Eisenhower Pl, Ann Arbor 48108, United States

Higher ed discovers competency, again (essay)

Every spring, it seems, higher education finds something attractive in the flower pollen. This year, it is the discovery of competence as superior to course credits, and an embrace of that notion in ways suited to the age and its digital environments. This may be all well and good for the enterprise, as long as we acknowledge its history and key relationships over many springs.

Alverno offered authentic competency-based degrees in the 1970s (as did a few others at the periphery of our then institutional universe), and, for those who noticed, started teaching us what assessing competence means. Competence vaulted over credits in the 1984 higher education follow-up to "A Nation at Risk," blandly titled "Involvement in Learning." In fact, 9 of the 27 recommendations in that federal document addressed competence and assessment (though the parameters of the assessments recommended were fuzzy). Nonetheless, "Involvement" gave birth to the “assessment movement” in higher education, and, for a few years, some were hopeful that faculty and administrators would take advantage of the connections between their regular assignments and underlying student behaviors in such a way as to improve those connections in one direction, improve their effects on instruction in another direction, and provide evidence of impact to overseers public and private. There were buds on the trees.

But the buds did not fully blossom. Throughout the 1990s, “assessment” became mired in scores of restricted-response examinations, mostly produced by external parties, and, with those examinations, “value added” effect-size metrics that had little to do with competence and even less impact on the academic lives of students. The hands of faculty -- and their connecting stitching of instruction, learning objectives, and evidence -- largely disappeared. The educati took over; and when another spring wind brought in business models of TQM and CQI and Deming Awards, assessment got hijacked, for a time, by corporate approaches to organizational improvement which, for better or worse, nudged more than a few higher education institutions to behave in corporate ways.

Then cometh technology, and in four forms:

First, as a byproduct of the dot-com era, the rise of industry and vendor IT certifications. We witnessed the births of at least 400 of these, ranging from the high-volume Microsoft Certified Systems Engineer to documentation awards by the International Webmasters Association and the industrywide CompTIA. It was not only a parallel postsecondary universe, but one without borders, and based in organizations that didn't pretend to be institutions of higher education. Over 2 million certifications (read carefully: I did not call them “certificates”) had been issued worldwide by such organizations by 2001, and, no doubt, some multiple of that number since. No one ever kept records as to how many individuals this number represented, where they were located, or anything about their previous levels of education. Credits were a foreign commodity in this universe: demonstrated competence was everything. Examinations delivered by third parties (I flunked 3 of them in the course of writing an analysis of this phenomenon) documented experience, and an application process run by the vendor determined who was anointed.

No one knows whether institutions of higher education recognized these achievements, because no one ever asked.  The only question we knew how to ask was whether credit was granted for different IT competencies, and, if so, how much. Neither governments nor foundations were interested. The IT certification universe was primarily a corporate phenomenon, marked in minor ways, and forgotten.

Second, the overlapping expansion of online course and partial-course delivery by traditional institutions of higher education. This was once known as “distance education,” delivered by a combination of television and written mail-in assignments, administered typically by divisions on the periphery of most IHEs. Only when computer network systems moved into large or multicampus institutions could portions of courses be broadly accessed, but principally by resident or on-site students. Broadband and wireless access in the mid-1990s broke the fence of residency, though in some disciplines more than others. Some chemistry labs, case study analyses, cost accounting problems, and computer programming simulations could be delivered online. These were partial deliveries in that they constituted those slices of courses that could be technologically encapsulated and accessed at the student's discretion. “Distance education” was no longer the exclusive purview of continuing education or extension divisions: it was everywhere.

Were the criteria for documenting acceptable student performance expressed as “competencies,” with threshold performance levels? Some were; most were not. They were pieces of course completion, and with completion, the standard award of credits and grades. They came to constitute the basis for more elaborated “hybrid” courses, and what is now called “blended” delivery.

Third, the rise of the for-profit, online providers of full degree programs. If we could do pieces of courses online, why not whole courses? Why not whole degree programs -- and sell them? Take a syllabus and digitize its contents, mix in some digital quizzes and final exams (maintain a rotating library of both). Acquire enough syllabuses, and you have a degree. But not in every field, of course. You aren't going to get a B.S. in physics online -- or biology, agricultural science, chemistry, engineering of any kind, art, or music (pieces, yes; whole degrees, no).

But business, education, IT, accounting, finance, marketing, health care administration, and psychology? No problem! Add online advisers, e-mail exchanges both with instructor and small groups of students labeled a “section,” and the enterprise begins to resemble a full operation. The growing market of space-and-time mobile adults makes it easy to avoid questions about high school preparation and SAT scores. A lot of self-pacing and flexibility for those space-time mobile students. Adding a few optional hybrid courses means leasing some brick-and-mortar space, but that is not a burden. Make sure a majority of faculty who write the content that gets translated into courseware hold Ph.D.s or other appropriate terminal degrees, obtain provisional accreditation, market and enroll, start awarding paper, become fully accredited and, with it, Title IV eligibility for enrollees, and ... voila! But degree criteria were still expressed in terms of courses/credits.

Fourth, the MOOCs, a natural extension of combinations of the above. “Distance education” for whoever wants it and whenever they want it; lecture sets, except this time principally by the “greats,” delivered almost exclusively from elite universities, big audiences, no borders (like IT certifications), and standard quizzes and tests -- if you wish to document your own learning, regardless of whether credit would ever be granted by anybody. You get what you came for -- a classic lecture series. Think about what’s missing here: papers, labs, fieldwork, exhibits, performances. In other words, the assignments through which students demonstrate competency are absent because they cannot be implemented or managed for crowds of 30,000, let alone 100,000 -- unless, of course, the framework organization (not a university) limits attendees (and some have) to a relatively elite circle.

Everyone will learn something, no doubt, whether or not they finish the course. The courses offered are of a limited range, and dependent on the interests (teaching as well as themes of research) of the “greats” or the rumblings of state legislators to include a constricted set of “gateways” so as to relieve enrollment pressures. These are signature portraits, and as the model expands to other countries and in other languages, we’ll see more signatures. But signatures cannot be used as proxies for competencies, any more than other courses can be used that way. There is nothing wrong with them otherwise. They serve the equivalent of all those kids who used to sit on the floor of the former Borders on Saturdays, reading for the Java2 platform exam.

This time, though, we sit on the floor for the insights of a great mind or for basic  understanding of derivatives and integrals. If this is what learners and legislators want, fine! But let’s be clear: there are no competencies here. And since degrees are not at issue, there are no summative comprehensive judgments of competence, either.

The Discontents

Obviously missing across all of the technologies, culminating in the current fad for MOOCs, are the mass of faculty, including all our adjuncts, hence potential within-course assignments linked to student-centered learning behaviors that demand and can document competencies of different ranges.  Missing, too: within-institutional collaboration, connections, and control.  However a MOOC twists and turns, those advocating formal credit relationships with the host organizations of such entities are handing over both instruction and its assessment to third parties -- and sometimes fourth parties. There is no organic set of interactions we can describe as teaching-and-learning-and-judgment-and-learning again-and teaching again-and judging again.  At the bottom line, there are, at best, very few people on the teaching and judging side. Ah, technology!  It leaves us no choice but to talk about credits.

And then there is that word on every 2013 lip of higher education, “competence.” Just about everyone in our garden uses the word as a default, but nobody can tell you what it is. In both academic and non-academic discourse, “competence” seems to mean everything and hence nothing. We have cognitive, social, performance, specialized, procedural, motivational, and emotional competencies. We have one piled on top of another in the social science literature, and variation upon variation in the psychological literature.

OECD ran a four-year project to sort through the thickets of economic, social, civil, emotional, and functional competencies. The related literature is not very rewarding, but OECD was not wrong in its effort: what we mean and want by way of competence is not an idle topic. Life, of course, is not higher education, and one’s negotiation of life in its infinite variety of feeling and manifestation does not constitute the set of criteria on which degrees are awarded. Our timeline is more constrained, and our variables closer at hand.  So what are all the enthusiasts claiming for the “competence base” of online degrees or pieces, such as MOOCs, that may become part of competence-based degrees (whatever that may mean)?  And is there any place that one can find a true example?

We are not talking about simple invocations of tools such as language (just about everyone uses language) and “technology” (the billion people buried in iPhones or tweeting certainly are doing that, and have little trouble figuring out the mechanics and reach of the next app).        

Neither are the competencies required for the award of credentials those of becoming an adult.  We don’t teach “growing up.”  At best, higher education institutions may facilitate, but that doesn’t happen online, where authentic personal interactions (hence major contributors to growing up) are limited to e-mails, occasional videos, and some social media.  Control in online environments is exercised by whoever designed the interaction software, and one doesn’t grow up with third-party control.

At the core of the conundrum is the level of abstraction with which we define a competence. For students, current and prospective, that level either locks or unlocks understanding of what they are expected to do to earn a credential.  For faculty, that level either locks or unlocks the connection between what they teach or facilitate and their assignments.  Both connections get lost at high levels of abstraction, e.g., “critical thinking” or “teamwork,” that we read in putative statements of higher education outcomes that wind up as vacuous wishlists.  Tell us, instead, what students do when they “think critically,” what they do in “teamwork,” and perhaps we can unlock the gate using verbs and verb phrases such as “differentiate,” “reformulate,” “prioritize,” and “evaluate” for the former, and “negotiate,” “exchange,” and “contribute” for the latter.  Students understand such verbs; they don’t understand blah.

How “Competence” in Higher Education Should be Read

How will we know it if we see it?  One clue will be statements describing documented execution of either related cognitive tasks or related cognitive-psychomotor tasks. To the extent to which these related statements are not discipline-specific (though they may be illustrated in the context of disciplines and fields) they are generic competencies.  To the extent to which these related statements are discipline- or field-specific, they are contextual competencies.  In educational contexts, the former are benchmarks for the award of credentials, the latter are benchmarks for the award of credentials in a particular field.  All such statements should be grounded in such active verbs as assemble, retrieve, differentiate, aggregate, create, design, adapt, calibrate, and evaluate. These language markers allow current and prospective students to understand what they will actually do. These action verbs lead directly and logically to assignments that would elicit student behaviors that allow faculty to judge whether competencies have been achieved.  Such verbs address both cognitive and psychomotor activities, hence offer a universe that addresses both generic performance benchmarks for degrees and subject-specific benchmarks in both occupationally-oriented and traditional arts and sciences fields.

Competencies are not wishlists: they are learned, enhanced, expanded; they mark empirical performance, and a competency statement either directly -- or at a slant -- posits a documented execution. Competencies are not “abilities,” either. In American educational discourse, “ability” should be a red-flag word (it invokes both unseemly sides of genetics and contentious Bell curves), and, at best, indicates only abstract potential, not actualization. One doesn't know a student has the “ability” or “capacity” to do something until the student actually does it, and the “it” of the action is the core of competence.

What pieces of the various definitions of competence fit in a higher education setting where summative judgments are levied on individuals’ qualifications for degrees?

  • the unit of analysis is the individual student;
  • the time frame for the award of degrees is sometimes long and often uneven;
  • the actions and proof of a specific competence can be multiple and take place in a variety of contexts over that long and uneven time frame;
  • cognitive and/or psychomotor prerequisites of action and application are seen and defined in actions and applications, and not in theories, speculations, or goals;
  • the key to improving any configuration of competencies lies in feedback, clarification questions, and guidance, i.e., multiple information exchange;
  • there is a background hum of intentionality in a student's motivation and disposition to prove competence; faculty do not teach motivation, intentionality, and disposition -- these qualities emerge in the environment of a formal enterprise dedicated to the generation and distribution of knowledge and skills; they are in the air you breathe in institutions of higher education;
  • competencies can be described in clusters, then described again in more discrete learning outcome statements;
  • the competencies we ascribe to students in higher education are exercised and documented only in the context of discipline-based knowledge and skills, hence in courses or learning experiences conducted or authorized by academic units;
  • that is, the Kantian maxim applies: forms without intuitions are empty; we can describe the form, the generic competence, without reference to field-specific knowledge, but the competence is only observed and documented in field-specific contexts;
  • the Kantian maxim works in the other direction, too: intuitions without forms are blind, i.e., if we think about it carefully, we don't walk into a laboratory and simply learn the sequence of proper titration processes, nor are the lab specifications simply assigned. Rather, there is an underlying set of cognitive forms for that sequence -- planning, selection, timing, observation, recording, abstracting -- that, together, constitute the prerequisite competencies that allow the student to enact the Kantian sentence.

When Technology and Competence Intersect

How does all this interact with current technological environments?  First, acknowledge that institutions, independent sponsors, vendors, and students will use the going technologies in the normal course of their work in higher education.  That’s a given, and, in a society, economy, and culture that surrounds our daily life with such technologies, students know how to use them long before they enter higher education.  They are like musical instruments, yes, in that it takes practice to use them sufficiently well, but unless you are writing code or designing Web navigation systems, there’s a cap on what “sufficiently well” means, and abetted by peer interactions, most students hit that cap fairly easily.

Second, there are a limited number of contexts in which competencies can be demonstrated online. For example, laboratory science simulations can't get to stages at which smell or texture comes into play (try benzene, characterized as an aromatic compound for a good reason); studio art is limited in terms of texture and materials; plants do not grow for you in simulations to measure for firmness in agricultural science. Culinary arts? When was the last time you tasted a Beef Wellington online? Forget it!

Third, if improvement of competency involves a process of multiple information-exchange, with the student contributing clarification questions, there are few forms of technological communication that allow for this flexibility, with all its customary pauses and tones. Students cannot be assisted in the course of assignments that take place beyond the broadband classroom, e.g., ethnographic field work. Those students who have attained a high degree of autonomy might be at home in a digital environment and can fill in the ellipses; most students are not in that position, and require conversation and consultation in the flesh. And since when did an online restricted-response exam provide more than a feedback system that explains why your incorrect answer was incorrect? You may not understand two of the four explanations -- and there is no further loop to help you out other than sending you back to a basal level that lies far outside the exam.

All of that is part of the limited universe of assessment and assignments in digital environments, and hence part of the disconnect between what is assumed to be taught, what is learned, and whether underlying competencies are elicited, judged, and linked.  People do all these jobs; circuits don’t.

So much for what we should see. But what do we see? Not much. Not from the MOOC business; not from the online providers of full degree programs; not from most traditional institutions of higher education. Pretend you are a prospective student, go online to your sample of these sources, and see if you can find any competency statements -- let alone those that tell you precisely what you are going to do in order to earn a degree. You are more likely to see course lists, offerings, credit blocks, and sequences as proxies for competence. You are more likely to read dead-end mush nouns such as “awareness,” “appreciation,” and the champion mush of them all -- “critical thinking.” None of these are operational cognitive or psychomotor tasks. None of these indicate the nature of the execution that will document your attainment. The recitations, if and when you find them, fall like snow, obliterating all meaningful actions and distinctions.

So Where Do We Turn in Higher Education?

There’s only one document I know that can get us halfway there, and it is more an iterative process than a document, and a process that will take a decade to reach a modicum of satisfaction. Departing from both customary practice and language is the Degree Qualifications Profile (DQP) set in an iterative motion by the Lumina Foundation in early 2011, and for which, in the interests of full disclosure, I was one of four authors. What does it do? What did we have in mind? And how does it address the frailties of both technology and the language of competence?

Its purposes are to provide an alternative to metric-driven “accountability” statements of IHEs, and to clarify what degrees mean using statements of specific generic competencies. Its roots are in what other countries call “qualification frameworks,” as well as in a discipline-specific cousin called tuning (in operation in 60 countries, including five state systems in the U.S.). The first edition DQP includes 19 competencies at the associate level, 24 for the bachelor’s, and 15 for the master’s -- all irrespective of field. The competencies are organized in five archipelagos of knowledge, intellectual skills, and applications, and all set up in a ratcheting of challenge level from one degree to the next.  They are summative learning statements, describing the documented execution of cognitive tasks -- not credits and GPAs -- as conditions for the award of degrees. The documented execution can take place at any time in a student’s degree-level career, but principally through assignments embedded in course-based instruction (though that does not exclude challenge examinations or other non-course based assessments). However course-based the documentation might be, the DQP is a degree-level statement and courses cannot be used as proxies for what it specifies. Competencies as expressed here, after all, can be demonstrated in multiple courses.

The DQP is neither set in stone nor sung in one key. Don’t like the phrasing of a competency task? Change it! Think another archipelago of criteria should be included? Add it! Does the DQP miss competencies organic to the mission of yours and similar institutions? Tell the writers, and you will see those issues addressed in the next edition, due out by the end of 2013. 

For example, the writers know that the document needs a stronger account of the relation between discipline-based and generic degree requirements, so you will see more of tuning (Lumina's effort to work with faculty to define discipline-based knowledge and skills) in the second edition. They also know that the DQP needs a more muscular account of the relation between forms of documentation (assignments), competencies, and learning outcomes, accounting for current and future technologies in the process, as well as for potential systems of record-keeping (if credits figure here at all, they live only in the back office as engines of finance for the folks with the green eyeshades).

All of this -- and more -- comes from the feedback of 200 institutions currently exploring the DQP, and testifies to what “iteration” can accomplish. This is not a short-term task, nor is it one that is passed to corporate consultants or test developers outside the academy. I would not be surprised if, after a decade of work, we saw 50 or 60 analogous but distinct applications of the DQP living in the public environment, and, as appropriate to the U.S., outside of any government umbrella. That sure is better than what we have now and what has been scrambled even more by MOOCs -- something of a zero.

It has been a long road from the competence-based visions of the 1970s, but unraveling discontents will help us see its end. We know that technologies and delivery systems will change again. That, in itself, argues for the stability of a competence-referenced set of criteria for the award of at least three levels of degrees. Some of the surface features of the DQP will change, too, but its underlying assumptions, postulates, and language will not. Its grounding in continuing forms of human learning behavior guarantees that reference point. All the more reason to stand firm with it.

Cliff Adelman is a senior associate at the Institute for Higher Education Policy.


Data show increasing pace of college enrollment declines

Colleges enrolled 2.3 percent fewer students this spring than last, a steeper drop than the 1.8 percent decline reported by the National Student Clearinghouse for the fall.
