Assessment

Website recognizes military skills with digital badges

New website creates digital badges for veterans, aiming to recognize military skills and training. Will it be the first badging experiment to catch on?

Carnegie Foundation considers a redesign for the credit hour

The Carnegie Foundation, which created the credit hour, considers a redesign so the standard could better fit with emerging approaches to higher education.

Saylor Foundation's Free Courses Offer Path to Credit

Saylor Foundation's 240 free online courses now offer a pathway to college credit, thanks to new partnerships with Excelsior College and StraighterLine. But will students follow that path?

Prior learning assessment catches on, quietly

Prior learning assessment could be higher education's next big disruptive force, and ACE and CAEL are poised to catch that potential gold rush. But many remain skeptical about academic credit for work experience.

Improving Graduation Rates Is Job One at City Colleges of Chicago

City Colleges of Chicago have a 7 percent graduation rate. If that number doesn't go up, the system's chancellor, presidents and trustees could lose their jobs.

An inside look at why regional accreditation works (opinion)

In response to a recent signal that U.S. Education Secretary Betsy DeVos may be exploring alternatives to our present higher education accreditation practices, let’s take a look at what it’s like to be on the inside of a regional accrediting team.

Several weeks ago, I spent four enlightening, engaging, intimate, collaborative, debate-filled -- and exhausting -- days as a member of a team at a nearby university. (Sorry -- since deliberations are confidential, I’m not at liberty to reveal the name of the school, nor who was on my team.)

It wasn’t my first such visit. Impulsively, I had agreed to participate in five others over 20 years, at modest and grand institutions -- some with deep pockets, others hanging by a thread; some with meager enrollments, others with tens of thousands -- but all required to go through it. All are forced to run an academic marathon every 10 years, hoping at the finish line for a thumbs-up from one of the seven U.S. regional bodies, which concludes that the school has been anointed, censured -- or, rarely, denied the laurel crown of accreditation.

Regional accreditation is a really big deal. It’s the gold-standard guarantee that a school can announce on its website to students and their families that it clinched its final exam. Once a higher education institution is accredited, students can enroll with confidence, unafraid that it will suddenly fold or be revealed as just another scam, a shabby diploma mill. It assures the public that universities and colleges are legitimate -- so reliable that the federal government recognizes them as worthy enough for enrolled students to receive U.S.-backed grants and loans. While institutions enter into accreditation voluntarily, without it, Uncle Sam won’t give you a nickel to go to college there.

Our team’s credentials were not at all shabby. The team included the president of a high-profile university, the provost of another notable school, and distinguished professors, reputable scholars and top staff drawn from highly ranked colleges and universities, public and private. While most of us were from schools within an hour or two of campus, one member flew in all the way from California. Impressively, our specialties covered every aspect of university life -- accounting, finance and data analysis; curriculum, instructional technology and course design; assessment, accreditation, institutional effectiveness, governance, planning and student affairs. A serious group with solid experience, no less than any of my previous teams.

Our chair, the president of a peer university, mused aloud over dinner one night, “I feel it’s my obligation to serve. Other presidents take the time to visit my school. I feel I must do the same.”

Evenings in our hotel conference room, the nine of us -- five women and four men -- would sit hunched over our black laptops around a long table, writing our reports like graduate students in a library. Sometimes the room was so still, except for the clacking of keyboards, you’d think we were writing our dissertations. Together for four days -- in classrooms, over meals, during interviews, in our deliberations -- we grew very close, the way I once became fast friends with strangers lounging on the beach over a long weekend in the Caribbean, but this time without the surf and palm trees. That feeling of closeness bound us together, not only on this visit but on all my previous team evaluations.

Months before our visit, a thick packet of brochures, documents and reports arrived on my office desk. Inside, the principal item was the university’s self-study report, a dense, 174-page, spiral-bound book, wrapped in a glossy, clear plastic cover, adorned with a montage of color photos of campus. One showed a young woman wearing blue rubber gloves, performing an experiment under a lab hood. Another depicted a romantic, snow-covered scene, framed by an antique iron gate; a very dignified classic college bell tower stood under a moonlit sky in the distance. Placed in the center of the cover were the school’s shield and logo. A digital version arrived separately by email.

During my visit to campus, I overheard a faculty member say that the self-study took the school three years to compile. Together with the regional commission’s standards for accreditation and requirements of affiliation, the self-study formed the basis of our team’s on-campus evaluation. Representing the institution’s own assessment of its programs and services, positive and problematic -- a unique higher education intellectual exercise, not performed anywhere outside the U.S. -- it focused especially on student learning and achievement, a relatively recent emphasis in response to public criticism that higher education is failing to educate its students effectively.

Flipping through the report, I came upon dozens of single-spaced pages, illustrated with colorful charts and graphs. One showed a series of stacked boxes calling out the school’s “aspiration,” “strategic priorities” and other key goals. In an email sent soon after the document arrived, we were asked to read the report carefully, making notes in the margins about themes we didn’t understand or items that concerned us. Like facing a mirror, the self-study is a close-up. Looking outside the frame, the team has a wider view.

We were then asked to propose names of faculty, staff or heads of particular academic departments or services -- say, the director of athletics or, in my specialty, online learning -- and to flesh out the text with questions we would raise during interviews. We were also encouraged to dive deeper into the data, requesting additional evidence to get a better feel for what the text may not have fully illuminated. For example, I asked for data on online enrollment, retention and graduation rates.

On the evening before we were to meet with assembled university faculty and staff, we were asked to draft reports on what we had learned from the self-study, deciding what we thought before we ran the gauntlet of interviews with faculty, staff and students. In advance of engaging with the university community, we were to set down what we felt, what we needed to note and what we might conclude: which things could be applauded as significant accomplishments, which matters could be assigned as recommendations or suggestions, and which features must be attended to as requirements.

Interviews tested our initial impressions against what we learned on campus. Among other groups, I participated in Q&As with trustees, medical school officials, faculty and students. Colleagues, not detectives, we weren’t out to grill them but to help guide them. The sessions with students were the most rewarding and exciting.

To our surprise, we discovered that the conclusions reached in the self-study lacked a full-throated acknowledgment of the school’s impressive successes. We also uncovered blind spots we needed to call to their attention.

In our deliberations over what we might conclude, our team shifted between judging the school’s past performance on the one hand and recognizing on the other that teaching and learning and associated support services are constantly emerging, like headlights beaming out of a tunnel. In the end, we withheld certainty in favor of offering suggestions for improvement. Wisdom won out over discipline.

“Accreditation teams are being asked to make an argument that assures academic peers and the public about current and near-term promise,” remarked Daniel J. Royer in a recent paper. “They are not being asked to give an award or make a judgment related to past achievement or failures.”

Unhappiness with the state of American higher education -- poor student learning, rising college costs, serious student loan indebtedness and lack of work-force preparation, among other troubles -- often leads observers on the right and left to propose alternatives, some so severe they call for shutting down regional accreditors in favor of imposing state or federal rules, moving toward increased bureaucratization and compliance, snuffing out the democratic spirit that animates our system.

Unwisely, critics blame the blameless, pointing a finger at regional accreditation, rather than recognizing serious social dysfunction outside the gates of the university -- economic inequality and racism -- that deeply trouble many of our vulnerable students. Disturbingly, our team learned about homeless students going hungry and how the institution struggles to find ways to care for them.

Dating back to the late nineteenth century, regional accreditation is an uncommon practice: the U.S. is the only country in the world that engages institutions in their own scrutiny. In Europe, Asia and elsewhere, ministries of education and similar government agencies are solely responsible. Some call the American way an exercise in “deliberative” democracy -- an idea that reaches as far back as Aristotle -- contending that scholars, together with their peers, are the most competent judges of academic quality.

* * *

On our final morning -- ready for departure, our suitcases stacked in the corner of a large assembly hall -- our team chair stood at a lectern facing 100 or so senior staff and faculty. The president of the university sat solemnly in the front row. In a relaxed and friendly talk that lasted no more than five or 10 minutes, our chair smiled, announcing that everything was just fine. Our report would happily conclude that the school had fulfilled the requirements of regional accreditation. Complimenting the assembled on having done a fine job -- so good, in fact, that our report praised the school for achieving five major accomplishments -- he noted that our team also offered a number of recommendations and suggestions, which they should take in the collegial spirit in which they were given.

You could feel the tension easing out of the room, like a puffed-up cushion deflating as you take your seat.

Later, when the faculty and staff get to dive into our report, after the regional commission approves it, they will find not just a handful but dozens of proposals for improvement -- some responding respectfully to their needs, others they would do well to take very seriously. Like good friends, we didn’t tell them only what they wanted to hear; we also told them things that had to be said. Many of our recommendations supported changes that they had insightfully and revealingly proposed themselves in their self-study -- a confident result of deliberative academic democracy.

At its best, the American style of accreditation, while recognizing the government’s interest in it, does not act as a police force, demanding compliance. Instead, regional accreditation, just like the members of our team, enters into a dialogue with faculty and staff in a collaborative effort to raise the bar of American higher education.

Robert Ubell is vice dean emeritus of online learning at NYU’s Tandon School of Engineering and author of the collection Going Online: Perspectives on Digital Learning.


Info-rich course planning apps may lower grades

New findings: college students actually perform worse with access to digital course-planning platforms that show how previous students performed.

Kuh and Kinzie respond to essay questioning 'high-impact' practices (opinion)

The phrase “high-impact practice,” or HIP, found its way into the higher education lexicon more than a decade ago. The words signal the unusually positive benefits that accrue to students who participate in such an educational practice, including enhanced engagement in a variety of educationally purposeful tasks; gains in deep, integrative learning; salutary effects for students from historically underserved populations (that is, students get a boost in their performance); and higher persistence and graduation rates.

Most of the individual activities that appear on the HIPs list promulgated by the Association of American Colleges and Universities are familiar to faculty and staff members, as almost all of the HIPs have been available on most college campuses in one form or another for decades.

Some -- such as study abroad -- are considered transformative and life changing, according to testimonials by those fortunate enough to have done them. Multiple studies of service learning over the past quarter century have yielded empirical evidence of its positive effects on desired outcomes. Such courses are designed to give students meaningful community service experiences that are integrated with instruction, inducing them to apply what they are learning and to reflect on what they have learned and how they performed in messy, unscripted situations.

And other HIPs, such as learning communities and internships, long have had enthusiastic champions. Indeed, there was enough evidence in 1999 to persuade the design team that created the first iteration of the National Survey of Student Engagement to include many of the 11 high-impact practices on its questionnaire, asking students in their first and last years of college, “Did you participate in this?”

In 2006, a systematic analysis of several years of NSSE data showed that students who reported doing one or more of these practices benefited in various desired ways. In fact, the differences between those who did an HIP and those who did not were so large that we reanalyzed the data to be sure the results were accurate. The same pattern of results advantaging students participating in HIPs over their peers emerged in subsequent analyses.

A few years later, Ashley Finley and Tia McNair affirmed that historically underserved students benefited significantly from engaging in HIPs, and that participating in multiple HIPs had cumulative, accentuating effects.

And then California State University Northridge reported that its Latino students who did just one HIP were about 10 percent more likely to earn a baccalaureate degree in six years than their counterparts who did not. The cumulative effects were also evident, for Latino and other students.

These promising reports and numerous others from individual campuses, along with a growing body of literature on service learning, college writing and undergraduate research, and additional research by AAC&U, NSSE and others, propelled HIPs into something of a national juggernaut.

Work on HIPs is featured at regional and national meetings of various associations. And the National Association of System Heads partnered with California State University Dominguez Hills to sponsor the first national convening of the HIPs in the States initiative.

So, imagine the surprise and perhaps dismay of enthusiasts of high-impact practices who saw the recent lead story in Inside Higher Ed, “Maybe Not So ‘High Impact’?”

What? HIPs don’t matter to graduation rates at public universities? Apparently so, according to the results of a study published in the well-respected Journal of Higher Education.

As with many research studies, one could quibble with the quality of the data or the analytical methods used, and some of these challenges apply to this paper.

What is worth pondering is the study’s animating purpose: Is the mere availability of HIPs at public universities related to institutional graduation rates? There are many reasons why expecting positive findings from such an inquiry is unrealistic; central among them is that a student’s precollege academic preparation and family socioeconomic status account for the largest share of explained variance when predicting completion.

A more compelling and actionable approach to determining the value of participating in an HIP is to ask whether the experience is linked to the desired outcomes, performance and behavior (including persistence and graduation) of students who have actually done one or more HIPs, compared with those of their peers who have not had such experiences.

The study featured in the article relied on aggregated institution-level data that do not link individual students, their HIP participation and whether they graduated. The study used two HIP measures, one capturing the extent of specific HIP offerings and one summing that measure across HIPs. This approach overlooks implementation fidelity and masks the accentuating effects of multiple HIPs on individual student outcomes.

These limitations could not be overcome by the researchers who used a small arsenal of standard statistical approaches to analyze the information accessible to them.
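To see why availability-based analyses can come up empty, consider a minimal, hypothetical simulation -- a sketch with invented numbers, not the published study’s data or methods. It illustrates how a genuine student-level benefit from HIP participation can be statistically invisible when the analysis relates only institution-level HIP availability to institution-level graduation rates:

```python
# Hypothetical simulation: every parameter below is invented for illustration.
import random

random.seed(0)

inst_points = []                 # (HIPs offered, institutional graduation rate)
part_grads = nonpart_grads = 0   # student-level tallies
part_n = nonpart_n = 0

for _ in range(200):                      # 200 simulated institutions
    prep = random.uniform(0.2, 0.8)       # precollege preparation drives completion
    offered = random.randint(1, 11)       # HIP availability, independent of prep
    grads, n = 0, 500
    for _ in range(n):
        did_hip = random.random() < 0.3   # participation is far from universal
        graduated = random.random() < prep + (0.10 if did_hip else 0.0)
        grads += graduated
        if did_hip:
            part_grads += graduated; part_n += 1
        else:
            nonpart_grads += graduated; nonpart_n += 1
    inst_points.append((offered, grads / n))

# A student-level comparison recovers the real +10-point participation effect...
print(f"participants: {part_grads / part_n:.2f}, "
      f"non-participants: {nonpart_grads / nonpart_n:.2f}")

# ...while the institution-level correlation between availability and
# graduation rate is essentially zero.
n = len(inst_points)
mx = sum(x for x, _ in inst_points) / n
my = sum(y for _, y in inst_points) / n
sxy = sum((x - mx) * (y - my) for x, y in inst_points)
sxx = sum((x - mx) ** 2 for x, _ in inst_points)
syy = sum((y - my) ** 2 for _, y in inst_points)
print(f"availability vs. graduation rate: r = {sxy / (sxx * syy) ** 0.5:.2f}")
```

In this toy setup, precollege preparation dominates completion and availability says nothing about who actually participates, so the institution-level correlation hovers near zero even though participants graduate at visibly higher rates -- exactly the availability-versus-participation distinction drawn above.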

Indeed, the Inside Higher Ed article brings to the fore a most important but often overlooked consideration: simply offering and labeling an activity an HIP does not necessarily guarantee that students who participate in it will benefit in the ways much of the extant literature claims.

Over the past few years, we’ve emphasized that implementation quality is critical in terms of realizing the benefits of HIP participation. This is not a surprise as the caveat applies to every effort a college or university makes to engage students in meaningful, relevant learning experiences inside and outside the classroom, on and off the campus.

Campus practitioners know firsthand that some service-learning courses and internships are better designed and implemented than others. This holds for every type of HIP and just about any other college experience that matters to student learning and personal development.

For example, soon-to-be-published NSSE data about the effects of learning communities on engagement and self-reported gains show great variation between institutions. So institutional context and implementation quality matter.

Scaling HIPs effectively through curricular or graduation requirements is one way to induce widespread participation. Indiana University-Purdue University at Indianapolis uses this approach in its RISE initiative to broaden access to a quality educational experience by expecting students to participate in research, international experiences, service learning and experiential learning.

To this end, IUPUI faculty and administrators have thoughtfully crafted experiences, supported faculty development and studied the effects of RISE experiences. Requiring student participation in one or more HIPs should be an intentional, evidence-based decision and tailored to the institutional context and its students. Simply increasing the number of available HIPs is not an effective approach to scaling.

There is much more to learn about HIPs and other college experiences that could or should have similar positive effects. Especially welcome are efforts to confirm the conditions that are associated with the depth and range of desired effects.

We’ve described many of these features elsewhere. But it is possible that some of these features, such as peer interaction, are more or less important to a certain HIP, such as an internship or other type of field experience. And while the positive effects of HIPs hold for all students when aggregated at the national level, perhaps certain students will benefit more from particular HIPs compared with others in different campus contexts.

This emphasizes the importance of being equity minded when scaling HIP participation. Which students are experiencing HIPs, and who is left out? Are underrepresented students having high-quality experiences? Access to HIPs without equitable participation is a hollow achievement.

The most recent National Institute for Learning Outcomes Assessment survey of provosts found that hundreds of colleges and universities are working to scale currently existing HIPs and add others so more students can participate in an HIP.

We owe it to our students to ensure HIPs and other innovations intended to enhance the quality of undergraduate education are implemented equitably and with fidelity so that students realize the promised benefits.

George D. Kuh is Chancellor’s Professor Emeritus at Indiana University and senior scholar at the National Institute for Learning Outcomes Assessment. Jillian Kinzie is associate director of the Indiana University Center for Postsecondary Research and senior scholar at the National Institute for Learning Outcomes Assessment.


Assessment isn't about bureaucracy but about teaching and learning (opinion)

The text came in when my cell signal returned, just as our car crossed over the eastern slope of the Allegheny Mountains. My mother’s message read simply: “On the front page of the opinion section … below the fold and half of page 6. Mailing to you Monday.” My husband, daughter and I were on our way home from a weekend in the mountains of West Virginia. My mother, 11 hours north of us in Boston, was enjoying her Sunday routine, which always involved the print version of The New York Times. I had told her to keep an eye out for an opinion piece that had been posted electronically at the start of the weekend, one that had those of us in higher education buzzing since it hit.

This was a first for me: my vocation was being called on the carpet in the Times by a fellow academic. My chosen profession, higher education assessment, was reduced to pithy descriptions like “bureaucratic behemoth” and “supposedly data-driven” and “expensive administrative bloat.” I had to laugh at the sudden fame bestowed upon my rather inside-baseball profession. In a family full of cops and lawyers, I often struggled to say, precisely and concisely, what I did for a living. In contrast, my brother gets to tell people that he is the real-life inspiration for Agent Callen on NCIS: Los Angeles. My story? Much less exciting.

For years my shorthand answer to the “And what do you do for a living?” question was that I helped colleges and universities make sure that they fulfill the promise of their brochures to students and parents. According to Molly Worthen, in her piece entitled “The Misguided Drive to Measure ‘Learning Outcomes,’” I was at best a well-intentioned if unwitting collaborator with for-profit technology companies, reactionary academic leadership and demanding employers, against which she -- an assistant professor of history at the University of North Carolina at Chapel Hill -- and others like her stood ready to defend the life of the mind. While my colleagues on the assessment professionals’ email list dissected her argument line by line, my heart went out to the individuals who work in assessment at Chapel Hill. Though I do not know them personally, it is not a large stretch of the imagination to think that the coming workweek would be a difficult one.

Worthen’s op-ed covered an ambitious amount of territory under the guise of addressing measuring student learning: perceived cracks in the regional accreditation system, states’ disinvestment in public education, larger societal ills thwarting the ability of institutions of higher learning to educate, and even former education secretary Margaret Spellings herself -- providing perhaps unintended proof of the beautiful, important significance and continuing power of academic freedom, considering Spellings’s current position as president of the University of North Carolina system, and therefore Worthen’s boss. I will leave it to others in a better position than I am to address what I see as Worthen’s false dichotomy between the life of the mind and the dignity of work, the deficit-model positioning of socioeconomic status in her argument, and issues of institutional inequity, and focus exclusively on her conceptions of assessing student learning.

As someone who spent the better part of the last 15 years working on campuses in assessment and evaluation, I know firsthand the joys and challenges inherent to that role. I fully recognize that there are places where “assessment” remains a dirty word and faculty expertise is not included as part of the process.

That said, such examples should not define assessment writ large. In my current job as senior director for research and assessment at the Association of American Colleges and Universities, I have the unique privilege of working with faculty and assessment professionals (individuals who, frankly, are often one and the same) across the spectrum of institutions, from the flagship state institutions and elite private colleges and universities that dominate any number of prestige rankings to the community colleges, four-year regional comprehensives and less-than-elite regional private institutions, a.k.a. the rest of higher education, which happen to be the institutions that actually educate the majority of today’s students.

And what do I work with them to do? Precisely the opposite of the kind of assessment described by Worthen. Well before I joined the organization, at a time when simplistic quantification of learning was the coin of the realm, AAC&U championed the role of faculty expertise in teaching, learning and assessment and created an alternative approach to standardized tests, the VALUE (Valid Assessment of Learning in Undergraduate Education) rubrics. For the uninitiated, rubrics are simply an explicit articulation of (1) faculty expectations of students vis-à-vis their learning, as well as (2) descriptions of what student work looks like at progressively higher levels of performance.
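As a concrete illustration of that two-part definition, here is a minimal sketch of a rubric’s shape in code. It follows the four performance levels the published VALUE rubrics use (Capstone 4; Milestones 3 and 2; Benchmark 1), but the criterion name and descriptor text are invented for illustration, not quoted from any actual AAC&U rubric:

```python
# Hypothetical rubric fragment -- criterion and descriptors are invented;
# only the four-level Capstone/Milestone/Benchmark scheme mirrors VALUE.
rubric = {
    "outcome": "Critical Thinking",
    "criteria": {
        "Use of evidence": {
            4: "Interprets sources thoroughly enough to build a comprehensive analysis.",  # Capstone
            3: "Interprets sources enough to support a coherent analysis.",                # Milestone
            2: "Uses sources but mostly summarizes rather than analyzes.",                 # Milestone
            1: "Mentions sources without interpretation or evaluation.",                   # Benchmark
        },
    },
}

def describe(criterion: str, level: int) -> str:
    """Return the faculty expectation for a criterion at a given performance level."""
    return rubric["criteria"][criterion][level]

print(describe("Use of evidence", 3))
```

Reading across a criterion shows what faculty expect of students; reading up the levels shows what progressively stronger work looks like -- precisely the two articulations described above.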

And how did AAC&U create the VALUE rubrics? By engaging interdisciplinary teams of faculty members from across the country to author the rubrics, and then making them available to everyone, for free, via a simple Word or PDF download from our website.

The rubrics themselves are now almost a decade old and have proven to be an essential resource locally to campuses as well as the foundation of national-level experiments in assessing student learning. Philosophically, pedagogically and methodologically, VALUE is designed to afford faculty the opportunity to flex their creative muscles and capture evidence that the curriculum they own and the courses they teach do indeed promote students’ development of the very learning outcomes that are essential to a liberal, and liberating, education.

Far from being reductionist tools, the VALUE rubrics, research has demonstrated, empower faculty members to translate the learning that takes place when a student completes an assignment the faculty member crafted -- one that aligns with and promotes disciplinary knowledge and, at its best, not only gives students the requisite skills for the single assignment but also advances the ultimate purpose of college teaching: long-term retention of knowledge, skills and abilities, and the capacity to transfer those skills to a completely new or novel situation. Translation: no “one-off,” single exam question should ever “count” as a proxy for student learning along complex constructs like critical thinking. The educational psychologist in me rails against such simplistic conceptions of learning, and our approaches to assessment must do so as well.

But the elephant in the room is this: doing so requires that faculty be all in when it comes to undergraduate teaching. Threaded throughout Worthen’s piece is a vision of students coming to our campuses (if not her own) laden with baggage, whose deficits, when coupled with unreasonable demands from callous lawmakers or corrupt capitalists, doom them to failure. Intentional or not, Worthen’s opposition to assessing student learning reads as but a strawman for a much more harmful argument: protecting the life of the mind by writing off entire segments of our society from the intellectual and, yes, economically transformative power of higher education.

It is time for faculty to fully adopt the mantle of educator and to demand of themselves the same rigorous standards for ascertaining student learning as they apply to establishing the credibility of their own disciplinary research. And yet …

Worthen’s perceptions do not come out of a vacuum. Whether they stem from her own lived experience or from the powerful anecdote shared by her colleague at Arkansas State University, those of us who represent the field of assessment must not dismiss her concerns out of hand. As someone representing a national organization, I am now in a position to say certain things to faculty and administrators that I would not necessarily have been empowered to say when working on a campus. Of late, that includes truth telling to members of my own tribe.

Last fall, I was invited to give the closing keynote for the 30th anniversary of the Virginia Assessment Group, the state’s association for assessment professionals, for which I twice served as president when I was still working at Virginia Tech. In my talk, I challenged my Virginia friends -- all of whom care deeply about student learning at the individual level and a high-quality educational experience at the institutional level -- to look in the mirror and have an honest conversation with the person staring back at them.

My thinking on this has evolved and sharpened over the past few months -- months that included attending at least one regional accreditation meeting as well as AAC&U’s annual meeting, aptly focused on whether or not higher education can recapture the elusive American dream. With all of this in mind, I say to my fellow travelers working to measure student learning:

  • If your definition of quality is methodologically reductionist, then assessment is not for you.
  • If your conception of learning does not encompass the inherent complexities of making meaning within and integrating across disciplines, then assessment is not for you.
  • If you see black and white when the world of the mind radiates color and nuance, assessment is not for you.
  • If your sole claim to fame is memorization of accreditation standards, then assessment is not for you.
  • If you cannot reflect on your own path as a learner, then assessment is not for you.
  • If you cannot stretch to be what your faculty, institutions and students need you to be, then assessment is not for you.
  • If you cannot speak truth to power, including your provost and president, then assessment is not for you.
  • If you cannot promote collaborative processes on your campus, have no tolerance for ambiguity or cannot listen and really hear the concerns of the likes of Worthen, then assessment is not for you.

It is incumbent upon us -- those of us with responsibility for measuring and then sharing what we know about student learning on our campuses -- to belie the easy stereotype of the bureaucratic bean counter, and to avail ourselves of every opportunity to center our work within the teaching enterprise, just as it is our responsibility to counter any and all strawman arguments about what it is that we value.

As we descended in elevation to our home in the Blue Ridge, despite the tone of Worthen’s piece, I found myself excited that the assessment narrative has evolved to its current state, and looking forward to continuing the work into the future.

Kate Drezek McConnell is senior director for research and assessment at the Association of American Colleges and Universities.


Skills disconnected from academic programs shouldn't matter to colleges (opinion)

Skills do not matter.

Let me say that again. On their own, skills do not matter.

This is worth saying in response to Thursday’s Inside Higher Ed story stating that the American Council on Education will “team up with the digital credential provider Credly to help people put a value on skills they have learned outside college courses.” The initiative, funded by the Lumina Foundation, is, in the words of ACE’s Ted Mitchell, “about creating a new language for the labor market” in which skills-based competencies are valued and credited.

It’s wonderful and important for employers to develop their employees’ skills, but colleges and universities need not take notice, because these efforts are irrelevant to collegiate education’s goals and purposes.

One way to think about why skills do not matter is by analogizing to other kinds of education. Imagine your employer provided you with a manual dexterity class in which you learned to move your fingers about effectively. Now imagine that you came to a guitar teacher and asked for credit. Certainly, guitar players need manual dexterity, but the guitar teacher would wonder why you deserved credit. Learning dexterity absent actually playing guitar is not particularly valuable. It certainly does not mean that one can play guitar, nor that one has understood guitar or embraced the purpose of studying it. It’s a meaningless skill from the perspective of a guitar teacher.

The same can be said of a karate teacher. Imagine that your employer had taught you to kick but had never introduced you to the specifics of karate. Do you have a “karate competency” because karate also requires kicking? Of course not.

Instead, a good karate instructor will point out that kicking abstracted from the context of learning karate is not particularly relevant to the task at hand. It will not teach one how to kick within karate, nor will it instill the values and discipline that a karate instructor intends to develop in her or his students.

The same is true for college professors committed to ensuring that students graduate with a liberal education. Certainly, being successful in the arts and sciences requires high-level cognitive and academic skills. But those skills are meaningless unless they are learned within and devoted to the purposes of liberal education.

In short, offering college credit for disembodied skills is as much a mistake as a guitar instructor offering credit for manual dexterity.

How, then, should colleges and universities understand skills? For starters, they should always see them in relation to the specific ends of the programs that they offer. This is as true for vocational as for liberal education. The skills of a carpenter or a nurse or a car mechanic are not isolated but are interconnected and oriented to the end of wood construction, providing health care or repairing engines, just as a guitar teacher’s goal is to impart knowledge and techniques in relation to playing the guitar.

For four-year colleges and universities, on the other hand, the skills that matter should be related to their primary mission of offering every undergraduate a liberal education. At such institutions, academic skills should be developed in the context of, for example, reading and writing about literature or history or engaging in scientific inquiry.

A liberal education is not just any kind of education. Like carpentry, nursing or guitar playing, it has content. It seeks to cultivate specific virtues through specific practices. For example, the goal of a historian is not to teach abstract skills (such as parsing evidence or writing papers) but to help students engage in intellectual inquiry about the past. This means that skills are developed within the context of reading and writing history. The end is historical perspective, and the skills are means to that end. From the perspective of a historian, it matters little whether someone has good skills unless they also have learned to value history and to develop historical insight.

In addition, skills, from the perspective of four-year colleges and universities, are meaningless outside studying specific subject matter. If colleges and universities want students to care about and think with the arts and sciences, students need to spend their time studying the arts and sciences.

Indeed, scholars of teaching and learning have made clear that critical thinking skills cannot be abstracted from the material that one studies. As James Lang writes in his book Small Teaching: Everyday Lessons From the Science of Learning (2016), “Knowledge is foundational: we won’t have the structures in place to do deep thinking if we haven’t spent time mastering a body of knowledge related to that thinking.” That is because the ability to ask sophisticated questions and to evaluate potential answers is premised on what one already knows, not just on skills abstracted from context.

Thus, if the goal of four-year college education is liberal education, we need students to study subject matter in the humanities, social sciences and natural sciences. Students need to engage seriously with history and politics, or economics and physics, before they will be able to think critically about history or politics or economics or physics. This takes time. Assessing skills cannot, and certainly should not, be done outside the context of the subjects one ought to study in college.

This is not to deny that employers should invest more resources in developing their employees’ skills, nor to suggest that those skills don’t matter within the context of specific employment markets. There are many reasons to celebrate public and private efforts to develop Americans’ work-force skills, and doing so can benefit both employers and individuals.

It simply has little bearing on the kinds of things for which one should earn college credit. Employers’ goals are not to graduate liberally educated adults, but to generate human capital. Generating human capital may also be a byproduct of a good liberal education, but it is certainly not the goal of it.

In fact, a good liberal education asks students to put aside, even if just for a while, their pecuniary goals in order to experience the public and personal value of gaining insight into the world by studying the arts and sciences. This is the end, the purpose, the reason for a college education. Whatever other purposes students might bring to their education, and whatever valuable byproducts emerge as a result of their time in college, colleges and universities should remain true to their academic mission.

Johann N. Neem is a professor of history at Western Washington University. He is the author of Democracy’s Schools: The Rise of Public Education in America (2017). The ideas in this essay draw from "What Is College For?" in Colleges at the Crossroads: Taking Sides on Contested Issues (2018).

