Assessment

Career services and faculty must work together to help humanities students (essay)

After years of being on the back foot, the humanities have launched a counterattack. A shelf of new books, including Scott Hartley’s The Fuzzy and the Techie: Why the Liberal Arts Will Rule the Digital World (Houghton Mifflin, 2017) and Gary Saul Morson and Morton Schapiro’s Cents and Sensibility: What Economics Can Learn From the Humanities (Princeton, 2017), attests to the usefulness of the humanities for the 21st-century job market. Their fresh message makes the old creed that the humanities are a “mistake” or not “relevant” seem out of touch. Surveying these works in the July-August 2017 issue of Harvard Business Review, J. M. Olejarz dubs this countermovement “the revenge of the film, history and philosophy nerds.”

But what exactly makes the humanities useful? Many of the new studies attest to the significance of the humanities by drawing on biographies. How could the humanities not be useful if countless CEOs in Silicon Valley, as Hartley points out, have humanities degrees? Stewart Butterfield, Slack, philosophy; Jack Ma, Alibaba, English; Susan Wojcicki, YouTube, history and literature; Brian Chesky, Airbnb, fine arts. The list goes on.

But where we go from here requires the hard work of identifying just what common denominator is being learned in the humanities and how to parlay that knowledge and those skills into professional success. How do you apply Virginia Woolf to write better code, or marshal your skills conjugating Latin verbs to execute an IPO?

At the University of North Carolina Greensboro, we have taken the next step of improving career outcomes for our students in the humanities by implementing the Liberal Arts Advantage, a strategy that articulates the value of the humanities to students, their parents and the community.

Directors of career development are realizing that they can’t do this work alone. They must engage faculty as their partners.

Jeremy Podany, founder, CEO, and senior consultant of the Career Leadership Collective, a global solutions group and network of innovators inside and near to university career services, says that helping faculty teach career development is part of the job. “I actually think we need to go to the faculty and say, ‘Let me teach you how to have a great career conversation,’” said Podany. The relationship between faculty members and career development offices -- experts in the humanities and careers -- is essential to preparing students for the job market.

Why? Because the central issue in realizing a long-term strategy for student career development is translation -- that is, how students translate the skills they learn in the classroom into workplace success. This is particularly true of the metacognitive skills that professors in the humanities can, and should, help cultivate in their students.

On campuses nationwide, career services teams are moving to the center -- physically and educationally. Many directors now report to advancement and alumni offices. In their widely read manifesto on the future of career development, Christine Y. Cruzvergara and Farouk Dey identified elevating career services and customized networks as necessary changes for improving career outcomes for students. The money is following. One of the biggest donations of 2016 was a $25 million gift for humanities-oriented St. John’s College, partially earmarked for career services.

Missing in the recent reports, however, is the change that is most needed for institutions to integrate career development into the college experience: faculty involvement. Without collaboration between the faculty and career services, these developments can only have a tangential impact.

Want to meaningfully improve career outcomes for students? Get faculty members on board.

Troy Markowitz and Ryan Craig wrote recently that students aren’t suffering from a skills gap but an “awareness gap” that leads to underemployment, debt and diminished career options. This problem is starkest in the humanities, where many students struggle to articulate their skills to employers. Our strategy focuses on the skills humanities students learn and helps them translate those skills for career impact. At UNCG we are building a program that helps students take the critical steps toward identifying skills, pursuing them and translating them into professional success.

To make the humanities more accessible, we must show our students the path from their studies to meaningful work. Humanities faculty members are often resistant to integrating career development into their courses, but that resistance is mistaken. A revelatory recent article in The Atlantic that showed how first-generation students were finding “personal and professional fulfillment in the humanities and social sciences” underscores the power of this strategy. Of course, many students do discover the inherent value of the humanities, yet not all students have that luxury. We must emphasize competencies as much as content, and help the majority of students translate and apply them across the education-to-employment divide.

Humanities education is a long-term investment in future leaders. To prepare graduates for the challenges of the job market, our departments are developing three skill sets that meet employers’ needs: critical thinking, communication and collaboration. We call these the three C’s. Together with a task force of faculty members from across the college, we organized UNCG’s first professional development day for students in the humanities. Attended by more than 250 students, the event included breakout sessions co-led by representatives of the faculty and career services.

Our keynote speaker, Laurin Titus, senior vice president in consumer marketing at Bank of America, enumerated how critical thinking, communication and collaboration contribute to success at the entry level and in senior leadership. In breakout sessions that followed, career services staff worked side by side with instructors from humanities departments to offer examples of exercises and assignments from their classes and how they could contribute to career development. In exit surveys, 99 percent of students surveyed indicated that they would use what they learned in the sessions.

This month we are leveraging these workshops at UNCG to pilot online career development modules. Unlike stand-alone career courses, these modules will be embedded by faculty members in their courses and integrated into their curricula. This approach brings translation to the moment of skill creation. It is only possible through close collaboration between faculty and career development offices.

Many faculty members recognize their value in contributing to the three C’s, as the Partnership for 21st Century Learning’s “Framework for 21st Century Learning” suggests. They know critical thinking when they see it. Presenting the humanities as an obstacle to career advancement understandably puts faculty on the defensive. Yet faculty should be willing to mine their curricula for assignments and stories that bring translatable skills to the fore.

The fact is that students in humanities courses are already learning many of the skills and competencies the professional workforce requires. The awareness gap has distorted students’ perception of what employers want and obscured the value of the humanities. We as educators can correct this misconception by making the skills already learned explicit. We can close the awareness gap and make the humanities truly accessible.

To do this, career services staff must reimagine themselves as educators, and professors must embrace their role in professional development. Only translation will show how what is valuable can also be useful.

Emily J. Levine is an associate professor of history, and Nicole Hall is director of career services, at the University of North Carolina at Greensboro.


Learning outcomes that help students translate classroom learning into life tools (essay)

Just the other day, a friend of mine, a superb cultural anthropology professor, was railing against her university’s imposition of a requirement that every faculty member provide “learning outcomes” for their courses. It was the end of the semester, she had worked hard to provide a meaningful class for her students, and it felt cynical to then tack on a bunch of meaningless outcomes. Who hasn’t felt anger at this increasingly frequent, seemingly cynical tendency of institutions to reduce the complexity of learning to metrics, productivity and outcomes?

That was certainly my response when, some years ago now, my own institution debated requiring faculty members to include such outcomes on their syllabus. I protested. Then I happened to be keynoting a conference that included a workshop for beginning faculty members, intended to help them design a syllabus, including identifying meaningful learning outcomes. I asked if I, a senior faculty member, could attend.

One of the young professors leading the workshop read out loud from a student course evaluation where the student noted that, until her professor had included learning outcomes on a syllabus, she had no idea why she was taking a given class or why her university thought this course (but not some other) should be required for general education distribution or for a major. She compared college to a child asking “Why?” and the parent responding, “Because I told you so.”

You don’t need to go very deep in the pedagogical research to know that the key to successful learning is for the learner to be aware of what the given knowledge will add to their goals and their life. As professors, departments and institutions, we tend to do a poor job connecting the lofty language of our “mission statements” to our actual practices: what we require, how we organize knowledge, how we facilitate learning and what we hope our students will gain from what they learn -- not just as job preparation (a shortsighted goal in a changing world) but also as preparation for a complex world where nothing is stable. We do a poor job helping students translate the specific content or knowledge gained in our classrooms into a tool (informational, conceptual, methodological, epistemological or affective) that will help them thrive in life. If higher education doesn’t do that -- if it isn’t geared to helping students succeed beyond the final exam and after graduation -- then why bother?

That workshop for beginning instructors helped me understand how I could turn learning outcomes from a cynical exercise into a key component of institutional change, starting in the realm over which I and other faculty members have control: how we run our classrooms. Borrowing from the long tradition of progressive education that extends from John Dewey and Paulo Freire to bell hooks and Carol Dweck, I challenge my students to take the lead in their learning. In the case of learning outcomes, I now often leave that section blank on the syllabus and use part of the first or second class meeting to have students challenge themselves, thinking up the most aspirational, world-changing outcomes they can imagine.

I do this with a simple, traditional think-pair-share exercise. First, I ask students to take 90 seconds to jot out responses to an open-ended question: “What are the three most important things you hope to take away from this class and into the rest of your life?” That’s the “think” part of the exercise. I then give them another 90 seconds to turn to “pair” with the student nearest them, introduce themselves, and take turns, with one person reading her three things and the other listening. This allows everyone a chance to express an original opinion without interruption or critique.

Once they have heard one another, I ask them to then work together to choose or craft one item that they will “share” with the class. In a small group, I have them read those out loud. In a large one, they might add them to a Google Doc. I once did a think-pair-share with 6,000 international teachers in the Philadelphia 76ers arena. I try to do one TPS (as it’s known in the pedagogy business) every class period in every class.

It is my conviction that we need thoughtful, active collective engagement and participation -- by both students and faculty members -- to transform not just our classrooms but all of higher education.  We don’t need more edicts from on high or technocratic solutions, but we desperately need engaged, participatory rethinking about what we really want for and from our students -- and for and from ourselves and our institutions.

Aspirational Learning Outcomes

Here are 10 of my favorite learning outcomes, including some used by various other students and colleagues over the last several years. 

“In this course I hope that we will . . .”

  1. Learn to respect intellectual life and education as a precious gift that no one can steal from us.

  2. Be challenged by a scholar who maintains the highest standards of her profession to succeed educationally to our own highest standards in college and beyond.

  3. Learn to absorb and transfer knowledge and wisdom from lectures, readings and class discussion into our own cogent thinking and writing.

  4. Form an appreciation of the importance of critical and creative thinking and problem-solving and use these to guide my future life and work.

  5. Gain the highest respect for intellectual rigor, including self-respect.

  6. Fight for the dignity and justice of all peoples, regardless of race, religion, national background, gender, ability or sexuality. We’re all learning together.

  7. Come to understand how everyday incidents -- the small victories as well as the constant abrasions of life and politics -- are grounded in histories and cultural practices, including those of racism or other inherited and structural forms of discrimination that are sometimes invisible to those who perpetuate them.

  8. Become a lifelong advocate for public higher education that can change lives and improve society.

  9. Learn to masterfully control chaos whenever we are faced with a complex web of ideas and results.

  10. Stay alert to surprise. Many times -- in class and out -- the best learning outcomes are the ones we never expected.

What are your aspirations for learning, in the classroom and out?  What’s missing here? If you are inclined, I hope you will use the “Comments” section below to add your own aspirations for learning.  Everybody learns when everybody is learning.

 

Cathy N. Davidson directs the Futures Initiative at the Graduate Center of the City University of New York and is the author of The New Education: How to Revolutionize the University to Prepare Students for a World in Flux, being published next month by Basic Books.


A professor's invitation: Senator, please come visit my classroom (essay)

Dear Senator,

I’ll try to make you feel comfortable, if not totally inconspicuous. I assume, of course, there wouldn’t be a coterie of administrators anxious to produce a dog-and-pony show. And no cameras. Just a visit to an unvarnished college physics class, warts and all.

You’ll want to see for yourself the students who come totally unprepared for class, and the strategies and energy I (and by extension, my colleagues) use to pull them out of their IT-induced stupor and get them to focus on the chalkboard. You’ll also realize the unmatched effectiveness of an active classroom led by a concerned human being. You’ll understand why technology-based learning never captured the 20 million or so ordinary students who continue to fill our classrooms.

At class's end, you’ll watch as students come by to get a point cleared up -- or just exchange a few encouraging words. Then you’ll begin to see why students taught by adjuncts anxious to leave for their next assignment are being shortchanged.

Afterward, we might spend some time in my office, where I’ll let you leaf through a range of textbooks to see for yourself the dumbing down of college-level material, one outcome of the “increase the graduation rate at all costs” movement. You’ll hear my comment that colleges and universities aren’t vocational schools, and that the faculty -- the people legislators never get to see -- are interested in learning, not loan repayment or career success, as college outcomes.

Not that I blame anyone. You and your staff are inundated by calls to "improve" or change accreditation by having accreditors take quantitative student outcomes into account. You probably don’t know that even though we’ve been hearing the "outcome" story for more than 30 years, no one has yet found a quantitative outcome measure that is both reliable and valid.

If you have time, I would explain to you how the emphasis on numerical outcomes led directly to the Corinthian Colleges disaster, and I would also explain why the direct, hands-on peer review of traditional accreditation is unsurpassed at evaluating a school.

By the way, you’re listed as a supporter of legislation calling for innovative approaches to accreditation. Have you walked through how they would work? I have, and I’ll be happy to explain to your staff why these approaches would generate future Corinthians.

I would also lend a word of caution: students are not widgets. They are human beings who can suffer harm. If they are asked to participate in an innovative (there’s that word again!) approach to learning, shouldn’t they be warned that they might acquire less content and fewer skills, and experience delayed intellectual growth, under this new, untested scheme? Here’s an idea for legislation: Why not require informed consent from students who are about to enroll in an experimental, pilot or untested program?

I digress. While in my office, I’ll show you some graded exams and my grade book -- names covered, of course. You’ll be relieved to know that nobody earns college credit for "seat time." Splinters are all one gets from simply sitting in a classroom.

And a point of pride: you’ll look at the range of grades and you won’t be able to correlate race, sex or ethnicity with the A’s and the F’s. Everybody works for good grades. Or not.

We’ll get a chance to talk to students -- randomly selected -- in the hall, or downstairs in the lobby. You might want to ask them about the College Scorecard, and the basis on which they made their choice of colleges. I hope your new insight won’t cause you to want to cut funding for the Department of Education’s statistics efforts. They do marvelous work -- even though few people seem to care.

And as the pièce de résistance, you’ll meet other faculty members. Just don’t mention the effort to increase data collection now being proposed. Can I tell you how one person I know would react?

He would theatrically slap his forehead and probably say, “Of course! How stupid of me! More data is just what we need to solve the problems of higher education! We’ll successfully address the racial gap, the rising costs, the disengaged students and students who are hungry. Before we go off on another harebrained scheme, why not test any new hypothesis against the longitudinal data we own in virtually every state? Nothing ever came of 30 years of gathering data, and another dose will add nothing more than expense and headaches.”

I’m sure you’ll forgive him, because in your career you’ve met other exceedingly bright people who are no respecters of personages. But I hope you’ll listen to his words.

Your day will be full -- but at the end you will be able to address higher education from a position of direct knowledge, rather than secondhand from experts. You might want to tell those experts that they, too, would benefit immensely from a day or two in a college classroom, or on an accreditation visit.

You, your colleagues and your staff are always welcome to come by -- unannounced -- to visit, and to experience a reality so important to our nation.

Bernard Fryshman is a professor of physics at New York Institute of Technology.


California State University looks to end placement exams

Move is part of a goal to significantly raise graduation rates.

Nontenured faculty should not be assessed by student evaluations in this politically charged era (essay)

Now that more than 75 percent of the instructors teaching in higher education in the United States do not have tenure, it is important to think about how the current political climate might affect those vulnerable teachers. Although we should pay attention to how all faculty are being threatened, nontenured faculty are in an especially vulnerable position because they often lack any type of academic freedom or shared governance rights. In other words, they are a class without representation, and they usually can be let go at any time for any reason. That type of precarious employment, which is spreading all over the world to all types of occupations, creates a high level of professional insecurity and helps to feed the power of the growing managerial class.

In the case of higher education, we need to recognize that this new faculty majority often relies on getting high student evaluations in order to keep their jobs or earn pay increases. The emphasis on pleasing students not only can result in grade inflation and defensive teaching, but it also places the teacher in an impossible situation when dealing with political issues in a polarized environment. While some students want teachers to talk about political issues, many students will turn against an instructor who does not share their own ideological perspective. Sometimes that type of political disagreement is transformed in student evaluations into vague complaints about the teacher’s attitude or personality.

In this fraught cultural environment, practically everyone feels that they are being censored or silenced or ignored. For example, some of my conservative students have told me that they feel like they are the real minorities on campus, and even though Donald Trump won the U.S. presidency, they still think they cannot express their true opinions. On the other side, some of my self-identified progressive activist students believe that political correctness makes it hard to have an open discussion: from their perspective, since anything can be perceived as a microaggression, people tend to silence themselves. Moreover, the themes of political correctness, safe spaces, trigger warnings and free speech have become contentious issues on both the right and the left.

What I am describing is an educational environment where almost everyone is afraid to speak. The nontenured faculty members are fearful of losing their jobs, the conservative students see themselves as a censored minority and the progressive students are afraid of being called out for their privilege or lack of political correctness. Making matters worse is that students are often socialized by their large lecture classes to simply remain passive and silent.

It appears that we are facing a perfect storm in which free speech and real debate are no longer possible. One way of countering this culture is to stop relying on student evaluations to assess nontenured faculty. If we want teachers to promote open dialogue in their classes, they should not have to be afraid that they will lose their jobs for promoting the free exchange of ideas. Therefore, we need to rely more on the peer review of instruction, and we have to stop taking the easy way out. In short, we have to change how nontenured faculty members are evaluated.

Non-tenure-track faculty should be empowered to observe and review one another’s courses using established review criteria. It is also helpful to have experienced faculty with expertise in pedagogy involved in the peer-review process of teaching. By examining and discussing effective instructional methods, all faculty members can participate in improving the quality of education.

To protect free speech and open academic dialogue, it is also essential to realize that the majority of faculty members no longer have academic freedom or the right to vote in their departments and faculty senates. In order to change this undemocratic situation, tenured professors should understand that it is to their advantage to extend academic freedom and shared governance to all faculty members, regardless of their tenure status. If we do not work together to fight back against the current political climate, we will all suffer together.

Robert Samuels teaches writing at the University of California, Santa Barbara, and is the president of UC-AFT. His forthcoming book is The Politics of Writing Studies.


Three important questions to ask about credentialing (essay)

Few topics in higher education are getting more attention than credential innovation: making credentials digital, introducing new credential types and communicating more information about learning outcomes.

Credential innovation moves transcripts, certificates and diplomas beyond their traditional roles as accounting and verification records for transfer and graduate admissions, or as mechanisms for validating completion of a university’s degree program. For many institutions, a clear driver of innovating the form and function of their credentials is a belief that today’s transcripts do not communicate what employers care about.

There are two ways to think about this issue. It’s possible that credentials don’t communicate what employers care about because colleges don’t actually provide what the labor market wants. And plenty of people say that. But I (very emphatically) believe that’s generally not true. In fact, credential innovation is so important because colleges do provide much of what employers are looking for. The problem instead is that they just don’t assess, document and communicate those outcomes. So the information is lost, and students are left to their own devices to effectively represent their collegiate experience. And the impact of colleges and universities is left implicit, not explicit.

If higher education institutions do equip students with much of what employers want, and credentials are the mechanism for communicating those outcomes per student to an employer, then the foundational question animating credential innovation projects should be: What do employers want to know? A vice provost of academic affairs recently asked me this question, and it’s stuck with me ever since.

In fact, I heard the same question, frequently, at a recent Lumina Foundation gathering where community colleges, universities and third-party vendors shared how they are experimenting with a comprehensive student record. It’s a national question for higher education as a whole, but even more so a geographically local and industry-specific question that every institution needs to consider distinctly. Let’s face it, there is no such thing as the unitary actor “employer” any more than there is such a thing as the unitary actor “higher education.”

Surprisingly, too many credential innovation projects lack grounding in clear answers to this simple question. Like the proverbial drunk looking for car keys where the light is brightest, they start with a clear conception of what the college or university can say beyond courses and credits, not a clear conception of what their employer market wants to understand about their graduates. That makes sense insofar as colleges and universities (should) each have a distinct educational mission and program, and it is the outcome of that program or mission that credentials should communicate. But if the goal is better communication and alignment, leaving the question of employer interest unasked makes it much less likely that the underlying goal of credential innovation will be achieved.

In 2014, Hart Research Associates conducted an online survey among 400 employers on behalf of the Association of American Colleges and Universities. The majority of employers said that possessing both field-specific knowledge and a broad range of knowledge and skills is important for recent college graduates to achieve long-term career success. In fact, 80 percent of employers said it would be very or fairly useful to see an electronic portfolio that summarizes and demonstrates a candidate’s accomplishments in key skill and knowledge areas, including effective communication, knowledge in their field, applied skills, evidence-based reasoning and ethical decision making. It is important to note that, along with course work, those are the very types of capabilities that institutions help students build through activities like co-curricular leadership, study abroad and faculty research collaborations.

Today, community colleges and regional four-year schools often work closely with a particular employer or set of employers that define their local economy. The Kentucky Community and Technical College system, the North Carolina Community College system and many other institutions and systems align their programs and assess outcomes in ways that closely match what those employers are looking for.

What appears less common is a national dialogue between institutions and employers, particularly regarding white-collar jobs -- the kind of jobs where writing well, speaking well, thinking analytically and being comfortable with numbers are important skills to have. (Those are the types of skills that I describe as desired outcomes for my students as an assistant research professor of sociology at Arizona State University.)

Of course, to ground credential innovation in an understanding of what employers want, institutions must begin the credential innovation journey by determining which employers matter to them and their students -- in particular, those who actually consume their credentials today -- so they can ask how those employers use credential documents and what they think about them.

Digital credential platforms like Parchment, where I work, enable registrars to see where their credentials are going, which can then be the pathway for opening up this discussion. For example, we observe in Parchment’s data that Ernst & Young collects academic credentials, as does the Department of Homeland Security. Other national employers that receive transcripts from applicants via Parchment include Boeing, Deloitte, Hewlett-Packard, NASA, the U.S. Army, the U.S. State Department and Wells Fargo, to name a few.

Asking what employers want to know inevitably sparks a second question: Given what employers are looking for, does our educational environment develop that in students? While I think it does (and far more than we’re given credit for), the experiences are not necessarily tracked or assessed and can span multiple information systems. They may not have the rigor and integrity that the recording of course work involves, and because course work is organized into a priori majors and minors, clusters of courses that reflect particular skills or ways of thinking can be lost. While program innovation is no doubt needed to meet employers’ needs, introducing better and more formal assessment of what we’re already doing is low-hanging fruit we can harvest without redesigning programs or creating new ones.

This brings me to a third and final question: How do we document and communicate this information? The answer is to provide a credential in a certified and succinct way, in an operationally efficient way, and in a way that has reflective, scaffolding value for learners so they can maximize their time in college to prepare themselves for the labor market.

For example, Elon University has provided a co-curricular, or experiential, credential for many years as part of its educational experience, which includes undergraduate research, global engagement, leadership, service and internship. Although the experiences were not documented originally with employers in mind (they reflect Elon’s distinctive educational philosophy), the university recently began surveying employers who received their new experiential transcript about what was useful and what was not. They found that 75 percent of employers agreed that the experiential transcript provided useful information for the hiring process, while 44 percent agreed that the experiential transcript increased the chances that an applicant would get an interview.

Communicating credential information effectively means supporting the development of data standards, because today's national employers increasingly rely on applicant tracking systems whose algorithms draw on a wide variety of data sources to evaluate prospective employees. According to a 2016 report from Gartner, an information technology research and advisory firm, those algorithms will replace both manual processing of CVs (résumés) by recruiters and automated CV ranking based on word matching. The beauty of data is that an institution can maintain a superset of information, package it into different credentials for different audiences (a transfer transcript, an employer-facing college experiences report) and align with the data-processing practices of employers.
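The "superset packaged into different credentials" idea can be sketched in a few lines of code. This is a hypothetical illustration only; the record fields and the two views are invented for the example and are not Parchment's actual data model.

```python
# Hypothetical learner record: a superset of credential data maintained once.
learner = {
    "name": "Jane Doe",
    "courses": [
        {"code": "ENG 101", "title": "Composition", "grade": "A", "credits": 3},
        {"code": "HIS 210", "title": "World History", "grade": "B+", "credits": 3},
    ],
    "experiences": [
        {"type": "internship", "org": "Acme Corp", "skills": ["writing", "analysis"]},
        {"type": "undergraduate research", "org": "History Dept", "skills": ["research"]},
    ],
    "honors": ["Dean's List, Fall 2016"],
}

def transfer_transcript(record):
    """Project the superset onto the view a receiving registrar needs."""
    return {
        "name": record["name"],
        "courses": [
            {k: c[k] for k in ("code", "grade", "credits")} for c in record["courses"]
        ],
    }

def employer_experiences_report(record):
    """Project the same superset onto a view an employer's system can parse."""
    skills = sorted({s for e in record["experiences"] for s in e["skills"]})
    return {
        "name": record["name"],
        "experiences": [(e["type"], e["org"]) for e in record["experiences"]],
        "skills": skills,
        "honors": record["honors"],
    }
```

Because both views are derived from one underlying record, the institution certifies and maintains the data once and emits whatever credential a given audience expects.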

If you are a higher education leader beginning the credential innovation conversation, consider these three questions. What do our employers want to know about our learners? What do we need to do programmatically to accurately and reliably communicate those outcomes? And, recognizing the various audiences and purposes that credentials serve, what form of credential can best communicate it?

I encourage you to look at your credentials with fresh eyes, as a currency for opportunity and the key to reaffirming the transformative value of a postsecondary education, especially in a knowledge economy. By taking a new approach to academic credentials, higher education can give employers what they are looking for and help students turn those credentials into opportunities.

Matthew Pittinsky is an assistant research professor at Arizona State University and CEO of Parchment Inc., a digital transcript company in K-12 and higher education.

How assessment falls significantly short of valid research (essay)

In a rare moment of inattention a couple of years ago, I let myself get talked into becoming the chair of my campus’s Institutional Review Board. Being IRB chair may not be the best way to endear oneself to one’s colleagues, but it does offer an interesting window into how different disciplines conceive of research and the many different ways that scholarly work can be used to produce useful knowledge.

It has also brought home to me how utterly different research and assessment are. I have come to question why anyone with any knowledge of research methods would place any value on the results of typical learning outcomes assessment.

IRB approval is required for any work that involves both research and human subjects. If both conditions are met, the IRB must review it; if only one is present, the IRB can claim no authority. In general, it’s pretty easy to tell when a project involves human subjects, but distinguishing nonresearch from research, as it is defined by the U.S. Department of Health and Human Services, is more complicated. It depends in large part on whether the project will result in generalizable knowledge.

Determining what is research and what is not is interesting from an IRB perspective, but it has also forced me to think more about the differences between research and assessment. Learning outcomes assessment looks superficially like human subjects research, but there are some critical differences. Among other things, assessors routinely ignore practices that are considered essential safeguards for research subjects as well as standard research design principles.

A basic tenet of ethical human subjects research is that the subjects should consent to participate; that is why obtaining informed consent is a routine part of such research. In contrast, students whose courses are being assessed are typically not asked whether they are willing to participate in those assessments. They are simply told that they will be participating. Often there is what an IRB would see as coercion: whether it's 20 points of extra credit for doing the posttest or an essay embedded in the final exam that will be used for assessment, assessors go out of their way to compel participation in the study.

Given that assessment involves little physical or psychological risk, the coercion of assessment subjects is not that big a deal. What is more interesting to me is how assessment plans ignore most of the standard practices of good research. In a typical assessment effort, the assessor first decides what the desired outcomes of the course or program are. Sometimes the next step is to determine what level of knowledge or skill students bring with them when they start the course or program, although that is not always done. The final step is to have some sort of posttest or "artifact" -- assessmentspeak for a student-produced product like a paper rather than, say, a potsherd -- that can be examined (invariably with a rubric) to determine whether the course or program outcomes have been met.

On some levels, this looks like research. The pretest gives you a baseline measurement, and then, if students do X percent better on the posttest, you appear to have evidence that they made progress. Even if you don’t establish a baseline, you might still be able to look at a capstone project and say that your students met the declared program-level outcome of being able to write a cogent research paper or design and execute a psychology experiment.

From an IRB perspective, however, this is not research. It does not produce generalizable knowledge, in that the success or, more rarely, failure to meet a particular course or program outcome does not allow us to make inferences about other courses or programs. So what appears to have worked for my students, in my World History course, at my institution, may not provide any guidance about what will work at your institution, with your students, with your approach to teaching.

If assessment does not offer generalizable knowledge, does assessment produce meaningful knowledge about particular courses or programs? I would argue that it does not. Leaving aside arguments about whether the blunt instrument of learning outcomes can capture the complexity of student learning or whether the purpose of an entire degree program can be easily summed up in ways that lend themselves to documentation and measurement, it is hard to see how assessment is giving us meaningful information, even concerning specific courses or programs.

First, the people who devise and administer the assessment have a stake in the outcome. When I assess my own course or program, I have an interest in the outcome of that assessment. If I create the assessment instrument, administer it and assess it, my conscious or even unconscious belief in the awesomeness of my own course or program is certain to influence the results. After all, if my approach did not already seem to be the best possible way of doing things, as a conscientious instructor, I would have changed it long ago.

Even if I were the rare human who is entirely without bias, my assessment results would still be meaningless, because I have no way of knowing what caused any of the changes I have observed. I have never seen a control group used in an assessment plan. We give all the students in the class or program the same course or courses. Then we look at what they can or cannot do at the end and assume that the course work is the cause of any change we have observed. Now, maybe this is a valid assumption in a few instances, but if my history students are better writers at the end of the semester than they were at the beginning, how do I know that my course caused the change?

It could be that they were all in a good composition class at the same time as they took my class, or it could even be the case, especially in a program-level assessment, that they are just older and their brains have matured over the last four years. Without some group that has not been subjected to my course or program to compare them to, there is no compelling reason to assume it’s my course or program that’s causing the changes that are being observed.
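The maturation confound the author describes is easy to demonstrate with a toy simulation: give every student the same natural improvement plus noise, and a pre/post comparison of the "course" group alone looks like a real effect even when the course contributes nothing. A control group that skipped the course shows the same gain and exposes the confound. This is purely illustrative; the numbers and effect sizes are invented.

```python
import random

random.seed(42)  # deterministic illustration

def simulate(n, maturation=5.0, course_effect=0.0, noise=2.0):
    """Pre/post writing scores for n students.

    Every student improves by `maturation` regardless of the course;
    the course itself adds only `course_effect`.
    """
    pre = [random.gauss(70, noise) for _ in range(n)]
    post = [p + maturation + course_effect + random.gauss(0, noise) for p in pre]
    return pre, post

def mean(xs):
    return sum(xs) / len(xs)

# "Treatment": students who took the course. "Control": students who did not.
# In both groups the course effect is zero -- only maturation is at work.
pre_t, post_t = simulate(200, course_effect=0.0)
pre_c, post_c = simulate(200, course_effect=0.0)

gain_treatment = mean(post_t) - mean(pre_t)      # looks like ~5 points of "learning"
gain_control = mean(post_c) - mean(pre_c)        # but the control gained it too
course_estimate = gain_treatment - gain_control  # difference-in-differences is ~0
```

The naive pre/post gain is about five points, which a course-level assessment would happily report as evidence of learning; only the comparison against the untreated group reveals that the course added essentially nothing.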

If I developed a drug and then tested it myself without a control group, you might be a bit suspicious about my claims that everyone who took it recovered from his head cold after two weeks and thus that my drug is a success. But these are precisely the sorts of claims that we find in assessment.

I suspect that most academics are either consciously aware or at least unconsciously aware of these shortcomings and thus uneasy about the way assessment is done. That no one says anything reflects the sort of empty ritual that assessment is. Faculty members just want to keep the assessment office off their backs, the assessment office wants to keep the accreditors at bay and the accreditors need to appease lawmakers, who in turn want to be able to claim that they are holding higher education accountable.

IRBs are not supposed to critique research design unless it affects the safety of human subjects. However, they are supposed to weigh the balance between the risks posed by the study and the benefits of the research. Above all, you should not waste the time or risk the health of human subjects with research that is so poorly designed that it cannot produce meaningful results.

So, acknowledging that assessment is not research and not governed by IRB rules, it still seems that something silly and wasteful is going on here. Why is it acceptable that we spend more and more time and money -- time and money that have real opportunity costs and could be devoted to our students -- on assessment that is so poorly designed that it does not tell us anything meaningful about our courses or students? Whose interests are really served by this? Not students. Not faculty members.

It’s time to stop this charade. If some people want to do real research on what works in the classroom, more power to them. But making every program and every faculty member engage in nonresearch that yields nothing of value is a colossal, frivolous waste of time and money.

Erik Gilbert is a professor of history at Arkansas State University.


Developing metrics and models that are vital to student learning and retention (essay)

Is English 101 really just English 101? What about that first lab? Is a B or C in either of those lower-division courses a bellwether of a student’s likelihood to graduate? Until recently, we didn’t think so, but more and more, the data are telling us yes. In fact, insights from our advanced analytics have helped us identify a new segment of at-risk students hiding in plain sight.

It wasn’t until recently that the University of Arizona discovered this problem. As we combed through volumes of academic data and metrics with our partner, Civitas Learning, it became evident that students who seemed poised to graduate were actually leaving at higher rates than we could have foreseen. Why were good students -- students with solid grades in their lower-division foundational courses -- leaving after their first, second or even third year? And what could we do to help them stay and graduate from UA?

There’s a reason it’s hard to identify which students fall into this group: they simply don’t exhibit the traditional warning signs as defined by the retention experts. These students persist into the later years but never graduate, despite the fact that they’re strong students. They persist past their first two years, and over 40 percent have GPAs above 3.0 -- so how does one diagnose them as at risk when all metrics indicate that they’re succeeding? Now we’re taking a deeper look at data from the entire curriculum to find clues about what these students really need, and even to redefine our notion of what “at risk” really means.
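The segment described here -- students with strong GPAs and several terms completed, but a weak grade in a foundational course -- can be expressed as a simple filter over student records. The following is a minimal sketch with invented course names, thresholds and fields; it is not the University of Arizona's or Civitas Learning's actual model.

```python
# Hypothetical gateway courses and the grades that signal shaky mastery.
FOUNDATIONAL = {"ENGL 101", "CHEM 151"}
WEAK_GRADES = {"C", "C-", "D+", "D"}

def hidden_at_risk(students):
    """Flag students traditional metrics miss: solid overall GPA and
    steady persistence, but a weak grade in a foundational course."""
    flagged = []
    for s in students:
        weak_foundational = any(
            c["course"] in FOUNDATIONAL and c["grade"] in WEAK_GRADES
            for c in s["courses"]
        )
        if s["gpa"] >= 3.0 and s["terms_completed"] >= 4 and weak_foundational:
            flagged.append(s["id"])
    return flagged

students = [
    {"id": 1, "gpa": 3.4, "terms_completed": 4,
     "courses": [{"course": "ENGL 101", "grade": "C"}]},
    {"id": 2, "gpa": 3.6, "terms_completed": 4,
     "courses": [{"course": "ENGL 101", "grade": "A"}]},
]
```

The point of the sketch is that the first student passes every traditional retention screen; only by joining overall GPA with course-level grades in foundational work does the blind spot become queryable.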

Lower-division foundational courses are a natural starting point for us. These are the courses where basic mastery of a skill like writing or the scientific process begins, and those basics only become more necessary over the years. Writing, for instance, becomes more, not less, important over students’ academic careers. A 2015 National Survey of Student Engagement at UA indicated that freshmen are assigned 55 pages of writing in the academic year, compared to 76 pages for seniors. As a freshman or sophomore, falling even slightly behind can hurt you later on.

To wit: when a freshman gets a C in English 101, it doesn’t seem like a big deal -- why would it? She’s not at risk; she still has a 3.0, after all. But this student has unintentionally stepped into an institutional blind spot precisely because she’s a strong student by all measures. Our data analysis now shows that such a student may persist until she hits a wall, usually in her major and upper-division courses, that is often difficult to overcome.

Let’s fast-forward two years, then, when that same freshman is a junior enrolled in demanding upper-level classes. Her problem, a lack of command of writing, has compounded into a series of C’s or D’s on research papers. A seemingly strong student is now at risk of not persisting, and her academic future becomes much less clear. We all thought she was on track to graduate, but now what? From that point, she may change her major, transfer to another institution or even exit college altogether. In the past, we would never have considered wraparound support services for students who earned a C in an intro writing course or a B in an intro lab course, but today we understand that we have to be ready and to think about a deeper level of academic support across the entire life cycle of an undergraduate.

Nationally, institutions like ours have developed many approaches to the classic challenges of student success, building an infrastructure of broad institutional interventions like centralized tutoring, highly specialized support staff, supplemental classes and more. Likewise, professors and advisers have become more attuned to the one-on-one needs of students who may find themselves in trouble. There’s no doubt that this high/low approach has made an impact, and our students have measurably benefited from it. But to assist students caught in the middle, those who by all measures are already “succeeding,” we have to develop a more comprehensive institutional approach that works at the intersections of curricular innovation and wider student support.

Today, we at UA are adding a new layer to the institutional and one-to-one approaches already in place. In our courses, we are pushing to ensure that mastery matters more than a final grade by developing metrics and models that are vital to student learning. This, we believe, will lead to increases in graduation rates. We are working hand in hand with college faculty members, administrators and curriculum committees, arming those partners with the data necessary to develop revisions and supplementary support for the courses identified as critical to graduation rather than term-over-term persistence. We are modeling new classroom practices through the expansion of student-centered active classrooms and adaptive learning to better meet the diverse needs of our students.

When mastery is what matters most, the customary objections to at-risk student intervention matter less. Grade inflation by the instructor and performing for the grade by the student become irrelevant. A foundational course surrounded by the support a student often finds in lower-division courses is not an additional burden to the student but an essential experience. Although the approach adds pressure on faculty and staff, it has to be leavened with the resources that help both instructors and students succeed.

This is a true universitywide partnership to help a population of students who have found themselves unintentionally stuck in the middle. We must be data informed, not data driven, in supporting our students, because when our data are mapped with a human touch, we can help students unlock their potential in ways even they couldn’t have imagined.

Angela Baldasare is assistant provost for institutional research. Melissa Vito is senior vice president for student affairs and enrollment management and senior vice provost for academic initiatives and student success. Vincent J. Del Casino Jr. is provost of digital learning and student engagement and associate vice president of student affairs and enrollment management at the University of Arizona.


Essay on flawed assumptions behind digital badging and alternative credentialing

Inside Higher Ed recently checked up on the adoption of badges specifically, and alternative credentialing generally, with a look at early adopter Illinois State University’s rollout of a badge platform. The overarching goal of badging and alternative credentialing initiatives is valuable: to better communicate the value and variety of people’s skills to employers so that it’s easier to connect the two and improve job outcomes. Yet the focus on badges and alternative credentials is like trying to facilitate global trade by inventing Esperanto.

The conception, theory and adoption of badge-based alternative credentialing initiatives date back to 2011, when Mozilla announced the launch of its Open Badge Initiative and HASTAC simultaneously made “Digital Badges for Lifelong Learning” the theme of its fourth Digital Media & Learning competition. In the five years since, much has been written and even more time spent developing the theory and practice of alternative credentialing via badges -- from Mozilla and its support by the MacArthur Foundation to Purdue University’s Passport, to BadgeOS and Badge Alliance. Lately, the Lumina Foundation has taken the lead in promoting alternative credentialing, most recently participating in a $2.5 million investment in badge platform Credly and a $1.3 million initiative to help university registrars develop a “new transcript.”

The premise behind all of the badge and alternative credential projects is the same: that if only there were a new, unified way to quantify, describe and give evidence of student learning inside the classroom and out, employers would be able to appropriately value those skills and illuminate a path to job outcomes. These kinds of premises often lead to utopian, idealized solutions that imagine transforming society itself. From Lumina’s “Strategy 8” overview:

To maximize our collective potential as a society, we need a revamped system of postsecondary credentials -- a fully integrated system that is learning based, student centered, universally understood and specifically designed to ensure quality at every level.

The problem for Lumina, Mozilla, Credly and the rest is that they’re proposing to replace a rich variety of credential “languages” with a universal one that’s not just unnecessary, but that’s modeled on fundamentally flawed analogies and observations.

I’ll start with the flaws of badges as a credentialing solution. Early on, digital badges often used Boy and Girl Scout badges as an analogy, but the more direct precursor of the current generation of badge solutions is video games. Indeed, attaining badges for completing certain tasks or reaching certain milestones is such a core feature of video game design and experience that the whole practice of rewarding behavior within software is referred to as “gamification.” This approach became widespread (with the launch of Foursquare, Gowalla, GetGlue and dozens more) in the years just preceding the launch of digital badges.

Yet video game badges -- and the badges employed by gamification companies -- are not truly credentials, but behaviorist reward systems designed to keep people on task. As credentials, their only useful meaning was within the systems in which they were earned, specifically within a given video game or bar-hopping app. Scout badges have a similar limitation: whatever their value in motivating attainment toward a worthy skill or outcome, the meaning of those badges is difficult to assess for nonscouts, or those not trained in the visual language of scouting badges.

Badge adherents aim to address the “value” and portability of badges by attaching proof of skills to the badges themselves. This is the same idea behind e-portfolios: that evidence of each skill is not just demonstrable, verifiable and universally understood, but useful to employers. Yet outside of specific fields, portfolios simply don’t matter to employers. As Anthony Carnevale, director of Georgetown University’s Center on Education and the Workforce, told The Chronicle of Higher Education earlier this year about the New Transcript portfolio, “Employers don’t want to take time to go through your portfolio -- they just don’t.” Where evidence of skills is important and useful, solutions already exist: GitHub for software developers; Behance for designers; transcripts, essays and recommendations for graduate school.

The idea of replacing university “dialects” with a new language of skills and outcomes is less metaphorical when think tanks and ed-tech companies talk about alternative credentials as a category. There, advocates propose an entirely new vocabulary: microcredentials, nanodegrees, stackable badges and more, all meant to convey (to employers primarily) the body of skills and knowledge that a student possesses. But they are redefining concepts that already exist, and that exist productively for the marketplace of students, educators and employers.

Consider the stackable badge: the idea that learning competencies should be assessed and verified in a progression that comprises and leads to a certified credential. But stackable credentials already exist in ways that everyone understands. In the undergraduate major, a student completes a series of related and escalating levels of mastery in a given subject area, assessed by experts in that field. Upon completion of those microcredentials -- i.e., classes -- the student is awarded a degree with a focus in that field and with an indication of attainment (honors). The same goes for hundreds of areas of expertise inside and outside higher education: in financial analysis (the extremely demanding and desirable CFA designation), entry-level and advanced manufacturing (the National Association of Manufacturers’ MSCS system), IT certifications from organizations like ISACA and (ISC)2, bar exams, medical boards, and more.

Credentials, in and of themselves, are a solved problem. I know this because my own company, Merit, launched the biggest, most comprehensive badge experiment that no one has heard of. Between 2011 and 2014 we tested a variation of the scout model -- a badge-based visual language of college milestones and credentials analogous to a military officer’s dress uniform -- that could be quickly read to convey a person’s skills, accomplishments and level of achievement. Nearly 500 colleges granted more than three million students almost 10 million badges that included academic honors, notable cocurriculars, experiential learning, internships and more. We tested interest by employers, educators and students (and continue to). What’s clear is this: it’s far, far more important to simply document existing credentials than to invent new ones, or a new language to describe them. Stakeholders in the high-school-to-college-to-career pipeline understand and value credentials as they exist now, and rarely need or want a new way to understand them. They just want to see them.

Connecting students’ skills and ambitions to the pathways to a career is a big deal, but it doesn’t require a new language that’s based on techno-solutionist fantasies. LinkedIn, the “economic graph” that many hold up as a model, needed more than $100 million of private capital for something as simple as convincing managers and a certain professional class to keep updated résumés online. Doing something similar for every single student is both more valuable and more difficult -- and doesn’t need to reinvent the entire language of credentials to complicate the effort.

My biggest frustration with badges and alternative credentials isn’t that they are an ivory tower solution to a real world problem. It’s that helping students succeed means more than figuring out a new language. Higher education is a demanding, high-stakes endeavor for the vast majority of students. Proposing that they -- and the institutions educating them and the employers who might hire them -- learn a new lingua franca for conveying the value of that learning, every year, over the very short time that they’re mastering the skills and knowledge that they need isn’t just impractical. It’s unfair.

Colin Mathews is founder and president of Merit, a technology company focused on creating and sharing stories about students’ successes.


Initiative fatigue, lack of accountability preventing colleges from improving student outcomes

Report finds initiative fatigue and a lack of accountability, among other obstacles, are preventing colleges from improving student outcomes.
