Assessment

Headlines About Gallup Survey on College Worth Are Misleading (essay)

It’s no secret that higher education in America is in a tight spot.

The cost and worth of college are a hot topic -- from dinner parties to political debates. The prevailing opinion is that college graduates are deeply dissatisfied with what they received for the price of the “product.”

Gallup released its most recent poll data of college and university alumni through its “Gallup-Purdue Index 2015 Report,” which is based on interviews with more than 30,000 graduates. This year, the survey included new questions concerning the “worth” of college. It’s time to step beyond anecdotal evidence and get our hands dirty with some data.

Those of us who fastidiously follow the headlines of Inside Higher Ed and The Chronicle of Higher Education initially found that all of our hand-wringing over how the public views higher education might be justified.

Inside Higher Ed led with: “Not Worth It?”

The Chronicle ran: “Just Half of Graduates Strongly Agree Their College Education Was Worth the Cost.” (Note: The article title was changed. The piece was originally entitled, “Just Half of Graduates Say Their College Education Was Worth the Cost.”)

And on Sept. 30, Jeffrey Selingo, former editor of The Chronicle, wrote a piece for The Washington Post entitled, “Is College Worth the Cost? Many Recent Graduates Don’t Think So.”

Yikes. The sky is falling, right?

Well, not really. Each of these headlines seems to insinuate that college grads are disgruntled by the cost of their education. However, if we read beyond the headlines, and take even a quick look at the numbers, we find that the sky isn’t falling.

In fact, maybe things are actually better than we imagined.

Gallup’s chart shows alumni responses to the statement: “My education from [university name] was worth the cost.” Respondents answered on a scale from one (strongly disagree) to five (strongly agree). While the headlines suggest that alumni are dissatisfied, I find myself reading these numbers differently.

Even if we assume that an answer of three (3) is indicative of “neutral,” we still find that 77 percent of recent alumni either agree or strongly agree with the statement that their college or university education was worth the cost.

I read the data this way: most grads believe that their education was worth the cost. That is good news. Even better news is that only 10 percent disagree or strongly disagree. And there is further good news: although the recent graduates who participated in the survey were less likely to think their education was worth the cost, the Gallup report indicates that their satisfaction will probably improve as they get farther from commencement and are promoted out of entry-level positions.

The Gallup report includes significant data -- including factors that lead to student thriving.

But here is my real point: headlines matter.

In our current context bent on scrutinizing higher education, as we look ahead to report cards, and as we struggle to make a case for the importance of this sector of society that has been educating citizens in America for nearly four centuries, let’s at least lead with more accurate headlines -- even if crisis sells.

Here’s what the headlines could have been:

“Is College Worth the Cost? Only 10 Percent of Grads Don’t Think So.”

Same numbers.

Entirely different story.

Keith R. Martel is director of the Master of Arts in Higher Education at Geneva College in Beaver Falls, Penn. He is the co-author of the newly released Storied Leadership, a faith-based, narrative approach to leadership.

The Waning of the Carnegie Unit (essay)

For a century, the Carnegie Unit -- or credit hour -- served American education very well. Created by the Carnegie Foundation for the Advancement of Teaching in 1906, it is now the nearly universal accounting unit for colleges and schools. It brought coherence and common standards to the chaotic 19th-century high school and college curriculum, established a measure for judging student academic progress, and set the requirements for high school graduation and college admission. But today it has grown outdated and less useful.

A time-based standard, one Carnegie Unit (or credit) is awarded for every 120 hours of class time. The foundation translated this into one hour of instruction five days a week for 24 weeks. Students have been expected to take four such courses a year for four years in high school, with a minimum of 14 Carnegie Units required for college admission. The Carnegie Unit perfectly mirrored its times and the design of the nation’s schools.

An industrialized America created schools modeled on the technology of the times: the assembly line. With the Carnegie Unit as a basis, schools nationwide adopted a common process for schooling: groups of children, sorted by age, attending 180 days a year for 13 years in Carnegie Unit-length courses. Students progressed according to seat time -- how long they were exposed to teaching.

At colleges and universities across the nation, the Carnegie Unit became more commonly referred to as the credit hour. The common semester-long class became three credit hours. The average four-year degree was earned after completing 120 credit hours. Time and process were fixed, and outcomes of schooling were variable. All students were expected to learn the same things in the same period of time. The Carnegie Unit provided the architecture to make this system work.
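
The time-based arithmetic described above can be sketched in a few lines. The figures (120 hours per unit, the 14-unit admission minimum, 3 credit hours per semester course, 120 credit hours per degree) come from the essay itself; the function name is purely illustrative.

```python
# A back-of-the-envelope sketch of the time-based Carnegie Unit standard.
HOURS_PER_CARNEGIE_UNIT = 120  # one hour a day, five days a week, 24 weeks

def carnegie_units(instructional_hours):
    """Units earned under a purely time-based standard: seat time / 120."""
    return instructional_hours / HOURS_PER_CARNEGIE_UNIT

# One high school course: 1 hour/day x 5 days/week x 24 weeks = 120 hours
print(carnegie_units(1 * 5 * 24))   # 1.0 unit

# Four such courses a year for four years comfortably clears the
# 14-unit minimum for college admission
print(4 * 4 * carnegie_units(120))  # 16.0 units

# In college terms: a 120-credit-hour degree at 3 credits per course
print(120 // 3)                     # 40 semester-long courses
```

Note what the sketch makes plain: nothing in it measures learning -- only hours of exposure, which is exactly the limitation the essay goes on to discuss.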

But in the United States’ transition from an industrial to an information economy, the Carnegie Unit is becoming obsolete. The information economy focuses on common, fixed outcomes, yet the process and the time necessary to achieve them are variable. The concern in colleges and schools is shifting from teaching to learning -- what students know and can do, not how long they are taught. Education at all levels is becoming more individualized, as students learn different subjects at different rates and learn best using different methods of instruction.

As a result, educational institutions need a new accounting to replace the Carnegie Unit. A 2015 report by the Carnegie Foundation made this clear, stating the Carnegie Unit “sought to standardize students’ exposure to subject material by ensuring they received consistent amounts of instructional time. It was never intended to function as a measure of what students learned.” States have responded by adopting outcome- or learning-based standards for schools. They are now detailing the skills and knowledge students must attain to graduate and implementing testing regimens, such as fourth- and eighth-grade reading and math exams, to assess whether students have met those standards.

This evolution is causing two problems. First, both the industrial and information economy models of education are being imposed on our educational institutions at the same time. At the moment, the effect is more apparent in our schools than our colleges, but higher education can expect to face the same challenges. Today, schools and colleges are being required to use the fixed-process, fixed-calendar Carnegie Unit accounting system of the industrial era while also being required to achieve the information economy’s fixed outcomes and follow its testing procedures. Higher education already operates under the former, and government is increasingly asking colleges and universities for the latter.

Doing both is not possible, by definition. Instead, states need to move consciously and systematically to the information economy’s emerging and increasingly dominant model of education, which will prevail in the future. The Carnegie Unit will pass into history.

The second problem is that the steps states have taken to implement standards, outcomes and associated testing are often incomplete and unfinished. They are at best betas -- quickly planned and hurriedly implemented -- which, like all new initiatives, demand significant rethinking, redesign and refinement. In the decades to come, today's tests will appear primitive compared with the assessment tools that replace them. Think of the earliest cell phones -- they, too, needed development and refinement.

Unfortunately, however, states’ mandates go beyond the capacity and capabilities of their standards, tests, data systems and existing curricula. For example, despite growing state and federal pressure to evaluate faculty and institutions based on student performance, most states do not have the data or data systems to make this possible.

If Information Age accounting systems for education are to work as well as the Carnegie Unit did, the tasks ahead are these:

  • Define the outcomes or standards students need to achieve to graduate from school and college. While the specific outcomes or standards adopted are likely to vary from state to state, the meaning of each standard or outcome should be common to all states. A current example is coding: today, states, cities and institutions differ profoundly in their requirements; however, it is essential that the meaning of competence in coding be common.
  • Create curricula that mirror each standard and that permit students to advance according to mastery.
  • Develop assessments that measure student progress and attainment of standards or outcomes. Over time, build upon current initiatives in analytics and adaptive learning to embed assessment into curricula so that it functions like a GPS, discovering students’ misunderstandings in real time and providing guidance to get them back on track.

These three key steps will lay the groundwork for the education demanded by the Information Age. They will provide the clarity, specificity, standardization, reliability and adoptability that made the Carnegie Unit successful, creating an educational accounting system for the information economy that is as strong as the Carnegie Unit was for industrial America.

I do not pretend doing this will be easy or quick. It is nothing less than the reinvention of the American education system. It will require bold institutions to lead, as universities like Carnegie Mellon University, the Massachusetts Institute of Technology, Southern New Hampshire University and Western Governors University are doing, to create and test the new models of education for the Information Age. It will take a coalition of state government, educational institutions and professional associations like accreditors to turn the innovations into policy.

We don't have the luxury of turning away from this challenge. Our education system is not working. In contrast to the industrial era, in which national success rested on physical labor and natural resources, information economies require brains and knowledge. The future demands excellent schools and colleges.

Arthur Levine is the president of the Woodrow Wilson National Fellowship Foundation in Princeton, N.J. He served as the president of Teachers College, Columbia University, from 1994 to 2006.

Nonacademic skills test from ETS fills in blanks on student's likelihood of success

Colleges are using a nonacademic skills test from ETS to try to boost graduation rates and to place students in remedial courses. One university gives the test to all its athletes.

Group of seven major universities seeks to offer online microcredentials

Seven major universities plan to create the University Learning Store, a joint web portal for microcredentials, featuring online content, assessments and tutoring.

Baylor professor turns exams into celebrations to keep students more engaged

One professor has banned exams in the classroom in favor of "celebrations," placing the emphasis on how much students have learned and away from scores they've earned.

Essay on how to talk about assessment in faculty job interviews

At many interviews for faculty jobs these days, you'll be asked about assessment. Melissa Dennihy offers some ideas on how to answer.


Professors should seize chance to use data to improve learning (essay)

When Rowland Hussey Macy opened his namesake store in 1858, understanding consumer behavior was largely a matter of guessing. Retailers had little data to assess what customers wanted or how variables like store hours, assortment or pricing might impact sales. Decision making was slow: managers relied on manual sales tallies, compiled weekly or annually. Dozens of stores failed, including several of Macy’s original stores.

Predictive analytics, in the early days of retail, were rudimentary. Forward-thinking retailers combined transactional data with other types of information -- the weather, for example -- to understand the drivers of consumer behavior. In the 1970s, everything changed. Digital cash registers took hold, allowing companies to capture data and spot trends more quickly. They began A/B testing, piloting ideas in a test vs. control model, at the store level to understand the impact of strategy in near real time.

In the early days of AOL, where I worked in the 1990s and early 2000s, we were quick to recognize the risk to brick-and-mortar stores, as online retailers gathered unprecedented data on consumer behavior. Companies like Amazon could track a customer’s movements on their site using click-stream data to understand which products a customer was considering, or how long they spent comparing products before purchasing. Their brick-and-mortar counterparts, meanwhile, were stuck in the 1800s.

Unexpected innovations, however, have a funny way of leveling the playing field. Today, broadband ubiquity and the proliferation of mobile devices are enabling brick-and-mortar stores to track cell phone signals or use video surveillance to understand the way consumers navigate a store, or how much time they spend in a particular aisle. Sophisticated multichannel retailers now merge online behavior with in-person information to piece together a more holistic picture of their consumers, generating powerful data that drive changes in layout, staffing, assortment and pricing. A recent study found that 36 percent of in-store retail purchases -- worth a whopping $1.1 trillion -- are now influenced by the use of digital devices. Retailers who leverage online research to drive brick-and-mortar sales are gaining a competitive advantage.

The use of big data and predictive analytics in higher education is nascent. So-called disrupters often claim that the lecture hasn’t changed in 150 years, and that only online learning can drive transformative, game-changing outcomes for students. Of course, these claims ring hollow among today’s tech-savvy professors.

Since my transition into higher education, I have been struck by the parallel journey retailers and educators face. Both have been proclaimed obsolete at various points, but the reality is that the lecture, like the retail experience, has and will continue to evolve to meet the new demands of 21st-century users.

Like brick-and-mortar stores, lectures were once a black box -- but smart faculty members are beginning to harness the presence of mobile devices to capture unprecedented levels of data in traditional classrooms. And smart institutions are combining real-time engagement data with historic information to spot challenges early and change the academic trajectory for students.

Historical sources of student data (FAFSA, GPA, SAT, etc.) have predictive validity, but they are a bit like the year-over-year data retailers used: limited in depth and timeliness. The heart of a higher education institution is its professors -- and its classes. Professors are not only experts in their fields who provide unique learning opportunities; studies have also shown that when professors have positive relationships with students, those students are more successful.

Some of the most interesting early data are coming from the big, first-year lecture courses. While most students experience these as a rite of passage, they also hold great potential as models of how behavioral data can improve engagement and completion rates for students. Faculty are no longer powerless in the face of larger classes and limited insight into their students' learning behavior. They can track how well students are engaging in traditional lecture classes and intervene with students who aren’t engaged in the behaviors (note taking, asking questions and attendance) that correlate with success.
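
As a purely hypothetical sketch of that intervention logic: the behaviors are the ones named above (attendance, note taking, asking questions), but the weights, the threshold and the names are invented for illustration -- no real product’s scoring model is implied.

```python
# Hypothetical early-warning sketch: score the behaviors that correlate
# with success and flag low-engagement students for outreach.
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    attendance_rate: float  # fraction of lectures attended, 0..1
    notes_rate: float       # fraction of lectures with notes captured, 0..1
    questions_asked: int    # questions asked so far this term

def engagement_score(s):
    """Weighted 0..1 score; capping questions keeps one behavior from dominating."""
    question_component = min(s.questions_asked, 10) / 10
    return 0.5 * s.attendance_rate + 0.3 * s.notes_rate + 0.2 * question_component

def flag_for_outreach(students, threshold=0.6):
    """Names of students whose engagement falls below the (arbitrary) threshold."""
    return [s.name for s in students if engagement_score(s) < threshold]

roster = [
    Student("Avery", attendance_rate=0.95, notes_rate=0.80, questions_asked=4),
    Student("Blake", attendance_rate=0.40, notes_rate=0.20, questions_asked=0),
]
print(flag_for_outreach(roster))  # ['Blake']
```

The point of such a model is timeliness: the flag can be raised weeks before a midterm grade would reveal the same problem.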

Historically, professors have relied on piecemeal solutions to gather insights on student behavior. So-called student-response systems and learning management software, like digital cash registers in the ’70s, provide useful data -- but they don’t provide the sort of real-time analytics that can inform an instructor’s practice or identify students in need of additional support and coaching.

A more recent class of solutions -- in full disclosure, including ours at Echo360 -- is designed to work in conjunction with great teaching, while providing instructors with the tools to track and measure student engagement: Are students taking notes? Are they asking questions? These tools give administrators and instructors insight into how students are interacting and participating both in class and with content or readings before and after class. No more waiting for summative tests to reveal that a student misunderstood a concept weeks or months earlier.

The analogy between retail and education has its limitations. The mission and objectives in education are more nuanced, and frankly, more important. However, education, like every sector, has what we call a moment of truth.

For retailers, that moment of truth is centered around the purchase decision. Sophisticated marketers and retailers have used behavioral data to become incredibly skilled at understanding and shaping that purchase decision to achieve extraordinary results.

It’s time to use those learnings for a higher calling. The explosion of digital devices in the classroom allows us to understand the learning process wherever it is happening on campus, and to support education’s vital moment of truth -- a transaction of knowledge between professors and students.

Frederick Singer is CEO and founder of Echo360, which provides active learning and lecture capture services to more than 650 higher ed clients in 30 countries.


Faculty members should drive efforts to measure student learning (essay)

Lumina Foundation recently released an updated version of its Degree Qualifications Profile (D.Q.P.), which helps define what students should know and what skills they should master to obtain higher education degrees.

This revised framework marks a significant step in the conversation about measuring students’ preparedness for the workforce and for life success based on how much they've learned rather than how much time they’ve spent in the classroom. It also provides a rare opportunity for faculty members at colleges and universities to take the lead in driving long-overdue change in how we define student success.

The need for such change has never been stronger. As the economy evolves and the cost of college rises, the value of a college degree is under constant scrutiny. No longer can we rely on piled-up credit hours to prove whether students are prepared for careers after graduation. We need a more robust -- and relevant -- way of showing that our work in the classroom yields results.

Stakeholders ranging from university donors to policy makers have pushed for redefining readiness, and colleges and universities have responded to their calls for action. But too often the changes have been driven by the need to placate those demanding reform and produce quick results. That means faculty input has been neglected.

If we’re to set up assessment reform for long-term success, we need to empower faculty members to be the true orchestrators.  

The D.Q.P. provides an opportunity to do that, jelling conversations that have been going on among faculty and advisers for years. Lumina Foundation developed the tool in consultation with faculty and other experts from across the globe and released a beta version to be piloted by colleges and universities in 2011. The latest version reflects feedback from the field, based on their experience with the beta version -- and captures the iterative, developmental processes of education understood by people who work with students daily.

Many of the professionals teaching in today’s college classrooms understand the need for change. They’re used to adapting to ever-changing technologies, as well as evolving knowledge. And they want to measure students’ preparedness in a way that gives them the professional freedom to own the changes and do what they know, as committed professionals, works best for students.

As a tool, the D.Q.P. encourages this kind of faculty-driven change. Rather than a set of mandates, it is a framework that invites faculty to be change agents. It allows them to assess students in ways that are truly beneficial to student growth. Faculty members don't want to teach to the assessment; they want to use what they glean from assessments to improve student learning.

We’ve experienced the value of using the D.Q.P. in this fashion at Utah State University. In 2011, when the document was still in its beta version, we adopted it as a guide to help us rethink general education and its connection to our degrees and the majors within them. 

We began the process by convening disciplinary groups of faculty to engage them in a discussion about a fundamental question: “What do you think your students need to know, understand and be able to do?” This led to conversations about how students learn and what intellectual skills they need to develop.

We began reverse engineering the curriculum, which forced us to look at how general education and the majors work together to produce proficient graduates. This process also forced us to ask where degrees started, as well as ended, and taught us how important advisers, librarians and other colleagues are to strong degrees.

The proficiencies and competencies outlined in the D.Q.P. provided us with a common institutional language to use in navigating these questions. The D.Q.P.’s guideposts also helped us to avoid reducing our definition of learning to course content and enabled us to stay focused on the broader framework of student proficiencies at various degree milestones.

Ultimately the D.Q.P. helped us understand the end product of college degrees, regardless of major: citizens who are capable of thinking critically, communicating clearly, deploying specialized knowledge and practicing the difficult soft skills needed for a 21st-century workplace.

While establishing these criteria in general education, we are teaching our students to see their degrees holistically. In our first-year program, called Connections, we engage students in becoming "intentional learners" who understand that a degree is more than a major. This program also gives students a conceptual grasp of how to use their educations to become well prepared for their professional, personal and civic lives. They can explain their proficiencies within and beyond their disciplines and understand they have soft skills that are at a premium.

While by no means a perfect model, what we’ve done at Utah State showcases the power of engaging faculty and staff as leaders to rethink how a quality degree is defined, assessed and explained. Such engagement couldn’t be more critical.

After all, if we are to change the culture of higher learning, we can't do it without the buy-in from those who perform it. Teachers and advisers want their students to succeed, and the D.Q.P. opens a refreshing conversation about success that focuses on the skills and knowledge students truly need.

The D.Q.P. helps give higher education practitioners an opportunity to do things differently. Let’s not waste it.

Norm Jones is a professor of history and chairman of general education at Utah State University. Harrison Kleiner is a lecturer of philosophy at Utah State.

Assessment (of the right kind) is key to institutional revival

Today, leaders of colleges and universities across the board, regardless of size or focus, are struggling to demonstrate meaningfully the true value of their institutions for students, educators and the greater community, because they can't really prove that students are learning.

Most are utilizing some type of evaluation or assessment mechanism to keep “the powers that be” happy through earnest narratives about goals and findings, interspersed with high-level data tables and colorful bar charts. However, this is not scientific, campuswide assessment of student learning outcomes aimed at the valid measure of competency.

The "Grim March" & the Meaning of Assessment

Campuswide assessment efforts rarely involve the rigorous, scientific inquiry about actual student learning that is aligned from program to program and across general education. Instead, year after year, the accreditation march has trudged grimly on, its participants working hard to produce a plausible picture of high “satisfaction” for the whole, very expensive endeavor.

For the past 20-plus years, the primary source of evidence for a positive impact of instruction has come from tools like course evaluation surveys. Institutional research personnel have diligently combined, crunched and correlated this data with other mostly indirect measures such as retention, enrollment and grade point averages.

Attempts are made to produce triangulation with samplings of alumni and employer opinions about the success of first-time hires. All of this is called “institutional assessment,” but it doesn’t produce statistical evidence from direct measurement that empirically demonstrates the university is responsible for students’ skill sets based on instruction at the institution. Research measurement methods such as chi-square tests or inter-rater reliability, combined with a willingness to assess across the institution, can demonstrate that a change in student learning is statistically significant over time and is the result of soundly delivered curriculum. This is the kind of “assessment” the world at large wants to know about.
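
For instance, here is a minimal, self-contained sketch of the kind of direct-measurement evidence such a method produces: a chi-square test on a 2x2 table of proficiency counts before and after a curriculum change. The counts are invented for illustration, and a real study would also need controls for who enrolled in each cohort.

```python
# Chi-square test of independence on invented before/after proficiency counts.
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 contingency table [[a, b], [c, d]],
    using the shortcut formula n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

#            proficient   not proficient
# before:        40             60
# after:         62             38
stat = chi_square_2x2(40, 60, 62, 38)

# The critical value for df = 1 at p = 0.05 is 3.841; a statistic above it
# suggests the before/after difference is unlikely to be noise.
print(round(stat, 2), stat > 3.841)  # 9.68 True
```

Unlike a satisfaction survey, this kind of result speaks directly to whether measured competence changed -- which is precisely the evidence the essay says is missing.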

The public is not satisfied with inferentially derived evidence. Given the cost, they yearn to know if their sons and daughters are getting better at things that matter to their long-term success. Employers routinely stoke this fire by expressing doubt about the out-of-the-box skills of graduates.

Who Owns Change Management?

Whose responsibility is it to redirect the march to provide irrefutable reports that higher education is meeting the needs of all its stakeholders? Accreditors now wring their hands and pronounce that reliance on indirect measures will no longer suffice. They punish schools with orders to fix the shortfalls in the assessment of outcomes and dole out paltry five-year passes until the next audit. They will not, however, provide sound, directive steps for the marchers about how to systematically address learning outcomes.

How about the government? The specter of more third-party testing is this group’s usual response. They did it to K-12 and it has not worked there either. Few would be happy with that center of responsibility.

Back to the campus. To be fair, IR or offices of institutional effectiveness have been reluctant to get involved with direct measures of student performance for good reasons. Culture dictates that such measures belong to program leaders and faculty. The traditions and rules of “academic freedom” somehow demand this. The problem is that faculty and program leaders are indeed content experts, but they are no more versed in effective assessment of student outcomes than anyone else on campus.

This leaves us with campus leaders who have long suspected something is very wrong or at least misdirected. To paraphrase one highly placed academic officer, “We survey our students and a lot of other people and I’m told that our students are ‘happy.’ I just can’t find anyone who can tell me for sure if they’re ‘happy-smarter’ or not!” Their immersion in the compliance march does not give them much clue about what to do about the dissonance they are feeling.

The Assessment Renaissance

Still, the smart money is on higher ed presidents first and foremost, supported by their provosts and other chief academic officers. If there is to be deep change in the current culture, they are the only ones with the proximal power to make it happen. A majority of them have declared that “disruption” in higher education is now essential.

Leaders looking to eradicate the walking dead assessment march in a systematic way need to:

  1. Disrupt. This requires a college or university leader to see beyond the horizon and ultimately have an understanding of the long-term objective. It doesn’t mean they need to have all the ideas or proper procedures, but they must have the vision to be a leader and a disrupter. They must demand change on a realistic, but short timetable.
  2. Get Expertise. Outcomes/competency-based assessment has been a busy field of study over the past half-decade. Staff development and helping hands from outside the campus are needed.
  3. Rally the Movers and Shakers. In almost every industry, there are other leaders without ascribed power but whose drive is undeniable. They are the innovators and the early adopters. Enlist them as co-disruptors. On campuses there are faculty/staff that will be willing to take risks for the greater good of assessment and challenge the very fabric of institutional assessment. Gather them together and give them the resources, the authority and the latitude to get the job done. Defend them. Cheerlead at every opportunity.
  4. Change the Equation. Change the conversation from GPAs and satisfaction surveys to one essential unified goal: are students really learning and how can a permanent change in behavior be measurably demonstrated?
  5. Rethink your accreditation assessment software. Most accreditation software systems rely on processes that are narrative, not a systematic inquiry via data. Universities are full of people who research for a living. Give them tools (yes, like Chalk & Wire, which my company provides) to investigate learning and thereby rebuild a systematic approach to improve competency.
  6. Find the Carrots. Assume a faculty member in engineering is going to publish. Would a research-based study about teaching and learning in their field count toward rank and tenure? If disruption is the goal, then the correct answer is yes.

Assessment is complex, but it’s not complicated. Stop the grim march. Stand still for a time. Think about learning and what assessment really means and then pick a new proactive direction to travel with colleagues.

Geoff Irvine is CEO and founder of Chalk & Wire.

Essay criticizes state of assessment movement in higher education

In higher education circles, there is something of a feeding frenzy surrounding the issue of assessment. The federal government, due to release a proposed rating system later this fall, wants assessments that allow one to compare the “value” colleges and universities provide; accrediting organizations want assessments of student learning outcomes; state agencies want assessments to prove that tax dollars are being spent efficiently; institutions want internal assessments that they can use to demonstrate success to their own constituencies.

By far the main goal of this whirlwind of assessment is trying to determine whether an institution effectively delivers knowledge to its students, as though teaching and learning were like a commodity exchange. This view of education very much downplays the role of students in their own education, placing far too much responsibility on teachers and institutions, and overburdening everyone with a never-ending proliferation of paperwork and bureaucracy.

True learning requires a great deal of effort on the part of the learner. Much of this effort must come in the form of self-inquiry, that is, ongoing examination and reexamination of one’s beliefs and habits to determine which ones need to be revised or discarded. This sort of self-examination cannot be done by others, nor can the results of it be delivered by a teacher. It is work that a student must do for himself or herself.

Because of this, most of the work required in attaining what matters most in education is the responsibility of the student. A teacher can make suggestions, point out deficiencies, recommend methods, and model the behavior of someone who has mastered self-transformation. But no teacher can do the work of self-transformation for a student.

Current assessment models habitually and almost obsessively understate the responsibility of the student for his or her own learning, and, what is more consequential, overstate the responsibility of the teacher. Teachers are directed to provide clear written statements of observable learning outcomes; to design courses in which students have the opportunity to achieve those outcomes; to assess whether students achieve those outcomes; and to use the assessments of students to improve the courses so that attainment of the prescribed outcomes is enhanced.  The standards do not entirely remove the student as an agent — the course provides the opportunity, while the student must achieve the outcomes. But the assessment procedures prescribe in advance the outcome for the student; the student can achieve nothing of significance, as far as assessment goes, except what the professor preordains.

This is a mechanical and illiberal exercise. If the student fails to attain the end, is it because the professor has not provided a sufficient opportunity? Or because, despite the opportunity being perfectly designed, the student, in his freedom, hasn’t acted? Or maybe the student attains the designed outcome through her own ingenuity even when the opportunity is ill-designed. Or, heaven forbid, the student has after reflection rejected the outcome desired by the teacher in favor of another. The assessment procedure accurately measures the effectiveness of the curriculum precisely to the extent that the student’s personal freedom is discounted. To the extent that the student’s freedom is acknowledged, the assessment procedure has to fail.

True learning belongs much more to the student than to the teacher. Even if the teacher spoon-feeds facts to the students, devises the best possible tests to determine whether students are retaining the facts, tries to fire them up with entertaining excitement, and exhibits perfectly in front of them the behavior of a self-actuated learner, the students will learn little or nothing important about the subject or about themselves if they do not undertake the difficult discipline of taking charge of their own growth. This being the case, obsessing about the responsibility of the teacher without paying at least as much attention to the responsibility of the student is hardly going to produce helpful assessments.

True learning is not about having the right answer, so measuring whether students have the right answers is at best incidental to the essential aims of education. True learning is about mastering the art of asking questions and seeking answers, and applying that mastery to your own life. Ultimately, it is about developing the power of self-transformation, the single most valuable ability one can have for meeting the demands of an ever-changing world. Meaningful assessment measures attainment in these areas, rather than in the areas most congenial to the economic metaphor.

How best to judge whether students have attained the sort of freedom that can be acquired by study? Demand that they undertake and successfully complete intellectual investigations on their own. The independence engendered by such projects empowers students to meet the challenges of life and work. It helps them shape lives worth living, arrived at through thoughtful exploration of the question: What kind of life do I want to make for myself?

What implications does this focus have for assessors? They should move away from easy assessments that miss the point to more difficult assessments that try to measure progress in self-transformation. The Gallup-Purdue Index Report "Great Jobs, Great Lives" found six crucial factors linking the college experience to success at work and overall well-being in the long term:

1. At least one teacher who made learning exciting.
2. Personal concern of teachers for students.
3. Finding a mentor.
4. Working on a long-term project for at least one semester.
5. Opportunities to put classroom learning into practice through internships or jobs.
6. Rich extracurricular activities.

Assessors should thus turn all their ingenuity toward measuring the quality of the students’ learning environment, toward measuring students’ engagement with their teachers and their studies, and toward measuring activities in which students practice the freedom they have been working to develop in college. The results should be used to push back against easy assessments based on the categories of economics.

Higher education, for its part, would do well to repurpose most of the resources currently devoted to assessment. Use them instead to do away with large lecture classes — the very embodiment of education-as-commodity — so that students can have serious discussions with teachers, and teachers can practice the kind of continuous assessment that really matters.

Christopher B. Nelson is president of St. John's College, in Annapolis.
