Assessment

Improving Graduation Rates Is Job One at City Colleges of Chicago

City Colleges of Chicago have a 7 percent graduation rate. If that number doesn't go up, the system's chancellor, presidents and trustees could lose their jobs.

Assessment (of the right kind) is key to institutional revival

Today, leaders of colleges and universities of every size and focus are struggling to demonstrate the true value of their institutions to students, educators and the greater community, because they cannot really prove that students are learning.

Most use some type of evaluation or assessment mechanism to keep “the powers that be” happy: earnest narratives about goals and findings, interspersed with high-level data tables and colorful bar charts. That, however, is not scientific, campuswide assessment of student learning outcomes aimed at valid measurement of competency.

The "Grim March" & the Meaning of Assessment

Campuswide assessment efforts rarely involve rigorous, scientific inquiry into actual student learning that is aligned from program to program and across general education. Instead, year after year, the accreditation march has trudged grimly on, its participants working hard to produce a plausible picture of high “satisfaction” with the whole, very expensive endeavor.

For the past 20-plus years, the primary source of evidence for a positive impact of instruction has come from tools like course evaluation surveys. Institutional research personnel have diligently combined, crunched and correlated this data with other mostly indirect measures such as retention, enrollment and grade point averages.

Attempts are made to produce triangulation with samplings of alumni and employer opinions about the success of first-time hires. All of this is called “institutional assessment,” but it does not produce statistical evidence, drawn from direct measurement, that the institution’s instruction is responsible for the skill sets its students possess. Measurement methods such as chi-square tests and inter-rater reliability, combined with a willingness to assess across the institution, can demonstrate that a change in student learning is statistically significant over time and is the result of soundly delivered curriculum. This is the kind of “assessment” the world at large wants to know about.
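
As an illustration only (not part of the original essay), here is a minimal sketch of the two methods named above, assuming invented rubric counts for two cohorts and a small double-scored sample of student artifacts; it uses scipy for the chi-square test and scikit-learn for Cohen's kappa.

    # Illustrative sketch with hypothetical data, not real institutional results.
    from scipy.stats import chi2_contingency
    from sklearn.metrics import cohen_kappa_score

    # Students at each rubric level (novice, developing, proficient) in two cohorts.
    cohort_2013 = [42, 77, 31]
    cohort_2015 = [28, 70, 62]
    chi2, p_value, dof, expected = chi2_contingency([cohort_2013, cohort_2015])
    print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")  # a small p suggests a real shift

    # Inter-rater reliability: two raters scoring the same ten artifacts on a 1-3 scale.
    rater_a = [3, 2, 2, 1, 3, 2, 3, 1, 2, 3]
    rater_b = [3, 2, 1, 1, 3, 2, 3, 2, 2, 3]
    print(f"Cohen's kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")

A statistically significant shift in scores, backed by acceptable agreement between raters, is the sort of direct evidence the author has in mind; the numbers and rubric here are assumptions, not findings.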

The public is not satisfied with inferentially derived evidence. Given the cost, they yearn to know if their sons and daughters are getting better at things that matter to their long-term success. Employers routinely stoke this fire by expressing doubt about the out-of-the-box skills of graduates.

Who Owns Change Management?

Whose responsibility is it to redirect the march to provide irrefutable reports that higher education is meeting the needs of all its stakeholders? Accreditors now wring their hands and pronounce that reliance on indirect measures will no longer suffice. They punish schools with orders to fix the shortfalls in the assessment of outcomes and dole out paltry five-year passes until the next audit. They will not, however, provide sound, directive steps for the marchers about how to systematically address learning outcomes.

How about the government? Its usual response is the specter of more third-party testing. That approach was imposed on K-12, and it has not worked there either. Few would be happy with that center of responsibility.

Back to the campus. To be fair, institutional research (IR) and institutional effectiveness offices have been reluctant to get involved with direct measures of student performance for good reasons. Culture dictates that such measures belong to program leaders and faculty; the traditions and rules of “academic freedom” somehow demand this. The problem is that faculty and program leaders, while genuine content experts, are no more versed in effective assessment of student outcomes than anyone else on campus.

This leaves us with campus leaders who have long suspected something is very wrong or at least misdirected. To paraphrase one highly placed academic officer, “We survey our students and a lot of other people and I’m told that our students are ‘happy.’ I just can’t find anyone who can tell me for sure if they’re ‘happy-smarter’ or not!” Their immersion in the compliance march does not give them much of a clue about what to do about the dissonance they are feeling.

The Assessment Renaissance

Still, the smart money is on higher ed presidents first and foremost, supported by their provosts and other chief academic officers. If there is to be deep change in the current culture, they are the only ones with the proximal power to make it happen. A majority of them have declared that “disruption” in higher education is now essential.

Leaders looking to put an end to this walking-dead assessment march in a systematic way need to:

  1. Disrupt. This requires a college or university leader to see beyond the horizon and ultimately have an understanding of the long-term objective. It doesn’t mean they need to have all the ideas or proper procedures, but they must have the vision to be a leader and a disrupter. They must demand change on a realistic, but short timetable.
  2. Get Expertise. Outcomes/competency-based assessment has been a busy field of study over the past half-decade. Staff development and helping hands from outside the campus are needed.
  3. Rally the Movers and Shakers. In almost every industry, there are other leaders without ascribed power but whose drive is undeniable. They are the innovators and the early adopters. Enlist them as co-disruptors. On campuses there are faculty/staff that will be willing to take risks for the greater good of assessment and challenge the very fabric of institutional assessment. Gather them together and give them the resources, the authority and the latitude to get the job done. Defend them. Cheerlead at every opportunity.
  4. Change the Equation. Change the conversation from GPAs and satisfaction surveys to one essential unified goal: are students really learning and how can a permanent change in behavior be measurably demonstrated?
  5. Rethink your accreditation assessment software. Most accreditation software systems rely on processes that are narrative rather than systematic inquiry driven by data. Universities are full of people who do research for a living. Give them tools (yes, like Chalk & Wire, which my company provides) to investigate learning and thereby rebuild a systematic approach to improving competency.
  6. Find the Carrots. Assume a faculty member in engineering is going to publish. Would a research-based study about teaching and learning in their field count toward rank and tenure? If disruption is the goal, then the correct answer is yes.

Assessment is complex, but it’s not complicated. Stop the grim march. Stand still for a time. Think about learning and what assessment really means and then pick a new proactive direction to travel with colleagues.

Geoff Irvine is CEO and founder of Chalk & Wire.

Essay criticizes state of assessment movement in higher education

In higher education circles, there is something of a feeding frenzy surrounding the issue of assessment. The federal government, due to release a proposed rating system later this fall, wants assessments that allow people to compare the “value” colleges and universities provide; accrediting organizations want assessments of student learning outcomes; state agencies want assessments to prove that tax dollars are being spent efficiently; institutions want internal assessments that they can use to demonstrate success to their own constituencies.

By far the main goal of this whirlwind of assessment is trying to determine whether an institution effectively delivers knowledge to its students, as though teaching and learning were like a commodity exchange. This view of education very much downplays the role of students in their own education, placing far too much responsibility on teachers and institutions, and overburdening everyone with a never-ending proliferation of paperwork and bureaucracy.

True learning requires a great deal of effort on the part of the learner. Much of this effort must come in the form of self-inquiry, that is, ongoing examination and reexamination of one’s beliefs and habits to determine which ones need to be revised or discarded. This sort of self-examination cannot be done by others, nor can the results of it be delivered by a teacher. It is work that a student must do for himself or herself.

Because of this, most of the work required in attaining what matters most in education is the responsibility of the student. A teacher can make suggestions, point out deficiencies, recommend methods, and model the behavior of someone who has mastered self-transformation. But no teacher can do the work of self-transformation for a student.

Current assessment models habitually and almost obsessively understate the responsibility of the student for his or her own learning, and, what is more consequential, overstate the responsibility of the teacher. Teachers are directed to provide clear written statements of observable learning outcomes; to design courses in which students have the opportunity to achieve those outcomes; to assess whether students achieve those outcomes; and to use the assessments of students to improve the courses so that attainment of the prescribed outcomes is enhanced.  The standards do not entirely remove the student as an agent — the course provides the opportunity, while the student must achieve the outcomes. But the assessment procedures prescribe in advance the outcome for the student; the student can achieve nothing of significance, as far as assessment goes, except what the professor preordains.

This is a mechanical and illiberal exercise. If the student fails to attain the end, is it because the professor has not provided a sufficient opportunity? Or because, despite the opportunity being perfectly designed, the student, in his freedom, hasn’t acted? Or maybe the student attains the designed outcome due to her own ingenuity even when the opportunity is ill-designed. Or, heaven forbid, the student has after reflection rejected the outcome desired by the teacher in favor of another. The assessment procedure accurately measures the effectiveness of the curriculum precisely to the extent that the student’s personal freedom is discounted. To the extent that the student’s freedom is acknowledged, the assessment procedure has to fail.

True learning belongs much more to the student than to the teacher. Even if the teacher spoon-feeds facts to the students, devises the best possible tests to determine whether students are retaining the facts, tries to fire them up with entertaining excitement, and exhibits perfectly in front of them the behavior of a self-actuated learner, the students will learn little or nothing important about the subject or about themselves if they do not undertake the difficult discipline of taking charge of their own growth. This being the case, obsessing about the responsibility of the teacher without paying at least as much attention to the responsibility of the student is hardly going to produce helpful assessments.

True learning is not about having the right answer, so measuring whether students have the right answers is at best incidental to the essential aims of education. True learning is about mastering the art of asking questions and seeking answers, and applying that mastery to your own life. Ultimately, it is about developing the power of self-transformation, the single most valuable ability one can have for meeting the demands of an ever-changing world. Meaningful assessment measures attainment in these areas, rather than in the areas most congenial to the economic metaphor.

How best to judge whether students have attained the sort of freedom that can be acquired by study? Demand that they undertake and successfully complete intellectual investigations on their own. The independence engendered by such projects empowers students to meet the challenges of life and work. It helps them shape lives worth living, arrived at through thoughtful exploration of the question: What kind of life do I want to make for myself?

What implications does this focus have for assessors? They should move away from easy assessments that miss the point to more difficult assessments that try to measure progress in self-transformation. The Gallup-Purdue Index Report "Great Jobs, Great Lives" found six crucial factors linking the college experience to success at work and overall well-being in the long term:

1. At least one teacher who made learning exciting.
2. Personal concern of teachers for students.
3. Finding a mentor.
4. Working on a long-term project for at least one semester.
5. Opportunities to put classroom learning into practice through internships or jobs.
6. Rich extracurricular activities.

Assessors should thus turn all their ingenuity toward measuring the quality of the students’ learning environment, toward measuring students’ engagement with their teachers and their studies, and toward measuring activities in which students practice the freedom they have been working to develop in college. The results should be used to push back against easy assessments based on the categories of economics.

Higher education, on the other hand, would do well to repurpose most of the resources currently devoted to assessment. Use them instead to do away with large lecture classes — the very embodiment of education-as-commodity — so that students can have serious discussions with teachers, and teachers can practice the kind of continuous assessment that really matters.

 

Christopher B. Nelson is president of St. John's College, in Annapolis.

Essay argues that colleges can measure the career success of graduates

With rising tuition, families are increasingly concerned about what students can expect after graduation in terms of debt, employment, and earnings. They want to know: What is the value of a college degree? Is it worth the cost? Are graduates getting good-paying jobs?

At the same time, state and federal policymakers are sounding the call to institutions for increased accountability and transparency. Are students graduating? Are they accruing unmanageable debt? Are graduates prepared to enter the workforce?

Colleges and universities struggle to answer some of these questions. Responses rely primarily on anecdotal evidence or on under-researched and un-researched assumptions, because little data are available. Student data are the sole dominion of colleges and universities. Workforce data are confined to various state and federal agencies. With no systematic or easy way to pull the various data sources together, colleges and universities have limited ability to provide the kind of return-on-investment analysis that will satisfy the debate.

But access to unit-record data — connecting the student records to the workforce records — would allow institutions to discover those answers. What’s more, it would give colleges and universities the opportunity to conduct powerful research and analysis on post-graduation outcomes that could shape policies and program development.

For example, education provides a foundation of skills and abilities that students bring into the workforce upon graduation. But how long does this foundation continue to have a significant impact on workforce outcomes after graduation? Research based on unit-record data can also show the strongest predictors of student earnings after graduation — educational experience, the local and national economy, supply and demand within the field, or some combination of these.

President Obama and others have proposed that colleges share such information, and many colleges have objected. They have suggested that the information can’t be obtained; that data would be flawed because graduates of some programs at a college might see different career results than others at the same institution; that such a system would jeopardize student privacy; that it would penalize colleges with programs whose graduates might not earn the most one year out, but five or more years out.

At the University of Texas System, we have found a solution – at least within our own state – and, for the first time, are able to provide valuable information to our students and their families. We are doing so without assuming that data one year out is better or worse than a longer time frame – only that students and families should be able to have lots of statistics to examine. We formed a partnership with the Texas Workforce Commission that gives us access to the quarterly earnings records of our students who have graduated since 2001-02 and are found working in Texas. While most of our alumni do work in Texas, a similar partnership with the Social Security Administration might make this approach possible for institutions whose alumni scatter more than ours do.

With that data, we created seekUT, an online, interactive tool — accessible via desktop, tablet, and mobile device — that provides data on the salaries and debt of UT System alumni who earned undergraduate, graduate, and professional degrees, at one, five, and 10 years after graduation. The data are broken down by specific degrees and majors, since we know that an education major and an engineering major from the same institution – both valuable to society – are unlikely to earn the same amount. seekUT also introduces the reality of student loan debt to prospective and graduate students. In addition to average total student loan debt, it shows the estimated monthly loan payment alongside monthly income, as well as the debt-to-income ratio. And because this is shown over time, students get a longer view of how that debt load might play out over the course of their careers as their earnings increase.
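
As a rough illustration of the arithmetic behind such a display (this is not the seekUT methodology; the loan terms and dollar figures below are assumptions), a standard amortization formula gives the estimated monthly payment, which can then be set beside monthly income:

    # Illustrative only: assumes a fixed-rate loan on a standard 10-year repayment plan.
    def monthly_payment(principal, annual_rate, years=10):
        """Standard amortization formula for a fixed-rate loan."""
        r = annual_rate / 12                      # monthly interest rate
        n = years * 12                            # number of monthly payments
        return principal / n if r == 0 else principal * r / (1 - (1 + r) ** -n)

    avg_debt = 25_000        # hypothetical average total student loan debt
    annual_salary = 48_000   # hypothetical average earnings one year after graduation

    payment = monthly_payment(avg_debt, annual_rate=0.05)
    print(f"Estimated monthly payment: ${payment:,.0f}")
    print(f"Monthly income:            ${annual_salary / 12:,.0f}")
    print(f"Debt-to-income ratio:      {avg_debt / annual_salary:.0%}")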

When we present data in this way, we provide students information to make important decisions about how much debt they can realistically afford to acquire based on what their potential earnings might be, not just a year after graduation, but 5 and 10 years down the road. Students and families can use seekUT to help inform decisions about their education and to plan for their financial future.

Admittedly, it is an incomplete picture. Many of our graduates, especially those with advanced degrees, leave the state. If they enroll elsewhere to continue their education, we can discover that through the National Student Clearinghouse StudentTracker. But for those who are not enrolled, there is no information. In lieu of a federal database, we are exploring other options and partnerships to help fill in these holes, but, for now, there are gaps.

With unit-record data, we can inform current and prospective students about how past graduates in their major have fared; that is a highly valuable product of this level of data. Access to this information in a user-friendly format can directly benefit students by offering real insights — not just alumni stories or survey-based information — into outcomes. The intent is not to change anyone’s major or sway them from their passion, but, instead, to help students make the decisions now that will allow them to pursue that passion after graduation.

There are a multitude of areas we need to explore, both to answer questions about how our universities are performing and to provide much-needed information to current and prospective students. The only way to definitively provide this important information is through unit-record data.

We recognize that there are legitimate concerns, especially given the nearly constant headlines regarding data breaches, about protecting student privacy and data. And the more expansive the data pool, the larger and more appealing the target; a federal student database may be attractive to hackers. But these risks can be mitigated — and are, in fact, on a daily basis, by university institutional research offices as well as state and federal agencies. We safeguard the IDs, lock down access to the original file, and do not use any identified data for analysis. And when we display information, we do not include any data for cell sizes of less than five. This has been true for the student data that we have always held. Given these safeguards, I believe that the need for the data and the benefits of having access to it far outweigh the risks.
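
As a minimal sketch of the small-cell rule described above (the data, column names and pandas-based approach are illustrative assumptions, not UT System code):

    # Withhold any aggregate built from fewer than five records (invented data).
    import pandas as pd

    records = pd.DataFrame({
        "major":  ["Nursing"] * 12 + ["Physics"] * 3,
        "salary": [52_000 + 500 * i for i in range(12)] + [48_000, 51_000, 60_000],
    })

    summary = records.groupby("major")["salary"].agg(n="count", median="median")
    summary.loc[summary["n"] < 5, "median"] = None   # suppress small cells before display
    print(summary)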

seekUT is an example of just some of what higher education institutions can do with access to their workforce data. But for all its importance, seekUT is a tool that gives users access to information in order to inform individual decisions. It is from the deeper research and analysis of these data that we may see major changes and shifts in the policies that affect all students. That is the true power of these data.

For example, while we are gleaning a great deal of helpful information studying our alumni, this same data gives us insights into our current students who are working while enrolled. UT System is currently examining the impact of income, type of work, and place of work (on or off campus) on student persistence and graduation. The results of this study could have an impact on work-study policies across our institutions.

Higher education institutions can leverage data from outside sources to better understand student outcomes. However, without a federal unit-record database, individual institutions will continue to be forced to forge their own partnerships, yielding piecemeal efforts and incomplete stories. We cannot wait; we must forge ahead. Institutions of higher education have a responsibility to students and parents and to the public.


 

Stephanie Bond Huie is vice chancellor of the Office of Strategic Initiatives at the University of Texas System.

Administrators should work with the faculty to assess learning the right way (essay)

“Why do we have such trouble telling faculty what they are going to do?” said the self-identified administrator, hastening to add that he “still thinks of himself as part of the faculty.”

“They are our employees, after all. They should be doing what we tell them to do.”

Across a vast number of models for assessment, strategic planning, and student services on display at last month’s IUPUI Assessment Institute, it was disturbingly clear that assessment professionals have identified “The Faculty” (beyond the lip service to #notallfaculty, always as a collective body) as the chief obstacle to successful implementation of campuswide assessment of student learning. Faculty are recalcitrant. They are resistant to change for the sake of being resistant to change. They don’t care about student learning, only about protecting their jobs. They don’t understand the importance of assessment. They need to be guided toward the Gospel with incentives and, if those fail, consequences.

Certainly, one can find faculty members of whom these are true; every organization has those people who do just enough to keep from getting fired. But let me, at risk of offending the choir to whom keynote speaker Ralph Wolff preached, suggest that the faculty-as-enemy trope may well be a problem of the assessment field’s own making. There is a blindness to the organizational and substantive implications of assessment, hidden behind the belief that assessment is nothing more than collecting, analyzing, and acting rationally on information about student learning and faculty effectiveness.

Assessment is not neutral. In thinking of assessment as an effort to determine whether students are learning and faculty are being effective, it is imperative that we unpack the implicit subject doing the determining. That should make clear that assessment is first and foremost a management rather than a pedagogical practice. Assessment not reported to the administration meets the requirements of neither campus assessment processes nor accreditation standards, and is thus indistinguishable from non-assessment. As a fundamental principle of governance in higher education, assessment is designed to promote what social scientist James Scott has called “legibility”: the ability of outsiders to understand and compare conditions across very different areas in order to facilitate those outsiders’ capacity to manage.

The Northwest Commission on Colleges and Universities, for example, requires schools to practice “ongoing systematic collection and analysis of meaningful, assessable, and verifiable data” to demonstrate mission fulfillment. That is not simply demanding that schools make informed judgments. Data must be assessable and verifiable so that evaluators can examine the extent to which programs revise their practices using the assessment data. They can’t do that unless the data make sense to them. Administrators make the same demand on their departments through campus assessment processes. In the process a hierarchical, instrumentally rational, and externally oriented management model replaces one that has traditionally been decentralized, value rational, and peer-driven.

That’s a big shift in power. There are good (and bad) arguments to be made in favor of (and opposed to) it, and ways of managing assessment that shift that power more or less than others. Assessment professionals are naïve, however, to think that those shifts don’t happen, and fools to think that the people on the losing end of them will not notice or simply give in without objection.

At the same time, assessment also imposes substantive demands on programs through its demand that they “close the loop” and adapt their curriculums to those legible results regardless of how meaningful those results are to the programs themselves. An externally valid standard might demand significant changes to the curriculum that move the program away from its vision.

In my former department we used the ETS Major Field Test as such a standard. But while the MFT tests knowledge of political science as a whole, competence in political science is specific to subfields. Even at the undergraduate level, students specialize sufficiently to be, for example, fully conversant in international relations and ignorant of political thought. The overall MFT score does not distinguish between competent specialization and broad mediocrity. One solution was to expect that students demonstrate excellence in at least one subfield of the discipline. But the curriculum would then have to require that students take nearly every course we offered in a subfield, and staffing realities in our program would inevitably make that field American politics.

Because the MFT was legible to a retired Air Force officer (the institutional effectiveness director), an English professor (the dean), a chemist (the provost), and a political appointee with no previous experience in higher education (the president), it stayed in place as a benchmark of progress, but it offered little to guide program management. The main tool we settled on was an assessment of the research paper produced in a required junior-level research methods course (one that nearly all students put off to their final semester). That assessment gave a common basis for evaluation (knowledge of quantitative research methods) and allowed faculty to evaluate substantive knowledge in a very narrow range of content through the literature review. But it also shifted emphasis toward quantitative work in the discipline and marginalized political thought altogether, since that subfield isn’t based on empirical methods. We considered adding a political thought assignment, but that would have required students to prioritize political thought over the empirical fields (no other substantive field having a required assignment) rather than putting it on an equal footing.

Evaluating a program with “meaningful, assessable, and verifiable data” can’t be done without changing the program. To “close the loop” based on MFT results required a substantive change in how we saw our mission: from producing well-rounded students to specialists in American politics. To do so with the methods paper required changes in course syllabuses and advising to bring more emphasis on empirical fields, more quantitative rather than qualitative work within those fields, more emphasis on methods supporting conclusions rather than the substance of the conclusions, and less coursework in political thought. We had a choice between these options. But we could not choose an option that would not require change in response to the standard, not just the results.

This is the reality facing those, like the administrator I quoted at the beginning of this essay, who believe that they can tell faculty what to do with assessment without telling them what to do with the curriculum. If assessment requires that a program make changes based on the results of its assessment processes, then the selection of processes defines a domain of curricular changes that can result. Some of these will be unavoidable: a multiple-choice test will require faculty to favor knowledge transmission over synthetic thinking. Others will be completely proscribed: if employment in the subfield of specialization is an assessment measure, the curriculum in political thought will never be reinforced, because people don’t work in political thought. But no process can be neutral among all possible curriculums.

Again, that may or may not be a bad thing. Sometimes a curriculum just doesn’t work, and assessment can be a way to identify it and replace it with something that does. But the substantive influence of assessment is most certainly a thing one way or the other, and that thing means that assessment professionals can’t say that assessment doesn’t change what faculty teach and how they teach it. When they tell faculty members that, they appear at best clueless and at worst disingenuous. With most faculty members having oversensitive BS detectors to begin with, especially when dealing with administrators, piling higher and deeper doesn’t exactly win friends and influence people.

The blindness that comes from belief in organizationally and curricularly neutral assessment is, I think, at the heart of the condescending attitudes toward faculty at the Assessment Institute. In the day two plenary session, one audience member asked, essentially, “What do we do about them?” as if there were no faculty members in the room. The faculty member next to me was quick to tune out as the panel took up the discussion with the usual platitudes about buy-in and caring about learning.

Throughout the conference there was plenty of discussion of why faculty members don’t “get it.” Of how to get them to buy into assessment on the institutional effectiveness office’s terms. Of providing effective incentives — carrots, yes, but plenty of sticks — to get them to cooperate. Of how to explain the importance of accreditation to them, as if they are unaware of even the basics. And of faculty paranoia that assessment was a means for the administration to come for their jobs.

What there wasn’t: discussion of what the faculty’s concerns with assessment actually are. Of how assessment processes do in fact influence what happens in classrooms. Of how assessment feeds program review, thus influencing administrative decisions about program closure and the allocation of tenure lines (especially of the conversion of tenure lines to adjunct positions when vacancies occur). Of the possibility that assessment might have unintended consequences that hinder student learning. These are very real concerns for faculty members, and should be for assessment professionals as well.

Nor was there discussion of what assessment professionals can do to work with faculty in a relationship that doesn’t subordinate faculty. Of how assessment professionals can build genuinely collaborative rather than merely cooptive relationships with faculty members. Of, more than anything, the virtues of listening before telling. When it comes to these things, it is the assessment field that doesn’t “get it.”

Let me assure you, as a former faculty member who talks about these issues with current ones: faculty members do care about whether students learn. In fact, many lose sleep over it. Faculty members informally assess their teaching techniques every time they leave a classroom and adjust what they do accordingly. In fact, that usually happens before they walk back into that classroom, not at the end of a two-year assessment cycle. Faculty members most certainly feel disrespected by suggestions they only care for themselves. In fact, it is downright offensive to suggest that they are selfish when in order to make learning happen they frequently make less than their graduates do and live in the places their graduates talk of escaping.

Assessment professionals need to approach faculty members as equal partners rather than as counterrevolutionaries in need of reeducation. That’s common courtesy, to be sure. But it is also essential if assessment is to actually improve student learning.

You do care about student learning, don’t you?

Jeffrey Alan Johnson is assistant director of institutional effectiveness and planning at Utah Valley University.

The media should cast a more skeptical eye on higher ed reforms (essay)

It’s September and therefore time once again to clear out this year’s collection of task force, blue-ribbon panel, and conference reports to await the new harvest. Sad. Every one of these efforts was once graced by a newspaper article, often with a breathless headline, reporting on another well-intentioned group’s solution to one or another of higher education’s problems.

By now we know that much of this work will have little positive impact on higher education, and realize that some of it might have been harmful. The question in either case is, where was the press?

Where were the challenges, however delicately phrased, asking about evidence, methodology, experimentation or concrete results? Why were press releases taken at face value, and why was there no follow-up to explore whether the various studies had any relevance or import in the real world?

The journalists I know are certainly equal to the task: bright, invested, interesting. But along with the excellent writing, where is the healthy skepticism and the questioning attitude of the scholar and the journalist?

This absence of a critical attitude has consequences. A myth, given voice, can cause untold harm. In one extreme example, the canard that accreditors trooped through schools “counting books” enabled a mindless focus on irrelevant measured learning outcomes, bright lines, metrics, rubrics and the like. This helped erode one of the most effective characteristics of accreditation and gave rise to a host of alternatives, once again unexamined, unreviewed, and unchallenged -- but with enough press space to enable them to take root.

Many of us do apply a healthy dose of constructive skepticism to the new, the untested, and the unverified. But it’s only reporters and journalists who have the ability to voice such concerns in the press.

No doubt it’s more pleasant to write about promising new developments than to express concern and caution. But don’t we have a right to expect this as well? Surely de Tocqueville’s press, whose "eye is always open" and which "forces public men to appear before the tribunal of public opinion" has bequeathed a sense of responsibility to probe and to scrutinize proposals and plans as well as people.

Consider, for example, the attitude of the press to MOOCs. First came the thrilling stories of millions of people studying quantum electrodynamics, as well as the heartwarming tale of the little girl high in the Alps learning Esperanto from a MOOC while guarding the family’s sheep. Or something.

The MOOC ardor has cooled, but it’s not because of a mature, responsible examination by the press.

The mob calling for disruption hasn’t dispersed; only the watchword has changed to “innovation.” Any proposal that claims to teach students more effectively, at a lower cost and a quicker pace, is granted a place in the sun, while faculty and institutions are labeled as obstructionists trying to save their jobs.

That responsible voices don’t get heard often enough might be partially our fault. Even though every journalist went to college, this personal experience was necessarily limited. Higher education is maddeningly diverse, and writers should be invited to observe or participate in a variety of classes, at different levels and in all kinds of schools.

Accrediting agencies should invite more reporters to join site visits. Reality is a powerful teacher and bright journalists would make excellent students.

Reporters who understand higher education would also be more effective in examining proposed legislation. We need a questioning eye placed on unworkable or unrealistic initiatives to ensure that higher education not be harmed – as has been the case so often in the past.

Senator Tom Harkin’s recent Higher Education Act bill has language that would make accreditation totally ineffective. One hopes that language will be removed in further iterations of the legislation.

But wouldn’t we be better off if searching questions came from an independent, informed, and insistent press?

 

Bernard Fryshman is a professor of physics and former accreditor.

Group wants to create voluntary standards for the for-profit industry

A new effort aims to create voluntary standards and a seal of approval for for-profit colleges, this time from an outside group that works with a wide swath of the corporate world.

Colleges should focus less on student failure and more on success (essay)

In their effort to improve outcomes, colleges and universities are becoming more sophisticated in how they analyze student data – a promising development. But too often they focus their analytics muscle on predicting which students will fail, and then allocate all of their support resources to those students.

That’s a mistake. Colleges should instead broaden their approach to determine which support services will work best with particular groups of students. In other words, they should go beyond predicting failure to predicting which actions are most likely to lead to success. 

Higher education institutions are awash in the resources needed for sophisticated analysis of student success issues. They have talented research professionals, mountains of data and robust methodologies and tools. Unfortunately, most resource-constrained institutional research (IR) departments are focused on supporting accreditation and external reporting requirements.

Some institutions have started turning their analytics resources inward to address operational and student performance issues, but the question remains: Are they asking the right questions?

Colleges spend hundreds of millions of dollars on services designed to enhance student success. When making allocation decisions, the typical approach is to identify the 20 to 30 percent of students who are most “at risk” of dropping out and throw as many support resources at them as possible. This approach involves a number of troubling assumptions:

  1. The most “at risk” students are the most likely to be affected by a particular form of support.
  2. Every form of support has a positive impact on every “at risk” student.
  3. Students outside this group do not require or deserve support.

What we have found over 14 years working with students and institutions across the country is that:

  1. There are students whose success you can positively affect at every point along the risk distribution.
  2. Different forms of support impact different students in different ways.
  3. The ideal allocation of support resources varies by institution (or more to the point, by the students and situations within the institution).

Another problem with a risk-focused approach is that when students are labeled “at risk” and support resources are directed to them on that basis, asking for or accepting help comes to be seen as a sign of weakness. When tailored support is provided to all students, even the most disadvantaged are better off. The difference is a mindset of “success creation” versus “failure prevention.” Colleges must provide support without stigma.

To better understand impact analysis, consider Eric Siegel’s book Predictive Analytics. In it, he talks about the Obama 2012 campaign’s use of microtargeting to cost-effectively identify groups of swing voters who could be moved to vote for Obama by a specific outreach technique (or intervention), such as a piece of direct mail or a knock on their door -- the “persuadable” voters. The approach involved assessing what proportion of people in a particular group (e.g., high-income suburban moms with certain behavioral characteristics) was most likely to:

  • vote for Obama if they received the intervention (positive impact subgroup)
  • vote for Obama or Romney irrespective of the intervention (no impact subgroup)
  • vote for Romney if they received the intervention (negative impact subgroup)

The campaign then leveraged this analysis to focus that particular intervention on the first subgroup.

This same technique can be applied in higher education by identifying which students are most likely to respond favorably to a particular form of support, which will be unmoved by it and which will be negatively impacted and drop out.
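
For illustration only, here is a minimal two-model uplift sketch of that idea, under the assumption (not stated in the essay) that an institution holds randomized or well-controlled records of who received a given intervention and whether they persisted; the features, simulated data and modeling choices are all hypothetical.

    # Two-model uplift sketch: estimate each student's change in predicted persistence
    # attributable to an intervention (e.g., proactive coaching). Data are simulated.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))            # hypothetical student features
    treated = rng.integers(0, 2, size=1000)   # 1 = received the intervention
    # Simulated outcomes: the intervention helps some students more than others.
    persisted = (rng.random(1000) < 0.5 + 0.15 * treated * (X[:, 0] > 0)).astype(int)

    m_treat = LogisticRegression().fit(X[treated == 1], persisted[treated == 1])
    m_ctrl = LogisticRegression().fit(X[treated == 0], persisted[treated == 0])

    # Positive uplift marks the "persuadables"; large negative values flag students
    # whom the intervention may actually push toward dropping out.
    uplift = m_treat.predict_proba(X)[:, 1] - m_ctrl.predict_proba(X)[:, 1]
    print("Mean uplift in the top decile:", round(float(uplift[np.argsort(uplift)[-100:]].mean()), 3))

In practice the institution would target the intervention at students with high estimated uplift rather than simply at those with the highest predicted risk.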

Of course, impact modeling is much more difficult than risk modeling. Nonetheless, if our goal is to get more students to graduate, it’s where we need to focus analytics efforts.

The biggest challenge with this analysis is that it requires large, controlled studies involving multiple forms of intervention. The need for large controlled studies is one of the key reasons why institutional researchers focus on risk modeling. It is easy to track which students completed their programs and which did not. So, as long as the characteristics of incoming students aren’t changing much, risk modeling is rather simple. 

However, once you’ve assessed a student’s risk, you’re still left trying to answer the question, “Now what do I do about it?” This is why impact modeling is so essential. It gives researchers and institutions guidance on allocating the resources that are appropriate for each student.

There is tremendous analytical capacity in higher education, but we are currently directing it toward the wrong goal. While it’s wonderful to know which students are most likely to struggle in college, it is more important to know what we can do to help more students succeed.

Dave Jarrat is a member of the leadership team at InsideTrack, where he directs marketing, research and industry relations activities.

Wake Forest U. tries to measure well-being

Wake Forest U. looks to measure the lives of its students and alumni.

We need a new student data system -- but the right kind of one (essay)

The New America Foundation’s recent report on the Student Unit Record System (SURS) is fascinating reading.  It is hard to argue with the writers’ contention that our current systems of data collection are broken, do not serve the public or policy makers very well, and are no better at protecting student privacy than their proposed SURS might be. 

It also lifts the veil on One Dupont Circle and on Washington’s behind-the-scenes lobbying and politics in a way that is delicious and also troubling, if not exactly "House of Cards" dramatic. Indeed, it is good wonkish history and analysis, and it sets the stage for a better-informed debate about any national unit record system.

As president of a private nonprofit institution and a paid-up member of NAICU (respectively, the industry sector and its representative organization in D.C. that stand as SURS roadblocks in the report’s telling), I find myself both in support of a student unit record system and worried about the things it wants to record. Privacy, the principal argument mounted against such a system, is not my worry, and I tend to agree with the report’s arguments that it is the canard masking the real reason for opposition: institutional fear of accountability.

Our industry is a troubled one, after all, that loses too many students (Would we accept a 50 percent success rate among surgeons and bridge builders?) and often saddles them with too much debt, and whose outputs are increasingly questioned by employers.

The lack of a student record system hinders our ability to understand our industry, as New America’s Clare McCann and Amy Laitinen point out, and understanding the higher education landscape grows ever more challenging for consumers. A well-designed SURS would certainly help with the former and might eventually help with the latter problem, though college choices have so much irrationality built into them that consumer education is only one part of the issue. But what does “well-designed” mean here? This is where I, like everyone else, get worried.

For me, three design principles must be in place for an effective SURS:

Hold us accountable for what we can control. This is a cornerstone principle of accountability and data collection. As an institution, we should be held accountable for what students learn, for their readiness for their chosen careers, and for giving them all the tools they need to go out there and begin their job search. Fair enough. But don’t hold me accountable for what I can’t control:

  • The labor market. I can’t create jobs where they don’t exist, and the struggles of undeniably well-prepared students to find good-paying, meaningful jobs say more about the economy, the ways in which technology is replacing human labor, and the choices that corporations make than about my institution’s effectiveness. If the government wants to hold us accountable for earnings post-graduation, can we hold it accountable for making sure that good-paying jobs are out there?
  • Graduate motivation and grit. My institution can do everything in its power to encourage students to start their job search early, to do internships and network, and to be polished and ready for that first interview. But if a student chooses to take that first year to travel, to be a ski bum, or simply to stay in their home area when jobs in their discipline might be in Los Angeles or Washington or Omaha, there is little I can do. Yet those choices have a lot of impact on the measure of earnings just after graduation.
  • Irrational passion. We should arm prospective students with good information about their majors: job prospects, average salaries, geographic demand, how recent graduates have fared.  However, if a student is convinced that being a poet or an art historian is his or her calling, to recall President Obama’s recent comment, how accountable is my individual institution if that student graduates and then struggles to find work? 

We wrestle with these questions internally. We talk about capping majors that seem to have diminished demand, putting in place differential tuition rates, and more. How should we think about our debt-to-earnings ratio? None of this is an argument against a unit record system, but a plea that it measure things that are more fully in our institutional control. For example, does it make more sense to measure earnings three or five years out, which at least gets us past the transition into the labor market and allows for some evening out of the flux that often attends those first years after graduation?

Contextualize the findings. As has been pointed out many times, a 98 percent graduation rate at a place like Harvard is less a testimony to its institutional quality than evidence of its remarkably talented incoming classes of students.  Not only would a 40 percent graduation rate at some institutions be a smashing success, but Harvard would almost certainly fail those very same students. As McCann and Laitinen point out, so much of what we measure and report on is not about students, so let’s make sure that an eventual SURS provides consumer information that makes sense for the individual consumer and institutional sector. 

If the consumer dimension of a student unit record system is to help people make wise choices, it can’t treat all institutions the same, and it should be consumer-focused. For example, can it be “smart” enough to solicit the kind of consumer information that then allows us to answer not only the question the authors pose, “What kinds of students are graduating from specific institutions?” but also “How are students like you graduating from similar institutions, and how does my institution perform in that context?”

This idea extends to other items we might and should measure. For example, is a $30,000 salary for an elementary school teacher in a given region below, at, or above the average for a newly minted teacher three years after graduation? How, then, are the teachers I graduate doing compared to other graduates in my sector? Merely reporting the number without context is not very useful. It’s all about context.
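
A tiny sketch of that kind of contextualization, using invented peer figures for a hypothetical region and sector, might look like this:

    # Where does our program's median starting salary sit among peer programs
    # in the same sector and region? All figures are invented for illustration.
    import statistics

    peer_medians = [29_500, 30_200, 31_000, 31_800, 32_500, 33_100, 34_000]
    our_median = 30_000

    sector_avg = statistics.mean(peer_medians)
    share_at_or_below = sum(m <= our_median for m in peer_medians) / len(peer_medians)
    position = "below" if our_median < sector_avg else "at or above"
    print(f"Sector average: ${sector_avg:,.0f}")
    print(f"Our program:    ${our_median:,.0f} ({position} average; "
          f"~{share_at_or_below:.0%} of peer programs report the same or less)")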

What we measure will matter. This is obvious, but it speaks to the power of measurement and raises the specter of inadvertent consequences. A cardiologist friend commented to me that his unit’s performance is measured in various ways, and the simplest way for him to improve its mortality metric is to take fewer very sick heart patients. He of course worries that such a decision would contradict his unit’s mission and the reason he practices medicine. It continues to bother me that proposed student records systems don’t measure learning, the thing that matters most to my institution. More precisely, they don’t measure how much we have moved the dial for any given student, how impactful we have been.

Internally, we have honed our predictive analytics based on student profile data and can measure impact pretty precisely.  Similarly, if we used student profile data as part of the SURS consumer function, we might be able to address more effectively both my first and second design principles. 

Imagine a system that was smart enough to say “Based on your student profile, here is the segment of colleges similar students most commonly attend, what the average performance band is for that segment, and how a particular institution performs within that band across these factors.…”  We would address the thing for which we should be held most accountable, student impact, and we’d provide context. And what matters most -- our ability to move students along to a better education -- would start to matter most to everyone and we’d see dramatic shifts in behaviors in many institutions.

This is the hard one, of course, and I’m not saying that we ought to hold up a SURS until we work it out. We can do a lot of what I’m calling for and find ways to at least let institutions supplement their reports with the claims they make for learning and how they know.  In many disciplines, schools already report passage rates on boards, C.P.A. exams, and more.  Competency-based models are also moving us forward in this regard. 

None of these concerns is an insurmountable hurdle to a national student unit record system. New America makes a persuasive case for putting such a system in place, and I and many of my colleagues in the private, nonprofit sector would support one.

But we need something better than a blunt instrument that replaces one kind of informational fog with another. That is their goal too, of course, and we should now step back from looking at what kinds of data we can collect to also consider our broader design principles, what kinds of things we should collect, and how we can best make sense of those data for students and their families.

Their report gives us a lot of the answer and smart guidance on how a system might work.  It should also be our call to action to further refine the design model to take into account the kinds of challenges outlined above.

Paul LeBlanc is president of Southern New Hampshire University.
