Assessment

Group wants to create voluntary standards for the for-profit industry

A new effort aims to create voluntary standards and a seal of approval for for-profit colleges, this time led by an outside group that works with a wide swath of the corporate world.

Colleges should focus less on student failure and more on success (essay)

In their effort to improve outcomes, colleges and universities are becoming more sophisticated in how they analyze student data -- a promising development. But too often they focus their analytics muscle on predicting which students will fail, and then allocate all of their support resources to those students.

That’s a mistake. Colleges should instead broaden their approach to determine which support services will work best with particular groups of students. In other words, they should go beyond predicting failure to predicting which actions are most likely to lead to success. 

Higher education institutions are awash in the resources needed for sophisticated analysis of student success issues. They have talented research professionals, mountains of data and robust methodologies and tools. Unfortunately, most resource-constrained institutional research (IR) departments are focused on supporting accreditation and external reporting requirements.

Some institutions have started turning their analytics resources inward to address operational and student performance issues, but the question remains: Are they asking the right questions?

Colleges spend hundreds of millions of dollars on services designed to enhance student success. When making allocation decisions, the typical approach is to identify the 20 to 30 percent of students who are most “at risk” of dropping out and throw as many support resources at them as possible. This approach involves a number of troubling assumptions:

  1. The most “at risk” students are the most likely to be affected by a particular form of support.
  2. Every form of support has a positive impact on every “at risk” student.
  3. Students outside this group do not require or deserve support.

What we have found over 14 years working with students and institutions across the country is that:

  1. There are students whose success you can positively affect at every point along the risk distribution.
  2. Different forms of support impact different students in different ways.
  3. The ideal allocation of support resources varies by institution (or more to the point, by the students and situations within the institution).

Another problem with a risk-focused approach is that when students are labeled “at risk” and support resources are directed to them on that basis, asking for or accepting help comes to be seen as a sign of weakness. When tailored support is provided to all students, even the most disadvantaged are better off. The difference is a mindset of “success creation” versus “failure prevention.” Colleges must provide support without stigma.

To better understand impact analysis, consider Eric Siegel’s book Predictive Analytics. In it, he talks about the Obama 2012 campaign’s use of microtargeting to cost-effectively identify groups of swing voters who could be moved to vote for Obama by a specific outreach technique (or intervention), such as a piece of direct mail or a knock on the door -- the “persuadable” voters. The approach involved assessing what proportion of people in a particular group (e.g., high-income suburban moms with certain behavioral characteristics) would:

  • vote for Obama if they received the intervention (positive impact subgroup)
  • vote for Obama or Romney irrespective of the intervention (no impact subgroup)
  • vote for Romney if they received the intervention (negative impact subgroup)

The campaign then leveraged this analysis to focus that particular intervention on the first subgroup.

This same technique can be applied in higher education by identifying which students are most likely to respond favorably to a particular form of support, which will be unmoved by it and which will be negatively impacted and drop out.
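
To make the campaign analogy concrete, here is one common way an analyst might operationalize impact (or "uplift") modeling: train one success model on students who received a given intervention and another on comparable students who did not, then score each student on the difference. This is a minimal sketch under stated assumptions -- the data file, column names and model choice below are hypothetical, not a description of any particular institution's or vendor's system.

```python
# Minimal two-model uplift sketch. All names here are hypothetical:
# "student_outcomes.csv", its columns and the model choice are assumptions
# for illustration, not any real institution's data or system.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

df = pd.read_csv("student_outcomes.csv")  # historical records, hypothetical
features = ["hs_gpa", "credits_attempted", "first_generation", "age"]

# Split past students by whether they received the intervention
# (e.g., proactive coaching) in a controlled rollout.
treated = df[df["received_support"] == 1]
control = df[df["received_support"] == 0]

# Model the probability of persisting separately for each group.
m_treated = GradientBoostingClassifier().fit(treated[features], treated["persisted"])
m_control = GradientBoostingClassifier().fit(control[features], control["persisted"])

# Estimated impact: how much the intervention is predicted to shift each
# student's probability of persisting. Positive scores mark the "persuadable"
# students, scores near zero the students the intervention won't move, and
# negative scores the students it may actually set back.
df["uplift"] = (
    m_treated.predict_proba(df[features])[:, 1]
    - m_control.predict_proba(df[features])[:, 1]
)

# Direct this intervention at the students it is predicted to help most,
# not merely at those with the highest predicted risk of dropping out.
candidates = df.sort_values("uplift", ascending=False).head(500)
```

As with the campaign example, the estimates are only as good as the comparison data behind them, which is why the controlled studies discussed below matter.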

Of course, impact modeling is much more difficult than risk modeling. Nonetheless, if our goal is to get more students to graduate, it’s where we need to focus analytics efforts.

The biggest challenge with this analysis is that it requires large, controlled studies involving multiple forms of intervention. The need for large controlled studies is one of the key reasons why institutional researchers focus on risk modeling. It is easy to track which students completed their programs and which did not. So, as long as the characteristics of incoming students aren’t changing much, risk modeling is rather simple. 

However, once you’ve assessed a student’s risk, you’re still left trying to answer the question, “Now what do I do about it?” This is why impact modeling is so essential. It gives researchers and institutions guidance on allocating the resources that are appropriate for each student.

There is tremendous analytical capacity in higher education, but we are currently directing it toward the wrong goal. While it’s wonderful to know which students are most likely to struggle in college, it is more important to know what we can do to help more students succeed.

Dave Jarrat is a member of the leadership team at InsideTrack, where he directs marketing, research and industry relations activities.

Wake Forest U. tries to measure well-being

Wake Forest U. looks to measure the lives of its students and alumni.

We need a new student data system -- but the right kind of one (essay)

The New America Foundation’s recent report on the Student Unit Record System (SURS) is fascinating reading.  It is hard to argue with the writers’ contention that our current systems of data collection are broken, do not serve the public or policy makers very well, and are no better at protecting student privacy than their proposed SURS might be. 

It also lifts the veil on One Dupont Circle and the behind-the-scenes lobbying and politics of Washington in a way that is delicious and also troubling, if not exactly "House of Cards" dramatic. Indeed, it is good wonkish history and analysis, and it sets the stage for a better-informed debate about any national unit record system.

As president of a nonprofit private institution and a paid-up member of NAICU -- the sector and its representative organization in D.C. that, in the report’s telling, respectively stand as roadblocks to a SURS -- I find myself both in support of a student unit record system and worried about the things it wants to record. Privacy, the principal argument mounted against such a system, is not my worry, and I tend to agree with the report’s arguments that it is the canard that masks the real reason for opposition: institutional fear of accountability.

Our industry is a troubled one, after all, that loses too many students (Would we accept a 50 percent success rate among surgeons and bridge builders?) and often saddles them with too much debt, and whose outputs are increasingly questioned by employers.

The lack of a student record system hinders our ability to understand our industry, as New America’s Clare McCann and Amy Laitinen point out, and the higher education landscape grows ever more challenging for consumers to understand. A well-designed SURS would certainly help with the former problem and might eventually help with the latter, though college choices have so much irrationality built into them that consumer education is only one part of the issue. But what does “well-designed” mean here? This is where I, like everyone, get worried.

For me, three design principles must be in place for an effective SURS:

Hold us accountable for what we can control. This is a cornerstone principle of accountability and data collection. As an institution, we should be held accountable for what students learn, their readiness for their chosen careers, and giving them all the tools they need to go out there and begin their job search. Fair enough. But don’t hold me accountable for what I can’t control:

  • The labor market. I can’t create jobs where they don’t exist, and the struggles of undeniably well-prepared students to find good-paying, meaningful jobs say more about the economy, the ways in which technology is replacing human labor, and the choices that corporations make than about my institution’s effectiveness. If the government wants to hold us accountable on earnings post-graduation, can we hold it accountable for making sure that good-paying jobs are out there?
  • Graduate motivation and grit. My institution can do everything in its power to encourage students to start their job search early, to do internships and network, and to be polished and ready for that first interview. But if a student chooses to take that first year to travel, to be a ski bum, or simply to stay in their home area when jobs in their discipline might be in Los Angeles or Washington or Omaha, there is little I can do. Yet those choices have a great deal of impact on measures of earnings just after graduation.
  • Irrational passion. We should arm prospective students with good information about their majors: job prospects, average salaries, geographic demand, how recent graduates have fared.  However, if a student is convinced that being a poet or an art historian is his or her calling, to recall President Obama’s recent comment, how accountable is my individual institution if that student graduates and then struggles to find work? 

We wrestle with these questions internally.  We talk about capping majors that seem to have diminished demand, putting in place differential tuition rates, and more.  How should we think about our debt to earnings ratio? None of this is an argument against a unit record system, but a plea that it measure things that are more fully in our institutional control.   For example, does it make more sense to measure earnings three or five years out, which at least gets us past the transitional period into the labor market and allows for some evening out of the flux that often attends those first years after graduation? 

Contextualize the findings. As has been pointed out many times, a 98 percent graduation rate at a place like Harvard is less a testimony to its institutional quality than evidence of its remarkably talented incoming classes of students.  Not only would a 40 percent graduation rate at some institutions be a smashing success, but Harvard would almost certainly fail those very same students. As McCann and Laitinen point out, so much of what we measure and report on is not about students, so let’s make sure that an eventual SURS provides consumer information that makes sense for the individual consumer and institutional sector. 

If the consumer dimension of a student unit record system is to help people make wise choices, it can’t treat all institutions the same and it should be consumer-focused.  For example, can it be “smart” enough to solicit the kind of consumer information that then allows us to answer not only the question the authors pose, “What kinds of students are graduating from specific institutions?” but “What kinds of students like you are graduating from what set of similar institutions and how does my institution perform in that context?”

This idea extends to other items we might and should measure. For example, is a $30,000 salary for an elementary school teacher in a given region below, at, or above the average for a newly minted teacher three years after graduation?  How then are my teachers doing compared to graduates in my sector? Merely reporting the number without context is not very useful. It’s all about context.

What we measure will matter. This is obvious, but it speaks to the power of measurement and raises the specter of unintended consequences. A cardiologist friend commented to me that his unit’s performance is measured in various ways and that the simplest way for him to improve its mortality metric is to take fewer very sick heart patients. He worries, of course, that such a decision would contradict his mission and the reasons he practices medicine. It continues to bother me that proposed student record systems don’t measure learning, the thing that matters most to my institution -- more precisely, that they don’t measure how much we have moved the dial for any given student, how impactful we have been.

Internally, we have honed our predictive analytics based on student profile data and can measure impact pretty precisely.  Similarly, if we used student profile data as part of the SURS consumer function, we might be able to address more effectively both my first and second design principles. 

Imagine a system that was smart enough to say “Based on your student profile, here is the segment of colleges similar students most commonly attend, what the average performance band is for that segment, and how a particular institution performs within that band across these factors.…”  We would address the thing for which we should be held most accountable, student impact, and we’d provide context. And what matters most -- our ability to move students along to a better education -- would start to matter most to everyone and we’d see dramatic shifts in behaviors in many institutions.

This is the hard one, of course, and I’m not saying that we ought to hold up a SURS until we work it out. We can do a lot of what I’m calling for and find ways to at least let institutions supplement their reports with the claims they make for learning and how they know.  In many disciplines, schools already report passage rates on boards, C.P.A. exams, and more.  Competency-based models are also moving us forward in this regard. 

These suggestions are not insurmountable hurdles to a national student unit record system. New America makes a persuasive case for putting in place such a system and I and many of my colleagues in the private, nonprofit sector would support one. 

But we need something better than a blunt instrument that replaces one kind of informational fog with another. That is their goal too, of course, and we should now step back from looking at what kinds of data we can collect to also look at our broader design principles, what kinds of things we should collect and how we can best make sense of that data for students and their families.

Their report gives us a lot of the answer and smart guidance on how a system might work.  It should also be our call to action to further refine the design model to take into account the kinds of challenges outlined above.

Paul LeBlanc is president of Southern New Hampshire University.

UT System creates database to track graduates' earnings, debt

University of Texas System creates web tool to track graduates' earnings and debt five years after leaving college, among other outcomes.

Conference Connoisseurs visit the City of Brotherly Love (and cheesesteaks)

Our conference-going gourmands check out the culinary treats of the City of Brotherly Love.

The risks of assessing only what students know and can do (essay)

A central tenet of the student learning outcomes "movement" is that higher education institutions must articulate a specific set of skills, traits and/or dispositions that all of their students will learn before graduation. Then, through legitimate means of measurement, institutions must assess and publicize the degree to which their students make gains on each of these outcomes.

Although many institutions have yet to implement this concept fully (especially regarding the thorough assessment of institutional outcomes), this idea is more than just a suggestion. Each of the regional accrediting bodies now requires institutions to identify specific learning outcomes and demonstrate evidence of outcomes assessment as a standard of practice.

This approach to educational design seems at the very least reasonable. All students, regardless of major, need a certain set of skills and aptitudes (things like critical thinking, collaborative leadership, intercultural competence) to succeed in life as they take on additional professional responsibilities, embark (by choice or by circumstance) on a new career, or address a daunting civic or personal challenge. In light of the educational mission our institutions espouse, committing ourselves to a set of learning outcomes for all students seems like what we should have been doing all along.

Yet too often the outcomes that institutions select to represent the full scope of their educational mission, and the way that those institutions choose to assess gains on those outcomes, unwittingly limit their ability to fulfill the mission they espouse. For when institutions narrow their educational vision to a discrete set of skills and dispositions that can be presented, performed or produced at the end of an undergraduate assembly line, they often do so at the expense of their own broader vision that would cultivate in students a self-sustaining approach to learning. What we measure dictates the focus of our efforts to improve.

As such, it’s easy to imagine a scenario in which the educational structure that currently produces majors and minors in content areas is simply replaced by one that produces majors and minors in some newly chosen learning outcomes. Instead of redesigning the college learning experience to alter the lifetime trajectory of an individual, we allow the whole to be nothing more than the sum of the parts -- because all we have done is swap one collection of parts for another. Although there may be value in establishing and implementing a threshold of competence for a bachelor’s degree (for which a major serves a legitimate purpose), limiting ourselves to this framework fails to account for the deeply held belief that a college experience should approach learning as a process -- one that is cumulative, iterative, multidimensional and, most importantly, self-sustaining long beyond graduation.

The disconnect between our conception of a college education as a process and our tendency to track learning as a finite set of productions (outcomes) is particularly apparent in the way that we assess our students’ development as lifelong learners. Typically, we measure this construct with a pre-test and a post-test that track learning gains between the ages of 18 and 22 -- hardly a lifetime (the fact that a few institutions gather data from alumni 5 and 10 years after graduation doesn’t invalidate the larger point).

Under these conditions, trying to claim empirically that (1) an individual has developed and maintained a perpetual interest in learning throughout their life, and that (2) this lifelong approach is directly attributable to one’s undergraduate education probably borders on the delusional. The complexity of life even under the most mundane of circumstances makes such a hypothesis deeply suspect. Yet we all know of students who experienced college as a process through which they found a direction that excited them and a momentum that carried them down a purposeful path that extended far beyond commencement.

I am by no means suggesting that institutions should abandon assessing learning gains on a given set of outcomes. On the contrary, we should expect no less of ourselves than substantial growth in all of our students as a result of our efforts. Designed appropriately, a well-organized sequence of outcomes assessment snapshots can provide information vital to tracking student learning over time and potentially increasing institutional effectiveness. However, because the very act of learning occurs (as the seminal developmental psychologist Lev Vygotsky would describe it) in a state of perpetual social interaction, taking stock of the degree to which we foster a robust learning process is at least as important as taking snapshots of learning outcomes if we hope to gather information that helps us improve.

If you think that assessing learning outcomes effectively is difficult, then assessing the quality of the learning process ought to send chills down even the most skilled assessment coordinator’s spine. Defining and measuring the nature of process requires a very different conception of assessment -- and for that matter a substantially more complex understanding of learning outcomes.

Instead of merely measuring what is already in the rearview mirror (i.e., whatever has already been acquired), assessing the college experience as a process requires a look at the road ahead, emphasizing the connection between what has already occurred and what is yet to come. In other words, assessment of the learning that results from a given experience would include the degree to which a student is prepared or “primed” to make the most of a future learning experience (either one that is intentionally designed to follow immediately, or one that is likely to occur somewhere down the road). Ultimately, this approach would substantially improve our ability to determine the degree to which we are preparing students to approach life in a way that is thoughtful, proactively adaptable and even nimble in the face of both unforeseen opportunity and sudden disappointment.

Of course, this idea runs counter to the way that we typically organize our students’ postsecondary educational experience. For if we are going to track the degree to which a given experience “primes” students for subsequent experiences -- especially subsequent experiences that occur during college -- then the educational experience can’t be so loosely constructed that the number of potential variations in the order of a student’s experiences virtually equals the number of students enrolled at our institution.

This doesn’t mean that we return to the days in which every student took the same courses at the same time in the same order, but it does require an increased level of collective commitment to the intentional design of the student experience, a commitment to student-centered learning that will likely come at the expense of an individual instructor’s or administrator’s preference for which courses they teach or programs they lead and when they might be offered.

The other serious challenge is the act of operationalizing a concept of assessment that attempts to directly measure an individual’s preparation to make the most of a subsequent educational experience. But if we want to demonstrate the degree to which a college experience is more than just a collection of gains on disparate outcomes – whether these outcomes are somehow connected or entirely independent of each other – then we have to expand our approach to include process as well as product. 

Only then can we actually demonstrate that the whole is greater than the sum of the parts, that in fact the educational process is the glue that fuses those disparate parts into a greater -- and qualitatively distinct -- whole.

Mark Salisbury is director of institutional research and assessment at Augustana College, in Illinois. He blogs at Delicious Ambiguity.

Students, faculty sign pledge for college completion

Students are asking faculty members to pledge to create a culture of completion.

Seven state coalition pushes for more information about military credit recommendations

Seven states partner up to ensure that student veterans earn college credit for service, while also calling for help from ACE and the Pentagon.

Colleges should end outdated policies that don't put students first (essay)

When institutions and organizations begin to identify with processes instead of intended outcomes, they become vulnerable. They lose sight of their real missions and, when faced with challenges or disruptive innovation, often struggle to survive. 

Eastman Kodak, once the dominant brand in photography, identified too closely with the chemical processes it used and failed to recognize that its overarching mission was photography rather than film and film processing. Swiss watch manufacturers likewise identified too closely with the mechanical workings of their watches and lost market share to companies that understood that the real mission was the production of reliable and wearable instruments to tell time. If railroads had viewed their mission as transportation of people and goods rather than moving trains on tracks, we might have some different brand names on airplanes and vehicles today.  

In retrospect, it seems that the decisions made by these industries defied common sense. Although the leaders were experienced and capable, they were blinded by tradition, and they confused established processes with the real mission of their enterprises.

Higher education today identifies closely with its processes. In open-access public institutions, we recruit, admit and enroll students; assess them for college readiness; place or advise those who are not adequately prepared into remedial classes; give others access to a bewildering variety of course options, often without adequate orientation and advising; provide instruction, often in a passive lecture format; offer services to those who seek and find their way to them; grade students on how well they can navigate our systems and how well they perform on assignments and tests; and issue degrees and certificates based upon the number of credits the students accumulate in required and elective courses. 

We need to fund our institutions, so we concentrate on enrollment targets and make sure classroom seats are filled in accordance with regulations that specify when we count our students for revenue purposes.

At the same time that American higher education is so focused on and protective of its processes, it is also facing both significant challenges and potentially disruptive innovation. The challenges include:

  • responding to calls from federal and state policy makers to increase completion rates and keep costs down;
  • finding more effective ways to help students who are unprepared for college become successful students;
  • making college information more accessible and processes more transparent for prospective students and their parents;
  • explaining new college rating systems and public scorecards;
  • coordinating across institutional boundaries to help an increasingly mobile student population transfer more seamlessly and successfully from one institution to another and graduate;
  • dealing with the threat of a shift from peer-based institutional accreditation to a federal system of quality assurance; and
  • responding to new funding systems that are based upon institutional performance.

Potentially disruptive innovations include the increasing use of social media such as YouTube and other open education resources (OER) for learning, the advent of massive online open courses (MOOCs), the quick access to information made possible by advances in technology, and the potential for a shift from the Carnegie unit to documented competencies as the primary way to measure student progression.

One of today’s most significant challenges to higher education is the increased focus on student success. In response to calls and sometimes financial incentives from policy makers -- and with the assistance provided by major foundations -- colleges and universities are shifting their focus from student access and opportunity to student access and success. Higher education associations have committed themselves to helping institutions improve college completion rates. The terminology used is that we are shifting from an “access agenda” to a “success agenda” or a “completion agenda.” 

This identification with outcomes is positive, but it raises concerns about both loss of access to higher education for those students who are less likely to succeed, and the potential for decreased academic rigor. The real mission of higher education is student learning; degrees and certificates must be the institution’s certification of identified student learning outcomes rather than just accumulated credits.

Faculty and academic administrators, perhaps working with appropriate representatives from business and industry, need to identify the learner competencies that should be developed by the curriculum. The curriculum should be designed or modified to ensure that those competencies are appropriately addressed. Students should be challenged to rise to the high expectations required to master the identified competencies and should be provided the support they need to become successful. Finally, learners should be assessed in order to ensure that a degree or certificate is a certification of acquired competencies. 

What would we do differently if, rather than identifying with our processes, we identified with our overarching mission -- student learning? When viewed through the lens of student learning, many of the processes that we currently rely upon and the decisions we make (or fail to make) seem to defy common sense. The institution itself controls some of these policies and practices; others are policies (or the lack of policies) between and among educational institutions; and some are the result of state or federal legislation.

A prime example of a detrimental institutional process is late registration, the practice of allowing students to register after orientation activities -- and often after classes have begun. Can we really expect students to be successful if they enter a class after it is under way? Research consistently shows that students who register late are at a significant disadvantage and, most often, fail or drop out.

Yet, many institutions continue this practice, perhaps in the belief that they are providing opportunity -- but it is opportunity that most often leads to discouragement and failure. Some institutional leaders may worry about the potential negative impact on budgets of not having seats filled. However, the enrollment consequences of eliminating late registration have almost always been temporary or negligible.

Sometimes institutional policies are developed in isolation and create unintended roadblocks for students. When I assumed the presidency of Palomar College, the college had a policy that students could not repeat a course in which they received a passing grade (C or above). But another policy prohibited students who had not received a grade of B or higher in the highest-level developmental writing class from progressing to freshman composition. Students who passed the developmental class with a grade of C were out of luck and had to transfer to another institution if they were to proceed with their education. The English faculty likely wanted only the best-performing students from developmental writing in their freshman composition classes, but this same objective could be accomplished by raising the standards for a C grade in the developmental writing class.

Higher education institutions rely on their faculty and staff to accomplish their missions, so it is important for everyone to understand that mission in the same way. A faculty member I once met told me that he was proud of the high rate of failure in his classes. He believed that it demonstrated both the rigor of his classes and his excellence as a teacher. That makes as much sense as measuring the excellence of medical doctors by the percentage of their patients who die. Everyone at the institution has a role in promoting student learning, and everyone needs to understand that the job is to inspire students and help them to be successful rather than to sort out those who face challenges.

"The mission of higher education is student learning, and all of or policies, procedures, and practices must be aligned with that mission if our institutions are to remain relevant."

It is important for faculty and staff to enjoy their work, to feel valued by trustees, administrators, peers, and students -- and for them to feel free to innovate and secure in their employment. As important as our people are to accomplishing our mission, their special interests are not the mission. Periodic discussions about revising general education requirements are often influenced by faculty biases about the importance of their disciplines or even by concerns about job security rather than what students need to learn as part of a degree or certificate program. Before these discussions begin, ground rules should be established so that the determinations are based upon desired skills and knowledge of graduates.

Too often, students leave high school unprepared for college, and they almost always face barriers when transferring from one higher education institution to another. The only solution to these problems is for educators to agree on expectations and learning outcome standards. However, institutional autonomy and sometimes prejudice act as barriers to faculty dialogue across institutional boundaries. It is rare for community college faculty and administrators to interact with their colleagues in high schools -- and interaction between university and community college faculty is just as rare. 

Why should we be surprised when students leaving high school are often not ready to succeed in college or when the transition between community college and university is not as seamless as it should be for students? If we are serious about increasing the rates of success for students, educators will need to come together to begin important discussions about standards for curriculums and expectations for students.

Despite the best intentions of legislators, government policies often force the focus of institutions away from the mission of student learning. In California, legislation requires community colleges to spend at least 50 percent of their revenue on classroom faculty. Librarians, counselors, student advisers, and financial aid officers are “on the other side of the Fifty Percent Law.” The ratio of students to advisers or counselors is most often greater than a thousand to one. Research clearly demonstrates that investments in student guidance pay off in increased student learning and success. And despite the fact that community college students are the most financially disadvantaged students in higher education, they are less likely to receive the financial aid they deserve. Yet the Fifty Percent Law severely limits what local college faculty and academic administrators can do on their campuses to meet the needs of students in these areas. Clearly, this law is a barrier to increasing student learning and success. Perhaps state legislators and the faculty unions that lobby them do not trust local trustees and administrators to spend resources appropriately, but this law, in its current form, defies common sense if our mission is student learning.

At the federal level, systems of accountability that track only students who are first-time, full-time freshmen to an institution do not make sense in an era when college students are more mobile than ever and in an environment in which most community college students attend part-time.  A few years ago, I met with a group of presidents of historically black universities and encouraged them to work with community colleges to increase the number of students who transfer to their institutions.  The presidents told me that doing so could lower their measured student success rates because transfers are not first-time freshmen, and the presidents were not willing to take that risk. Fortunately, officials in the U.S. Department of Education are aware of this issue and are working to correct data systems. 

There are many other examples of policies and procedures that seem senseless when viewed through the lens of student learning rather than cherished processes and tradition, just as it seems silly that Eastman Kodak did not recognize that its business was photography or that the Swiss watch manufacturers did not understand that their business was to manufacture accurate and affordable wristwatches. 

American higher education today is increasingly criticized for rising costs and low completion rates. Higher education costs have risen at an even faster rate than those of health care; student indebtedness has skyrocketed to nearly $1 trillion; and the United States has fallen to 16th in the world in college completion. In addition, new technologies and innovations may soon threaten established practices.

Challenging the status quo and confronting those with special interests that are not aligned with the mission of higher education can be risky for both elected officials and educational leaders. But given the challenges that we face today, “muddling through” brings even greater risks. Every decision that is made and every policy that is proposed must be data-informed, and policy makers and leaders need the courage to ask how the changes will affect student learning, student success, and college costs. Existing policies and practices should be examined with the same questions in mind. Faculty and staff need to be free of restraining practices so they can experiment with strategies to engage students and to help them to learn.

Colleges and universities are too important for educators to deny the challenges and demands of today and too important for policy makers to pass laws because of pressure from special interests or based on their recollection of what college used to be. Decisions cannot be based on past practices when the world is changing so rapidly. The mission of higher education is student learning, and all of our policies, procedures and practices must be aligned with that mission if our institutions are to remain relevant.  

George R. Boggs is the president and CEO emeritus of the American Association of Community Colleges. He is a clinical professor for the Roueche Graduate Center at National American University.
