How to avoid dysfunctional academic departments (essay)

“I can go this far and not an inch farther,” I said, red-faced, at an English department meeting at the University of Michigan sometime in the early 1980s. “This” -- and my hand theatrically swept the air -- “is where I draw the line.”

My colleague Julie Ellison rose to speak, full of fury at this bullying male. “Well,” she said, “I have my own line and this” -- mocking my gesture -- “is where I draw it.”

When we looked back upon that glare-versus-glare moment of tense conflict, neither Julie nor I could quite remember what it concerned. That would be unremarkable had our inability to recall even the general subject of the fierce debate come years after the altercation, as is likely between close friends of long standing. In fact, our recall failure occurred at a conciliatory lunch just one week after the Gunfight at Lit Crit Corral.

That’s how it goes in departments -- those crucial components of a university -- and especially in humanities departments, where we rely on hyperbole as much as social scientists rely on data. That kind of late-afternoon argument, fueled by fatigue and hunger, over a course requirement or a written policy on which the future of the world appeared to depend, often shrinks to its right proportions after the drive home and a salad, a sauvignon and a salmon.

At least that’s the way it should be, and that is the way it was in the Michigan English department by the time Julie and I had our contretemps. But that’s certainly not the way it ever is in unhappy departments, and dysfunction at the department level, if left untreated, can have ramifications throughout the entire institution.

For example, when I arrived in Ann Arbor in 1972, lunches were not the cure for conflicts but the incitement. I was a naïve 26-year-old assistant professor who had been drawn to academe in part because it appeared to provide a faculty society far superior to the life of, say, an insurance office. That fantasy began to erode as I was taken out to lunches during the first week of the semester by two separate groups of faculty. Each, it became plain in minutes, despised the other and was urging me not so subtly to sign up on their side.

The arguments centered on standards for promotion. I found myself agreeing more with the more rigorous group, but I found it natural to have friendships that defied the divide. Nonetheless, I quickly learned that you do not invite X and Y to the same party, especially if it is a small one. And every spring before the elections to the department’s executive committee, there would be two caucuses.

We were an unhealthy culture, but we were surrounded by a mostly healthy college culture, which reacted to our warfare predictably -- and rightly. We were not placed in the dreaded receivership, but the dean made certain that we received few replacement faculty members or other optional benefits. There are plenty of other departments, was the implication, and as long as you have such opposing lunch cabals, those other departments will eat your collective lunch.

That’s the problem with departmental cultures that have failed. However distinct the battling antagonists, they have one thing in common: everyone loses.

Several years ago I was a member of an external review of a department at one of the universities in the State University of New York system where conflict was so rife that the five of us were registered in a local hotel under false names so that no one could get at us with their side of the conflict. The creative writers, the literary historians, the theorists and the composition experts each wanted to secede -- taking, of course, all of the departmental resources with them. At a faculty meeting organized for us, one colleague’s highly emotional diatribe was greeted by the response of another, who stood and said with startling simplicity, “Well, eff that.” The original speaker unhesitatingly replied, “Well, eff you.” “If you behave this way,” I cried, “you are all effed.”

And they all were. We wrote what we believed was a strategic report allowing each of the secession scenarios to play out and showing how each would be a disaster for all concerned. Amazingly, the report’s insistence that the factions needed to disarm and learn to work together actually succeeded -- but, less amazingly, only for a year or two, after which the wars resumed and the department was placed under college control.

Why is departmental culture so often unsuccessful? For one thing, it is understudied. Look at 20 tomes on the crises facing higher education, and you probably will not find one that discusses the life of individual departments as a key factor. Yet it is the primary place where we academics live, more there than in a college, much less a whole university. When I became chair of the English department at Michigan, after perhaps 15 years as a faculty member, and thereby began to meet colleagues in a vast array of disciplines, I felt like I had entered a new universe. Before that, my department was my planet and the universe-ity was the far-off sky.

Beyond Dysfunction

We have not thought nearly enough about departmental cultures. The usual external review rarely provides adequate help, as the visitors themselves are not always adept at leadership issues and may see their task as one of advocating for the department to the supposedly unfeeling dean. A real redo of departmental culture and behavior requires planning backward from shared goals, as moderated by academics or fellow travelers who get it. And when national reforms of higher education sponsored by foundations and agencies are undertaken, they should be informed by studies of department life that do not yet exist. Both at the local and national levels, then, we need to consider departmental culture as an anthropologist would study the life of a tribe and as a wise counselor would minister to a valuable but neurotic patient.

Speaking of which, the larger tribe is the discipline rather than the college, and as long as we continue the questionable practice of mimicking disciplines in the organization of our departments, whatever schisms exist in the discipline are likely to show up in the department to challenge a harmonious community. That is what had taken place in that department where I had been part of an external visiting team, and while that was an extreme situation, it was not a rare one. Composition and literature colleagues sometimes stare at each other like the creatures at the Star Wars bar, wondering, what are you -- or what am I -- doing here? So too historians of medieval China and their modern American history counterparts.

We either must make continually explicit why the branches of a discipline extend from a common trunk or we should reconsider the forest and plant anew. And we might question as well the high number of departments even in the smallest colleges. At one college that I’m familiar with, the classics department consisted of two tenured faculty members who alternated in the role of chair. Since they despised each other, the constant motif resembled revenge tragedy.

In addition to a lack of scrutiny and those disciplinary schisms, one other problem faces many departments: administrative neglect. Some deans and provosts just let the dysfunction go on, refusing to take sides or even take conciliatory action for fear of descending into the muck. That kind of leadership cowardice is not rare.

Herewith, I offer three fast suggestions and one slow one.

  • Find Ms. or Mr. Right. Many colleges and universities simply allow the departmental chairing responsibility to rotate among tenured faculty. That is insanity. Leadership is a precious and vital talent, and no department can thrive without a fine leader. No one would think it wise to rotate the college presidency or a deanship in this manner. Why, then, rotate the vitally important departmental chairs, who collectively matter as much as or more to institutional well-being?
  • Provide an incentive. It could be a bump in salary, a course release, a research assistant or a combination of all these. Because, let’s face it, chairing is hard work and heart work. It requires an all-in dedication as well as the ability to match the institution’s overall goals with the department’s own. (The chair must interpret the president’s broad bromides and tell her, “This is what we in our department think you meant by that.”) The chair must also serve as a friend/analyst for each faculty member, while coordinating what each wants to do with an overall program that best serves students. And beyond all that, the chair must foster a “let’s try it” spirit among a group that is better known for criticizing than for entrepreneurial zeal.
  • Allow space to govern. Too often shared governance devolves into snared governance. Shared governance needs to be real and clear so that all faculty members have skin in the game, but that clarity must include a territory for the chair to have some freedom and discretion. Without that, no incentive will be adequate. With it, a culture will depend less on the individual leader and create a tradition of both leadership and collegiality.
  • Reconsider the overall departmental structure. This is the slow suggestion, but it is worthy of the highest thought. Is it best to equate disciplines and departments, given so many ambiguities and anomalies? Might fewer be far better? How do we make the multidisciplinary more than an add-on while we never take anything away and thus everything gets thinner? This rethinking of a college or university ought not to be just slow, but continuous. Colleges compete to distinguish themselves with one or another gimmick and yet all offer very similar smorgasbords of fields. Would not basic redesign provide a more dignified distinction?

Healthy Conflict and an Esprit de Corps

None of this can happen without leadership beyond the department, encouraging the right people to act in the right ways. In Michigan English, the college leadership walked the talk, and two superb chairs -- who also had friendships that crossed boundaries and who practiced integrity like master musicians -- patiently eased the conflicts. The first used his naturally diplomatic manner and his obvious goodwill to dissolve the warring camps. The second inspired us with calmly and honestly stated high goals that people beyond the college came to respect. Neither ever lied. Each knew how to say two magic words when they made a decision that didn’t work out: “I’m sorry.” But because they were both remarkably strategic as well as fine, they did not have to say that very often.

They left me, as their successor, with a department full of healthy conflict within a context of great esprit de corps. We all remembered the bad old days, and we became willing -- if not at once, at least by lunch the next week -- to let what we held in common rule over difference or to refuse difference its devolution into personal dislike. “Human beings can be awful cruel to each other,” Huck Finn remarks, but human beings can also learn to value shared achievement over fiercely held dogmas. During one of those good-spirited years, a friend from another university who visited our party at the Modern Language Association conference wrote to me in wonder, “It was like halftime in the locker room of a winning basketball team.”

Departments matter more than anything else. And however -- and with whomever -- they choose to do it, they would do well to survey themselves and their students and to adopt the suggestion of David Grant in The Social Profit Handbook that the staff members of any nonprofit be challenged to ask themselves, “What would we look like if we were really successful?” And then, “What would we look like if we were even more successful?” And to plan backward from there.

If no consensus emerges, the only right response is to question the very departmental structure and seek new forms of organization. But if there really is a reason why we in a department are all here together, then what are those deep values that we share? And how do we promulgate them, in large ways and small, with our students foremost in consideration?

If we really want a new era of triumph for the liberal arts, it cannot happen in any unit larger than a single department until it happens there.

Robert Weisbuch, a former president of the Woodrow Wilson National Fellowship Foundation and of Drew University, now leads Robert Weisbuch and Associates, a consultancy for liberal arts colleges and universities.

Flagler College flooded by hurricane as other campuses wait for the storm to pass

Flagler flooded; students at many campuses are evacuated; many colleges in region hit by Hurricane Matthew will remain closed Monday.

U of Richmond Adopts New Policies on Sex Assaults

The University of Richmond, which has been criticized this fall by students and others who say that the institution has not properly handled allegations of sex assaults, on Friday announced reforms in its procedures. In a statement to the campus, President Ronald A. Crutcher announced the creation of a new Center for Sexual Assault Prevention and Response, the separation of investigations from undergraduate colleges so that the inquiries would be handled in a centralized way, and the creation of an advisory committee to suggest further changes in policy.

On social media, student groups that have been pushing Richmond to do more to prevent and punish sex assaults said that the university was moving in the right direction. But some said that more still needed to be done, particularly on some past allegations of sex assault.

Why you should use football tailgating to find a job (essay)

Katharyn L. Stober describes how and why you should use tailgating and football lingo in your job search.

B Lab Releases Standards for Colleges

B Lab is a nonprofit group that issues a seal of approval to companies across 120 industries that adhere to voluntary standards based on social and environmental performance, accountability and transparency. After two years of work, the group on Friday released a new benchmarking tool for colleges. The voluntary standards are designed to enable comparisons of both nonprofit and for-profit institutions.

"B Lab recognizes that the cost and outcomes of higher education, particularly regarding for-profit institutions, have become increasingly controversial, but regardless of structure institutions should put their students’ needs first," Dan Osusky, standards development manager at B Lab, said in a written statement. "We see our role as the promoter of robust standards of industry-specific performance that can be used by for-profits and nonprofits alike to create the greatest possible positive impact and serve the public interest, ultimately by improving the lives of their students."

A committee of experts, working with HCM Strategists and with funding from the Lumina Foundation, devised the standards. Laureate Education, a global for-profit chain, already uses the assessment tool.

Man With Machete Killed by Police at Boulder

A man with a machete was shot and killed by police officers when he was wielding the weapon in the sports medicine facility of the University of Colorado at Boulder, The Denver Post reported. “Given the weapon the suspect was armed with, given the statement already made to our initial victim and given the nature of how he was maneuvering through the Champions Center, we believe it was in the best interest of the university that it was a deadly force situation,” said the university's campus police chief, Melissa Zak, at a news conference.

An evaluation of whether performance funding in higher education works (essay)

More than 30 states now provide performance funding for higher education, with several more states seriously considering it. Under PF, state funding for higher education is not based on enrollments and prior-year funding levels. Rather, it is tied directly to institutional performance on such metrics as student retention, credit accrual, degree completion and job placement. The amount of state funding tied to performance indicators ranges from less than 1 percent in Illinois to as much as 80 to 90 percent in Ohio and Tennessee.

Performance funding has received strong endorsements from federal and state elected officials and influential public policy groups and educational foundations. The U.S. Department of Education has urged states to “embrace performance-based funding of higher education based on progress toward completion and other quality goals.” And a report by the National Governors Association declared, “Currently, the prevailing approach for funding public colleges and universities … gives colleges and universities little incentive to focus on retaining and graduating students or meeting state needs …. Performance funding instead provides financial incentives for graduating students and meeting state needs.”

But with all this state activity and national support, does performance funding actually work? As we report in a book being published this week, Performance Funding for Higher Education (Johns Hopkins University Press), the answer is both yes and no.

Based on extensive research we conducted in three states with much-discussed performance funding programs -- Indiana, Ohio, and Tennessee -- we find evidence for the claims of both those who champion performance funding and those who reject it. In keeping with the arguments of PF champions, we find that performance funding has resulted in institutions making changes to their policies and programs to improve student outcomes -- whether by revamping developmental education or altering advising and counseling services.

Underpinning those changes have been increased institutional efforts to gather data on their performance and to change their institutional practices in response.

But we often cannot clearly determine to what degree performance funding is driving those changes. Many of the colleges we studied stated they were already committed to improving student outcomes before the advent of performance funding. Moreover, in addition to PF, the states often are simultaneously pursuing other policies -- such as initiatives to improve developmental education or establish better student pathways into and through higher education -- that push institutions in the same direction as their PF programs. As a result, it is nearly impossible to determine the distinct contribution of PF to many of those institutional changes.

Meanwhile, supporting the arguments of the PF detractors, we have not found conclusive evidence that performance funding results in significant improvements in student outcomes -- and, in fact, we’ve discovered that it produces substantial negative side effects. In reviewing the research literature on PF impacts, we find that careful multivariate studies -- which compare states with and without performance funding and control for a host of factors besides PF that influence student outcomes -- largely fail to find a significant positive impact of performance funding on student retention and degree attainment. Those studies do find some evidence of effects on four-year college graduation and community college certificates and associate degrees in some states and some years. However, those results are too scattered to allow anyone to conclude that performance funding is having a substantial impact on student outcomes.

Various organizational obstacles may help explain that lack of effect. Many institutions enroll numerous students who are not well prepared for college. In addition, state performance metrics often do not align well with the missions of broad-access institutions such as community colleges, and states do not adequately support institutional efforts to better understand where they are failing and how best to respond.

Even if performance funding ultimately proves to significantly improve student outcomes, the fact remains that it has serious unintended impacts that need to be addressed. Faced both by state financial pressures to improve student outcomes and substantial obstacles to doing so easily, institutions are tempted to game the system. By reducing academic demands and restricting the enrollment of less-prepared students, broad-access colleges can retain and graduate more students, but only at the expense of an essential part of their social mission of helping disadvantaged students attain high-quality college degrees. Policy makers should address such negative side effects, or they could well vitiate any apparent success that performance funding achieves in improving student outcomes.

In the end, performance funding, like so many policies, is complicated and even contradictory. To the question of whether it works, our answer has to be both yes and no. It does prod institutions to better attend to student outcomes and to substantially change their academic and student-service policies and programs. However, performance funding has not yet conclusively produced the student outcomes desired, and it has engendered serious negative side effects. The question is whether, with further research and careful policy making, it is possible for performance funding to emerge as a policy that significantly improves student retention, graduation and job placement without paying a stiff price in reduced academic quality and restricted admission of disadvantaged students. Time will tell.

Kevin Dougherty is a senior research associate at the Community College Research Center, Teachers College, Columbia University and an associate professor at Teachers College. Sosanya M. Jones is an assistant professor at Southern Illinois University. Hana Lahr is a research associate, Rebecca S. Natow is a senior research associate, Lara Pheatt is a former research associate and Vikash Reddy is a postdoctoral research associate, all with CCRC.

Report Proposes Alternate Form of Accreditation

The Center for American Progress today released a report that proposes a "complementary competitor" to the current system of accreditation.

The report describes three primary components for an outcomes-focused, alternative system, which, like current accreditors, would serve as a gatekeeper to federal financial aid.

  • High standards for student outcomes and financial health;
  • Standards set by private third parties;
  • Data definition, collection and verification, as well as enforcement of standards by the federal government.

"If implemented, this new system would provide a pathway to address America’s completion and quality challenges through desperately needed innovation," the report said. "And it would do so while establishing strong requirements to ensure that students and taxpayers get their money’s worth."

Differing Views on Free College, State Disinvestment

Public Agenda, a nonpartisan group, on Thursday released results of two recent national surveys of American adults on higher education. Respondents generally favor using taxpayer money to make public colleges free for students from low- and middle-income families, with roughly two-thirds calling it a good idea. However, the survey found that Democrats are much more likely to like free college proposals (86 percent) than Republicans (34 percent). Respondents were also divided by age, with those under 49 liking the free-college idea (73 percent) more than those who are at least 50 (58 percent).

The group also found a partisan divide on a question about cuts in state government funding of public colleges. Democrats were more likely to call disinvestment a problem (79 percent) than were Republicans (57 percent).

What current college rankings do and don't tell us (essay)

The Wall Street Journal and Times Higher Education have partnered to produce yet another college ranking. Should we applaud, groan, ignore or something else? I choose applause -- with suggestions.

This new project represents a positive step. For starters, any ranking that further challenges the hegemony of what I have termed the “wealth, reputation and rejection” rankings from U.S. News & World Report is welcome. Frank Bruni said much the same thing in his recent New York Times column, “Why College Rankings Are a Joke.”

I traveled the country for two years for the U.S. Department of Education -- I called myself the “listener in chief” -- to hear what students and colleges wanted or worried about in the federal College Scorecard. I explained that one of the most important reasons to develop the College Scorecard was to help shift the focus in evaluating higher education institutions to better questions and to the results and differences that should really matter to students choosing colleges and taxpayers who underwrite student aid. Just this week, the White House put it this way:

By shining light on the value that institutions provide to their students, the College Scorecard aligns incentives for institutions with the goals of their students and community. Although college rankings have traditionally rewarded schools for rejecting students and amassing wealth instead of giving every student a fair chance to succeed in college, more are incorporating information on whether students graduate, find good-paying jobs and repay their loans.

Some ratings have already blazed new trails by giving weight to compelling dimensions. Washington Monthly, for example, improved the discourse when it added public service to its criteria. The New York Times’s ranking of high-performing institutions by their enrollment rates for Pell-eligible students was an enormous contribution to rethinking what matters most. The Scorecard in turn contributed by adding some (but admittedly not all) of the central reasons we as a nation underwrite postsecondary education opportunity: completion, affordability, meaningful employment and loan repayment.

The WSJ/THE entrant also offers positive approaches to appreciate. The project shares some sensible choices with the Scorecard, starting with not considering selectivity. That’s good: counting how many students a college or university rejects often tells us more about name recognition and gamesmanship than learning.

The WSJ/THE rankings also incorporate repayment, which is a useful window that reflects such contributing factors as an institution’s sensitivity to affordability, which in turn includes net price and time to degree. It also grows out of whether students are well counseled and realistic about debt and projected income -- and use their flexible repayment options, like income-contingent choices, so they can handle living costs and also their loan obligations, even if they choose work that pays only moderately.

And it was a very smart choice to use value-added outcome measures, drawing on work by the Brookings Institution melding Scorecard information and student characteristics. That approach is designed “to isolate the contribution of the college to student outcomes.” It is also important because value-added metrics help respect the accomplishments of colleges that are taking, or want to increase enrollment of, populations of students that might not graduate or achieve other targets as easily as others.

That said, the WSJ/THE transition to outcomes-based measures is incomplete. A fresh new ranking is a chance to recognize colleges and universities that do a remarkable job in achieving strong results for students at affordable prices. Including a metric for per-student expenditure is an unfortunate relic from old-fashioned input-based rankings. It’s a problem not just because it mixes inputs and outcomes. More significantly, it clouds the focus on results, giving an advantage at the starting gate to incumbents that simply have a lot of money, even if other institutions are achieving better results more economically. That’s counterproductive when the goal should be to identify and reward institutional efficiency and affordability measures that generate good results as quickly as possible.

Even Better Questions

But it’s not too late. The tool already allows sorting by the component pillars, and Times Higher Education plans to work with the data to explore relationships and additional questions. WSJ/THE could rerun their analysis without the wealth measure and its conservative influence to see whether and how that alters the rankings. It will be interesting to see whether publics rise if resources are factored out. I’d like to think that a true results-based version would surface colleges and universities, perhaps a bit lesser known, that outperform their spending and bank accounts. Whether institutions achieve that through innovation, culture, dedication or some other advantages or efficiencies, it’s well worth a look.

These rankings include a new dimension, using a new 100,000-student survey that asks students about the opportunities their colleges and universities provide to engage in high-impact education practices. The survey also asks about students’ satisfaction and how enthusiastically they would recommend their institutions. We definitely need fresh ways to understand such education outcomes as well-being, lifelong learning, competence and satisfaction. How effectively does this survey, or the Gallup-Purdue survey, answer that need?

My first question is whether this really fits the sponsors’ desire to focus on outcome measures. Is it just an input or process measure, albeit an interesting one? I’d also like to know more about the relationship of opportunities to actual engagement in those practices, and I wonder how seriously or consistently students answered the questions. What does it mean to different students to have opportunities to “meet people from other countries,” for example, and does simply meeting them make for a more successful education?

This is also my chance to ask about the strength of the causal links between the opportunity to participate in a particular educational practice (whether an internship or speaking in class) and the outcomes, from learning to job getting and performance, with which they may be associated. That concern applies to items with the WSJ/THE “Did you have an opportunity to …” format, and it is most vivid for the Gallup-Purdue study. Maybe the people whose personalities and preferences incline them to choose projects that give them close, long-term connections with faculty members -- or whom faculty members chose for those projects based on characteristics including charm, social capital or prior experiences -- are engaging, optimistic and, yes, privileged in ways that would also make them engaged and healthy later in life. Was the involvement with those practices in college really causal?

To evaluate the WSJ/THE question about recommending the college or university to others, it would be valuable to know more about the basis for the students’ responses: Were they thinking about academic or quality-of-life considerations? Past experience or projections for how their educations would serve them later? I wonder if their replies were colored by an intuition that their answers could affect their institution’s standing, thus kicking in both their loyalty and self-serving desire to promote it. In short, it’s hard to know how much these new measures tell us. But it’s a worthy effort to build new tools beyond the few crude metrics we have now for understanding differences among institutions.

Serious Work to Do

On a broader level, none of the ratings and rankings have made much progress in expressing learning outcomes, although we urgently need more powerful and sensitive ways to articulate them. Whether or not students have developed the knowledge and skills, the capacities and problem-solving abilities, appropriate to their program should be at the heart of any assessment of educational results.

It’s also time to move beyond Washington Monthly’s good but simple additions to capture intangible and societal outcomes more successfully. Through that failure of imagination, we ratings designers -- and all of us in higher education -- have allowed income to take center stage among outcomes and played into the damaging transformation in perception of higher education from a public to a private good.

I said many times in the Scorecard conversation that it’s sensible and understandable for families to want to know if an educational experience would typically generate what they would consider a decent living, including the ability to handle the loans assumed to pay for it, and how employment or income results compare across programs or institutions. But, I went on, that does not mean that income can stand alone as though that’s all that matters. If an affordable housing advocate or journalist is satisfied in her preparation to do that work, or a dancer or teacher got exactly the education he needs to pursue his goals and be a good citizen, how can we measure and reflect that? This is not a news flash but an echo: we need better ways to communicate with families and students about how higher education makes a difference to both the student and society far beyond just economic returns.

There’s other work to be done in the college information enterprise. This month, as the Scorecard celebrated its first anniversary, the Education Department marked the occasion with a data refresh and also wove together new partnerships to expand the Scorecard’s value. One of the big challenges for any useful college information source is to make sure it reaches the students who need it the most: the ones who aren’t sure if, or may already doubt that, they can afford college, who know least about the options available, who are uncertain about the outcomes they can expect from college or at different schools. They’re the ones who suffer from the severe “guidance gap” at far too many high schools and among many adult populations. This intensified collaboration among government, counselor organizations and higher education institutions at every level is a wise strategy.

Ultimately, however, the most transformative role of the well-conceived rankings and scorecards will turn out to lie not in whether every student reads them but in their value in supporting institutional improvement. They are a gold mine for benchmarking and can help institutions choose which outcomes really matter and then work across functions to improve them.

The fact is that for decades colleges have invested far too much energy striving -- or playing games -- to get better on measures that don’t really matter. Asking smarter questions about genuinely significant priorities can help us graduate more low-income students into rewarding work and find affordable paths to solid learning outcomes for citizens and workers. Better ratings, continuously improved and building on each other’s contributions, give us a chance to put higher education’s intense competitive energies into worthy races.

Jamienne S. Studley, former deputy under secretary of the U.S. Department of Education and president of Skidmore College, is national policy adviser with Beyond 12 and consultant to the Aspen Institute, colleges and nonprofits.
