Assessment will make higher education accountable. That’s the claim of many federal and state education policy makers, as illustrated by the Commission on the Future of Higher Education. Improved assessment has become for many the lever to control rising tuition and to inform the public about how much students might learn (and whether they learn at all). But many in higher education worry that assessment can become a simplistic tool -- producing data and more work for colleges, but potentially little else.
Has the politicization of assessment deepened the divide between higher education and the public? How can assessment play the role wished for by policy makers to gauge accountability and affordability and also be a powerful tool for faculty members and college presidents and provosts to use to improve quality and measure competitiveness? Successful policies will include practices that lead to confidence, trust and satisfaction -- confidence by faculty members in the multiple roles of assessment, trust by the public that assessment will bring accountability, and satisfaction by leaders such as presidents that assessment will restore the public’s confidence in higher education. A tall order to be sure, but we believe assessment -- done correctly -- can play a pivotal role in resolving the current debate on cost and quality.
For confidence, trust and satisfaction to occur, higher education and public officials must each take two steps. Higher education must first recognize that public accountability is a fact and an appropriate expectation. This means muting the calls by public higher education for more autonomy from state and federal government based simply on the declining percentage of the annual higher education budget provided by public sources. This argument may help gain the attention of policy makers regarding the financial conundrums in higher education, but it is not a suitable argument against accountability. Between federal and state sources, billions of dollars have been invested in higher education over the nearly 150 years of public higher education. The public deserves to know that its investments of the past are being used well today -- efficiently and effectively.
In response, federal and state policy makers need to publicly embrace the notion advocated as early as 1997 that quality is based on “high standards not standardization.” Higher education’s differentiation is a great gift to America. The cornerstone of American higher education -- institutions with a diversity of missions -- is meeting the educational needs of different kinds of students with different levels of preparation and ability to pay. It is important to recognize that assessment must match and reinforce the pluralism of American higher education. America is graced with many different kinds of colleges -- private, public, religious, secular, research, etc. It is important to have an assessment system that encourages colleges and universities to pursue unique missions.
A second step is for higher education to make transparent the evidence of quality that the public needs in order to trust higher education. “Just trust us” is no longer sufficient; higher education has flexed its independence in setting ever-increasing tuition rates despite the public’s belief that those increases have been excessive. Trust is built on transparency of evidence, not mere declarations of quality. Practically speaking, a few indicators of quality that cut across higher education will be required. For example, surrogate and indirect measures of learning and development captured by student surveys, amount of need-based financial assistance, dollars per student invested in advising services, and dollars per faculty member dedicated to instructional and curricular development are some possibilities. Public opinion is heavily on the side of legislators and members of Congress on this issue.
For public policy makers, it is imperative to accept the notion that to assess is to share the evidence and then to care. Caring requires action and support, not just criticism. Public policy makers must educate themselves about the complexity of higher education teaching, research and public engagement. This means accepting that the indicators of quality of the work of the academy are complex, as they should be. Whatever indicators are chosen, the benchmarks will vary by type of college or university. Take graduation rates as an example. Inevitably, highly selective colleges and universities are much more likely to have higher graduation rates than those with access as a goal. The students being admitted to the highly selective colleges and universities already have demonstrated their ability to achieve and have the study skills and background to be successful in college. Open access colleges and universities, on the other hand, have a greater percentage of students who are at risk, need to develop study skills in college, and are in general less prepared for the rigors of college study when compared to those with high achievement records out of high school. But these characteristics -- which frequently also result in lower graduation rates -- do not make these colleges and universities inadequate or not worthy of public support. Many great thinkers have said that a nation can be judged by how it treats its poor; this same argument works for education. The goal for everyone is to do better, starting where the students are -- not where we would like them to be when admitted.
With both sides changing their approaches, the public and higher education can productively focus on how together they can use assessment as an effective tool to determine quality and foster improvement. In doing so, we offer eight recommendations that, if followed, can give faculty the confidence they demand that assessment is a valid tool for communicating the evidence of student learning and development; give presidents the satisfaction that, when all is said and done, the effort will have been worth it; and give the public the trust that higher education is responsive to its concerns.
1. Recognize that assessment can serve both those within the academy and those outside of it, but different approaches to assessment are required. Faculty members and students can use assessment to provide the feedback that creates patterns and provides insight for their own discussion and decision making. To them assessment is not to be some distant mechanical process far removed from teaching and learning. On the other hand, parents, prospective students, collaborators, and policy makers also can benefit from the results of assessment but the evidence is very different. Through institutional assessment, they can know that specific colleges and universities are more or less effective as places to educate students, which types of students they best serve, and the best fit for jointly tackling society’s problems.
2. Focus on creating a culture of evidence as opposed to a culture of outcomes. Language and terms are important in this endeavor. The latter implies a rigidity of ends, whereas the former reflects the dynamic nature of learning, student development and solution making. A “teaching for the test” mentality cannot be the goal for most academic programs. We know from experience that assessment strategies that have relied most heavily on external standardized measures of achievement have been inadequate to detect with any precision the complex learning and developmental goals of higher education, e.g., critical thinking, commitment, values.
3. Accept that measurement of basic academic and vocationally oriented skills and competencies may be appropriate for segments of the student population. For example, every time we get on an airplane we think of the minimum (and, we hope, high) standards of the training of the pilots and the rigorous assessment procedures that “guarantee” quality assurance.
4. Avoid generic comparisons between colleges and universities as much as possible. A norm-referenced approach to testing guarantees that half of the colleges and universities will fall below the median. The goal is not to be above average on some arbitrary criterion, but to achieve the unique mission and purpose of the specific college and university. A better strategy is to build off one’s strengths -- at both the individual and institutional level. Doing so reinforces an asset rather than a deficit view of both individual and institutional behavior, leading to positive change and pride in institutional purpose. In order to benchmark progress, identify similar institutions. Such practices will encourage more differentiation in higher education and work to stem the tide of institutions clamoring to catch up with or be like what is perceived as a more prestigious college or university. "Be what you are, do it exceptionally well, and we will do what we can to fund you" would be a good state education policy.
5. Focus on tools that assess a range of student talent, not just one type or set of skills or knowledge. Multiple perspectives are critical to portraying the complexity of students’ achievements and the most effective learning and development environments for the enrolled students. All components of the learning environment, including student experiences outside the classroom and in the community, must be assessed. We must measure what is meaningful, not give meaning to what we measure or test. Sometimes simple quantitative data such as graduation rates and records of employment are sufficient and essential for accountability purposes. But to give a full portrayal of student learning and development and environmental assessment, many types of evidence in addition to achievement tests are needed. Sometimes portfolio assessment will be appropriate, and at other times standardized exams will be sufficient.
6. Connect assessment with development and change. Assessment has been most useful when driven by a commitment to learn, create and develop, not when it has been mandated for purposes of administration and policy making. Assessment is the means, not the end. It is an important tool to be sure, but it always needs to point to some action by the participating stakeholders and parties.
7. Create campus conversations about establishing effective environments for the desirable ends of a college education. Assessment can contribute to this discussion. In its best form, assessment focuses discussion; it does not make decisions. People do that, and people need to be engaged in conversations and dialogue in ways that focus not on the evidence but on the solutions. As we stated earlier, to assess is to share and care. When groups of faculty get together to discuss the evaluations of their students, they initially focus, somewhat defensively, on the assessment evidence (and the biases inherent in such endeavors), but as they get to know and trust each other they focus on how to help each other improve.
8. Emphasize assessment’s role in “value added” strategies. Assessment should inform the various publics about how the educational experiences of students, and the institution’s engagement with the larger society, bring value to students and to society. All parties need to get used to the idea that education can be conceptualized and interpreted in terms of a return on investment. But this can only be accomplished if we know what we are aiming for. This will be different for each college and university, and that is why the dialogue with policy makers is so crucial. For some, the primary goal of college will focus on guiding students in their self-discovery and contributing to society; for others it will be more on making a living; for yet others on understanding the world in which we live.
When both the public and higher education accept and endorse the principle that assessment is less about compliance or standardization and more about sharing, caring and transparency, then confidence, trust and satisfaction will be more likely. We believe that higher education must take the lead by focusing on student learning and development and engage with the public in collaborative decision making. If not, policy makers may conclude that they have only the clubs of compliance and standardization to get higher education’s attention.
Larry Braskamp and Steven Schomberg
Larry A. Braskamp, formerly senior vice president for academic affairs at Loyola University Chicago, is professor of education at the university. Steven Schomberg retired in 2005 as vice chancellor for public engagement and institutional relations at the University of Illinois at Urbana-Champaign.
Many people think they know what we should produce with the process we call a college education. Unfortunately, they don’t agree with each other, so the topic of measuring college success provides an endless opportunity for self-assured clarity about what is not at all clear. The current occasion for the revival of this topic, which has had various other high and low points on the national accountability agenda, comes from the Spellings commission’s discussion and draft reports that call for colleges and universities to tell their customers what the college will produce for students.
This seemingly reasonable request is like most high level educational principles: dramatic and simple in general and remarkably complicated and difficult in specific. Let’s look at some of the complications.
The product of a college degree is, of course, the student. Many want to assure parents and other customers that their students will emerge from the process of higher education with a specific level of skills and abilities. Recognizing the difficulty and expense of enforcing exit testing on all students, some propose to test a sample of students and infer from the results an achievement score for the institution that customers can then compare with the scores from other institutions. Leaving aside for the moment the touchy question of exactly what we want the students to know, testing that produces a raw institutional score is not likely to work very well by itself.
Everyone knows that smart, well-prepared freshmen usually end up as smart, well-prepared graduating seniors. If students test well entering the institution, they are very likely to test well exiting the institution. Our egalitarian spirit worries that institutions whose students are less smart and less well prepared will necessarily score low on these exit tests in comparison to elite institutions with very well-prepared students. Every institution that works hard to improve its students’ abilities should get a good score, because the idea of improvement inspires everyone. A method to ensure that every institution, whatever the initial quality of its students’ preparation, can score well on a national scale goes by the term “value added.”
Value-added methods attempt to measure the ability and preparation of students when they enter the institution, measure the ability and achievement of the students as they leave the institution, and then calculate an improvement score. Value added ascribes the improvement score to the wisdom and dedication of the institution (even if the achievement is actually the students’).
A value-added score, calculated using the same methodology for all higher education institutions in America, would enable an institution with limited resources that admits students with very poor high school records and very low SAT scores but graduates students who have pretty good GRE scores (as an example of an exit exam) to get a 100% score because the improvement or value-added is large. Colleges with superb facilities and resources that admit students with very high SAT scores and very fine high school preparation and graduate students with very good GRE scores could get a 50% score because the improvement measured by the tests would be modest (from terrific coming in to terrific going out). Then, in the national rankings, the first institution could claim to be a much better institution for improvement than the second one.
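The arithmetic behind this kind of ranking inversion can be sketched in a few lines. This is a hypothetical illustration only: the entry and exit scores, the common 0-100 scale, and the simple subtraction formula are assumptions for the sketch, not any official value-added methodology.

```python
# Hypothetical illustration of the value-added arithmetic described above.
# Scores and scale are invented; no real methodology is implied.

def value_added(entry_score, exit_score):
    """Improvement score: exit achievement minus entry achievement."""
    return exit_score - entry_score

# An under-resourced, open-access college: weak entering scores,
# middling exit scores -- a large measured improvement.
open_access = value_added(entry_score=40, exit_score=70)  # 30 points gained

# A selective college: strong entering scores, strong exit scores --
# little room left to improve, so a small measured improvement.
selective = value_added(entry_score=90, exit_score=95)    # 5 points gained

# The value-added ranking inverts the exit-score ranking: the open-access
# college "wins" even though its graduates score lower in absolute terms.
assert open_access > selective
```

However the improvement score is normalized, the structural point survives: an institution whose graduates score lower in absolute terms can outrank one whose graduates score higher, which is exactly the inversion criticized in the paragraphs that follow.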
This discourse fools no one and would actually tell consumers that the institution they want their students to enroll in is the one that has high scores going in and high scores going out rather than the one that has low scores going in and medium scores going out. What matters, as everyone knows, is the score leaving the institution.
This approach also has the perverse effect of devaluing actual accomplishment and ability in favor of improvement. It implies that a student is doing just as well at an institution that graduates at the middle level of accomplishment (but with lots of improvement) as the student would do at an institution that graduates at the top level of accomplishment (but with less improvement).
It does the employer and the student no good to know that the student attended an institution that produces middle level performance from very poor preparation. The employer wants a graduate who has high performance, high skills, high levels of knowledge and ability. The employer is likely less interested in knowing that the student had to work hard to be a middle level performer and more interested in hiring someone with a high level of performance.
If we measure value added (by whatever means), we have to create a test for the end point: what the graduating student knows about the specific subjects studied, about the specific major completed. When we test for what the student knows about the substance of the various fields of study, on some national scale, then we will have a marker for achievement. Once we have this marker for achievement, no one will care much about the marker at the entry level. Everyone will want their student to be in an institution whose scores demonstrate high levels of graduating achievement. It may give struggling institutions a sense of accomplishment to move students from awful preparation to modest achievement, but it will not change the competitive nature of the marketplace nor will it reduce the incentive to get the very best students who will, even if they don’t improve at all, score high on exit exams.
In this discussion, as is true in all efforts to measure institutional quality and performance, nothing is simple and no single number or measure will achieve that national reference point for total college achievement. College, as so many of us repeat over and over, is a complicated experience. There is no standardized college experience.
What we have is a relatively standardized curriculum and time frame. We have a four to five year actual or virtual educational process for students pursuing a traditional four-year baccalaureate degree, we have a general education requirement and a major requirement, and we have a host of extra or enhanced optional or required experiences for students. Within these large categories, the experience of students, the learning of students, and the engagement of students varies dramatically from discipline to discipline within institutions as well as between institutions.
Much of the emphasis on accountability measurement has as its premise the highly destructive goal of homogenizing the content and process of American higher education so that all students have the same experience and the same process. This centralizing drive comforts regulators, but it does not reflect the reality of the marketplace. As we have emphasized before, the American commitment to universal access to higher education requires a high level of variability in institutions, in the educational process, and in the outcomes. We do need good data from our institutions about what they do and what success their graduates have, but we do not need elaborate, centralized, homogeneity enforced by an ever more intrusive regulatory apparatus.
Look around you. Virtually everyone in the room is engaged in a job different from the one they prepared for in college.
This tells a story of a process that transcends content and curriculum, a process that goes beyond training, to the point where education actually took place. You and your colleagues underwent a transformation in the 1,800 or so hours you spent in the classroom interacting with your peers and with 40 or so faculty members at one level or another. You emerged from college having developed the ability to listen, to assimilate, to learn on your own, to project your own insights, opinions and views.
Some faculty members taught you how to think, how to challenge, to have confidence and to be independent. Most of you acquired the ability to analyze and to synthesize. Many acquired a love of learning for its own sake. You found faculty members with a wide variety of skills and goals; some tried to teach you content, as well as discernment. Others projected a point of view and welcomed a contrary view, if well supported.
In all this time, you also acquired knowledge, most of which is long gone. But you are still a different person from the high school graduate who entered college as a freshman. You learned how to read analytically and critically, you began to appreciate the role of originality and creativity. You know how to formulate and defend a hypothesis. And you learned how to assimilate the ideas of others and to interact, whether to support or to disagree.
There is so much else that you acquired, and when you graduated it was not just because you passed a number of courses. The structure, the faculty, the ever more demanding senior courses, the coherence of your major, and the qualities of mind, marked you as a successful outcome.
You are the reason the colleges are proud of what they do and your accomplishments represent the performance that colleges and universities point to in developing and justifying their reputation. Reputations are not developed in a vacuum. You, your parents, your children, your colleagues and your peers are the living remnants of the college experience. Your success justifies the massive resources poured by private Americans into supporting colleges and universities. And your success validates the vocation that characterizes the role of so many faculty members.
There is something special about American higher education, which continues to produce some of the world's greatest scientists and engineers, thinkers and scholars. There is something unique in the education we offer, which provides a breadth, an intellectual depth to accompany the skills and aptitudes of the specialist. And there are the human successes in sectors whose mission is to produce an involved, thinking citizenry.
Not everyone agrees that American higher education is characterized by success. Numbers are quoted indicating that the quality of graduates is not what it used to be. But they forget that sometimes the numbers go down as the numbers go up. As American higher education welcomes people less prepared, less gifted and often less motivated, as the atmosphere at some colleges becomes less rarefied by the proliferation of remedial education, the average accomplishment will go down.
Nonetheless they insist it is time to measure learning outcomes. We are to select slices of the educational experience -- those slices that can be measured -- and somehow draw conclusions about all learning. Unfortunately, that which can be measured usually excludes the most important characteristics of a person's education. Depending on the consequences of these measurements, colleges will teach to the test and so, too, will faculty. Everyone wants to succeed, and if success is going to be defined by those outside academe, it is learning and teaching that will feel the pain first. In the end all of society will suffer.
Tragically, the intellectual immersion, which you yourselves recognized as characteristic of the totality of your undergraduate experience, will be compromised. That will happen precisely at the time when young people from emerging communities arrive at the gates of our colleges and universities, desperately needing this kind of intellectual immersion.
In the end, higher education has responded to the call for broad measures of learning outcomes. Several national organizations have committed to encouraging member institutions to experiment in this direction. But we must remember we are talking about experiments. These efforts must remain pilot projects subject to validation carried out within academe. We must further insist that the use of such measures be based on inherent value, rather than governmental mandate.
Government has heard from all the others; it is time to hear from us. From you.
Bernard Fryshman is executive vice president of the Association of Advanced Rabbinical and Talmudic Schools’ Accreditation Commission.
The secretary of education’s Commission on the Future of Higher Education unequivocally advances the notion that the “business” of colleges and universities -- defined primarily in the final report as “preparation for the work force” -- is best advanced by the disclosure of data allowing institutions to be compared to one another, particularly in measurements of student learning. Standardized testing of all college students would be required to produce those comparative quantitative data. Such universal application of testing is put forward as the guarantee of accountability for what this American democracy requires most essentially from its higher-education institutions. In other words, what has already been applied with mixed success to pre-collegiate education is now to be applied to higher education. In addition to the No Child Left Behind Act, we are to have what might be called No College Left Behind.
In the nation’s current zeal to account for all transfer of teaching and insight through quantitative, standardized testing, perhaps we should advance quantitative measurement into other areas of human meaning and definition. Why leave work undone?
I suggest, for example, that a federal commission propose an accountability initiative for those of faith (not such a wild notion as an increasing number of politicians are calling the traditional separation of church and state unhealthy for the nation). This effort should be titled No God Left Behind. The federal government would demand that places of worship, in order to be deemed successful, efficient and worthy of federal, state and local tax-support exemption, provide quantitative evidence of the effectiveness of their “teaching.” (Places of worship are not unlike colleges and universities in that they are increasing their fund-raising expectations -- their form of “price” -- because of increasing costs.) The faithful, in turn, would be required to provide quantitative evidence of the concrete influence of their respective God upon behaviors within a few years of exposure -- say four years.
And in keeping with the Commission on the Future of Higher Education’s suggestion that one test would be appropriate for all types of higher-education institutions regardless of mission -- liberal-arts colleges, private research universities, public research universities, community colleges, for-profit online universities, vocational schools -- a standardized test would be applied to a person of faith, whether Christian, Jew, Muslim, Hindu or a member of another “approved” religion. Additionally, a pre-test would be given to the faithful upon initial engagement with their respective God and place of worship, and would be followed by a post-test after four years to assess “value added.”
Of course, I really don’t think No God Left Behind is a good idea. The reasons why also are applicable to No College Left Behind and No Child Left Behind. Most people of faith, I believe, would argue that faith lies beyond mere human quantitative measurement to validate its worth, that it exists in a variety of forms (only the most radical would argue for the exclusion of faiths that fail a test), and that its effects on human beings may not be immediately evident. None of these assertions, of course, makes faith for believers any less real as a source of improving the quality of human life.
My case for faith continuing to flourish for those who wish it, without proof through standardized testing, shares critical affinities with my argument for higher education not being universally subject to quantitative assessment. There are at least four inter-related issues that confound the Commission’s absolutism towards quantitative measurement to solve the imagined knowledge deficit and lack of contribution to the nation by American higher education.
First, quantitative testing, to be applicable, must have as its subject that which can be empirically assessed. Such limitation leaves out critical areas of human knowledge, meaning and definition that are not readily subject to immediate empirical assessment during the course of instruction but are, nevertheless, very real: the development of character through trial and error in a residential setting; an appreciation of the arts and aesthetics; a literary and poetic sensibility; a recognition of the responsibilities of citizenship; an appreciation of liberty and freedom; a spirit of business entrepreneurialism; and creativity and inventiveness in the sciences (and I am not talking solely about the short-term acquisition of cultural, historical and political “fact” in these areas).
The commission’s recommendations -- with their focus on workforce preparation -- might well reduce the scope of what is taught and discussed in those institutions to only those areas that can be indisputably measured by a test. An abiding respect for learning, which is not so obviously technical and thus not measurable through standardized assessment, is rooted deeply in the intentions for a distinctively American higher education by our country’s founders. Indeed, Benjamin Rush, a patriot, signer of the Declaration of Independence and founder of several colleges, including Dickinson, proclaimed this distinctive American relationship among advanced knowledge, abstract concepts and the future well-being of the nation when he said, “Freedom can exist only in a society of knowledge. Without learning, men are incapable of knowing their rights.” The intent of a liberal education is thus defined.
Both propositions are based not on the quantitative assessment of the merely technical, but rather the confidently ambiguous power of existing in a “society of knowledge,” one that would influence learners to a much desired and critically important ideal -- democracy and the diversity of perspective that it secures. There exists in Rush and his co-conspirators, in founding a distinctively American higher education after the end of the revolution, a mature appreciation of the complexity and variety of the instruction necessary to advance a democracy.
Second, and closely related to the perspective of Rush, is that education in America was not intended solely to prepare young people for “the work force” through the empirically demonstrated mastery of a limited set of practical skills. Fundamental literacy, numeracy and scientific knowledge were more properly the task of the grammar schools and the academies (high schools). American higher education historically builds on this “technical” accomplishment and engages students in a democratic way of life through both advanced technical and speculative (creative) learning.
Third, students in the United States at all levels of formal education already are the most “tested” by standardized measurement in the world. Yet, we still seem to be in a position of deficit in improving what students actually know and need to know to function productively in society. Do we truly believe that more testing will lead to improved teaching and learning? Are we so convinced that “to test is to learn” despite so much evidence to the contrary?
Fourth, are we oblivious to the fact that, like the flourishing of spirituality only in societies that are generously supportive, the acquisition of knowledge only advances in political entities for which this activity is esteemed and generally valued? A society and government in which only practical, technical knowledge is lauded and that which is more abstract is derided -- such as the long-term, arduous education for the appreciation of democracy, liberty and freedom -- have little chance of moving a people to take the enterprise seriously.
I have no doubt that Secretary Spellings, the Commission members and the chairman, Charles Miller, intend an American higher education that offers the nation and the world graduates who can confront, with knowledge, skill, creativity and an entrepreneurial spirit, the challenges and the opportunities that the world demands. My caution -- and it is a pointed one -- is that in our rush to secure excellence through the simplistic and misguided notion of increased quantitative assessment of workforce skills, we will destroy the historic distinctiveness of American higher education.
Derek Bok, in Our Underachieving Colleges, cites numerous commentators over the last few decades alarmed at the perversion of American higher education as it progressively leans to practical and technical knowledge at the expense of more generous, less immediately focused ambitions. For example, Diane Ravitch, an education analyst who has frequently criticized the college establishment, states, “American higher education has remade itself into a vast job-training program in which the liberal arts are no longer central.” And Eric Gould in 2003 observes negatively that, “What we now mean by knowledge is information effective in action, information focused on results. We tend to promote the need for a productive [emphasis added] citizenry rather than a critical, socially responsive, reflective individualism.”
We must never forget that a distinctively American higher education, using a wide variety of internal and external assessments already in place, aims to increase competencies and literacies established prior to college (although far greater public transparency is certainly needed). This ambition the United States shares with the rest of the world. American education, however, infuses this globally shared agenda with something extra, something that has secured its distinction for centuries -- to extend beyond factual and technical knowledge and to introduce its students to what Derek Bok describes as, “more ethically discerning … more knowledgeable and active in civic affairs” -- and that cannot be captured through standardized testing at the moment of introduction, for it unfolds over time and with experience.
Lose this ambition and American higher education has lost permanently its distinction as a democratic society of knowledge.
William G. Durden
William G. Durden is president of Dickinson College.
Of all the ideas to come out of Margaret Spellings's Commission on the Future of Higher Education, the one that has proved most contentious inside the DC Beltway is the final report's proposal for a unit-records database. Plenty of other controversial ideas were floated in the commission's hearings, briefing papers, and report drafts, but the one bureaucratic detail that most vexed private colleges and student associations over the past year is the idea that the federal government would keep track of every student enrolled in every college and university in the country. Given reports this year about the Pentagon hiring a marketing firm to collect data on teens and college students, the possibility that Big Brother would know every student's grades and financial aid package has worried privacy advocates.
Fortunately, privacy and accountability do not need to be at odds.
The proposal for a unit-records database was floated in a 2005 report that the U.S. Department of Education commissioned. Advocates have argued that the current system of reporting graduation data through the Integrated Postsecondary Education Data System (IPEDS) only captures the experiences of first-time, full-time students who stay in a single college or university for their undergraduate education. How do we capture the experiences of those who transfer, or those who accumulate credits from more than one institution? Theoretically, we could trace such educational paths by tracking individuals, using each student's Social Security number or another personal identifier to link records.
Charles Miller, who led the Spellings commission, was one of the unit-records database advocates and pushed it through the commission's deliberations. Community-college organizations liked the idea, because it would allow them to gain credit for the degrees earned by their alumni. But the National Association of Independent Colleges and Universities, the U.S. Student Association, and other organizations opposed the unit-records database, and in its current form the proposal is certainly dead on arrival as far as Congress is concerned.
There are three problems with a unit-records database. The first problem is privacy. I just don't believe that the federal government would keep my children's college student records secure. An October report by the House Committee on Government Reform documents data losses by 19 agencies, including financial aid records that the U.S. Department of Education is responsible for. Who trusts that the federal Department of Education could keep records safe?
The second problem is accuracy. I have worked with the individual-level records of Florida, which has had a student-level database in elementary and secondary education since the early 1990s. If any state could have worked the kinks out, Florida should have. But the database is not perfectly accurate. I have seen records of first graders who are in their 30s (or 40s) and records of other students whose birthdays (as recorded in the database) are in 2008 and 2010. The problem is not that the shepherds of the database system are incompetent but that the management task is overwhelming, and there are insufficient resources to maintain the database. Poorly paid data-entry clerks spend their time entering students into the rolls and recording grades, withdrawals, and dozens of other small bits of information. We could have a nearly perfect unit-records database system if we were willing to spend billions of dollars on maintenance, editing, and auditing. In all likelihood, a unit-records database system for all of U.S. higher education would push most of those costs onto colleges and universities, with insufficient resources to ensure complete accuracy.
The third problem with such a database is that its structure and size would be unwieldy. Florida and some other states have extensive experience with unit records, and very few researchers use the data that exist in such states. The structures of the data sets are complicated, and beyond the fact that using the data taxes the resources of even the fastest computers, the expertise needed to understand and work with the structures is specialized. Such experts work in Florida's universities and produce reports because they are the experts; few others are. There would be no huge bonanza of research from a national unit-records database.
A Solution: Anonymous Diploma Registration
Most of the problems with the unit-records database proposal can be solved if we follow the advice of statistician Steven Banks (from The Bristol Observatory) and change the fundamental orientation away from the question, Who graduated? and toward the question, How many graduated? The first question requires an invasion of privacy, expensive efforts to build and maintain a database, and a complex structure for data that few will use. But the second question -- how many graduated? -- is the one to answer for accountability purposes. It's the question that community colleges want answered for their alumni. And it does not require keeping track of enrollment, course-taking, or financial aid every semester for every student in the country.
All that we need is the post-graduation reporting of diploma recipients by institutions, with birthdates, sex, and some other information but without personal identifiers that would allow easy record linkage. Such a diploma registration system would fit with the process colleges and universities already go through in processing graduations. An anonymous diploma registration system could also identify prior institutions -- the high schools from which students graduated and other colleges where they earned credits that transferred and counted toward graduation. Such an additional part of the system could be phased in, so that colleges and universities record the information when they evaluate transcripts of transfer students and other admissions. The recording of prior institutions would address the need of community colleges to find out where their alumni went and how many graduated with baccalaureate degrees.
Under such a system, any college or university could calculate how many students graduated and the average time to degree (as my institution in Florida already can). Any college or university could also count how many students who transferred to other institutions eventually graduated. High schools would be able to identify how many of their own graduates finished college at either in-state or out-of-state institutions. Institutions could figure out what types of programs helped students graduate, and the public would have information that is more accurate and fairer than the current IPEDS graduation statistics. All of these benefits would happen without having to identify a single student in a new database.
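The counting that an anonymous registry enables can be sketched in a few lines. The record layout below is a hypothetical illustration -- the field names and example data are mine, not part of the proposal -- but it shows how "how many graduated?" questions are answered without any personal identifier:

```python
from dataclasses import dataclass

# Hypothetical layout for one anonymous diploma record: enough to answer
# "how many graduated?" without identifying any individual student.
@dataclass(frozen=True)
class DiplomaRecord:
    granting_institution: str
    degree: str                       # e.g. "BA", "AS"
    birth_year: int
    sex: str
    years_to_degree: float
    prior_institutions: tuple = ()    # high school, transfer colleges

def graduates_of(records, institution):
    """Count degrees granted by one institution."""
    return sum(1 for r in records if r.granting_institution == institution)

def alumni_who_finished(records, prior):
    """Count graduates anywhere who earlier earned credit at `prior` --
    the question community colleges want answered for their alumni."""
    return sum(1 for r in records if prior in r.prior_institutions)

# Illustrative registry of three anonymous records.
registry = [
    DiplomaRecord("State U", "BA", 1984, "F", 4.0, ("Central HS", "City CC")),
    DiplomaRecord("State U", "BA", 1980, "M", 5.5, ("North HS",)),
    DiplomaRecord("City CC", "AS", 1979, "F", 3.0, ("Central HS",)),
]

print(graduates_of(registry, "State U"))         # 2
print(alumni_who_finished(registry, "City CC"))  # 1
```

Nothing in these queries requires a name, Social Security number, or semester-by-semester enrollment history; the aggregates are computed directly from anonymous records.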
A short column is not the place to describe the complete structure for such a system or to address the inevitable questions. I am presenting the idea in more depth this afternoon at the Minnesota Population Center, and I have established an online tutorial describing the idea of anonymous diploma registration in more detail. But I am convinced that the unit-records database idea is wasteful, dangerous, and unnecessary. Anonymous diploma registration is sufficient to address the most critical questions of how many graduate from institutions, and it does not threaten privacy.
I often mention in my community college classroom that 150 years ago probably none of us would have been in college. "After all," I say, "there's no point in educating women. It's a waste. They're only going to get married and have babies. Besides, their brains can't take all that academic work." I, especially, wouldn't be in college, I add, being Jewish: "We don't want those people in colleges!" I go on to comment that the Irish are, of course, good only for domestic work and hard labor, all being drunkards and sleeping with the pigs as they do. Asians aren't even human, if you educate blacks they get uppity, and so on. And yet, here we are, all of us capable of receiving and appreciating a college education. Times have changed.
In some ways it was easier when only educated white men went to college. The instructors could assume a certain level of competence in Latin and the classics, a common creed, and agreement on appropriate dress and manners. I long ago gave up expecting students to know anything about Noah, Moses, and Jesus, let alone Socrates, Homer, and Galileo, but I was stunned when no one in a class this semester had any idea who "leaps tall buildings with a single bound," or that the author's allusion to the Man of Steel was sarcastic.
In exchange, however, we have people who contribute experiences that may never have been discussed in those all-white, all-male, four-year college classrooms: veterans of Viet Nam, the Gulf War, and Iraq bring vivid perspectives to texts that treat of war; speakers of languages like Tamil add to my growing certainty that English is the only language in the world that uses an apostrophe to mark the genitive case. The experiences of Tillie Olsen's narrator in "I Stand Here Ironing" come alive when women talk about their own pregnancies, childbirth, and raising children alone.
We still, however, look at an undergraduate college education as something that takes place mainly in a residential school and must be completed in four years. The definition of "success" in any college reflects this no-longer-accurate idea that for everyone ages 18 to 22 (23 at the most), college is the dominant life experience, and the outcome must be a diploma.
At both community colleges where I have taught, a "successful" student completes an associate degree, and does so within three years of entering a program designed to take two years. By this definition, then, the young woman who just transferred to the four-year school with a 3.97 average and a substantial scholarship, 12 credits shy of a community college degree, is officially a failure. She didn't complete a degree program. This example is extreme; however, legislatures and public policy makers have long cited low graduation rates and students who take too long to complete their work as evidence of failure in community colleges. Inside Higher Ed recently reported that the Public Policy Institute of California has issued a report sharply criticizing the retention and transfer rates of California community colleges, concluding that "if community college continues to be the dominant form of higher education for these students, achievement rates for these students must improve."
Why? Or, rather, why is "achievement" built on the old model of the all-white, all-male, four-year residential college? While the report acknowledges the multiple constituencies of community colleges, it still sees the completion of a degree within a certain time limit as the primary goal.
As a non-traditional student, I began my college education at 23 when I took one course in Irish history with John Kelleher (may he rest in peace) at Harvard University Extension. I took it because I had a passion for Irish history, and I took it for credit because I figured, "why not?" Someday in a million years someone might give me a college degree for this. Twelve years later, eight months pregnant, and having taken a quarter of my college credits in Harvard College (daytime) classes, I received my bachelor's degree with honors. In the meantime, my leisurely pace had enabled me to explore classes thoroughly, two at a time, and evolve into a Jewish studies major who eventually published in national journals. That's something I may not have achieved in a conventional four-year residential program when I was 18.
Comments on the Inside Higher Ed article on the California report point out that community college students have often been out of school for several years. Many, even the 18-year-olds, come with multiple responsibilities; they may be socially, educationally, or economically disadvantaged. They may arrive with physical, emotional, or learning disabilities. I have taught many students who don't succeed at community college because they are taking four or five courses, working 40 hours a week, and raising children or otherwise contributing to their households. Some have little idea why they are in school or what they want from a college education. Attending college part-time would make sense for these people, too many of whom fail and fail and fail class after class.
Why don't they go part-time? For the growing number of students ages 18 to 24, the top reason is health insurance: If they don't take four courses, they are not covered under their parents' health insurance. If health insurance companies took a long hard look at this destructive policy, we might not have so many students failing courses in which they are enrolled solely to maintain that full-time status. But the health insurance model is based on the traditional college model: four years, your parents support you, and you're out.
Another reason students don't go part-time is the structure of financial aid. Again, if the federal and state governments took a long hard look at what they are funding, they might decide that funding a part-time A-average student who makes steady progress is as useful as funding the frantic full-time C-average student who regularly fails one course a semester.
That student may be frantic because she is under pressure to finish the degree quickly, and, while the pressure comes from several directions, much of it comes from the academic expectations based on the traditional model and from organizations like the one that issued the report. Admissions departments, enrollment divisions, counseling centers, instructors, and ultimately students are under subtle but constant pressure to "succeed" by having students complete that degree and complete it "on time" -- in the time determined by the model of the all-white, all-male, traditional four-year residential college. Why?
Rodney was a 35-year-old Gulf War veteran, father of three, who passed my intensive remedial reading and writing course having read not one but six books entirely through for the first time in his life. At the end of the year, he decided not to pursue his associate degree, but to transfer to a commercial computer training program. The last time I heard from him, he had graduated from the computer course and been accepted to a much higher paying job, pending a security clearance. Yet he didn't succeed in finishing the college program or transferring to another recognized degree program. He didn't even succeed in achieving his initial goal because, based on his experience, he changed that goal. Is this what failure looks like?
Measuring success in community colleges is not as easy or fast as tallying graduation rates. Colleges may need to make an effort to find out why students like Rodney do not finish or transfer. Because I persisted in investigating, I know that Marc disappeared mid-semester because his mother died and he had to return to his home in the next state to care for his son. Ryan withdrew from all his classes a week before classes began because his National Guard unit was mobilized, and Karen called me from her husband's new posting to ask for help in transferring credits. Some of these students may return to our college, but the college can hardly be held accountable for the fact that they left. The first step in accountability is to find out why students have not returned.
As for when colleges should be held accountable, perhaps we need to look not at graduation and transfer rates but at what students themselves have gained from their experiences. When Fred, who insisted that he hates poetry, analyzed Robert Bly’s “Gratitude to Old Teachers” with such grace and insight that I carried his final exam around with me for three days because reading it made me happy, the college succeeded. Fred succeeded.
When "the boys came home" in the 1940s, the GI Bill helped to change the college model to include men of modest means, many of whom had served in the military after high school and who helped establish the model of returning adult students who juggle family and school obligations. In the 1960s and 1970s, we changed the college model to include women, blacks, Asians, and others traditionally not entitled to an education. In the 1980s and 1990s we changed the college model again to include people who need ramps and elevators and special testing accommodations to gain access to and complete their college educations. Now it's time to change the model again. A community college is open to all people who want to learn. Success means students achieve what they came for: one class, one semester, or a degree that takes ten years. Perhaps after we've redefined success for community colleges, we can share our new model with students now "failing" in four-year colleges.
Jane Arnold is the reading specialist and an assistant professor of English at Adirondack Community College. She has taught at the community college level for 15 years and has also taught in private four-year colleges.
We have just finished the year of the higher education critique. Beginning with the influential National Academies jeremiad, "Rising Above the Gathering Storm," and leading up to December’s National Center on Education and the Economy’s call to arms, "Tough Choices or Tough Times," no fewer than six major commissions last year published recommendations for reinventing American education.
Within this blizzard of reports, higher education -- and especially public higher education -- has faced considerable criticism. Add in politicians decrying rising tuition, and editorial writers charging public universities with elitism, and education officials and policymakers can be forgiven for feeling overwhelmed by (often conflicting) advice.
More recent reports by the Education Trust and the National Conference of State Legislatures were particularly critical of public universities: the former charged that state flagships have abandoned their historic mission of increasing social mobility for all, and the latter scolded state legislators for weak leadership and institutions for lax accountability.
Last month a New York Times editorial went even further, accusing public universities of “choking off” college access to poor and minority students. The Times claimed that the “compact,” in which public universities offer broad access in exchange for taxpayer subsidies, has been “pretty much discarded.”
The Need for a Wider Angle View
The increased public attention to education is understandable given the growing nationwide anxiety over issues such as the achievement gap between white and minority students; poor science and math performance among primary and secondary students; the outsourcing of jobs to China, India, and other developing nations; and rapid increases in the cost of college.
But when assigning blame for these problems, the recent critiques may well focus too narrowly, using a telephoto lens when what is needed is a wide angle one.
Look at a few examples. As the National Conference of State Legislatures and Spellings Commission reports acknowledge, funding and financial aid mechanisms for higher education are in need of serious reform. The Manhattan Institute’s reports on high dropout rates in urban high schools have lifted the veil on a critical issue. And the National Academies report sparked a dialogue on economic competitiveness that even found voice in President Bush’s 2006 State of the Union address.
To be sure, each of these topics is important. But these critiques isolate troublesome segments of the pre-K-16 continuum rather than assessing the whole. (One exception is the National Center on Education and the Economy's report -- the most radical of the bunch -- which, among other things, calls for an overhaul of the American K-12 system along the lines of the European model of college preparation.)
The most narrowly focused is the searing Education Trust report and subsequent Times editorial that focus, by and large, on the issue of who is admitted to public flagship universities, and then how that education is financed. No doubt these are topics worthy of debate. However, singling out the admissions and financial aid practices of 50 or so flagship universities is to treat a symptom of a broader problem, not a root cause.
When students arrive at the doorstep of the University at Buffalo, or any other major university, we see the results of nearly two decades of prior experience, from families, society, and, of course, schools -- circumstances over which we currently have only minimal control. The reports, though, do not sufficiently appreciate that readiness is a precondition for success in college, and is an element of the education pipeline that actually begins in childhood. But in many urban and poor districts -- where dropout rates often top 50 percent for minority students -- the education pipeline simply is broken.
Indeed, the issue of diversity and access to higher education is a complex one, and can only be partially understood -- or rectified -- by looking at the end of the education pipeline. We must therefore focus on the entire spectrum of potential students' experiences that lead to the characteristics they arrive with at admissions time.
For that reason, the notion that public universities deserve most of the blame for enrolling too few students from minority and low-income backgrounds is a gross oversimplification, as well as highly misleading. Virtually every month state systems announce new financial aid, college preparation, and advising programs to help low-income families send their children to college with little or no debt.
These hardly sound like the efforts of universities that have abandoned their mission to educate a diverse cross-section of qualified students. In truth, addressing the very real problems the reports describe is a long-range and collaborative process that will occur not just in college admissions and financial aid offices, but in the pre-kindergarten classroom, and many other stops in between.
What to Do
Higher education is not without problems, nor the reports without merit. Saying there are too few graduates from minority groups and low-income families only begins to touch on broader issues of race, class, and mobility in American society. Training and retaining enough graduates in the STEM fields is a growing, if poorly understood, national concern. And students and their families are under tremendous strain to make sense of a multitude of educational offerings, finance their choice, and, ultimately, graduate within a reasonable amount of time.
In my experience, however, many leaders in higher education are working every day to find solutions to these problems. Far from reflecting a lack of desire, the failure of our education system to make faster progress stems, in large measure, from the complexity of the challenge. This is why it is imperative that any collective solutions to these problems be undergirded by a recognition that the social forces behind them are broader and more historically rooted than the reports acknowledge.
Unlike private institutions, publics have an obligation to try to reflect society at large. Though many private colleges have made sincere efforts to become less economically elite, these institutions ultimately bear neither the public’s expectation nor the statutory responsibility to educate a broadly representative population.
Indeed, our public colleges and universities, which educate more than three-quarters of all college students nationwide, serve communities of all racial, ethnic, and economic backgrounds as a fundamental part of their missions. Many public research universities, like mine, are today working diligently and creatively to increase the number of qualified minority and low-income students we educate.
For example, the Buffalo-Area Engineering Awareness for Minorities (BEAM) program in the UB School of Engineering and Applied Sciences offers free pre-college classes to middle school and high school students who want to explore the wonders of science and technology. Since BEAM’s inception in 1982, 90 percent of its students have gone on to attend four-year colleges. Here, and elsewhere in major public universities, this hardly is a new endeavor.
But these efforts, by themselves, will lead to only incremental improvements. Success will come only when public higher education works in lock-step with primary and secondary education systems to ensure that students have the intellectual and emotional preparation for success, and the financial support to achieve it.
State and local officials therefore need to strengthen the education pipeline by supporting changes that will make the system a seamless whole rather than a series of disjointed parts. As a critical part of that strategy, public higher education must do more to help primary and secondary schools improve educational outcomes.
For example, my university is building on its years of engagement with the Buffalo Public Schools to create a strong partnership that will help students gain the education and skills needed to succeed in the 21st century economy. This partnership would include, for instance, early childhood experts sharing the latest insights on cognitive development, or addiction researchers working to break generational cycles of dependence.
Finally, states must re-commit themselves to providing the financial support necessary for public colleges and universities to thrive. At a time when America risks falling further behind other countries in educational achievement, the state share of public colleges’ budgets is in decline -- and has been so for more than three decades. If states reversed this trend it would send a clear signal to Washington that investment in higher education should be our first national economic priority.
Far from being isolated ivory towers, our public institutions of higher education are actually more relevant today than ever before. If fully embraced, and more engaged in strengthening the education pipeline, these institutions have both the potential and the intention to do far more than they already do to offer solutions to the serious issues raised in this year’s reports.
John B. Simpson
John B. Simpson is president of the University at Buffalo, State University of New York. He was a member of the recent higher education delegation to Asia led by the U.S. secretary of education.
The results of student learning assessments, including value added measurements that indicate how students’ skills have improved over time, should be made available to students and reported in the aggregate publicly.
The collection of data from public institutions allowing meaningful interstate comparison of student learning should be encouraged and implemented in all states.
I appreciate the commission’s focus on student learning and its assessment. But my experience and my reading and conduct of research on these topics lead me to argue against the use of standardized tests of general intellectual skills to compare the effectiveness of colleges and universities.
Secretary Spellings currently is undertaking a variety of initiatives designed to implement the commission’s recommendations. In addition, several national organizations, including the Educational Testing Service and a partnership involving the National Association of State Universities and Land Grant Colleges and the American Association of State Colleges and Universities, are working to identify or develop “student learning assessments, including value added measurements” that will facilitate “meaningful interstate comparison.”
I have devoted much of my career to helping faculty identify and develop ways to assess student learning and institutional effectiveness, then use assessment findings to improve students’ learning and educational experiences. I have conducted my own research on assessment, have studied that of many others, and have established a reputation as an advocate of appropriate (i.e., valid and reliable) assessment that can improve student learning. Thus I have more than a passing interest in these current developments.
For a decade beginning in the mid-1980s I coordinated the University of Tennessee at Knoxville’s response to Tennessee’s Performance Funding initiative, which required us to test thousands of freshmen and seniors and calculate gain, or "value added." Given the large numbers of students involved, we were able to try out several standardized tests of general intellectual skills (ACT’s COMP and CAAP; CBASE; and the Academic Profile, the ETS precursor to MAPP) as well as to test seniors who had taken the same exam as freshmen. In addition, my associate Gary Pike and I, along with other colleagues in various disciplines at Tennessee, undertook a program of research on the reliability and validity of the tests and on the reliability of value added calculations.
Our research confirmed findings and conclusions dating to the 1960s reached by such respected measurement scholars as Lee Cronbach, Frederic Lord, Robert Linn, and Robert Thorndike. Some generalizations based on these findings may be helpful to others as we confront once again the challenge to find valid measures of college students’ learning and score gain that permit institutional comparisons.
While standardized tests can be helpful in initiating faculty conversations about assessment, our research casts serious doubt on the validity of using standardized tests of general intellectual skills for assessing individual students, then aggregating their scores for the purpose of comparing institutions.
Standardized tests of general intellectual skills (writing, critical thinking, etc.):
test primarily entering ability (e.g., when the institution is the unit of analysis, the correlation between scores on these tests and entering ACT/SAT scores is quite high, ranging from .7 to .9); as a result, differences in test scores reflect individual differences among the students taking the test more than they reflect differences in the quality of education offered at different institutions.
are not content neutral and thus disadvantage students specializing in some disciplines.
contain questions and problems that do not match the learning experiences of all students at any given institution.
measure at best 30% of the knowledge and skills faculty want students to develop in the course of their general education experiences.
cannot be given to samples of volunteers if scores are to be generalized to all students and used in making important decisions such as the ranking of institutions on the basis of presumed quality.
cannot be required of some students at an institution and not of others—yet making the test a requirement is the only way to ensure participation by a sample over time.
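The correlations cited above can be translated into shared variance with a one-line calculation. This is a minimal illustrative sketch: squaring a correlation gives the proportion of variance in one measure shared with the other, and the r values used are the ones quoted in the text, not a reanalysis of any actual data set.

```python
# Squaring a correlation gives the proportion of variance in one measure
# that is shared with the other. With institution-level correlations of
# .7 to .9 between test scores and entering ACT/SAT scores, entering
# ability alone accounts for roughly half to four-fifths of the
# differences in institutional test results.
for r in (0.7, 0.8, 0.9):
    print(f"r = {r}: {r**2:.0%} of score variance shared with entering ability")
```

At the top of the quoted range, little room remains for institutional quality to explain the rest, which is the point of the first bullet above.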
If standardized tests of general intellectual skills are required of all students,
and if an institution’s ranking is at stake, faculty may narrow the curriculum to focus on test content.
student motivation to perform conscientiously becomes a significant concern.
extrinsic incentives (pizza, stipends) do not ensure conscientious performance over time.
ultimately, a requirement to achieve a minimum score on the test, with consequences, is needed to ensure conscientious performance. And if a senior achieves less than the minimum score, does that student fail to graduate despite meeting other requirements?
For nearly 50 years measurement scholars have warned against pursuing the blind alley of value added assessment. Our research has demonstrated yet again that the reliability of gain scores and residual scores -- the two chief methods of calculating value added -- is negligible (i.e., 0.1).
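The arithmetic behind the negligible reliability of gain scores can be shown with the standard classical-test-theory formula for the reliability of a difference score (assuming equal pre- and post-test variances). The numeric inputs below are illustrative assumptions, not the Tennessee results.

```python
# Reliability of a simple gain score (post - pre) under classical test
# theory, equal-variance case:
#   r_gain = (r_xx + r_yy - 2*r_xy) / (2 - 2*r_xy)
# where r_xx, r_yy are the reliabilities of the two tests and r_xy is
# the correlation between pre and post scores.

def gain_score_reliability(r_xx, r_yy, r_xy):
    """Return the reliability of the difference score post - pre."""
    return (r_xx + r_yy - 2 * r_xy) / (2 - 2 * r_xy)

# Even highly reliable tests (0.90) produce gain scores of very low
# reliability when pre and post scores are strongly correlated, as they
# are when tests measure primarily entering ability.
print(round(gain_score_reliability(0.90, 0.90, 0.85), 2))  # 0.33
print(round(gain_score_reliability(0.90, 0.90, 0.89), 2))  # 0.09
```

The mechanism is visible in the formula: the numerator subtracts out everything pre and post scores have in common, so the gain score is computed largely from measurement error.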
We conclude that standardized tests of generic intellectual skills do not provide valid evidence of institutional differences in the quality of education provided to students. Moreover, we see no virtue in attempting to compare institutions, since by design they are pursuing diverse missions and thus attracting students with different interests, abilities, levels of motivation, and career aspirations.
If it is imperative that those of us concerned about assessment in higher education identify standardized methods of assessing student learning that permit institutional comparisons, I propose two alternatives:
1. electronic portfolios that can illustrate growth over time in generic as well as discipline-based skills and are not distorted by a student having a bad day and performing poorly on a 3-hour snapshot of what s/he has learned in college. Portfolios can be scored reliably using rubrics developed by groups of faculty. Then scores can be aggregated to provide the numbers decision-makers want to compare.
2. measures based in academic disciplines that show how students can use discipline-based knowledge, as well as generic skills, in their chosen fields and as informed citizens with specialized expertise.
In short, a substantial and credible body of measurement research tells us that standardized tests of general intellectual skills cannot furnish meaningful information on the value added by a college education, nor can they provide a sound basis for inter-institutional comparisons. In fact, the use of test scores to make comparisons can lead to a number of negative consequences, not the least of which is homogenization of educational experiences and institutions. The wide variety of educational opportunities has heretofore been one of the great strengths of higher education in the United States.
Trudy W. Banta
Trudy W. Banta is a professor of higher education and vice chancellor for planning and institutional improvement at Indiana University-Purdue University at Indianapolis.
Now that Education Secretary Margaret Spellings is using the report of her Commission on the Future of Higher Education to stake out accreditation as the de rigueur battlefront/seed ground/hammer/hoe, institutions, accrediting agencies, and higher education associations alike are scrambling to raise their hands high to the Department of Education in a show-and-tell fest unprecedented since another commission’s report card, "A Nation at Risk: The Imperative for Educational Reform," was sent home nearly a quarter of a century ago.
While faculty, deans, and provosts are earnestly trying to address the accountability issue and to apply a wide range of instructional and enrollment patterns made possible through new uses of technology -- such as wholly online courses and degree programs; hybrid courses and programs that blend face-/seat-time and online work alongside traditional campus-based learning; collaborative learning tools; and immersive simulation learning environments (see the EDUCAUSE Learning Initiative 7 Things You Should Know About… series) -- they face the challenges of decreasing resources, increasing enrollments, more demands for non-traditional courses, and a growing entry-level population that arrives in class without the basic skills needed to succeed.
To be successful, major academic redesign efforts often require the involvement of individuals with skills and knowledge not available at the department level where most of the discipline-specific work is done. While experts in technology, in assessment, in teaching methodology, and in course and program design are sometimes made available to faculty and academic offices, the registrar is, unfortunately, rarely involved in these discussions from the earliest stages.
Such an omission can be costly, because the registrar can often be a critical component in academic transformation. No matter which of the many possible outcomes of the accountability movement we are talking about -- whether a national unit record system; new metrics for gauging academic progress and graduation rates; adaptable information systems for new forms of instructional design; discipline-specific measures of learning outcomes; mission-, demographic-, and Carnegie class-specific success standards; or a more direct match between learning outcomes, assessment, and grading criteria -- in each instance new support systems and policy changes will be required, and in each instance the registrar is a key agent for making those changes.
As translator, arbiter, influencer, recorder, encoder, manipulator, and implementer of academic policy and grading protocols, and as keeper of official transcript records, privacy policies, enterprise information system architecture, real and virtual classroom usage rules, and academic calendar parameters, the registrar is involved in a wide array of campus activities below the radar of most faculty and many administrators. The registrar can nonetheless play a vital role in academic innovation by providing invaluable counsel on policy and on the degree to which information systems can be customized, and, ultimately, can grease the tracks of change.
The role of the registrar in academic innovation
The registrar has, in fact, a major role to play in four of the most basic academic initiatives found on many campuses:
Redesigning and improving the quality of courses and curricula.
Enhancing the processes of course management and delivery to create more options and increased flexibility.
Translating academic policies into efficient and easily used procedures and refining campus-wide inter-departmental records management procedures accordingly.
Maintaining official academic records and related processes in accord with state and federal privacy legislation while providing faculty and students with the information they require for quality advising and decision-making.
At far too many institutions, academic support, management, and information systems have simply been unable to keep up with the demands and requirements of faculty and academic units as they explore new applications of technology and new patterns of teaching and learning to improve the retention of students, to increase the involvement of students in the community, and to improve the quality and effectiveness of their academic programs.
The problem is a basic one. Many of the academic procedures and structures we now use were developed in a time when colleges and universities were far different than they are today. The challenges were fewer, the instructional capabilities of today’s technology not even dreamed of, the students far more homogeneous and motivated, and interaction between the disciplines the exception rather than the rule, with most instruction taking place on campus in the classroom, the library, or the laboratory. It was a far less complex world for students, faculty, administrators, and staff.
Typical efforts to redesign courses and curricula involve faculty working alone or on a team with other faculty in the discipline. Experience has shown, however, that the most effective projects include, in addition to the stakeholder faculty members, others who bring to the table expertise in areas not found in most departments. Without this broader participation, key questions will often go unasked and unanswered, and important options will remain unexplored.
Serving on the core team should be the key faculty members and an instructional designer or faculty member from another discipline who understands the process of change and brings to the table knowledge of the research on teaching and learning and the ability and willingness to ask hard questions and to test assumptions. Available to the team should be experts on assessment and on technology, and, though often overlooked, the registrar, to anticipate and assist in making the adjustments that will be required in academic regulations and system support.
The common issues
When comprehensive course or curriculum redesign efforts get underway at either the graduate or undergraduate level a number of fundamental questions need to be addressed. Among them:
What assumptions are faculty making about the students entering their courses and degree programs, and how accurate are those assumptions?
What knowledge and skills do students actually bring to particular classes or programs? (If students enter an introductory course with a wide range of knowledge and competencies, why should they all start at the same place? If students have advanced skills or knowledge, can they be exempted from certain units within a course or curriculum?)
Must all students move through a course or program at the same pace? If some students require more time to complete a unit, how can we handle grades at the end of the semester when the work is not yet complete? When students move at different rates, have different requirements based on prior knowledge and experience, and carry work over from semester to semester, how can we handle credits, grades, student charges, and faculty loads, not to mention various student-aid issues?
The Syracuse experience offers three key lessons that can guide other campuses.
First, without the registrar as a key player from the start, no easy synergy can be developed between instructional innovation, academic policy, records procedures, and system adaptation. If those directing the project, whether the focus is on-campus, off-campus, or a combination of both settings, are building on the latest research on teaching and learning and are “thinking outside of the box,” new administrative systems will be required, and these changes will be impossible to implement without the active participation of the registrar’s office.
Second, new technology innovations such as e-portfolios and course/learning management systems are often implemented on accelerated timetables, jeopardizing compliance with external privacy regulations -- a risk the registrar could have anticipated.
Third, unless an individual or a design organization (i.e., the registrar or a teaching and learning support unit) becomes a visible proponent of opportunities to adapt technology and policy, new visions will chafe against tradition and sputter at best. The registrar often brings to the project a knowledge of the institutional change culture and of the political and technical history of the institution, and a memory of what has worked and why.
Without the active involvement of the registrar, schools, colleges, and academic departments attempting to significantly improve the quality of their academic programs can anticipate inefficient or stalled progress.
Robert M. Diamond and Peter B. DeBlois
Robert M. Diamond is president of the National Academy for Academic Leadership and professor emeritus at Syracuse University, where he played a major role in the development of the flexible credit and continuous registration system. Peter B. DeBlois, currently director of communications and publishing at EDUCAUSE, served as university registrar at Syracuse University from 1985 to 2001. Before that, he served as director of registration and records and assistant director of freshman English. He helped design and implement Syracuse’s flexible credit and continuous registration system.
Richard Sloan, writing in Blind Faith (St. Martin's Press, 2007), examines critically the hypothesis that "frequency of prayer can be associated with health outcomes." But in so doing, he goes beyond outcomes. Skirting accountability (if I pray, should I expect a health outcome?), he lands squarely on comparability:
"If we are truly interested in collecting the information relevant to health outcomes, then we should want to know whether it is better for our health to attend a Catholic Mass or a Quaker meeting."
The flaws in this presentation are immediately obvious: frequency of prayer, although it can be measured, does not begin to reflect the complex and comprehensive nature of religion.
Moreover, comparability of religions makes no sense. Religions are different by choice, not at all influenced by a simplistic measurement that is both limiting and largely irrelevant.
People, their lives, their interactions and their institutions occasionally reveal an aspect that can be measured. But we do so at our peril: Can we measure the quality of a marriage by the number of shared smiles, or a legislator's effectiveness by the number of votes she casts in a year? Even more treacherous would be an attempt to use these measurements to establish accountability, to compare, to improve.
And so to higher education, just now beginning to emerge from two decades of assessment. Assessment was a largely undefined activity that resulted in the collection of billions of elements of data, with no results and to no effect. The process was characterized by external pressure, by blissfully unaware faculty, and by administrators forced to comply with Dilbert-like mandates.
We heard about the potential of the Collegiate Learning Assessment, among others, as a means of establishing student success over a wide range of learning outcomes. We also heard about the limitations, as one speaker reported that his institution had to pay students to take the CLA exam.
We learned of experiments being carried out with scientific rigor, so that results, when available, will withstand searching scrutiny and perhaps give us real answers. It was encouraging to learn that we are not alone in being troubled by the fact that measurements are being carried out with little thought to reliability, validity and relevance to actual student learning.
Best of all, there was a general tone of "we are not there yet," a message that should calm the ardor of those who insist on measuring postsecondary student learning outcomes, whether or not these measurements are reasonable.
Leaving measurement, we, like Professor Sloan, can skirt accountability. Accountability based on a limited slice of the postsecondary experience makes no sense and will ultimately collapse of its own weight. Appropriate accountability, on the other hand, is regularly demonstrated by colleges and universities in a multitude of ways, as was first established in 1994 by the National Association of Independent Colleges and Universities' Task Force on Appropriate Accountability.
This brings us to comparability. No less than religions -- or marriages -- or legislators, colleges and universities have unique personalities, cultures, missions, histories, areas of expertise and communities of service. In fact, much of the success of American higher education lies in its diversity, its competition and its differences.
For the most part, comparability is inapt. Regrettably, the measurement of student learning outcomes is sometimes justified by claiming that this will enable postsecondary institutions to be compared, and thereafter to be "improved."
American colleges and universities already operate within a culture of continuing improvement, self-generated and responsive to changing needs and changing opportunities. But improvement dictated by specious comparisons that are in turn based on "measurement" could seriously compromise the diversity of institutions that is so healthy a characteristic of American higher education.
We are in the midst of an unprecedented examination of higher education. Some have used this to advocate measurement of a kind that limits both teaching and learning, to establish accountability on the basis of irrelevant yardsticks, and to speak of comparability as an inevitable consequence.
In this atmosphere, a message from the education secretary reaffirming the strength of higher education and identifying diversity as an aspect to be preserved would be highly reassuring.
Bernard Fryshman is executive vice president of the Association of Advanced Rabbinical and Talmudic Schools’ Accreditation Commission.