The University of Central Florida has suspended Hyung-il Jung, an instructor, over a comment about "a killing spree," but students say that he posed no danger and was misunderstood, The Orlando Sentinel reported. Jung said he told a group of students something along these lines: "This question is very difficult. It looks like you guys are being slowly suffocated by these questions. Am I on a killing spree or what?" Some students have sent a joint letter to the university saying that the comment was clearly a joke and that there is no need for the investigation Central Florida says it is conducting.
Montana State University will decertify its faculty union after the affiliate of the American Federation of Teachers and the National Education Association conceded defeat in a referendum brought by faculty members who wanted to end collective bargaining. The union had challenged four ballots after preliminary results showed that the faculty members favoring decertification held a five-vote lead. The Montana Board of Personnel Appeals has yet to release an official notice of decertification.
The carnage and manhunt in Boston last week obliged the Digital Public Library of America to postpone its grand opening festivities at the Boston Public Library until sometime this fall. So sudden a change of plans could only create a logistical nightmare. The roster of museums, archives, and libraries participating in DPLA runs into the hundreds, and the two-day event (Thursday and Friday) was booked to capacity, with scores of people on the standby list. But the finish line for the marathon was just outside the library, and rescheduling was unavoidable.
The delay applied only to the gala, not to DPLA itself: the site launched on Thursday at noon, E.S.T., right on schedule. The response online has been, for the most part, enthusiasm just short of euphoria. The collection contains not quite 2.4 million digital “objects,” including books, manuscripts, photographs, recorded sound, and film/video. More impressive than the quantity of material, though, is how much thought has gone into how it’s made available.
That’s true even of the site’s address: DP.LA. I’ve seen at least one grumble about how anomalous this looks. Which it does, but in a good way. Even if you forget the address, it takes no effort to reconstruct. The brevity of the URL makes it convenient to type on a cellphone; when you do, the site’s homepage is readily navigable on the small screen. That demonstrates an awareness of how a good many visitors will actually use the site – more so than is often the case with library catalogs online.
DPLA is the work of people who understand that design is not just icing on the digital cake, but a significant (even decisive) factor in how we engage with content in the first place. They have made available an application programming interface (API) for the site, which is a very useful thing indeed, according to my source in the geek community. With the API, users can create new tools for sorting and presenting the library’s materials. Combine it with a geolocation API, for example, and you could put together an application displaying the available photographs of the street you are on, organized decade by decade.
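To make the idea concrete, here is a minimal Python sketch of the first half of such a mashup: building a query URL against DPLA's items API that asks for photographs of a place within a single decade. The endpoint path is DPLA's documented one, but the specific field names and the `build_photo_query` helper are illustrative assumptions, not a verified recipe — check the current API documentation before relying on them.

```python
from urllib.parse import urlencode

def build_photo_query(place, decade_start, api_key="YOUR_KEY"):
    """Construct a DPLA items-API query URL for images of a place,
    restricted to one decade.  Field names here are illustrative."""
    params = {
        "q": place,                                     # free-text search term
        "sourceResource.type": "image",                 # photographs and the like
        "sourceResource.date.after": str(decade_start),
        "sourceResource.date.before": str(decade_start + 9),
        "api_key": api_key,
    }
    return "https://api.dp.la/v2/items?" + urlencode(params)

# A geolocation API would supply the place name; here it is hard-coded.
url = build_photo_query("Beacon Street, Boston", 1920)
```

An app would fetch this URL once per decade and group the returned records by date to produce the decade-by-decade view described above.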
The library’s potential for assembling and integrating an incredible range of documents and knowledge is almost unimaginable. Excitement seems appropriate. But in describing my own impressions of DPLA, I want to be a little more qualified about the enthusiasm it inspires. Things are not nearly as far along as some comments have implied. This isn’t just naysaying. The site is currently in its beta version, and many of my points will probably be nullified in due course. But it’s better to be aware of some of the limitations beforehand than to visit the site expecting a digital Library of Alexandria.
One thing to keep in mind is that DPLA is not so much a library as an enormous card catalog, with the “shelves” of books, photographs, and so forth being the digital collections of libraries and historical societies, large and small, all over the country. The range of material offered through the Digital Public Library of America reflects what people running the local collections have decided to digitize and make available. What DPLA gathers and makes searchable is the metadata: descriptions of what a document contains (its subject, origins, copyright status, and so on) and of its characteristics as a digital object (size and file type).
The DPLA “card” gives the available information about an item, often accompanied by a thumbnail image of the book cover, manuscript, etc. – along with a link taking you to the digital repository in which it appears. DPLA puts the metadata into a standard format. But much of the content-description will inevitably be done by local librarians and archivists, making for a considerable range in detail. Often the DPLA entry will provide a bare minimum of description, though some entries run to a paragraph or two.
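The structure of such a "card" can be sketched as a simple record: descriptive metadata plus a link back to the holding institution's repository. The field names below loosely echo the categories described above, but they are a hypothetical simplification for illustration, not the actual DPLA schema.

```python
# A hypothetical, simplified DPLA-style metadata record.
record = {
    "title": "Letter from Walt Whitman",           # what the item is
    "subject": ["Poets, American"],                # content description
    "rights": "Public domain",                     # copyright status
    "format": "image/jpeg",                        # digital characteristics
    "provider": "National Archives",               # holding institution
    "isShownAt": "https://example.org/item/123",   # link to the local repository
}

def describe(rec):
    """Render a one-line 'catalog card' summary from a record."""
    return f"{rec['title']} ({rec['provider']}) -> {rec['isShownAt']}"
```

The point of standardizing records this way is that one search interface can rank and filter items from hundreds of institutions, while the `isShownAt`-style link hands the user off to the repository that actually holds the digital object.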
But the entry is only as strong as its link. It seemed appropriate to make one of my earliest searches at the Digital Public Library for the quintessential American poet Walt Whitman. There were 52 hits, with 9 of the top 10 being manuscripts of his letters in the Department of Justice collection at the National Archives. Not one of the links for the letters worked. By contrast, I had no trouble getting access to photographs of the poet held by the Smithsonian Institution.
This proved par for the course. Most links worked -- but of the two dozen entries for items in the National Archives, only one did. It’s hardly surprising (gremlins have a strong work ethic), but it shows the need for troubleshooting. Users of the library can be expected to point out such glitches, if encouraged to do so. It might be worth adding a widget to each record allowing users to flag an inoperative link, a typographical error, or a problem with the content description; the site’s contact page is a start, but a more visible prompt would likely yield more reports.
Continued thumbing through the catalog demonstrated that DPLA is still at an early stage in accumulating its collection – and how much fine-tuning its search engine may need.
Entering “Benjamin Franklin,” you get more than 1,400 results. Out of the first 30, all but 3 are documents (usually death certificates) for people named after the inventor and statesman. A toolbar on the left allows the user to refine the search in various ways – but the most useful filter, by subject, is at the very bottom and easy to overlook.
It was encouraging to get 17 results when searching for Phillis Wheatley, the first published African-American poet, but 15 of them led to records from the 1940 census, by which point she had been dead for the better part of 150 years. Only one of the other two was at all germane to her as a historical figure; the other concerned an Atlanta branch of the Young Women’s Christian Association named in her honor.
I expected to locate just a few things about the Southern Tenant Farmers Union of the 1930s, but in fact got no hits at all. At the other extreme, DPLA has records for more than 90 items pertaining to the Ku Klux Klan – photographs, handbills, and cartoons, both pro- and anti-. Quite likely these were among the most striking and attention-grabbing items in various collections, and were digitized for use in print publications and online. It's concrete evidence that the Digital Public Library of America's offerings will be only as representative as the decisions made by the contributing institutions.
A number of foundations and government agencies have lent their support to DPLA, and its progress toward incorporation as a 501(c)(3) organization should make it an even more appealing destination for the big philanthropic bucks. But important as funding certainly is for the library’s future, what will ultimately be decisive for its success is a massive infusion of intellectual capital. Some of it will come from code writers hacking out new applications using the library's metadata and API. More than that, though, DPLA will need to encourage the participation and the expertise of people using the site. It's an impressive foundation and scaffold, but it's up to scholars, librarians, and other knowledgeable citizens to build the library, from the ground up.
Supreme Court rejects U. of Oregon appeal on suit by former graduate student. Higher ed groups believe ruling left standing endangers academic freedom in doctoral education, but ex-student's lawyer disagrees.
My first encounter with assessment came in the form of a joke. The seminary where I did my Ph.D. was preparing for a visit from the Association of Theological Schools, and the dean remarked that he was looking forward to developing ways to quantify all the students' spiritual growth. By the time I sat down for my first meeting on assessment as a full-time faculty member in the humanities at a small liberal arts college, I had stopped laughing. Even if we were not setting out to grade someone’s closeness to God on a scale from 1 to 10, the detailed list of "learning outcomes" made it seem like we were expected to do something close. Could education in the liberal arts — and particularly in the humanities — really be reduced to a series of measurable outputs?
Since that initial reaction of shock, I have come to hold a different view of assessment. I am suspicious of the broader education reform movement of which it forms a part, but at a certain point I asked myself what my response would be if I had never heard of No Child Left Behind or Arne Duncan. Would I really object if someone suggested that my institution might want to clarify its goals, gather information about how it’s doing in meeting those goals, and change its practices if they are not working? I doubt that I would: in a certain sense it’s what every institution should be doing. Doing so systematically does bear significant costs in terms of time and energy — but then so does plugging away at something that’s not working. Investing some hours up front in data collection seems like a reasonable hedge against wasting time on efforts or approaches that don’t contribute to our mission. By the same token, getting into the habit of explaining why we’re doing what we’re doing can help us to avoid making decisions based on institutional inertia.
My deeper concerns come from the pressure to adopt numerical measurements. I share the skepticism of many of my colleagues that numbers can really capture what we do as educators in the humanities and at liberal arts colleges. I would note, however, that there is much less skepticism that numerical assessment can capture what our students are achieving — at least when that numerical assessment is translated into the alphabetical form of grades. In fact, some have argued that grades are already outcome assessment, rendering further measures redundant.
I believe the argument for viewing grades as a form of outcome assessment is flawed in two ways. First, I simply do not think it’s true that student grades factor significantly in professors’ self-assessment of how their courses are working. Professors who give systematically lower grades often believe that they are holding students to a higher standard, while professors who grade on a curve are simply ranking students relative to one another. Further, I imagine that no one would be comfortable with the assumption that the department that awarded the best grades was providing the best education — many of us would likely suspect just the opposite.
Second, it is widely acknowledged that faculty as a whole have wavered in their dedication to strict grading, due in large part to the increasingly disproportionate real-world consequences grades can have on their students’ lives. The "grade inflation" trend seems to have begun because professors were unwilling to condemn a student to die in Vietnam because his term paper was too short, and the financial consequences of grades in the era of ballooning student loan debt likely play a similar role today. Hence it makes sense to come up with a parallel internal system of measurement so that we can be more objective.
Another frequently raised concern about outcome assessment is that the pressure to use measures that can easily be compared across institutions could lead to homogenization. This suspicion is amplified by the fact that many (including myself) view the assessment movement as part of the broader neoliberal project of creating “markets” for public goods rather than directly providing them. A key example here is Obamacare: instead of directly providing health insurance to all citizens (as nearly all other developed nations do), the goal was to create a more competitive market in an area where market forces have not previously been effective in controlling costs.
There is much that is troubling about viewing higher education as a competitive market. I for one believe it should be regarded as a public good and funded directly by the state. The reality, however, is that higher education is already a competitive market. Even leaving aside the declining public support for state institutions, private colleges and universities have always played an important role in American higher education. Further, this competitive market is already based on a measure that can easily be compared across institutions: price.
Education is currently a perverse market where everyone is in a competition to charge more, because that is the only way to signal quality in the absence of any other reliable measure of quality. There are other, more detailed measures such as those collected by the widely derided U.S. News & World Report ranking system — but those standards have no direct connection to pedagogical effectiveness and are in any case extremely easy to game.
The attempt to create a competitive market based on pedagogical effectiveness may prove unsuccessful, but in principle, it seems preferable to the current tuition arms race. Further, while there are variations among accrediting bodies, most are encouraging their member institutions to create assessment programs that reflect their own unique goals and institutional ethos. In other words, for now the question is not whether we’re measuring up to some arbitrary standard, but whether institutions can make the case that they are delivering on what they promise.
Hence it seems possible to come up with an assessment system that would actually be helpful for figuring out how to be faithful to each school or department’s own goals. I have to admit that part of my sanguine attitude stems from the fact that Shimer’s pedagogy embodies what independent researchers have already demonstrated to be “best practices” in terms of discussion-centered, small classes — and so if we take the trouble to come up with a plausible way to measure what the program is doing for our students, I’m confident the results will be very strong. Despite that overall optimism, however, I’m also sure that there are some things that we’re doing that aren’t working as well as they could, but we have no way of really knowing that currently. We all have limited energy and time, and so anything that can help us make sure we’re devoting our energy to things that are actually beneficial seems all to the good.
Further, it seems to me that strong faculty involvement in assessment can help to protect us from the whims of administrators who, in their passion for running schools "like a business," make arbitrary decisions based on their own perception of what is most effective or useful. I have faith that the humanities programs that are normally targeted in such efforts can easily make the case for their pedagogical value, just as I am confident that small liberal arts schools like Shimer can make a persuasive argument for the value of their approach. For all our justified suspicions of the agenda behind the assessment movement, none of us in the humanities or at liberal arts colleges can afford to unilaterally disarm and insist that everyone recognize our self-evident worth. If we believe in what we’re doing, we should welcome the opportunity to present our case.
Adam Kotsko is assistant professor of humanities at Shimer College.