
NEW YORK CITY -- They are students, they are faculty members. They are hobbyists and autodidacts.

They still prefer to read texts in print, but they are intrigued by the possibilities of digital, especially when it comes to scanning huge swaths of text for key words and phrases. They travel in herds and pledge allegiance to tribes; their social instincts are stronger than their market instincts. Their actions speak louder than their survey responses.

The most studious among them can be reasonably expected to read around 1,700 books in their lifetimes (unless they are Winston Churchill, in which case they might crack 5,000 -- or at least claim to). Give them the right tools, and they might distill the content of thousands more through quantitative analysis and concept mapping. Give them the right incentives, and they can help you build those tools. Make it into a game, and they can discover galaxies.

They are “users” -- of libraries, books and websites. And they were the focus of this year’s Ithaka S+R Sustainable Scholarship conference here. The above, according to the invited speakers, are some of their attributes.

There is nothing new about libraries and publishers trying to learn more about their patrons. But this year's Ithaka conference took special aim at how the stewards of scholarly communications might dig into the essential motivations and tendencies of their digital readers, in hopes of not only serving them as consumers of existing scholarship but also enlisting them as instruments for creating new scholarship.

Complex Subjects

Users have lately become both an asset to scholarship and an object of it. Most recently, a consortium of Illinois universities, known as ERIAL, conducted a series of anthropological studies of undergraduates that revealed, in excruciating detail, the ignorance of many students with regard to academic research processes, and how some professors and librarians have unwittingly perpetuated that ignorance. On Tuesday, Susan Gibbons, the university librarian at Yale University, described the findings of an anthropological study she led at the University of Rochester that predates the ERIAL study.

Beginning in 2003, Gibbons and her colleagues in the Rochester library undertook a series of ethnographic studies aimed at answering a few vexing questions.

One such question was: What are some barriers for graduate students as they try to complete their dissertations?

What they found was that getting students and faculty to actually use helpful tools was as important as providing those tools in the first place, and much more complicated.

Gibbons and her colleagues asked graduate students: If they had a magic wand that could create any tool to help them with their dissertation, what would it be? “Over and over again, we were hearing, ‘I want a tool that will help me with citations and bibliographies and references,’” Gibbons said.

“We thought, ‘Terrific, this is an easy solution, because it already exists in the marketplace, with RefWorks, or EndNote,’” she continued. “So we got site licenses for those products. We then did the exercise again: If you had a magic wand to make a tool, what would it be? ‘Something to help with my citations, my references, my bibliographies.’”

Gibbons and her colleagues decided the problem was insufficient marketing. So they ran a marketing campaign. Then they asked the same question again -- and, alas, got the same answer. So they pointed to the tools and asked students why they weren’t using them.

“It turned out that when our students had started their dissertation writing, their work practices were set,” Gibbons said. “It was too risky [for them] at that point to introduce something new into the process, particularly a new tool like this -- a technology that could just go haywire and then the whole dissertation goes down with it. So we were introducing it at the wrong time. We needed to be marketing it to students who had just arrived on campus, so they could use it in their early papers and by the time they got to their dissertation it was part of their toolbox.”

Francois Gossieaux, a business consultant and co-founder of the marketing firm Human 1.0, also emphasized an anthropological approach when seeking to understand and direct the behavior of users. While librarians and publishers tend to regard audiences in demographic terms, the associations that actually drive user behavior are far narrower, Gossieaux said.

Anthropology teaches us that people are tribal, he said. They act as members of small, specific social groups, not as large, generalized consumer blocs.

Gossieaux described an experiment in which subjects were asked to help with an open-ended number of tasks. One group was asked to complete the tasks as a favor to a friend. A second group was asked to help with the tasks in exchange for $5. The third group was asked to help in exchange for 50 cents. The group that was given $5 completed more tasks than the group that was given 50 cents, Gossieaux said. But the first group, the subjects who were asked for help not as a quid pro quo but as a favor to a friend, completed more tasks than either of the other groups.

“You should always try to tap into people’s social framework rather than their market framework, because it’s much more powerful,” he said.

The Power and the Crowd

The power of crowds can yield astonishing advances for researchers who manage to touch the right nerves. So testified Ben Vershbow, manager of an experimental technology lab at the New York Public Library (NYPL), and Chris Lintott, an astrophysicist at the University of Oxford.

Vershbow and his colleagues have been harnessing the power of crowds to analyze New York City restaurant menus from the early 20th century. The NYPL has about 20,000 of them in its archive. And while previous researchers have gleaned some insights into the city’s culinary history by merely browsing through them, Vershbow wants to run a computational analysis of the whole archive, in hopes of surfacing trends and insights that a manual accounting might miss.
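For a sense of what that kind of analysis might look like, here is a minimal sketch, assuming the crowd-sourced transcriptions were exported as a CSV of menu items with "year" and "dish" columns (the file name and field names are hypothetical, not the NYPL's own):

    # Hypothetical sketch: count, by decade, how many transcribed menu items
    # mention a given dish, assuming rows like (year, dish) in a CSV export.
    import csv
    from collections import Counter

    def dish_counts_by_decade(path, dish):
        """Count menu items per decade that mention the dish (case-insensitive)."""
        counts = Counter()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                if dish.lower() in row["dish"].lower():
                    decade = (int(row["year"]) // 10) * 10
                    counts[decade] += 1
        return counts

    # Example: how often "oysters" turns up, decade by decade.
    for decade, n in sorted(dish_counts_by_decade("menus.csv", "oysters").items()):
        print(f"{decade}s: {n} menu items")

A real analysis would also have to normalize spellings and dish names, which is part of why clean, human-checked transcriptions matter so much.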

Problem is, transcribing the 20,000 menus by hand would be too daunting a task for Vershbow and his small team at the NYPL lab. So he opened the archive to the public, inviting visitors to transcribe menus themselves. Foodies came in droves. As it turns out, the project met a certain set of criteria essential to any crowdsourcing project, Vershbow said: that the task at hand be “discrete, delightful and unambiguous.” Since the library opened the menu archive, it has drawn more than 3 million unique visitors, who have transcribed more than 10,000 menus.

Lintott’s crowdsourcing project is in many ways even more impressive. The users of his Galaxy Zoo project have not only helped Lintott and his colleagues classify hundreds of thousands of faraway galaxies photographed by the Sloan Digital Sky Survey, which imaged the night sky for eight years -- they actually discovered a new class of galaxy.

“Having that much data creates problems,” Lintott said. “…You can give a student about 50,000 galaxies [to classify] before they tell you where to stick the others.”

So Lintott turned to amateur astronomers to help sort the galaxies into three known categories -- spiral, elliptical, and irregular. “I thought it would get a few thousand classifications a month,” he said. Two days after launching, the online volunteers were classifying galaxies at a rate of 70,000 per hour. With many different users assessing the same images, each galaxy was classified by consensus, with the volunteers checking each other’s work.
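Galaxy Zoo’s actual pipeline is more elaborate, but the basic consensus idea can be sketched in a few lines: take the label most volunteers chose for an image, and accept it only when agreement clears a threshold. The data format and threshold below are illustrative, not the project’s own.

    # Illustrative sketch of consensus classification: each image keeps the label
    # most volunteers chose, but only if agreement clears a minimum threshold.
    from collections import Counter, defaultdict

    def consensus(votes, min_agreement=0.8):
        """votes: iterable of (image_id, label) pairs from individual volunteers.
        Returns {image_id: label} for images where one label wins decisively."""
        by_image = defaultdict(list)
        for image_id, label in votes:
            by_image[image_id].append(label)

        results = {}
        for image_id, labels in by_image.items():
            label, count = Counter(labels).most_common(1)[0]
            if count / len(labels) >= min_agreement:
                results[image_id] = label
        return results

    votes = [("g1", "spiral"), ("g1", "spiral"), ("g1", "elliptical"),
             ("g2", "irregular"), ("g2", "irregular")]
    print(consensus(votes, min_agreement=0.6))  # {'g1': 'spiral', 'g2': 'irregular'}

Images that fail to reach consensus can simply be shown to more volunteers until agreement emerges.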

One tribe of about 20 users took to trading puns about a recurring type of galaxy that appeared small, round, and green in the survey images. They called the strange galaxies “peas,” and themselves the “peas corps.” When (and only when) a member of the corps discovered a new pea-like galaxy, they celebrated with a new pea-related pun, creating the sort of gaming element that Lintott -- and other speakers here -- noted is often essential to a successful crowdsourcing project.

“Without our input,” said Lintott, “the ‘peas corps’ systematically found these things, noticed they were the same color, recruited a computer programmer who wrote a database… [for] the 16,000 things that were this color, built their own website to sort through those 16,000, had a teleconference to decide what qualified these things [as ‘peas’], downloaded more data from the survey, reinvented the concept of signal-to-noise, because they needed a particular detection -- then sent me an e-mail saying, ‘We’ve discovered a new class of galaxy.’ ”

But Lintott cautioned that despite Galaxy Zoo's success, leveraging the wisdom of crowds is hardly an exact science. Without the right incentives, researchers might not get the critical mass of volunteers they need to make a crowdsourced project work. Then again, "The Internet is a big place," he said. "Even if you're one in a million, there are a lot of you out there."

