Second Penn State College Experiences Cyberattacks

Pennsylvania State University announced Friday that its College of Liberal Arts experienced two cyberattacks. These attacks took place after enhanced security measures were adopted in the wake of a May attack on Penn State's engineering college network. Penn State said that its investigation of the new incidents uncovered "no evidence that personally identifiable information or research data were compromised."

Kadenze launches online education platform for creative arts courses

Ed-tech start-up Kadenze launches an online education platform specifically created for creative arts courses.

Technology podcast features interview with D2L's John Baker

This month's edition of Inside Higher Ed's monthly technology podcast features a discussion with John Baker, CEO of D2L, about the new version of the company's learning platform, Brightspace.

Article on difficulties in social-media research

Five years ago, this column looked into the scholarly potential of the Twitter archive the Library of Congress had recently acquired. That potential was by no means self-evident. The incensed “my tax dollars are being used for this?” comments practically wrote themselves, even without the help of Twitter bots.

For what -- after all -- is the value of a dead tweet? Why would anyone study 140-character messages, for the most part concerning mundane and hyperephemeral topics, with many of them written as if to document the lowest possible levels of functional literacy?

As I wrote at the time, papers by those actually doing the research treated Twitter as one more form of human communication and interaction. The focus was not on the content of any specific message, but on the patterns that emerged when they were analyzed in the aggregate. Gather enough raw data, apply suitable methods, and the results could be interesting. (For more detail, see the original discussion.)

The key thing was to have enough tweets on hand to grind up and analyze. So, yes, an archive. In the meantime, the case for tweet preservation seems easier to make now that elected officials, religious leaders and major media outlets use Twitter. A recent volume called Twitter and Society (Peter Lang, 2014) collects papers on how politics, journalism, the marketplace and (of course) academe itself have absorbed the impact of this high-volume, low-word-count medium.

One of the book’s co-editors is Katrin Weller, an information scientist at the GESIS Leibniz Institute for the Social Sciences in Cologne, Germany. At present she is in the final month of a Kluge Fellowship at the Library of Congress, which seems like an obvious place to conduct her research into the use of Twitter to study historical events. Or it would have been, if the archive of tweets were open to scholars, which it still isn’t, and won’t be any time soon.

Unable to pursue her original project, Weller used the Kluge Fellowship to broaden her focus -- which, she told me in an email exchange, “has been pretty much on working with Twitter data [over] the last years.” She spent her time catching up with the scholarship on other forms of social media and investigating various web-archiving projects at the library.

As for the digital collection that made her want to go to Washington, DC, in the first place… well, the last official statement from the library was issued in January 2013. It reported that Twitter’s output from 2006 to 2010 -- consisting of “approximately 21 billion tweets, each with more than 50 accompanying metadata fields, such as place and description” -- had finally been organized by hour. The process was to be completed that month, even as another half billion or so tweets per day were added to the collection.

The Library of Congress finds itself in the position of someone who has agreed to store the Atlantic Ocean in his basement. The embarrassment is palpable. No report on the status of the archive has been issued in more than two years, and my effort to extract one elicited nothing but a statement of facts that were never in doubt.

“The library continues to collect and preserve tweets,” said Gayle Osterberg, the library’s director of communications, in reply to my inquiry. “It was very important for the library to focus initially on those first two aspects -- collection and preservation. If you don’t get those two right, the question of access is a moot point. So that’s where our efforts were initially focused and we are pleased with where we are in that regard.”

As of early 2013, the library reported it had received more than 400 requests to use the archive. Since then, members of the public have asked for updates on the library’s blog, with no response forthcoming. At this point no date has been set for the archive to be opened to researchers. The leadership of the Library of Congress may be “pleased [by] where we are,” but their delight is not likely to be contagious.

No grumbling from Katrin Weller, though. She sent me a number of her recent and forthcoming papers on what might be called second-order social-media research. That is, they take up the problems and concerns that face scholars trying to study social media.

Apart from the difficulties involved in archiving -- enough on that, for now -- there are methodological and ethical problems galore, as becomes clear from a paper Weller co-authored with her colleague Katharina E. Kinder-Kurlanda, a cultural anthropologist also at the Leibniz Institute. In 2013 and 2014, they conducted 42 interviews with social-media researchers at international conferences. The subjects were from various fields and parts of the world. What they had in common was the use of data gathered from a variety of social-media venues -- not just Twitter and Facebook but “many other platforms such as Foursquare, Tumblr, 4chan and Reddit.”

Elsewhere, Weller has described social-media research as a kaleidoscope containing “thousands of individual pieces, originating from different perspectives and disciplines, applying different methods and establishing different assumptions about social media” -- with the kaleidoscope constantly shaking from site redesigns, changes in privacy policy and so on.

All of which makes establishing methodological standards -- how material from social-media platforms is collected, documented and handled -- extremely difficult, if not impossible. A research team might find it necessary to write its own program to harvest raw data from a site, but if the overall focus of the project is sociological or linguistic, the details will probably not be discussed in the resulting publication. There is also the issue of “data cleaning,” i.e., filtering out messages from spam accounts, bots and the like, in order to create a data set consisting of only human-generated material (as much as that is possible). It is a time- and labor-intensive process, and the thoroughness of the job will in part be a function of the budget.
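To make the cleaning step concrete, here is a minimal sketch of what such a filtering pass might look like. Everything in it (the field names, the thresholds, the heuristics) is invented for illustration; none of it comes from the projects Weller and Kinder-Kurlanda describe.

```python
# A toy "data cleaning" pass over collected tweets.
# All field names, thresholds and heuristics are invented for
# illustration; real projects tune such rules to their own data.

def looks_automated(tweet: dict) -> bool:
    """Heuristic: flag accounts that post implausibly often."""
    user = tweet.get("user", {})
    age_days = max(user.get("account_age_days", 1), 1)
    posts_per_day = user.get("statuses_count", 0) / age_days
    return posts_per_day > 200  # hypothetical cutoff for bot-like volume

def clean(tweets: list) -> list:
    """Keep tweets that appear human-generated; drop bots and duplicates."""
    seen = set()
    kept = []
    for t in tweets:
        text = t.get("text", "")
        if looks_automated(t):
            continue  # likely an automated account
        if text in seen:
            continue  # verbatim duplicate, likely a spam campaign
        seen.add(text)
        kept.append(t)
    return kept
```

Even a pass this crude illustrates the underlying problem: every threshold is a judgment call, and two teams making different calls will end up with different data sets.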

So the size, quality and reliability of the raw material itself are going to vary widely from researcher to researcher. Weller and Kinder-Kurlanda note a case in which the same data were collected from a single social-media website using the same tools running in parallel on two different servers -- and the result was two different data sets. And all of this, mind you, before the serious analytical crunching even gets started.

One partial solution, or at least stopgap measure, is to share data sets -- certainly easing the strain on some researchers’ purses. The authors mention finding researchers “who felt an ethical obligation to share their data sets, either with other researchers or with the public.” About a third of the researchers Weller and Kinder-Kurlanda interviewed “had experience in working with data collected by others.” But the practice raises ethical problems about privacy, and it sounds like some of the exchanges take place sub rosa. And in any event, sharing data sets probably won't change the drift toward some social-media platforms being over- or under-researched because their data are easier to collect or clean.

Weller indicates that she intends to write more about the epistemological issues raised by social media. That sounds like an interesting topic, and a perplexing one. Besides, it will clearly be a long, long time before anyone gets to use Twitter as a tool for historical research.

D2L gets into adaptive learning with a new tool aimed at professors

Recent adaptive learning entrants seek to put faculty members in charge of "personalized" content, but will the tools go beyond pilot projects?

Essay on issues facing young academics on social media

Kerry Ann Rockquemore offers questions for pretenure academics to consider before getting active on controversial topics on social media.

The Learning House acquires Carnegie Mellon U. spinoff Acatar

The Learning House, an online enabler, acquires Carnegie Mellon U. spinoff Acatar in a bid to become more competitive among prestigious universities.

Professors should seize chance to use data to improve learning (essay)

When Rowland Hussey Macy opened his namesake store in 1858, understanding consumer behavior was largely a matter of guessing. Retailers had little data to assess what customers wanted or how variables like store hours, assortment or pricing might affect sales. Decision making was slow: managers relied on manual sales tallies, compiled weekly or annually. Dozens of stores failed, including several of Macy’s original stores.

Predictive analytics, in the early days of retail, were rudimentary. Forward-thinking retailers combined transactional data with other types of information -- the weather, for example -- to understand the drivers of consumer behavior. In the 1970s, everything changed. Digital cash registers took hold, allowing companies to capture data and spot trends more quickly. They began A/B testing, piloting ideas in a test vs. control model at the store level, to understand the impact of strategy in near real time.
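The arithmetic behind a test vs. control comparison is simple enough to sketch. The figures below are made up for illustration; they are not any retailer's data.

```python
# A minimal store-level test vs. control comparison, with made-up
# weekly sales figures (in thousands of dollars).
test_stores = [102.0, 98.5, 110.2]     # stores piloting the change
control_stores = [95.0, 99.1, 101.3]   # comparable stores without it

def mean(xs):
    return sum(xs) / len(xs)

# Lift: relative difference between test and control averages.
lift = (mean(test_stores) - mean(control_stores)) / mean(control_stores)
print(f"Estimated lift from the change: {lift:.1%}")  # roughly 5% here
```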

In the early days of AOL, where I worked in the 1990s and early 2000s, we were quick to recognize the risk to brick-and-mortar stores, as online retailers gathered unprecedented data on consumer behavior. Companies like Amazon could track a customer’s movements on their site using click-stream data to understand which products a customer was considering, or how long they spent comparing products before purchasing. Their brick-and-mortar counterparts, meanwhile, were stuck in the 1800s.

Unexpected innovations, however, have a funny way of leveling the playing field. Today, broadband ubiquity and the proliferation of mobile devices are enabling brick-and-mortar stores to track cell phone signals or use video surveillance to understand the way consumers navigate a store, or how much time they spend in a particular aisle. Sophisticated multichannel retailers now merge online behavior with in-person information to piece together a more holistic picture of their consumers, generating powerful data that drive changes in layout, staffing, assortment and pricing. A recent study found that 36 percent of in-store retail purchases -- worth a whopping $1.1 trillion -- are now influenced by the use of digital devices. Retailers who leverage online research to drive brick-and-mortar sales are gaining a competitive advantage.

The use of big data and predictive analytics in higher education is nascent. So-called disrupters often claim that the lecture hasn’t changed in 150 years, and that only online learning can drive transformative, game-changing outcomes for students. Of course, these claims ring hollow among today’s tech-savvy professors.

Since my transition into higher education, I have been struck by the parallel journeys of retailers and educators. Both have been proclaimed obsolete at various points, but the reality is that the lecture, like the retail experience, has evolved and will continue to evolve to meet the new demands of 21st-century users.

Like brick-and-mortar stores, lectures were once a black box -- but smart faculty members are beginning to harness the presence of mobile devices to capture unprecedented levels of data in traditional classrooms. And smart institutions are combining real-time engagement data with historic information to spot challenges early and change the academic trajectory for students.

Historical sources of student data (FAFSA, GPA, SAT, etc.) have predictive validity, but they are a bit like the year-over-year data retailers used: limited in depth and timeliness. The heart of a higher education institution is its professors -- and its classes. Professors are experts in their fields who provide unique learning opportunities to their students, and studies have shown that positive relationships between professors and students lead to greater student success.

Some of the most interesting early data are coming from the big, first-year lecture courses. While most students experience these as a rite of passage, they also hold great potential as models of how behavioral data can improve engagement and completion rates for students. Faculty are no longer powerless in the face of larger classes and limited insight into their students' learning behavior. They can track how well students are engaging in traditional lecture classes and intervene with students who aren’t engaged in the behaviors (note-taking, asking questions and attending class) that correlate with success.
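As a rough illustration of how such signals might be combined into an early-warning flag, consider the sketch below. The behaviors come from the paragraph above, but the weights, thresholds and record format are invented for the example; this is not a description of any actual product's model.

```python
# A toy engagement score built from the behaviors named above.
# Weights, caps and record format are invented for illustration.
from dataclasses import dataclass

@dataclass
class WeeklyRecord:
    attended: bool
    notes_taken: int       # note entries captured this week
    questions_asked: int   # questions submitted this week

def engagement_score(weeks: list) -> float:
    """Average a 0-to-1 score across weeks; higher means more engaged."""
    if not weeks:
        return 0.0
    total = 0.0
    for w in weeks:
        total += (0.5 * w.attended
                  + 0.3 * min(w.notes_taken / 5, 1.0)
                  + 0.2 * min(w.questions_asked / 2, 1.0))
    return total / len(weeks)

def needs_outreach(weeks: list, cutoff: float = 0.4) -> bool:
    """Flag a student for early intervention when engagement runs low."""
    return engagement_score(weeks) < cutoff
```

The point is not the particular weights but the workflow: a running score computed during the term, rather than a verdict delivered by the final exam.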

Historically, professors have relied on piecemeal solutions to gather insights on student behavior. So-called student-response systems and learning management software, like digital cash registers in the ’70s, provide useful data -- but they don’t provide the sort of real-time analytics that can inform an instructor’s practice or identify students in need of additional support and coaching.

A more recent class of solutions -- in full disclosure, including ours at Echo360 -- is designed to work in conjunction with great teaching, while providing instructors with the tools to track and measure student engagement: Are students taking notes? Are they asking questions? These tools give administrators and instructors insight into how students are interacting and participating, both in class and with content or readings before and after class. No more waiting for summative tests to demonstrate that a student misunderstood a concept weeks or months earlier.

The analogy between retail and education has its limitations. The mission and objectives in education are more nuanced, and frankly, more important. However, education, like every sector, has what we call a moment of truth.

For retailers, that moment of truth centers on the purchase decision. Sophisticated marketers and retailers have used behavioral data to become incredibly skilled at understanding and shaping that purchase decision to achieve extraordinary results.

It’s time to apply those lessons to a higher calling. The explosion of digital devices in the classroom allows us to understand the learning process wherever it is happening on campus, and to support education’s vital moment of truth -- a transaction of knowledge between professors and students.

Frederick Singer is CEO and founder of Echo360, which provides active learning and lecture capture services to more than 650 higher ed clients in 30 countries.

Four liberal arts colleges, early to the MOOC scene, form online education consortium

Four liberal arts colleges -- all early adopters of massive open online courses -- form a consortium to expand their online education efforts.

Educause releases blueprint for next-generation learning management systems

Educause releases a blueprint for next-generation learning management systems, recommending a "Lego approach."

