"Sooner or later," Knewton's senior marketing associate Will Fleiss told me in a recent email, "we'll convince the world we're no longer just a test prep company." And with the company's announcement that its adaptive engine would be powering Pearson's digital courseware, indeed, that day is probably here.
Fleiss's email to me was a response to my coverage a few weeks ago of Knewton's most recent round of funding -- $33 million, with Pearson leading the round of investment. And although yes, I had described Knewton with the adjectival phrase "test prep," I made it clear -- and in the headline at that -- that this was "big bucks for adaptive learning platforms."
Knewton isn't alone in building algorithms to help deliver educational content "adapted" to students' responses. Companies like Grockit, DreamBox Learning and Carnegie Learning are also working on adaptive learning platforms. But the partnership with Pearson certainly sets Knewton apart from its competition. The image that accompanies the news in TechCrunch -- a comparison of Pearson's share of the market versus Cengage Learning and the University of Phoenix -- doesn't seem to get the proportions quite right, but we get the point: Pearson is the largest digital content provider in higher ed, reaching some 9 million students. Pearson is, in fact, the largest education company in the world, with interests in curriculum, textbooks, and assessment at both the higher education and the K-12 level.
That digital content can now connect to an adaptive learning platform. Big content. Big data.
To begin with, Knewton says, it will only integrate with a few of the subjects within Pearson's vast MyLab and Mastering courseware catalog -- MyMathLab, MyReadingLab, MyWritingLab, and MyFoundationsLab, Pearson's "all-in-one solution for college readiness." But we can anticipate that other courses will follow, and the companies say they'll "jointly develop a line of custom, next-generation digital course solutions, and will explore new products in the K12 and international markets."
As the recent investment in Knewton highlighted (funding that brought the total investment in Knewton to over $54 million and its rumored valuation to over $150 million), there is immense interest in student data and learning analytics. No doubt today's announcement isn't just about Pearson's content; it's about Pearson's, or rather students', data.
Buzz about big data notwithstanding, the interest in student data is hardly surprising. As we spend more and more money on higher education and as we struggle with college readiness and college completion (and student loan debt), there are more demands for us to be able to decipher some of what George Siemens describes as the "black box" of education. As it stands, there is great uncertainty about what actually influences learning outcomes. The promise of digital content is, in part, the promise of being able to glean more insight into learning. Being able to provide solid data about student progress -- what they understand, what they don't -- in real time and not just at the beginning or end of the semester, being able to provide remediation aimed at those strengths and weaknesses… these are powerful, powerful offerings.
It's the promise of "personalized learning" -- that by mediating students' work via software, we can deliver the content best suited to them. As founder and CEO Jose Ferreira said in the company press release today, "You'll soon see Pearson products that diagnose each student's proficiency at every concept, and precisely deliver the needed content in the optimal learning style for each. These products will use the combined data power of millions of students to provide uniquely personalized learning to each." (emphasis mine)
We can problematize how truly "personalized" that learning is, I think, as it's based on a major publisher's materials and a fairly standardized curriculum. But with that standardization, no doubt, there will be an immense amount of data about how college students move through this content; Knewton's adaptive learning platform will be able to take advantage of that data to fine-tune its models and its algorithms -- and better algorithms, so the argument goes, will mean a more responsive system for each student.
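(Knewton hasn't published how its engine actually works, but to make the idea concrete: one standard, simple technique for turning a stream of right/wrong answers into an evolving estimate of a student's mastery of a concept is Bayesian Knowledge Tracing. The sketch below is purely illustrative -- the parameter values are made up, and this is not Knewton's algorithm.)

```python
# Toy Bayesian Knowledge Tracing (BKT) update -- an illustrative model,
# NOT Knewton's actual engine. Each answer nudges an estimate of the
# probability that a student has mastered a single concept.

def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_learn=0.3):
    """Return the updated probability that the concept is mastered."""
    if correct:
        # P(mastered | correct answer), via Bayes' rule:
        # a master answers correctly unless they "slip";
        # a non-master can still "guess" correctly.
        evidence = p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess
        posterior = p_mastery * (1 - p_slip) / evidence
    else:
        evidence = p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess)
        posterior = p_mastery * p_slip / evidence
    # Allow for the chance the student learned the concept from this item.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior: 30% chance the student already knows the concept
for answer in [True, True, False, True]:  # a short response history
    p = bkt_update(p, answer)
print(round(p, 3))  # mastery estimate after four responses
```

An adaptive system can then use an estimate like this to decide what to serve next -- more practice when mastery looks low, new material when it looks high -- which is the general shape of the "responsive system" described above.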
So, what are the implications of being able to funnel the data from this digital content -- how students interact, how they test, how they score -- into an algorithm that can deliver them level-, need-, and skill-appropriate material? Will students learn more and learn better? How will better adaptive learning software change the way in which teaching and learning happen in universities?