When confronted with technological innovation I’m kind of like one of my dogs, Oscar, when presented with a toy. Some merit nary a sniff, and go entirely unused, unless the other dog, Truman, decides it’s interesting, in which case maybe that toy isn’t so bad, but then the excitement quickly fades and Oscar realizes he doesn’t really want it.
For me, the smartphone was this way. On first appearance, I didn’t see the point, though I couldn’t help but notice the resultant frenzy and wonder if I was missing out. Once my old phone died and Verizon gave me an offer I couldn’t (but probably should have) refuse(d), I became a reasonably late adopter. While I find my iPhone occasionally useful, it’s more (human) toy than tool, and cutting that $70-a-month check for my voice and data plan makes me shudder a little each time.
Other toys connect with Oscar immediately, and it’s Oh my god, this is great! Great great great! Look at this thing! Have you seen how awesome this thing is? My life wasn’t complete before I got this thing, but now, I’m the happiest creature on the planet! Yay!
(For Oscar, this involves a high-pitched singing kind of whine, wagging his tail like a maniac and uncontrolled drooling. For me, it’s just the drooling.)
TiVo was drool-worthy. I loved it immediately and intensely to the point that I began to think of my life as literally before and after TiVo. After TiVo, rather than having to be at the television at an appointed hour, a menu of my favorite shows waited for me, on-demand, and with the ability to skip commercials to boot.
If I remember correctly, the original capital outlay was something close to $800 for the box and a lifetime subscription to the service.
TiVo made watching television more expensive, but also much better, a clear winner (for me) in terms of the cost/benefit ratio, so much so that when we upgraded to an HD TV three years ago, I bought a new, HD-compatible TiVo and lifetime subscription, another $800.
In the last year, we’ve added additional television on-demand technologies in Netflix streaming and Apple TV, both of which expand the selection of available programming, and allow us to watch what we want, when we want to.
But they also add more expense. Netflix costs just under $120 a year. The Apple TV was $99, but, more importantly when it comes to expense, it’s a gateway to on-demand purchases from iTunes. For example, if we want to watch a season of Homeland (because we do not have Showtime), it will cost $31.99 (in HD).
I shouldn’t forget our HBO Go subscription so we can watch additional movies and HBO series on demand ($120/year), or my subscription to NHL Live so I can watch Chicago Blackhawks games over Internet streaming ($150/year).
I realize that I’ve been taking the existence of cable as a given, but if I’m thinking about costs, it has to be included. My cable/internet/phone is bundled, but if cable accounts for a third of the bill (a conservative estimate of its share), that’s $50 a month, or $600/year.
I have to stop. I’m making myself anxious. I refuse to even do the final math on our household’s full television expense because it is ugly and probably unjustifiable.
My television has become infinitely adaptable, but also considerably more expensive, particularly considering I’m old enough to remember the days when it was literally free, save the cost of electricity to run the television.
As I engage in this little thought exercise, I realize that I’ve been adopting these technologies without thinking too much about the cost/benefit ratio.
They exist, I want them, we can afford them, so we have them.
I think my adoption of adaptive television technology has gotten out of my control.
I think something similar is happening with technology and education.
Among the technologists I see a kind of “Because we can build it, we should use it” attitude, a world view that sees changing technology as a kind of de facto advancement on whatever came before.
Technology has its own momentum outside of human influence, and it’s up to us to ride that pony.
This attitude is evidenced in the recent introductory post from the new Inside Higher Ed blog, Higher Ed Beta.
In the post, the authors declare at the outset, “The educational landscape is shifting under our feet. Georgia Tech’s decision to offer a name-brand master’s degree in computer science at a strikingly low price is only the most dramatic example of the host of innovations that are reshaping higher education.”
The landscape “shifting under our feet” suggests a kind of natural process out of our control, as though some great god is shaking things up, innovation lightning bolts from on high.
With the landscape shifting, the authors look into the future, “The next 24 to 36 months are likely to see dramatic shifts in the way that education is delivered and consumed.”
First, notice the innovators’ use of “months” rather than years as our base unit of time, so we can be reminded that these things are happening all the time. Next, notice how they frame what’s important about education: delivery and consumption.
In this way, they are talking about education the same way I treat my television programming, except that we know that education, real education, is not limited to programming. This is a convenient rhetorical sleight-of-hand meant to privilege technology over human interaction.
The rest of their introduction elucidates (though barely) some of these dramatic shifts that are on the near horizon: free, interactive textbooks; flexible start and end dates for courses; e-advising systems which, “will collect data on each student's progress and risk of failure and help students select courses”; and “Personalized adaptive pedagogies will provide many students individualized pathways through the course material in pinch point or bottleneck classes.”
In essence, what the technologists see is more technology everywhere, all the time, not because we choose it, but because the shifting landscape demands it.
The demand for these changes, according to the authors, is two-fold. One reason is economic:
“The current business model of public higher education – which depends heavily on public support and internal cross-subsidies – is eroding as public funding lags behind rising costs, resistance to tuition increases mounts, and more students receive college credit in high school, from community colleges, and from online providers.”
This assumes that technology will somehow be cheaper for the consuming institutions than the status quo, as though software and data are created and maintained through some sort of magic, self-sustaining system and the corporations that create these things will give them away for free.
Let me frame it another way: have the software and data solutions adopted for higher ed thus far – things like course management software – actually made these things cheaper? Or do we now need a whole team of network engineers managing these systems?
The second rationale for the adoption of these technologies appears to be an argument for improved quality. They see technology as a “paradigm shift” from a “faculty-centered” to a “learner-centered” approach where, “a one-size-fits-all pedagogy is gradually replaced (sic) instruction tailored to individual student needs, and as approaches designed to separate the wheat from the chaff are replaced by a success agenda.”
Two thoughts on this. One, wishing doesn’t make it so, and buzzwords do not pedagogy make. (I’ll have another post exploring these notions in more detail at a future date.)
Two, while it’s popular for higher ed champions of big data solutions and adaptive software to claim that the technology is meant to help instructors, not replace them, this obviously cannot be true if we are also going to reap these economic benefits that are supposed to result from our shift towards software-oriented teaching and learning.
Indeed, a recent, largely positive, analysis of Signals, a kind of e-feedback technology to let students know how they’re doing in a particular course, ultimately admits that if such technology is going to be useful it will be not in replacing faculty teachers and mentors, but in making “the human moments in education more frequent, informed, and meaningful.”
While that sounds great and all, it also looks like more expensive software development and maintenance without any actual savings.
I am not against technology, per se. I also strongly support moves towards unshackling learning from seat-time measurements and finding ways for students to prove competencies and therefore streamline their educations. These can be done independently of new technology, however.
But I can’t help but worry about innovator rhetoric that suggests the technology is no longer in our control, but is instead something with its own momentum. The conversation needs to incorporate talk of educational values, like the fact that education itself rests on the human exchange of information, rather than just deciding that humans have no role in a disrupted universe.
Technology should be a tool, not a toy. The L.A. Unified School District is finding out the difference with its disastrous plan to use public bond money to purchase $1 billion in iPads, which will cost an additional $60 million in yearly fees for software license renewal. The students quickly hacked the tablets and got busy doing what they wanted, interacting on social media, rather than churning away at the Pearson software modules.
I’m more than happy to see education technologists experiment. If there’s something software can do better than humans in education, so be it. If humans can do better work in conjunction with software, fantastic. But let’s treat these things that Higher Ed Beta sees on the horizon as possibilities, not inevitabilities.
Let’s make sure we actually need this stuff, that there is value in its adoption, and we’re not just unthinkingly heaping Apple TV and Netflix and HBO Go on top of more television than I can reasonably consume already.
Let’s not get stuck with some very expensive toys.
I once saw Twitter purely as a toy; now, increasingly, it is a tool. It's also free.