Most of the time when students tell me about their environmental concerns, an underlying optimism comes from their faith that technology can save us. Sometimes, they're attending Greenback U to learn more about one or another set of technologies, so that they can be part of the salvation effort. Others hope to learn to teach about exciting new technologies, or influence public policy to gin up more support for environmentally sensitive technologies, or conduct the basic science on which those new technologies can be based, or write books and articles about how wonderful the (upcoming) new technologies are.
When speaking with grad students, I feel free to challenge the basic assumption that a technological fix is even possible, much less likely. But with undergrads (and most of my student interaction is with undergrads), I don't feel I can do that. Undergrads are (even more than the rest of us) still figuring out who they are and what they're good at, and the learning skills they're mastering will be critical to their future success even if the specific material they focused on to master those skills turns out to be wrong. Or misguided. Or of little long-term import.
With some of the brighter undergrads, I can get away with the observation that, after all, it's technology that got us into this mess. Whatever unsustainable collective behavior (fossil fuel consumption, planned obsolescence of consumer goods made from virgin materials, industrial agriculture, plastic food packaging, excessive fishing, whatever) we're talking about, the underlying pattern is the same. Whatever we as a global society are doing to excess, we're doing to excess because we've developed and adopted technologies which seem to address a want or need we believe we have, even if those same technologies create "unintended consequences" for someone else, or somewhere else, or some-when else, or some other portion of the ecosystem. Indeed, it's the cumulative effects of these "unintended consequences" that create sustainability problems.
Technological optimism, then, is based on the presumption that some new tech can come along which will address the negative unintended consequences of existing tech, without creating new unintended consequences of similar magnitude and impact. That's a corollary to the myth of progress ("progress" just being change with good PR). But no technology ever has been, or ever can be, created that brings with it no unintended consequences. That's not to impugn the motives of technologists; it's just a recognition of the inherent limitations of technologies themselves.
The very existence of the phrase "unintended consequences" points out that any technology has intended consequences. Technologies are developed when we recognize and address our inability to produce a particular set of desired outcomes -- the new technology is deemed successful because it allows us to do or create something of which we weren't previously capable.
But no technology operates in a vacuum. Any technology (however broadly or narrowly we choose to define that term) entails the consumption of resources (time, effort, materials, etc.) to create its desired effect. Any technology creates byproducts (leisure, heat, wastes, etc.) in addition to its desired output. Any technology, thus, affects its surrounding systems (society, ecosystem, planet) in more ways than just the creation of the one consequence its developers desired and intended. There are always unintended consequences. There always will be. And as the use of any particular technology becomes prevalent -- as it scales up by orders of magnitude -- the cumulative effects of those unintended consequences always threaten to become unsustainable. It's just a matter of degree. Of passing some limit. Of exceeding the capacity of the surrounding system to supply resources, or absorb byproducts. The limit can be explicit and direct ("sorry, there just aren't any more fish -- you took too many") or implicit and indirect ("sorry, there aren't any more fish because you put too much fertilizer on your fields which washed downstream and acidified the waters which killed off the coral reefs that had provided spawning grounds for little fish that had provided the food source for the big fish you were hoping to catch, so even though you didn't directly catch too many fish you're still out of luck"). But the limits are always there, and they're unforgiving.
Real-world technologies, thus, always have the potential to create "wicked" problems -- problems which seem to defy solution because of the high degree of interconnection among their various elements and impacts. Indeed, it's fair to say that any real-world technology, scaled up sufficiently, will inevitably create a wicked sustainability problem. The trick, then, is to avoid excessive scaling. But the paradigm of technological optimism recognizes no limits, because technologists focus only on achieving the desired result without creating immediate, apparent, and unavoidable negative outcomes. The whole technological approach revolves around finding a way to create some specific capability; if technologists simultaneously had to determine that their product not only created the desired effect but also created absolutely no deleterious effects for anyone, anywhere, any-when . . . no technology would ever get created.
Exceptions to the pattern I've just described probably exist (an idealized view of the pharmaceutical development process jumps immediately to mind), but the point is that these are, in fact, exceptions. They're exceptional. They're the gnat on the dog's tail, not the dog. It's our dogs which have gotten us into this mess, just by their very dog-ness. Dogs won't get us out of it.
A common student response to the IPAT (environmental Impact = Population * Affluence * Technology) formulation is to trust technology to save us. So when an undergraduate student cites IPAT in conversation, I try to suggest that technology has its limits, and that a more efficient technology (the most common desideratum in students' minds, I find) isn't necessarily more beneficial in the long term. If pressed, I bring up the Jevons Paradox. But I try to be careful -- to not disillusion them too much, too quickly. Disillusionment, like technology, works best when kept in check. Each is, perhaps, a necessity. Neither is the ultimate answer.
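The IPAT arithmetic, and the Jevons-style rebound that undercuts it, can be sketched in a few lines. All the figures below are hypothetical, chosen purely to illustrate how an efficiency gain (a smaller T) can be outweighed by the extra consumption it enables (a larger A):

```python
# Illustrative sketch of the IPAT identity (I = P * A * T) and a
# Jevons-style rebound. Every number here is made up for illustration.

def impact(population, affluence, technology):
    """Environmental impact as the IPAT product.

    population: number of people
    affluence:  consumption per person (units of service per capita)
    technology: impact per unit of consumption (lower = more efficient)
    """
    return population * affluence * technology

# Baseline: a million people, each consuming 10 units of service,
# at 1 unit of impact per unit of service.
baseline = impact(population=1_000_000, affluence=10.0, technology=1.0)

# A new technology halves the impact per unit of service...
# ...but the cheaper service prompts people to consume 2.5x as much
# (a hypothetical rebound rate -- the Jevons Paradox in miniature).
rebound = impact(population=1_000_000, affluence=25.0, technology=0.5)

print(baseline)  # 10000000.0
print(rebound)   # 12500000.0 -- the efficiency "gain" yields MORE total impact
```

The rebound rate is the crux: if consumption had risen less than twofold in this toy scenario, total impact would indeed have fallen. The paradox is that efficiency itself is often what drives consumption past that break-even point.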