As colleges offer more and more network services to their students and faculty, the capacity to support applications like e-mail, Web access, databases and course signups doesn't materialize out of thin air. Servers take up space, gobble up energy and require maintenance like any other tool or appliance.
So, facing rising demands for resources and cost concerns, information technology departments at colleges and universities are starting to adopt what is already common practice among their corporate counterparts: a process called "server virtualization," in which the operations and data of several individual servers are consolidated onto a single physical machine. That not only saves space but also allows colleges to reduce the energy spent powering mammoth racks of centralized computers and the air-conditioning systems needed to keep them cool.
If a college's servers existed inside a closed room, there would be no noticeable external difference between 50 separate machines and five machines each performing the duties of 10. Those 10 servers on each machine would be considered "virtual" because they'd exist only as software representations within a single host that divides its processing power and storage space among the functions that would otherwise be performed by the physical servers it replaced.
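In code, the arithmetic of that closed-room comparison looks something like the sketch below (Python; the hardware figures are hypothetical, chosen only to illustrate the carving-up, not Vanderbilt's actual specifications):

    # A toy model of the closed-room comparison: five physical hosts,
    # each carving its capacity into 10 virtual servers. All hardware
    # figures here are hypothetical.
    HOSTS = 5
    GUESTS_PER_HOST = 10
    HOST_CPU_CORES = 16   # assumed capacity of one physical host
    HOST_RAM_GB = 64

    # Each virtual server is allotted a slice of the host's resources.
    cores_per_guest = HOST_CPU_CORES / GUESTS_PER_HOST   # 1.6 cores
    ram_per_guest = HOST_RAM_GB / GUESTS_PER_HOST        # 6.4 GB

    total_guests = HOSTS * GUESTS_PER_HOST               # 50 servers
    print(f"{total_guests} virtual servers on {HOSTS} machines; each gets "
          f"{cores_per_guest:.1f} cores and {ram_per_guest:.1f} GB of RAM")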
As a result of the power savings, Vanderbilt University is billing its recent initiative to adopt server virtualization as a "green effort." The university's Information Technology Services estimates a power savings of 20,575 watts from consolidating 35 percent of the servers it manages into virtual clusters. Officials are working toward a goal of 50 percent, with an eye toward eventually using virtual servers for three-quarters of the university's capacity.
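Annualizing that figure is simple arithmetic. The back-of-the-envelope sketch below treats the estimate as a steady 20,575-watt reduction in server power draw; any savings on air conditioning would come on top of it:

    # Annualize a steady 20,575-watt reduction in power draw.
    power_saved_watts = 20_575
    hours_per_year = 24 * 365                            # 8,760 hours

    kwh_per_year = power_saved_watts / 1000 * hours_per_year
    print(f"~{kwh_per_year:,.0f} kWh saved per year")    # ~180,237 kWh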
"We have actually had some queries from some of the local universities here about it, so I’m pretty sure that there are other universities that are actually following this as well. But it’s definitely very pervasive in the industry as a whole nowadays. It’s a pretty hot item,” said Esfandiar Zafar, the IT department's director of application hosting.
According to Zafar, roughly 150 virtual servers -- the exact count varies from day to day -- are now running on only a dozen machines. A year and a half ago, before the initiative began, each of those servers either occupied its own physical unit or didn't exist at all because the resources to host it weren't readily available.
Judging from the results so far, cramming an average of more than 10 self-contained servers onto a single hardware unit has worked without significant performance issues (although the physical machines are more powerful and likely more expensive than some of the ones they replaced). That's partially because servers, when operating individually, don't always make full use of their processing power or storage capacity. When consolidated into virtual stacks, however, the host machine can dole out resources more efficiently as they're needed by each operating system.
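A rough model of why that packing works appears below. The utilization figures are assumptions chosen for illustration, not measurements from Vanderbilt, though under these numbers the result happens to land near the dozen machines Zafar describes:

    import math

    # Standalone servers often idle far below capacity, so their combined
    # demand fits on far fewer hosts. All figures here are hypothetical.
    standalone_servers = 150
    avg_utilization = 0.06       # assume each box averages 6% busy
    headroom_target = 0.80       # keep each host at or below 80% load

    # Aggregate demand, in "fully busy server" equivalents.
    total_demand = standalone_servers * avg_utilization        # 9.0

    hosts_needed = math.ceil(total_demand / headroom_target)   # 12
    print(f"{standalone_servers} workloads fit on roughly {hosts_needed} hosts")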
“For what we’re doing, we’ve had pretty good luck with performance across the board,” said Kevin McDonald, Vanderbilt's assistant director for application hosting. The university's main Web site, for example, is housed entirely on a virtual server.
For the IT administrators implementing them, virtual servers have numerous other advantages. Upgrading a server's memory or hard drive can be done with the click of a mouse, shifting physical resources from one virtual system to another. If faulty hardware causes a system shutdown, virtual machines can seamlessly shift to another host while still running -- meaning potentially uninterrupted service and fewer network outages. Space savings, meanwhile, could theoretically approach 90 percent.
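As one concrete illustration (using the open-source libvirt toolkit, which isn't necessarily what Vanderbilt runs), both of those operations can be a few lines of code. In the minimal sketch below, the host URIs and the guest name are hypothetical placeholders:

    import libvirt   # pip install libvirt-python

    src = libvirt.open("qemu:///system")                   # current host
    dest = libvirt.open("qemu+ssh://standby-host/system")  # another host

    dom = src.lookupByName("campus-web")                   # a running guest

    # The click-of-a-mouse memory upgrade: grow the live guest to 8 GB.
    # libvirt takes KiB, and this assumes the guest's configured maximum
    # allocation is large enough to allow it.
    dom.setMemoryFlags(8 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    # Live-migrate the still-running guest to the other physical host --
    # for example, ahead of hardware maintenance -- without dropping service.
    dom.migrate(dest, libvirt.VIR_MIGRATE_LIVE, None, None, 0)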
But while having a more tightly controlled, self-enclosed virtual server farm may help prevent external attacks, it could also increase the likelihood of servers interfering with each other. So far, Zafar said, that hasn't been a problem. Another drawback is that certain heavy-load setups, such as database servers, aren't suitable for virtualization.
Properly implemented, the strategy has enough benefits in cost and efficiency that many IT administrators would find it worthwhile. Still, server virtualization might not be for everyone, said Kazuto Okayasu, the manager of desktop and server support at the University of California at Irvine. Okayasu, who led a session on the topic at November's Educause conference in Seattle, noted that virtual servers might not be ideal for smaller institutions or for certain departments.
While the initial costs of planning and maintaining virtual clusters are substantial, the benefits can be enough to put previously out-of-reach capabilities within reach of operations with smaller budgets. At Irvine, Okayasu said, power outages and continuing environmental concerns put virtualization on the agenda at a time when trends in computer use began to place greater demands on server-side applications. (With Web 2.0, for example, the focus is increasingly on Web services available anywhere and at all times, rather than on replicating capabilities on people's individual computers.)
At Irvine, virtual servers offered a way to meet that growing demand while reining in costs. But Irvine, like many universities, is a decentralized institution, so adoption happens department by department rather than by central mandate. Vanderbilt's strategy is similarly piecemeal: administrators inform individual departments about the virtual option when they approach IT about adding a server. Departments can then purchase yearly access to a virtual server just as they would a physical one.