How to optimize data center power, cooling for cloud
By Benedict Soh, Schneider Electric 29-Mar-2012
Virtualization, the engine behind cloud computing, offers undisputed IT benefits to companies, from a smaller physical footprint to easier disaster recovery. However, it also introduces challenges to the power and cooling infrastructure of their data centers that can compromise cloud computing's benefits. By addressing the four primary challenges highlighted below with the suggested solutions, businesses can achieve a leaner and more efficient cloud computing deployment and realise its fullest potential.
Isolate high density from low density
Virtualized servers, which run at higher CPU utilisation, tend to be installed and organised in localized high-density areas. Consolidating these servers leads to higher power densities, raising concerns over cooling capacity. An efficient approach is to isolate higher-density equipment from lower-density equipment. Dedicated cooling air distribution and air containment can then be brought to these isolated high-density zones to provide predictable cooling at any given time. This method helps businesses achieve better space utilisation and greater energy efficiency, and enables maximum rack density.
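To illustrate why consolidation concentrates the cooling problem, the hypothetical figures below (server count, per-server draw, and rack counts are illustrative assumptions, not from the article) show how the same total load produces a much higher per-rack density after consolidation:

```python
def kw_per_rack(servers, watts_per_server, racks):
    """Average power density per rack: total IT load spread over the racks in use."""
    return servers * watts_per_server / racks / 1000  # convert W to kW

# Hypothetical example: 200 servers at 400 W average draw
print(kw_per_rack(200, 400, racks=50))  # spread thin: 1.6 kW/rack
print(kw_per_rack(200, 400, racks=10))  # consolidated: 8.0 kW/rack
```

The total load is unchanged, but the consolidated racks now exceed what room-level cooling alone typically handles, which is why the article recommends dedicated air distribution and containment for the high-density zone.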
Adapt to dynamic IT loads
Besides the increase in density, virtualization also allows applications to be dynamically moved, started and stopped. This results in loads that change over time and physical location, bringing about hard-to-detect shifts in the room's thermal profile. As such, businesses will require a cooling system that can adapt to and match the changing power densities, both in location and in amount. By deploying a row-based cooling system that is instrumented to automatically detect and respond to temperature changes, cooling can be provided where and when it is needed, only in the amount needed, increasing the efficiency of the virtualised environment.
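The detect-and-respond behaviour described above can be sketched as a simple proportional control loop. This is an illustrative model only (the setpoint, gain, and function names are assumptions, not part of any actual cooling product): cooling output ramps up with inlet temperature above a setpoint and idles when the row is cool.

```python
def cooler_output_pct(inlet_temp_c, setpoint_c=24.0, gain=15.0):
    """Proportional response: cooling output (% of capacity) scales with
    how far the rack inlet temperature has risen above the setpoint."""
    error = inlet_temp_c - setpoint_c
    return max(0.0, min(100.0, gain * error))  # clamp to 0-100%

print(cooler_output_pct(24.0))  # 0.0  -- at setpoint, unit idles
print(cooler_output_pct(28.0))  # 60.0 -- load migrated in, output ramps up
```

The point of the sketch is the efficiency argument in the text: because output tracks the local load, a row whose VMs have migrated away consumes almost no cooling energy, instead of the whole room being cooled for a worst case that moves around.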
Right-size power and cooling plant
Virtualization in an existing data center without changes to the power and cooling infrastructure always worsens the data center's power usage effectiveness, or PUE, because fixed losses in power consumption remain while the IT load falls. These fixed losses in existing data centers can be reduced by removing unneeded power modules from scalable UPS systems, and by installing air containment technologies and blanking panels to reduce air mixing. For new data centers, right-sizing the entire power and cooling plant to match the load is both feasible to implement and likely to have the biggest impact in minimising fixed losses. Deploying this solution at the design phase means a lower initial capital cost and maximised energy efficiency once the data center is operational.
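A short worked example makes the PUE effect concrete. The load and loss figures below are illustrative assumptions: fixed losses (UPS, transformers, always-on fans) stay constant while proportional losses track the IT load, so halving the IT load through consolidation drives PUE up even though total energy use falls.

```python
def pue(it_load_kw, fixed_losses_kw, prop_loss_fraction=0.1):
    """PUE = total facility power / IT power. Fixed losses don't
    shrink with IT load; proportional losses scale with it."""
    total_kw = it_load_kw + fixed_losses_kw + prop_loss_fraction * it_load_kw
    return total_kw / it_load_kw

# Before virtualization: 1000 kW IT load, 300 kW fixed losses
print(round(pue(1000, 300), 2))  # 1.4
# After consolidation halves the IT load, fixed losses unchanged
print(round(pue(500, 300), 2))   # 1.7
```

This is why the article pairs virtualization with right-sizing: shrinking the fixed-loss term (fewer UPS modules, tighter containment) is the only way to keep PUE from degrading as the IT load drops.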
Right-size infrastructural redundancy
A highly virtualized environment is fault-tolerant and highly recoverable, making it possible to maintain service levels even if some servers or racks become unavailable. This fault tolerance reduces the need for a highly redundant power and cooling system. By right-sizing the infrastructural redundancy to match the fault-tolerant nature of a virtualized IT environment, energy consumption, capital costs and fixed losses can all be reduced, and the data center's PUE improved.
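The saving from trimming redundancy can be sketched with a simple loss model. The capacities and loss fractions below are illustrative assumptions: installed UPS capacity carries a fixed loss regardless of load, so a 2N design (twice the load in installed capacity) wastes more than a right-sized N+1 design serving the same load.

```python
def ups_loss_kw(load_kw, capacity_kw, fixed_frac=0.03, prop_frac=0.04):
    """Assumed loss model: fixed loss scales with installed capacity,
    proportional loss scales with the load actually served."""
    return fixed_frac * capacity_kw + prop_frac * load_kw

load = 400  # kW of IT load to protect
print(round(ups_loss_kw(load, capacity_kw=800), 1))  # 2N:  800 kW installed -> 40.0 kW lost
print(round(ups_loss_kw(load, capacity_kw=500), 1))  # N+1: 500 kW installed -> 31.0 kW lost
```

Under these assumptions the leaner redundancy saves 9 kW continuously, which compounds into the energy, capital and PUE improvements the article describes.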
The benefits of virtualization and cloud computing can be limited or even compromised if the consequences for the data center's physical infrastructure are not addressed. By adopting the solutions suggested, highly virtualized data centers can be kept running with greater reliability and efficiency, maximising the benefits that cloud computing brings.
Benedict Soh is the vice president of Schneider Electric's IT Business in Singapore.