How server virtualization impacts storage
By Andrew Sampson, Hitachi Data Systems 24-May-2010
Storage architectures that enable organizations to scale “up” and “out” have clear advantages, particularly in performance-demanding environments (e.g., environments with more than 2,500 virtual machines). This type of architecture, analysts contend, provides the best of both worlds: the flexibility of a “scale out” architecture to meet specific application, site or geography requirements, and the efficiency and cost-savings benefits of a “scale up” architecture.
Heterogeneous storage virtualization is the key enabler of “scale up and out” storage architectures. It provides organizations with access to storage capacity and performance across the data center. Thin provisioning, if available, then doles out that capacity and performance to applications as needed; dynamic tiering, if available, stores data on the appropriate tier of disk based on its importance to the organization, risk factors and access patterns.
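As a rough illustration of the tiering idea, a dynamic tiering policy can be thought of as mapping each data set's access frequency to a tier of disk. The tier names, thresholds and policy below are illustrative assumptions for this sketch, not any vendor's actual implementation:

```python
# Hypothetical dynamic tiering decision: place data on a disk tier
# based on how often it is accessed. Tiers are ordered hot to cold;
# names and thresholds are assumed for illustration only.

TIERS = [
    ("SSD",  1000),   # hot data: >= 1000 accesses per day
    ("SAS",   100),   # warm data: >= 100 accesses per day
    ("SATA",    0),   # cold data: everything else
]

def choose_tier(accesses_per_day: int) -> str:
    """Return the first (fastest) tier whose threshold the data meets."""
    for tier, threshold in TIERS:
        if accesses_per_day >= threshold:
            return tier
    return TIERS[-1][0]

print(choose_tier(5000))  # hot data lands on "SSD"
print(choose_tier(250))   # warm data lands on "SAS"
print(choose_tier(3))     # cold data lands on "SATA"
```

In a real array this decision would be made continuously per page or block as access patterns change, which is what lets infrequently used data drift down to cheaper disk.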
Acquisition costs only 20%
Price does not equal cost. In fact, in the case of storage, it’s estimated that acquisition costs account for just 20% of its total cost. Capex and opex costs such as hardware and software depreciation and maintenance; labor; floor space and power; data protection and disaster recovery; and data management/mobility account for the other 80%.
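A quick back-of-the-envelope illustration of that 20/80 split, using a purely hypothetical purchase price:

```python
# Hypothetical TCO arithmetic using the "acquisition is ~20% of total
# cost" rule of thumb from the article. The $100,000 figure is assumed.
acquisition = 100_000        # purchase price in dollars (illustrative)
acquisition_share = 0.20     # acquisition's share of total cost

total_cost = acquisition / acquisition_share
ongoing = total_cost - acquisition

print(f"Estimated total cost of ownership: ${total_cost:,.0f}")  # $500,000
print(f"Estimated non-acquisition costs:   ${ongoing:,.0f}")     # $400,000
```

On this rule of thumb, every dollar of purchase price implies roughly four more dollars in lifetime capex and opex, which is why features that cut ongoing costs matter more than sticker price.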
This point is extremely important for organizations to remember when selecting storage systems for virtualized environments.
Features like thin provisioning, storage virtualization, dynamic tiering and data migration can have a significant impact on capex and opex costs; they can produce cost reductions and savings that are complementary to, and additive to, the benefits of server virtualization.
However, it is important to stress again that not all storage architectures are created equal. Economically superior storage architectures are able to tier, thin and virtualize heterogeneous storage environments.
Finally, it’s also important to note that cost savings are just one dimension of the transformation this type of architecture can enable. Other benefits include improved efficiency, IT agility and application resiliency, for which end-to-end (server to storage) virtualization is a prerequisite.
85% of companies suffering major data loss fail within a year
By reducing the overall number of physical servers needed to support the business, organizations are able to extend protection to more applications and data types.
Doing so has important cost-saving benefits. In challenging economic times, having proven, extensible disaster recovery and business continuity processes in place is more important than ever before. Downtime from system failures translates to lower productivity and lost revenue. In fact, a 2009 Enterprise Strategy Group study found that 85% of all companies that suffer a major data loss or significant downtime are out of business within a year. Even brief outages can drastically affect customer satisfaction and employee productivity.
While server virtualization is the primary means to improved business continuity and disaster recovery and reduced business risk, storage is an integral component. For example, how (e.g., synchronously or asynchronously) and where (e.g., at system- or host-level) the replication is done can have significant business and IT implications, affecting recovery capabilities as well as physical and virtual server performance.
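The synchronous-versus-asynchronous trade-off mentioned above can be sketched as follows. The classes and timings here are assumptions for demonstration, not a real replication product's API:

```python
# Illustrative sketch of synchronous vs. asynchronous replication.
# Sync: the host's write is acknowledged only after the remote copy
# lands, so failover loses no data but every write pays link latency.
# Async: the write is acknowledged at once and shipped later, so host
# performance is better but in-flight writes can be lost on failure.
import queue
import threading
import time

class SyncReplicator:
    """Acknowledge a write only after the remote site confirms it."""
    def __init__(self, link_latency: float):
        self.link_latency = link_latency
        self.remote = []

    def write(self, data) -> str:
        time.sleep(self.link_latency)   # wait out the round trip
        self.remote.append(data)        # remote copy is now current
        return "ack"                    # host sees latency on every write

class AsyncReplicator:
    """Acknowledge immediately; drain writes to the remote site later."""
    def __init__(self):
        self.pending = queue.Queue()
        self.remote = []
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, data) -> str:
        self.pending.put(data)          # host is not held up by the link
        return "ack"

    def _drain(self):
        while True:
            self.remote.append(self.pending.get())

sync = SyncReplicator(link_latency=0.01)
sync.write("record-1")
print(len(sync.remote))                 # already safe at the remote site

async_rep = AsyncReplicator()
async_rep.write("record-2")             # returns instantly; replica catches up
```

Whether this logic runs in the array (system-level) or on the server (host-level) determines who pays that latency cost, which is why the placement of replication matters in a virtualized environment.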
Also, similar to server virtualization, storage virtualization reduces the physical resources that must be deployed, managed, and protected. This gives organizations the ability to extend protection to additional applications and data with the same or fewer resources and the same or fewer dollars.
Andrew Sampson is general manager for Hong Kong and Macau, Hitachi Data Systems