
Space, cost and energy savings – Choosing the right storage virtualisation technology

The data centre continues to evolve, with the need for storage capacity growing so quickly that many organisations struggle to keep up. Even before the global economy took a nosedive, organisations were looking for the best way to re-use their existing storage. Now, with the growing popularity of server consolidation, virtualisation, disaster recovery and business continuity, techniques such as mirroring, cloning and snapshots add to the data explosion, and managing it has become a top business priority.

The perpetual headache for IT managers is how best to use the storage they already have while still delivering improvements in service and customer satisfaction. Commonly, applications and host devices have been granted storage space based simply on what they might one day use, which has resulted in systematic over-provisioning. In practice few ever come close to those limits, and those that do only do so temporarily. The result is mass under-utilisation and wasted storage: average actual utilisation currently sits at around 40-50%, an uneconomical return on investment and a crime when the technology to reduce this wastage is readily available. The ramifications for the business are both financial and environmental, with energy and space wasted on a vast scale.
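To make the scale of that waste concrete, the short Python calculation below works through a purely hypothetical set of provisioned and actually used capacities; the figures are invented for illustration, not measurements from any real estate.

# Back-of-the-envelope illustration of the over-provisioning problem described
# above. The figures are hypothetical examples, not measurements.

provisioned_tb = {"erp": 10, "mail": 8, "file_share": 12}   # space granted "just in case"
actually_used_tb = {"erp": 4, "mail": 3, "file_share": 6}

total_provisioned = sum(provisioned_tb.values())
total_used = sum(actually_used_tb.values())
utilisation = total_used / total_provisioned

print(f"Utilisation: {utilisation:.0%}")                     # ~43%, in the 40-50% range cited
print(f"Stranded capacity: {total_provisioned - total_used} TB paid for but sitting idle")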

Making better use of existing capacity is clearly the answer. Once an organisation recognises this, it cuts expenditure directly on additional hardware and indirectly by reducing the man-hours needed to monitor and maintain extra equipment. One of the easiest ways to achieve this is storage consolidation. A storage virtualisation system lets the existing infrastructure make full use of its capacity by treating it as a single resource rather than separate entities: a virtual management layer assesses the total space available, creates a virtual storage pool from the separate storage units and allocates volumes from that pool as required.
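As an illustration of that pooling idea, the short Python sketch below aggregates the free space of several hypothetical arrays into one pool and allocates volumes from the combined total. The class and array names are invented for the example and do not correspond to any vendor's product.

# Illustrative sketch only: a toy "virtual storage pool" that aggregates the
# free capacity of several physical arrays and carves volumes out of the total.

class PhysicalArray:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb


class VirtualPool:
    def __init__(self, arrays):
        self.arrays = arrays

    def total_free_gb(self):
        return sum(a.free_gb() for a in self.arrays)

    def allocate_volume(self, size_gb):
        """Place a volume on whichever array currently has the most free space."""
        if size_gb > self.total_free_gb():
            raise RuntimeError("pool exhausted")
        target = max(self.arrays, key=lambda a: a.free_gb())
        target.used_gb += size_gb
        return target.name


pool = VirtualPool([PhysicalArray("array-a", 2000), PhysicalArray("array-b", 1000)])
print(pool.allocate_volume(500))   # hosts see one pool, not two separate arrays
print(pool.total_free_gb())        # 2500 GB still free across both units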

There are three major storage virtualisation architectures, each delivering the same virtualised outcome: in-band, out-of-band and host-based. All three remove the need for applications or host devices to deal directly with storage partitions, because a layer of abstraction sits between them and the physical storage devices. Deciding which model is best for your business depends on a number of factors, but the size of your data centre and the products you are consolidating are the major influences.

In-band

When data is saved to a storage device it consists of two parts: the data itself and the control information, or metadata. The metadata provides essential information about the data, such as its size, location and ownership. An in-band virtualisation system, such as the Hitachi USP-V and USP-VM tiered storage options, employs a virtual management device that combines the two strands into one message and then allocates the data to available space in the partitions. This is one of the most widely used virtualisation architectures, and it requires only one management device to sit between all the applications or host devices and the storage area network (SAN). Host devices simply query the virtualisation appliance to find where on the virtual drive the data is stored, and the appliance redirects them to the relevant storage unit. The disadvantage of this method is the bottleneck of requests that can build up if the throughput of the device is limited, increasing network latency as a by-product. It is therefore necessary to make sure your virtualisation appliance is at least as fast as your fastest storage device.
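The Python sketch below is a simplified, hypothetical model of the in-band pattern, not any vendor's implementation: every read and write passes through a single appliance object, which maps virtual blocks to physical devices and forwards data and metadata together down the same path, which is also where the potential bottleneck arises.

# A minimal sketch of the in-band idea: all I/O passes *through* the
# virtualisation appliance, which maps a virtual volume address to a physical
# device and forwards data and metadata as one message. Names are hypothetical.

class InBandAppliance:
    def __init__(self, devices):
        # devices: mapping of device name -> dict acting as its block store
        self.devices = devices
        self.mapping = {}          # virtual block -> (device name, physical block)
        self.next_free = {name: 0 for name in devices}

    def write(self, virtual_block, payload, metadata):
        # Choose (or look up) a physical location, then forward the combined message.
        if virtual_block not in self.mapping:
            device = min(self.next_free, key=self.next_free.get)
            self.mapping[virtual_block] = (device, self.next_free[device])
            self.next_free[device] += 1
        device, physical_block = self.mapping[virtual_block]
        self.devices[device][physical_block] = (payload, metadata)

    def read(self, virtual_block):
        # The host only ever talks to the appliance; the appliance redirects the
        # request to the right storage unit. If the appliance is slower than the
        # arrays behind it, every request queues here - hence the bottleneck.
        device, physical_block = self.mapping[virtual_block]
        return self.devices[device][physical_block]


appliance = InBandAppliance({"array-a": {}, "array-b": {}})
appliance.write(42, b"invoice data", {"owner": "finance", "size": 12})
print(appliance.read(42))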

Out-of-band

The second architecture works in essentially the same way as in-band, with a management device sitting between the SAN and the applications, but it separates the data and metadata into different strands, each saved in a different location. The applications 'seek permission' from the metadata controller, which tells them exactly where the accompanying data is saved. The advantage of an out-of-band system is that once the metadata controller has granted storage authorisation, the data can use the entire fibre-optic SAN bandwidth without interference from its control traffic. The LSI StoreAge SVM solution is a good example. The result is better utilisation of the SAN bandwidth, excellent scalability and storage consolidation, and the ability to pool products from different vendors in one unified virtual partition. This solves the bottlenecking issue, but it does require each host device to run a new set of drivers so that it can recognise the metadata controller and establish the separate control path.
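The Python sketch below is a simplified, hypothetical model of that out-of-band split, and is not based on the LSI StoreAge SVM implementation: the host first asks a metadata controller where a block lives (the control path) and then moves the data directly to and from the storage unit (the data path), so bulk traffic never funnels through the controller.

# A minimal sketch of the out-of-band split: control information lives with a
# metadata controller, while data travels directly between host and storage.
# All names are hypothetical.

class MetadataController:
    """Holds only the control path: which device and block own each virtual block."""
    def __init__(self, devices):
        self.devices = list(devices)
        self.locations = {}        # virtual block -> (device name, physical block)
        self.cursor = 0

    def locate_for_write(self, virtual_block):
        if virtual_block not in self.locations:
            device = self.devices[self.cursor % len(self.devices)]
            self.locations[virtual_block] = (device, self.cursor)
            self.cursor += 1
        return self.locations[virtual_block]

    def locate_for_read(self, virtual_block):
        return self.locations[virtual_block]


class Host:
    """Needs extra driver logic so it knows to consult the controller first."""
    def __init__(self, controller, san_devices):
        self.controller = controller
        self.san = san_devices     # direct data path to each storage unit

    def write(self, virtual_block, payload):
        device, physical_block = self.controller.locate_for_write(virtual_block)
        self.san[device][physical_block] = payload      # data bypasses the controller

    def read(self, virtual_block):
        device, physical_block = self.controller.locate_for_read(virtual_block)
        return self.san[device][physical_block]


san = {"array-a": {}, "array-b": {}}
host = Host(MetadataController(san.keys()), san)
host.write(7, b"payroll data")
print(host.read(7))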

Host-based

Host-based storage virtualisation is the final option to consider. Each host device or application has its own virtual, storage-focused interface that allocates both the core data and its control information. This requires the installation and management of specialist software on every relevant server, often at the operating system level, and uses long-established volume managers to present the storage pool and divide it up as storage requirements dictate. While it is the simplest method, because no changes have to be made to the storage sub-systems, it is not practical for larger organisations with substantial storage requirements and even larger data centres, because of the operating cost of managing storage on each server individually.
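The short Python sketch below models the host-based approach in a deliberately simplified, hypothetical way: a volume-manager object running on one server concatenates the LUNs that server can see and carves logical volumes out of that local pool. Every additional server would need its own copy of this software, which is where the management overhead comes from.

# A minimal sketch of the host-based approach: a per-server volume manager
# presents the LUNs visible to that host as one local pool. Names are hypothetical.

class HostVolumeManager:
    def __init__(self, luns_gb):
        # luns_gb: sizes of the raw LUNs this one server has been presented with
        self.luns_gb = list(luns_gb)
        self.allocated_gb = 0

    def pool_size_gb(self):
        return sum(self.luns_gb)

    def create_logical_volume(self, size_gb):
        """Carve a logical volume out of the concatenated local pool."""
        if self.allocated_gb + size_gb > self.pool_size_gb():
            raise RuntimeError("this host's pool is exhausted")
        self.allocated_gb += size_gb
        return {"size_gb": size_gb, "offset_gb": self.allocated_gb - size_gb}


# Each server manages only its own view - ten servers means ten of these to administer.
vm = HostVolumeManager([200, 300, 500])
print(vm.create_logical_volume(400))
print(vm.pool_size_gb() - vm.allocated_gb, "GB still free on this host")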

The answer to the eternal question of storage capacity lies in the consolidation of resources and therefore in the virtualisation of the available storage. Big is not necessarily best. By deploying a storage virtualisation architecture, an organisation avoids the capital expenditure of constantly purchasing extra hardware when vast amounts of unused storage are already available, and delays the need to physically expand the data centre to accommodate unnecessary equipment. Ongoing costs are also reduced: fewer storage servers have to be powered, shrinking the carbon footprint, and man-hours are saved by introducing a degree of automation into a process that is already designed to maximise its own potential.

The author

David Galton-Fenzi, Zycko UK

(ITadviser, Issue 60, Winter 2009)
