The idea of tiered storage is as old as the hills … or at least the mainframe. Back when the primary corporate computer was a "big iron" box surrounded by peripheral devices, tiered storage consisted of a set of storage targets classed together by their performance characteristics, capacities and costs.
At the top of the ladder was system memory, a precious and limited resource where you placed data for the shortest period of time possible. Below the memory tier were direct access storage devices (DASD) -- like today's storage arrays, only with bigger price tags in adjusted dollars for significantly less capacity. Below the DASD layer was magnetic tape, the workhorse technology that is enjoying a renaissance today.
The purpose of a storage tiering strategy was to enable the movement of data to less and less expensive media based on usage characteristics and other factors. Data that was accessed frequently stayed put on more expensive -- but also faster -- storage tiers, while data that had "cooled off" (whose access frequency had diminished to occasional or nil) was parked on tape or "near online storage" that sported a substantially lower cost.
Flash memory, disk storage for a performance-hungry world
Between the interest in leveraging storage technology changes to better meet business needs and the ongoing mandate to contain the costs of storage, tiered storage strategy is getting another look.
Cost containment is one reason for the resurgence of interest in tiered storage architecture. Memory is still at the top of the ladder, and while it's no longer the precious commodity it once was, it remains expensive. Vendors are now presenting consumers with a variety of Flash memory devices -- from all-Flash arrays to hybrid Flash-plus-disk arrays to PCIe Flash accelerator boards -- that are, in some cases, touted as replacements for all-magnetic storage in an increasingly performance-hungry world.
While CIOs wrestle to separate truth from hype on Flash memory products, many also confront a glut of expensive, fast storage arrays in their existing infrastructure. According to one knowledgeable industry observer, these systems -- packed with hundreds or thousands of drives spinning at 15,000 revolutions per minute and configured for extremely fast throughput -- have been rolled out in larger firms alongside nearly every new application introduced over the past decade. For reasons of operational efficiency and power cost (every drive draws between 7 and 15 watts of electricity, and it adds up), tiered storage is seen as a way to rationalize infrastructure and match it to the firm's actual workload requirements.
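To see how that per-drive draw adds up, here is a back-of-the-envelope sketch. The 7-to-15-watt range comes from the figures above; the drive count, electricity rate and 24x7 duty cycle are illustrative assumptions, not figures from this article:

```python
# Back-of-the-envelope annual power cost for a large disk array.
# Assumptions (illustrative): 2,000 drives, $0.10 per kWh,
# drives powered 24 hours a day, year-round. Cooling overhead excluded.

WATTS_PER_DRIVE_LOW = 7     # low end of the per-drive range cited above
WATTS_PER_DRIVE_HIGH = 15   # high end of the per-drive range cited above
DRIVES = 2000               # assumed array size
COST_PER_KWH = 0.10         # assumed electricity rate in dollars
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(watts_per_drive):
    """Dollars per year to keep DRIVES spinning at the given wattage."""
    kwh_per_year = DRIVES * watts_per_drive * HOURS_PER_YEAR / 1000
    return kwh_per_year * COST_PER_KWH

low = annual_power_cost(WATTS_PER_DRIVE_LOW)
high = annual_power_cost(WATTS_PER_DRIVE_HIGH)
print(f"Annual drive power cost: ${low:,.0f} to ${high:,.0f}")
```

Even at these modest assumed rates the bill lands in the five figures for electricity alone, before cooling -- which is the "it adds up" point in miniature.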
Disk storage has actually bifurcated into two separate tiers: the fast storage described above and slower, more capacious and less expensive disk storage leveraging Serial ATA (SATA) technology. With the arrival of technology such as bit-patterned media, demonstrated last year by Toshiba and IBM, we will soon see arrays of 2.5-inch drives with capacities of 40 terabytes per drive. These arrays will likely become a tier unto themselves, rivaled only by tape in terms of capacity and performance.
Tape still a workhorse
Tape isn't standing still. With the advent of barium ferrite (BaFe) media coatings, Fujifilm and IBM have already demonstrated a tape cartridge, soon to come to market, that delivers 32 terabytes of uncompressed capacity. The narrative around tape has also gotten more interesting with IBM's introduction of the Linear Tape File System (LTFS) a year ago, which enables a tape system to be used as a file server. That offers huge capacity for storing the 55% of data that takes the form of user files in most firms, while consuming only a few light bulbs' worth of electricity -- well worth consideration in your storage tiering strategy.
CIOs might benefit from conceptualizing storage infrastructure as two tiers: capture storage and retention storage. Capture storage needs to provide the performance and throughput required to write data as quickly as applications create it. Products like the X-IO Intelligent Storage Element (ISE), which combine Flash memory and fast disk in a hybrid array, will likely be a better choice for this tier than, say, a 1,900-spindle array of 15K drives -- if only from a power-cost perspective.
Retention storage, the other tier, is intended to hold on to data for as long as business requirements dictate -- usually long after the data's access frequency has dropped to zero -- and may well comprise very high-capacity, very slow disk or LTFS tape. The latter, with the advent of 32-terabyte media, could deliver hundreds of petabytes of storage on just a couple of raised-floor tiles -- a density no other storage system can achieve.
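The "hundreds of petabytes on a couple of floor tiles" claim can be sanity-checked with simple arithmetic. The 32-terabyte cartridge figure comes from the article; the slot count per library frame is an illustrative assumption (real slot counts vary widely by library model):

```python
# Rough density check for a tape library built on 32 TB cartridges.
# Assumption (illustrative): a high-density library frame occupying
# roughly one raised-floor tile holds on the order of 3,500 cartridge
# slots. Actual frames vary by vendor and model.

CARTRIDGE_TB = 32        # uncompressed capacity cited in the article
SLOTS_PER_FRAME = 3500   # assumed slots per frame/floor tile

def frame_capacity_pb(frames):
    """Total capacity in petabytes for the given number of frames."""
    return frames * SLOTS_PER_FRAME * CARTRIDGE_TB / 1000  # TB -> PB

print(f"Two floor tiles: {frame_capacity_pb(2):,.0f} PB")
```

Under these assumptions, two frames reach a couple hundred petabytes -- consistent with the claim, provided the per-frame slot count is in the thousands.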
About the author:
Jon Toigo is CEO and managing principal of Toigo Partners International and chairman of the Data Management Institute.
This was first published in July 2013