A few months ago, I was visiting a customer, who also happens to be a very large, global retailer. I sat down with their storage architect to get a sense of their environment, their application and technology dependencies, and ultimately, their storage decision criteria. They had been running EMC hybrid arrays for a while.
What he shared with me about their storage management practices was enlightening, considering the common perception in the storage world about tiering.
Tiering is the practice of assigning different categories of data to different types of storage to drive costs down. Hot, active data gets stored in high-performance, more expensive storage media. Less frequently accessed data gets stored in slower, less expensive storage media. Makes sense. But consider how time-consuming and complex it can get to continuously assign data to tiers. So you automate it. Add the software costs associated with that, and you get a complex and costly solution.
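To make the idea concrete, here is a minimal sketch of the kind of policy an automated tiering engine applies. This is purely illustrative, not any vendor's actual algorithm; the access-frequency threshold and names are hypothetical.

```python
# Illustrative tiering policy sketch: place data on flash or disk
# based on access frequency. Thresholds and names are hypothetical,
# not a real array's algorithm.

FLASH_TIER = "flash"  # hot data: high-performance, more expensive media
DISK_TIER = "disk"    # cold data: slower, less expensive media

def assign_tier(accesses_per_day: float, hot_threshold: float = 10.0) -> str:
    """Assign frequently accessed data to flash, the rest to disk."""
    return FLASH_TIER if accesses_per_day >= hot_threshold else DISK_TIER

# Example workloads (hypothetical access rates per day):
blocks = {"orders_index": 500, "archive_2019": 0.2, "session_cache": 120}
placement = {name: assign_tier(rate) for name, rate in blocks.items()}
# Hot data (orders_index, session_cache) lands on flash;
# rarely touched data (archive_2019) lands on disk.
```

Even in this toy form you can see the hidden costs: something has to track access rates continuously, pick sensible thresholds, and move the data when its temperature changes.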
The storage architect proceeded to tell me they had turned on automatic tiering on the arrays to push the data to the right tier automatically, either flash for the performance-sensitive data, or disk for colder data that did not need the performance of flash.
Turns out, after the automatic algorithms did their work, the flash utilization inside their array was 90%, while the disks were utilized at less than 10%.
He then realized he was effectively running an “all-flash data center” on the wrong architecture. A few months later, Violin became their primary storage of choice for all data sets. In their environment, there was little need for tiering.