Buzz surrounding solid state disks (SSDs) is at an all-time high and growing daily. For consumers who can pony up the price, the choice between hard disk drives (HDDs) and SSDs is a slam dunk. For enterprises, the question is stickier, and not just because of the per-unit sticker price.
SSDs suffer from "write amplification" and latency spikes, and given that VMs are the order of the day in enterprise IT, this particular sticking point is problematic. "Custom-designed flash has the potential to avoid some aspects of these problems, but requires using non-standard, non-commodity products, which are very expensive and require lots of custom software," said Lee. "Fortunately, intelligent software techniques can circumvent the write amplification and latency spike issues of commodity SSDs."
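Write amplification is usually summarized as a ratio: how many bytes the flash actually writes for every byte the host asks it to write. The sketch below illustrates the arithmetic only; the function name and the sample numbers are my own, not figures from the article.

```python
def write_amplification_factor(nand_bytes_written, host_bytes_written):
    """Write amplification factor (WAF): physical NAND writes per host write.

    Flash erases in large blocks but writes in smaller pages, so the
    controller often rewrites live data during garbage collection and
    wear leveling. On a busy drive that makes NAND writes exceed host
    writes, which wears the drive faster and causes latency spikes.
    """
    return nand_bytes_written / host_bytes_written

# Illustrative numbers: the host wrote 100 GB, but garbage collection
# forced 250 GB of actual NAND writes behind the scenes.
waf = write_amplification_factor(250e9, 100e9)
print(waf)  # 2.5 -> the flash wore 2.5x faster than the workload implies
```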
The biggest barriers to SSD adoption for VM applications, then, are cost per gigabyte of SSD and delivering flash technology in a form that works well for VMs.
The storage industry has been trapped within the confines of HDDs for so long that it's hard to visualize another way. There's the draw of familiarity and the fear of the new to overcome.
"SSDs are a profoundly different media than mechanical disk and promise to be a game-changing tool in continuing to improve application performance," explained Matt Kixmoeller, VP of Products at PURE Storage. "They combat the dramatic data growth and increasingly randomized I/O issues that are currently stressing the data center."
However, he said, advancements in the internal architecture of enterprise storage arrays to better handle flash "will be a crucial step in adopting this technology," because current-generation arrays that were built for rotating disks simply can't meet the demands of flash.
"SSD technologies are game changing and they drive a whole new thought pattern around persistent storage," agreed Burton Group Analyst Gene Ruth.
This has led to a prevailing preference for hybrid data centers combining both HDDs and SSDs in a multi-tiered approach. The SSDs sit in the top (logical) tier, providing access to selected high-volume, high-demand data, while the HDDs sit in a lower tier, supporting bulk storage and lower-demand data. Intelligent algorithms decide how data is split between the tiers, although there is some concern that data for critical processes can accidentally end up in the wrong tier.
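The tiering logic described above can be sketched as a simple promote/demote heuristic driven by access frequency. This is an illustrative sketch only; the thresholds, names, and hysteresis scheme are my assumptions, not any vendor's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Extent:
    name: str
    accesses_per_hour: float  # recent access frequency
    tier: str = "hdd"

def retier(extents, promote_at=100.0, demote_at=10.0):
    """Promote hot extents to the SSD tier, demote cold ones to HDD.

    Separate promote/demote thresholds (hysteresis) keep extents near
    the boundary from bouncing between tiers -- one guard against the
    'wrong tier' risk the article mentions.
    """
    for e in extents:
        if e.tier == "hdd" and e.accesses_per_hour >= promote_at:
            e.tier = "ssd"
        elif e.tier == "ssd" and e.accesses_per_hour <= demote_at:
            e.tier = "hdd"
    return extents

data = [Extent("oltp-index", 500), Extent("backup-vol", 1), Extent("logs", 50)]
retier(data)
print([(e.name, e.tier) for e in data])
# [('oltp-index', 'ssd'), ('backup-vol', 'hdd'), ('logs', 'hdd')]
```

Real implementations track access history over sliding windows and weigh migration cost, but the shape of the decision is the same.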
"The current 'best practice' in large data centers is to use (relatively) small-capacity drives and only partially load the drives," said Michael Willett, storage security strategist with Samsung. "This results in more aggregate read/write heads and better performance moving data dynamically. At these smaller capacities, SSDs are currently more price-competitive."
Thus total cost impact is likely to take precedence over unit costs. "The decision to purchase SSD is almost always driven by a compelling return on investment (ROI)," said Ron Lloyd, product marketing manager at EMC Corp. "The combination of SSD technology, SATA technology, and advanced quality of service software features has changed how customers evaluate and plan their storage investments."
Recent Improvements

Over the last 18 months, SSDs have grown to support a much larger feature set, including compression, de-duplication and AES-256 encryption. "Performance has also improved with new controllers leveraging the SATA III 6Gbps interface, and current-generation SATA III SSDs can now achieve more than 500MB/s read/write, which is nearly double what previous-generation SSDs were doing not long ago," said Alex Mei, CMO of OCZ Technology. "In addition, PCIe-based SSDs are now delivering 4K random write performance of well above 100,000 IOPS, making them even more viable for tiered storage solutions."
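The IOPS and bandwidth figures Mei cites are related by a simple formula: throughput equals IOPS times I/O size. A quick sanity check of the 4K random-write number (my arithmetic, not a figure from the article):

```python
def iops_to_mbps(iops, io_size_bytes):
    """Throughput implied by a random-I/O rate, in decimal MB/s."""
    return iops * io_size_bytes / 1e6

# 100,000 IOPS at 4 KiB (4096 bytes) per operation:
print(iops_to_mbps(100_000, 4096))  # 409.6 MB/s of small random writes
```

In other words, a PCIe SSD sustaining 100,000 4K random writes per second is moving roughly 400 MB/s of small-block traffic, in the same league as the 500MB/s sequential figure quoted for SATA III drives.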