Enterprises run on production storage that enables the execution of key business processes. Typically, this is thought of as Tier 1 primary storage. However, some capacity-driven secondary storage also supports these use cases, including big data, where high-velocity capture, discovery, and/or analysis of data is critical.

Now, while standard all-flash storage solutions serve Tier 1 production storage very well, flash is generally too expensive for petabyte-range, capacity-focused production storage. However, some all-flash storage products accept a modest (and acceptable) tradeoff in performance in exchange for capacity-friendly economics, making flash a viable choice. IBM’s new DeepFlash 150 is such a solution.

The need for all-flash for capacity-driven production storage

Capacity-driven production storage is essential for a number of important use cases. High performance computing (HPC) is a classic example, including workloads in oil and gas exploration, life sciences (such as genomics) and large-scale scientific research. A second example is the creation of data for operational purposes, as in media and entertainment (M&E) applications. A third example, and a rising star for such storage, is the world of big data, where advanced analytics using tools such as SAS, Hadoop, or Spark play a major role.

Until recently, specialty and commodity storage systems built around capacity-optimized hard disk drives (HDDs) have been the common way businesses meet the petabyte-plus storage requirements of those workloads. As noted, all-flash solutions would be a boon not only for their performance, but also for the operational benefits they provide, such as lower power and cooling requirements and the associated costs. However, conventional all-flash systems are too expensive for capacity-driven production use cases. A new, cost-effective flash architecture could, and would, change the game.

Introducing IBM’s DeepFlash 150

IBM’s DeepFlash 150 represents a new world of economically viable all-flash storage that competes with capacity-optimized HDD solutions. As usual, flash delivers a significant performance improvement over the HDD arrays with which it competes. The DeepFlash 150 offers sub-millisecond latency, up to 2 million IOPS, and 12 GB/s of throughput, roughly 5X the performance of IBM’s comparable HDD-based product.

But DeepFlash 150 dominates in a number of operational areas as well. For example, IBM’s new solution offers up to 6X the density of HDD-based solutions in a single 3U chassis, starting at 128 TB and scaling up to 512 TB. More chassis can be added to create petabyte-scale storage in a standard enclosure. That reduces the space it takes to rack and stack storage, as well as the overall data center footprint. Altogether, the rack space required is roughly 1/3 that of a comparable IBM HDD product.
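To put those density figures in perspective, a back-of-the-envelope sketch in Python, using only the 512 TB per 3U chassis figure cited above (the 1 PB target is an illustrative assumption, not an IBM specification), shows how little rack space a petabyte requires:

```python
# Sizing sketch based on the DeepFlash 150 figures cited above:
# up to 512 TB per 3U chassis. The 1 PB target below is illustrative.

TB_PER_CHASSIS = 512          # maximum raw capacity of one chassis (TB)
RACK_UNITS_PER_CHASSIS = 3    # each chassis occupies 3U

def rack_units_needed(target_tb: float) -> int:
    """Return the rack units required to reach target_tb of raw capacity."""
    chassis = -(-target_tb // TB_PER_CHASSIS)   # ceiling division
    return int(chassis) * RACK_UNITS_PER_CHASSIS

if __name__ == "__main__":
    one_petabyte_tb = 1024    # 1 PB expressed in TB (binary convention)
    print(f"1 PB fits in {rack_units_needed(one_petabyte_tb)}U of rack space")
    # Two chassis -> 6U, versus roughly 3X that space for the HDD equivalent
    # implied by the article's 1/3 rack-space figure.
```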

Operationally, that can be very important since, although storage requirements continue to grow rapidly, expanding the size of a data center is difficult, if not impossible. The DeepFlash 150 also consumes 30% to 50% less power for operation and cooling than an equivalent HDD array. That not only offers significant cost savings but is also strategically important as data center owners come under pressure to better control their electricity consumption.

Finally, IBM’s DeepFlash 150 offers 10X the reliability of a spinning-disk array. This is significant at petabyte scale, as it means far fewer device failures for IT to repair, a major boon for storage operations personnel.

The DeepFlash 150 is also tightly coupled with IBM’s Spectrum Scale, the company’s software-defined storage platform for managing physical hardware. Why is this important? Capacity-driven production storage typically needs a file system that can run at petabyte-plus scale. The foundation of Spectrum Scale is IBM’s General Parallel File System (GPFS), which was designed to handle such scale-out requirements with a single global namespace and has proven its capabilities over many years. In other words, DeepFlash 150 customers will be in good, dependable hands with Spectrum Scale.
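As a minimal sketch of what a single global namespace means in practice: every node in the cluster sees the same POSIX path, so an application never needs to know which node or device actually holds the data. The mount point and file names below are hypothetical examples, not IBM defaults; the sketch simply assumes a Spectrum Scale file system mounted at /gpfs/data on each node.

```python
# Illustrative sketch of a single global namespace. Assumes a Spectrum Scale
# (GPFS) file system is mounted at the hypothetical path /gpfs/data on every
# node in the cluster; paths and names here are examples only.

from pathlib import Path

GLOBAL_NAMESPACE = Path("/gpfs/data")

def write_result(job_id: str, payload: bytes) -> Path:
    """Write a result file; every node sees it at the same path."""
    out = GLOBAL_NAMESPACE / "results" / f"{job_id}.bin"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_bytes(payload)
    return out

def read_result(job_id: str) -> bytes:
    """Read the same file from any other node, with no node-specific path."""
    return (GLOBAL_NAMESPACE / "results" / f"{job_id}.bin").read_bytes()
```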

Mesabi Musings

Capacity-driven production storage workloads, namely those of big data, HPC and M&E, have long pined for better performance and operational efficiencies, but all-flash systems designed for performance-driven production storage have been too expensive to consider. That has changed with the introduction of a more cost-effective all-flash architecture targeted at capacity-driven production storage requirements: IBM’s DeepFlash 150.

The DeepFlash 150 instantiates that new architecture. Higher performance coupled with lower utility costs, smaller rack, stack, and footprint requirements, and greater reliability should warm the hearts of both storage administrators and data center managers, especially since they come at an affordable price. IBM’s DeepFlash 150 shows how innovative flash technologies continue to advance in the production storage space, this time with a solution for capacity-focused production use cases.