
IBM Reinforces Storage Portfolio for AI and Big Data

In its recent storage announcement, IBM strengthened its storage solutions for AI and big data by introducing the IBM Elastic Storage System (ESS) 3000 and an important new capability for IBM Spectrum Discover. The IBM Elastic Storage System 3000 enables an IT organization to take advantage of IBM’s software-defined storage (SDS) capability in Spectrum Scale software in conjunction with complementary IBM physical storage, which together deliver a robust and scalable storage system for AI and big data workloads. IBM Spectrum Discover, which fulfills the essential role of classification and metadata tagging in the AI (artificial intelligence) data pipeline, can now work with an organization’s own data through access to a copy of that data in backups or archives.

As a corporation, IBM has long had AI, along with analytics and big data in general, as a major focus. Although Watson is the face of IBM’s AI capability to the general public, the company also has a number of non-Watson-based AI initiatives. For example, on the storage infrastructure side, in December 2018 IBM announced the Spectrum Storage for AI with NVIDIA DGX reference architecture. A vendor-supplied reference architecture can help an AI project team select a set of hardware and software products that lead to an AI infrastructure solution. (See IBM Spectrum Storage for AI with NVIDIA Reference Architecture for more detail.)

In keeping with the theme of storage for AI and big data workloads, IBM announced in July major new capabilities for Spectrum Discover and IBM Cloud Object Storage (COS). The former now supports non-IBM heterogeneous storage platforms, notably on-premises support for Dell-EMC Isilon and NetApp filers, as well as public clouds that support the S3 protocol, including Amazon, the originator of the protocol. The latter supports object storage for those AI and big data workloads that can store data as objects. One of the three COS deployment models is pre-installed on a storage array. The announcement focused on the upgrade to Gen2 for IBM-provided storage arrays. (See IBM Continues to Focus on Storage for AI and Big Data for more information.)

The IBM Elastic Storage System 3000 Makes Its Debut

IBM Elastic Storage System 3000 is the newest member of IBM’s Elastic Storage Server family, which supports its SDS Spectrum Scale software for files. IBM Spectrum Scale’s global single namespace eliminates data silos and provides scalability that can move into the exabyte range if necessary. Being able to manage data as a single virtual pool, at any scale necessary, is a good thing for many organizations.

However, an IT organization can start very much smaller than an exabyte (or even a petabyte) when using the IBM Elastic Storage System 3000! The new all-flash system starts at under 25 TB and introduces end-to-end NVMe (non-volatile memory express) technology and 2U (3.5”) rackable building blocks. IT can start small with an experimental configuration and then scale as necessary to production-level enterprise requirements. Each 2U building block delivers a powerful 40 GB/sec (thanks to the turbocharging of flash with NVMe), and performance scales linearly.
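To make the linear-scaling claim concrete, here is a minimal back-of-the-envelope sizing sketch in Python. It assumes only what is stated above (roughly 40 GB/sec per 2U building block and linear scaling); the capacity-per-block value is simply the sub-25 TB entry point reused for illustration, not a published per-block figure.

# Rough sizing for ESS 3000 building blocks, based on the figures above.
# The tb_per_block default is illustrative only; throughput is assumed to
# scale linearly with the number of 2U building blocks, as IBM states.

def ess3000_estimate(building_blocks: int,
                     gbps_per_block: float = 40.0,
                     tb_per_block: float = 25.0) -> dict:
    """Estimate aggregate throughput and raw capacity for N 2U building blocks."""
    return {
        "rack_units": building_blocks * 2,                       # each block is 2U
        "throughput_gb_per_sec": building_blocks * gbps_per_block,
        "raw_capacity_tb": building_blocks * tb_per_block,
    }

# Example: growing from a single experimental node to an eight-node cluster.
print(ess3000_estimate(1))   # ~40 GB/s, ~25 TB in 2U
print(ess3000_estimate(8))   # ~320 GB/s, ~200 TB in 16U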

IBM stresses customer-friendly capabilities, such as the ability to containerize software for ease of installation and updating.

IBM Spectrum Discover Extends Its Reach into Backup Environments

IBM’s Spectrum Discover provides metadata management that, along with other capabilities, delivers the curation (selection, organization, and presentation) of information content that intelligent data analytics tools require for AI and big data workloads.

At first, Spectrum Discover worked only with file data managed by IBM Spectrum Scale (including ESS storage arrays) and object data managed by IBM Cloud Object Storage. Support was then extended to some key heterogeneous storage systems – Dell-EMC Isilon and NetApp. With the latest release, Spectrum Discover now works with IBM Spectrum Protect and the copies of the data it manages. This means that the live, active production copy of data does not need to be touched, which is almost always desirable from both performance and protection perspectives. Instead, a backup or an archive copy of the data can be used, so the potential treasure trove of an organization’s own data can now participate as an AI or big data workload.

Spectrum Discover easily connects to Spectrum Protect metadata to discover, index, and label files of interest, and it can rapidly find and activate cold (unlikely to change) data in a backup/archive copy for use by analytics and AI tools.

Spectrum Discover users can also take advantage of the IBM Spectrum Discover Application Catalog. This community-supported catalog of open source action agents enhances the capability of IBM Spectrum Discover with third-party extensions that can be found and installed via CLI (with Docker Hub).

Mesabi musings

A new frontier for enterprises is the rapidly evolving need to derive value from the suddenly humongous quantity of data that they have available. AI and big data workflows use that data to craft actionable insights.

IBM Elastic Storage System 3000 now offers an SDS-managed physical instantiation that should meet the performance and scalability requirements of most AI and big data workloads. With IBM Spectrum Discover, Spectrum Protect users can seek to derive value from enterprise-owned, unique data that is housed as backup or archive data. Thus, IBM is positively reinforcing its theme of providing the software and hardware that AI and big data workloads require.

IBM Introduces DS8900F Storage System for the Mainframe

The IBM Z mainframe continues to support a very large number of the world’s most mission-critical applications and the company works diligently to ensure that the platform continues to deliver the capabilities that customers need in their rapidly-evolving data-driven hybrid multicloud enterprise. Therefore, any introduction of a new storage system, namely the IBM DS8900F, that tightly integrates with IBM Z servers, is very relevant and, by definition, newsworthy.

IBM DS8900F Storage Systems make their debut

IBM DS8900F (F stands for all-flash) represents the next generation in the evolution of IBM DS8000 mainframe-focused storage systems. Both the DS8900F and its predecessor, the DS8880F (see DS8880F Storage for more information), can work with the older IBM z13 server generation, the current IBM z14 generation, and the just-announced IBM z15, but only the new family members deliver seven 9’s of availability (what this means will be discussed later), providing the ultimate in storage system reliability and uptime.

Of course, IBM DS8900F also improves many of the solution’s speeds and feeds. The IBM DS8900F has two family members: little brother — DS8910F — and big brother — DS8950F. However, the importance of the DS8900F extends well beyond its impressive speeds. Rather, we will touch not only upon the tight coupling of IBM DS8900F with IBM Z Systems and with the hybrid multicloud, but also illustrate the use of the new solution with a number of IBM software features that enable it to deliver exceedingly strong data protection and disaster recovery capabilities.

Unlike the mainframe itself, IBM has long had strong competition for mainframe storage sales. Therefore, it wants to emphasize its strong storage systems integration with Z Systems that its competition cannot match. To illustrate this point, IBM points out that storage latency, which is the most important metric for the transaction-based mission-critical systems that are the raison d’être of Z Systems, strongly favors IBM. It claims that its latency is more than 5x better with its very cost-effective but powerful zHyperLink technology. Even without the zHyperLink technology, the DS8900F still outperforms its competition in latency – the #1 enemy for high-transaction applications and workloads.

IBM also emphasizes that its storage plays well in the modern IT world of the cloud with a secure, seamless, and transparent integration to hybrid multicloud configurations with its Transparent Cloud Tiering (TCT) capability. TCT enables hybrid multicloud storage tiering for data archiving, long-term data retention, and backups. See Transparent Cloud Tiering for the DS8880F for more information on the use of TCT with IBM mainframe storage.

The focus on IBM DS8900F should not only be about what is new and different, but also include what the new storage inherits as an important legacy from its predecessors. In other words, the total package has to be taken into consideration. Let’s concentrate on IT-infrastructure-operations data protection with respect to the availability, preservation, and confidentiality of the data on the storage that it manages.

Please note that this is not the data protection perspective as best illustrated by the European Union-based General Data Protection Regulation (GDPR), which focuses on how data can be properly used, such as for ensuring privacy. Rather, this is the traditional American view of data protection.

Seven 9’s of availability for operational recovery at a primary site

Availability of the physical storage system to provide data access on demand comes in two flavors, operational and disaster recovery. For many years, the standard acceptable downtime for mission-critical applications for a storage system at a primary site was expressed as five 9’s (99.999%), which translates into 5.26 minutes of downtime per year. The prior-generation IBM DS8880F provides six 9’s (31.5 seconds of annual downtime). Now, in conjunction with IBM HyperSwap, the IBM DS8900F introduces seven 9’s (3.16 seconds/year average downtime for continually running storage systems). Why is this important when even 5+ minutes of downtime per year doesn’t sound so bad at first blush? The answer is that mainframe customers, including some of the world’s largest banks, financial services companies, airline reservation players, Global Fortune 500 companies, and many government agencies, require 24x7x365 mission-critical systems for which any hiccup in storage and system availability is unacceptable. Seven 9’s practically means that, very often, there is absolutely no downtime in a year, and that is a very good thing for these companies and government agencies.
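As a quick sanity check on those figures, the small Python sketch below converts “N nines” of availability into seconds of downtime per year; its output matches the 5.26 minutes, 31.5 seconds, and 3.16 seconds quoted above.

# Convert "N nines" of availability into average annual downtime.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def annual_downtime_seconds(nines: int) -> float:
    availability = 1 - 10 ** (-nines)
    return (1 - availability) * SECONDS_PER_YEAR

for n in (5, 6, 7):
    print(f"{n} nines: {annual_downtime_seconds(n):.2f} seconds/year")
# 5 nines: 315.58 seconds/year (about 5.26 minutes)
# 6 nines: 31.56 seconds/year
# 7 nines: 3.16 seconds/year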

3 or 4 Site Replication for Disaster Recovery

For disaster recovery purposes, where a primary site fails for whatever reason or, for safety’s sake, has to be temporarily shut down (such as ahead of an impending hurricane), IBM DS8900F offers IBM’s well-proven 3- and 4-site replication capability. For a secondary site within metro distance (roughly up to 300 km) there is no data loss, as the most sensitive transactions can operate synchronously (through IBM’s Metro Mirror capability). For a site at a greater distance, where replication has to be done on an asynchronous basis, the data loss is only 3 to 5 seconds for the recovery point objective (RPO), while the recovery time objective (RTO) is less than a minute (using the IBM Global Mirror capability). Given the severity of a disaster that would require invoking a third or even a fourth site, getting an ancillary site functional should probably be the least of IT’s worries.
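To put a 3-to-5-second RPO into business terms, here is a simple, hypothetical Python illustration: the worst-case exposure at failover is roughly the RPO multiplied by the transaction rate. The 10,000 transactions-per-second rate below is an assumed example, not an IBM figure.

# Worst-case data exposure under asynchronous replication: at failover, at
# most (RPO x transaction rate) transactions can be lost. The rate is assumed.

def worst_case_exposure(rpo_seconds: float, transactions_per_second: float) -> float:
    """Upper bound on transactions unreplicated at the moment of failover."""
    return rpo_seconds * transactions_per_second

for rpo in (3.0, 5.0):
    at_risk = worst_case_exposure(rpo, 10_000)
    print(f"RPO of {rpo:.0f}s at 10,000 tx/s: up to {at_risk:,.0f} transactions at risk")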

Safeguarded Copy prevents temporary data corruption from becoming permanent

Preservation of data is about preventing corruption, i.e., unauthorized changes to data. IBM uses its Safeguarded Copy software feature to prevent non-approved modification or deletion of data due to user error or malicious third-party attacks via malware or ransomware. Up to 500 virtual backup copies can be made with IBM DS8900F using incremental, immutable snapshots on a non-production volume, allowing restoration to a point in time before an attack to take place on a separate recovery volume. Given the increasing prevalence of cyber-attacks and the inevitability of human error, one has to wonder why any mainframe IT organization would not put this feature on its shopping list.

Encryption ensures the confidentiality of data

Confidentiality is about making sure that data Peeping Toms cannot view production data or make or steal a usable copy of the data to exploit at their leisure for nefarious purposes. Encryption is the cure for this problem. Encryption of data, both at rest in the customer’s data center and in public cloud platforms, is a staple capability that IBM provides. Now, in conjunction with the IBM z15, IBM DS8900F also offers data-in-flight encryption, which should be very useful in a hybrid multicloud world. Encryption on the DS8900F has no performance impact, as it is performed in hardware (as opposed to many other storage encryption products where encryption is software-based). Additionally, the DS8900F provides 256-bit AES GCM encryption technology. The pervasive use of encryption should now be a no-brainer.
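For readers who want to see what 256-bit AES GCM looks like in practice, the short Python sketch below uses the open-source cryptography package. It is a generic software illustration of the algorithm only; the DS8900F performs this work in dedicated hardware, and nothing here represents IBM’s implementation.

# Generic AES-256 GCM example using the Python "cryptography" package.
# Illustrative only; not IBM code and without the DS8900F's hardware offload.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit data key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # 96-bit nonce, unique per message
plaintext = b"sensitive mainframe record"
aad = b"volume-metadata"                    # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, aad)   # ciphertext plus GCM tag
recovered = aesgcm.decrypt(nonce, ciphertext, aad)   # raises if tampered with
assert recovered == plaintext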

Mesabi musings

If you were to ask an IBM Z Systems user who also uses the company’s mainframe storage what one characteristic or phrase they have most come to expect, you would historically get a lot of answers, including performance, reliability, and safety. However, three words — peace of mind — encompass those choices as well as many more. The new IBM DS8900F continues in that tradition with improved performance in orchestration with IBM Z Systems, seven 9’s of reliability and availability, and a number of safe choices for data protection. Not bad for many days of work.

IBM Continues to Focus on Storage for AI and Big Data

Any prediction worth its salt forecasts a nearly unbelievable increase in the creation of unstructured data. Mining what may be a treasure trove of potential insights is the role of intelligent data analytics, notably through such tools as those provided by AI (artificial intelligence) or big data software. Many consider intelligent data analytics to be a key driver of business value for enterprises, such as by uncovering new revenue-producing opportunities. IBM Spectrum Discover and IBM Cloud Object Storage are two key IBM software products that support those intelligent analytics efforts.

Data-driven software intelligence takes center stage

Traditionally, software intelligence has been mainly application-driven. Data is created and managed to fit the needs of the application; typically, the creation of structured data is part of the application process, as in online transaction processing (OLTP) systems.

In contrast, unstructured data typically requires software intelligence that is created and managed to fit the needs of the data, which may be (and likely is) created independent of the application. Examples include AI, big data and analytical software designed to discover hidden value.

IBM’s Spectrum Discover provides metadata management that among other capabilities delivers the curation (selection, organization, and presentation) of information content that the intelligent data analytics tools require. Another key IBM software-defined-storage (SDS) software product, IBM Cloud Object Storage, stores and manages the data that the analytical tools work on as object storage. IBM’s latest storage announcement discusses updates to both of these data-driven software intelligence products.

IBM Spectrum Discover — More open, stronger data classification, and easier compliance

IBM Spectrum Discover is metadata management software (see more in our report at IBM Driving Storage Revolutions) that can be used on files or objects in conjunction with big data, AI and analytics software. Good metadata management is essential to enable those software tools to properly classify data and process voluminous quantities of data in a timely manner.

First announced in October 2018, Spectrum Discover originally worked only with IBM products — namely file data managed by IBM Spectrum Scale or object data managed by IBM Cloud Object Storage. This new announcement includes support for key heterogeneous storage platforms — Dell-EMC Isilon, NetApp filers, Amazon S3 (and by definition other public cloud providers that support the S3 protocol), and Ceph, which is popular for its access to object storage but also supports block and file storage. Supporting heterogeneous storage platforms has long been a key strategy for IBM’s software-defined storage (SDS) products, so it should come as no surprise that Spectrum Discover follows in their footsteps. Yes, IBM would love to sell storage hardware systems in addition to software, but selling software is profitable in and of itself. Not only that, expanding its software footprint may also give IBM opportunities to build storage hardware sales.

In addition to automatically capturing and indexing system metadata as data is created, Spectrum Discover provides for custom metadata tagging. That adds extra intelligence that can build additional value through better insights at analysis time. The new version of Spectrum Discover provides content-based data classification that applies custom metadata tags based on content. All of this would be for nothing without high speed search, but IBM states that Spectrum Discover delivers consistent low-latency searches, even on billions of files.
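As an illustration of the kind of custom metadata that Spectrum Discover can harvest and index, the sketch below uses boto3 to attach and read back user-defined tags on an S3-compatible object store. The endpoint, bucket, object key, and tag names are hypothetical; this shows only the generic S3 user-metadata mechanism, not the Spectrum Discover API itself.

# Hypothetical example: writing and reading custom (user-defined) metadata
# on an S3-compatible object store with boto3. Endpoint, credentials, bucket,
# key, and tags are placeholders for illustration only.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.com",   # any S3-compatible endpoint (assumption)
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Attach custom metadata (sent as x-amz-meta-* headers) when the object is written.
s3.put_object(
    Bucket="research-data",
    Key="trials/2019/scan-0001.dcm",
    Body=b"...binary scan data...",
    Metadata={"project": "oncology-trial-7", "contains-pii": "true"},
)

# A scanner can later read that metadata back without fetching the object body.
tags = s3.head_object(Bucket="research-data", Key="trials/2019/scan-0001.dcm")["Metadata"]
print(tags)   # {'project': 'oncology-trial-7', 'contains-pii': 'true'}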

A key additional benefit is to make it easier for companies to follow legal or regulatory compliance rules for sensitive data, such as PII (personally identifiable information) including social security and credit card numbers. Given the increased emphasis that enterprises should be placing on sensitive data, this is a primary benefit that Spectrum Discover provides — a nice thing to have on top of its metadata support for analysis efforts.

IBM Cloud Object Storage upgrades its own IBM storage array capabilities

Although it is simplistic and understates the power and rich functionality of the product, you could think of IBM Cloud Object Storage as a “file server” for objects instead of files. IBM Cloud Object Storage has three deployment models — in the cloud, on-premises as software-only, or embedded and pre-installed in a storage array. The new announcement focuses on IBM-provided storage arrays which should expand the company’s presence in object storage sales. The new Gen2 arrays are compatible with Gen1, which provides investment protection for existing customers, thus eliminating painful and lengthy data migration processes, a critical point given the enormous size of many object storage environments. Yet these customers can also use Gen2 to accommodate growth requirements.

IBM’s Cloud Object Storage Gen2 is all about cost efficiency and what is loosely called “speeds and feeds.” Now, that may not sound very exciting, but when you are an exabyte-class storage customer (and IBM stated that it has ten such clients) or a petabyte-class customer or even a lowly hundred-plus terabyte-class customer (and I am being facetious here, as this is still pretty large in my book), all those improvements are extremely relevant. Compared to Gen1, IBM Cloud Object Storage Gen2 offers a 37% per-TB cost savings and 1.6x more write operations per second. In addition, Gen2 offers 26% more capacity for the largest single node (1.3 PB) as well as the same percentage increase in the density of a single rack (10.2 PB).
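Working those numbers backwards gives a feel for the Gen1-to-Gen2 jump. The short Python sketch below derives the implied Gen1 baselines from the stated 26% capacity increase and 37% per-TB cost savings; the derived Gen1 figures are estimates from that arithmetic only, not published IBM numbers.

# Derive implied Gen1 baselines from the Gen2 improvements quoted above.
gen2_node_pb, gen2_rack_pb = 1.3, 10.2
capacity_gain = 0.26          # Gen2 has 26% more capacity per node and per rack
cost_savings = 0.37           # Gen2 costs 37% less per TB

gen1_node_pb = gen2_node_pb / (1 + capacity_gain)   # roughly 1.03 PB
gen1_rack_pb = gen2_rack_pb / (1 + capacity_gain)   # roughly 8.1 PB
gen2_cost_vs_gen1 = 1 - cost_savings                # Gen2 is ~63% of Gen1's cost per TB

print(f"Implied Gen1 node: {gen1_node_pb:.2f} PB, Gen1 rack: {gen1_rack_pb:.1f} PB")
print(f"Gen2 cost per TB: {gen2_cost_vs_gen1:.0%} of Gen1")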

Mesabi musings

With all the talk about the importance of AI and big data analytics tools, they cannot operate in a vacuum. They require metadata management software to curate and prepare the data, as well as software that manages the efficient placement and access of stored data. IBM Spectrum Discover meets the first objective and supports the second by informing better data placement while IBM Cloud Object Storage aims at customers for whom object storage is the choice over file-based storage.

IBM Spectrum Discover, among other things, now openly supports key storage vendors Dell/EMC and NetApp as well as S3-compliant cloud providers, notably Amazon, the father of the S3 protocol. IBM’s own storage arrays upon which IBM Cloud Object Storage is pre-installed and embedded have been upgraded to Gen2, which is more cost efficient and powerful than Gen1. It all adds up to better, more seamless support for AI and big data projects. IBM customers will consider that to be a good thing while potential clients should find these features compelling and attractive.


IBM Strengthens Its Storwize Midrange Storage Portfolio


This week IBM shone a spotlight on a refresh of its Storwize midrange storage family. In addition, it emphasized the value of its Spectrum Virtualize software, upon which the Storwize systems are built and which can also be used for many other purposes, including a new capability for integrating Amazon Web Services (AWS) workloads. This illustrates the continuing innovation that IBM and others are bringing to the information storage table, and should be most pertinent and pleasing to IBM customers and channel partners, who can use Storwize and Spectrum Virtualize to build a solution that extends into the public cloud.

The term “midrange” has long been used for block-based storage systems that are not in the top “enterprise-class” echelon in terms of performance, and, of course, price. However, that term is also a bit of a misnomer, as many large enterprises (both private and public) use midrange storage because of the technology’s great scalability, strong performance, and ability to support software-delivered data services and other functionalities for a wide range of use cases. Not only that, but Storwize products deliver the same enterprise-class functionality and six 9s availability as their larger brethren (which, from a business perspective in a world where every second and minute counts, is a great improvement over the standard-bearer five 9s availability).

New members of the Storwize family

IBM Storwize offers entry-level, middle-tier, and upper-end options. In October 2018 IBM launched the Storwize V7000 Gen 3 product, the top of the Storwize range, which introduced NVMe at the storage device level for the first time in one of its midrange products. With this new announcement, IBM has introduced a whole new lineup of products into its Storwize V5000 storage system family, including two new entry-level products — the V5010E and the V5030E — which do not use NVMe, as well as midrange-level products, the V5100F and the V5100, which offer NVMe end-to-end (meaning at both the device level and the network level).

The Storwize V5010E, as the smallest member of the family, targets edge and containerized environments. Even though IBM expects a typical system to use about 9 TB, the V5010E can scale to a whopping 12 PB. It can provide up to 2x the maximum IOPS of its predecessor, the Storwize V5010, at an expected 30% lower price.

The Storwize V5030E targets the same use cases as its smaller brother. It has a typical expected use of about 24 TB, but can scale to an unbelievable 32 PB (23 PB in a single system). Compared to its predecessor, the V5030, the new offering can deliver 20% better maximum IOPS at an expected 70% of the cost. Both entry-level offerings are hybrid systems that can support combinations of SAS SSDs and SAS disks according to workload and customer requirements.

The last two systems, the Storwize V5100F and V5100, are variations on a common platform; the former is an all-flash system while the latter supports hybrid combinations of flash and disk. Only specially architected flash storage can have performance-turbocharging NVMe built in, and its appearance here is the latest example of advanced functionality first being made available on a higher-end product, then migrating to a less expensive one. IBM expects a typical use case for the V5100F/V5100 to be about 70 TB, with scaling to 32 PB. Depending on configuration, the new solutions can offer 2.4x the maximum IOPS of the previous-generation Storwize V5030F with data reduction turned on, at only a 10% greater price. IBM’s unique FlashCore Modules have hardware enhancements that deliver both data reduction and encryption without impacting performance.

Spectrum Virtualize Serves Both the Storwize Family and the Multicloud

Recall that IBM has a broad and extensive set of software-defined-storage (SDS) products under the rubric of the IBM Spectrum Storage family. A key member of this family, Spectrum Virtualize, is IBM’s block-based storage virtualization offering. Storage virtualization is a logical representation of storage resources that creates virtualized volumes independent of the physical limitations of storage media. Spectrum Virtualize can virtualize block storage arrays, enabling all of the virtualized storage volumes to be managed as a single pool of storage with a centralized point of control.

However, IT organizations have great flexibility in how Spectrum Virtualize is deployed (i.e., storage consumption models). One model is the IBM SAN Volume Controller (SVC) appliance. A second is as a traditional storage array system — for example, the Storwize family. A Cisco and IBM converged infrastructure VersaStack deployment also includes one or more of those storage systems. Finally, another consumption model is a software-only solution that can be used, say, for supporting cloud services.

The Storwize family has a solid software foundation in Spectrum Virtualize. All Storwize products offer transparent data migration, local and remote data replication (snapshots, disaster recovery [DR], and copy/migrate to the cloud). In conjunction with IBM Spectrum Copy Data Management, data can be made available at three sites. Plus, except for the low end V5010E, all the other new Storwize family members support data reduction pools, scale-out clustering, and encryption.

Spectrum Virtualize operating on-premises with its standard list of clients — including Storwize solutions and over 450 heterogeneous storage arrays — can now run in the public cloud, initially the IBM Cloud (formerly IBM Bluemix and IBM SoftLayer). The big news is that it is now available with AWS, as well.

Spectrum Virtualize in a public cloud provides real-time DR (disaster recovery) and data migration between an on-premises data center and a public cloud. Using public cloud for DR means that if an on-premises data center becomes unavailable due to a declared disaster, IT can failover to the remote public cloud. Spectrum Virtualize runs in conjunction with the computing, storage, and networking resources at both locations, delivering a single management layer for fully-functional storage between locations.

What AWS brings to the table, in addition to its immense popularity, is its optional usage of object storage. Now, why would a block-based system want to create an object-based copy? The reason is that ransomware and malware (so far, at least) have only worked with block-based data. As a result, object data acts as if it were an “air-gapped” (physically isolated from a network) copy, which means that the copy is not accessible to hacking attempts. While this is not truly an air gap (as a network is still involved), for practical purposes it may be sufficient, at least for now.

Mesabi musings

The fact that each year storage innovation and progress seem to deliver more for mostly less never grows old. IBM’s new Storwize family members serve as affirmation of this fact, such as the migration of NVMe to the V5100 products. In addition, IBM customers whose Storwize arrays use Spectrum Virtualize can now avail themselves of both the IBM Cloud and AWS public clouds to create multi-cloud environments that make it easier to do DR. All in all, this announcement qualifies as a good day at the office for IBM, its customers and channel partners.


IBM Continues to Advance Storage Along Key Drivers

Every quarter IBM seems to advance the cause of storage along multiple fronts, and this quarter is no exception, with enhancements along four key drivers. The first is IBM storage for containers and the cloud. This includes reference architecture “blueprints”: IBM Storage Solutions for blockchain, IBM Cloud Private, and IBM Cloud Private for analytics. The second continues to emphasize the cause of storage in conjunction with artificial intelligence (AI); in this case, AI is used to improve capacity planning. The third is “modern” data management, which emphasizes how data protection is needed for data offload in hybrid multicloud environments. The fourth is cyber resiliency, enabling enterprises to use their storage effectively to plan for, detect, and recover from cyber security threats.

All four are based on the way IT organizations are rapidly moving to a more complex, but desirably more cost-efficient and more productive, world that supports the business objectives of increasing revenues and profits. This is accomplished by rapidly changing IT infrastructures to adapt to a hybrid multicloud world as well as by introducing new technologies, such as blockchain and containerization, that help transform the way that they do business.

Since I recently covered the use of reference architecture and AI (see https://mesabigroup.com/ibm-spectrumai-with-nvidia-dgx-reference-architecture-a-solid-foundation-for-ai-data-infrastructures/), I will focus this piece on modern data protection and cyber resiliency.

Multicloud data protection requires modern data protection

IBM emphasizes the need for modern data protection to play in the multicloud (see https://mesabigroup.com/ibm-continues-to-deliver-new-multicloud-storage-solutions/). By modern data protection, IBM means that data protection has to encompass traditional IT infrastructures (such as a local data center that also uses a remote data center for disaster recovery purposes, both of which are on-premises at company facilities) along with multiple public cloud instances that are off-premises, as well as the ability to reuse secondary datasets (e.g., backups, snapshots, and replicas). This ups the ante in managing data protection for data offload in such hybrid, multicloud environments.

Using multiple public clouds in conjunction with private clouds means managing ever-changing cost structures in order to determine when it is appropriate to move a data protection workload from one cloud to another. This has to be done while ensuring the necessary cybersecurity levels are met (as will be discussed under cyber resiliency for IBM-managed storage, whether software or hardware) as well as ensuring that the necessary service levels — such as RTO (recovery time objective) or RPO (recovery point objective) — are still met.

IBM provides a blend of Spectrum Protect (for traditional IT infrastructures) in conjunction with Spectrum Protect Plus (for virtual infrastructures) to enable those responsible for enterprise data protection to successfully raise the management ante.

The most recent IBM storage announcement enhances Spectrum Protect Plus capabilities with a focus on delivering cost-effective, secure, long-term data retention. Spectrum Protect Plus can now support key cloud providers, namely IBM Cloud Object Storage, heavy hitters Amazon Web Services (AWS) and Microsoft Azure, and on-premises object storage with IBM Cloud Object Storage. It does so through the efficient use of incremental forever offloads of only changed data. It also offers critical application/database support by adding Microsoft Exchange and MongoDB database support that complements support for existing products, such as IBM DB2, Oracle Database, and VMware ESXi.

In addition, Spectrum Protect Plus offers enhanced data offloads to Spectrum Protect to further improve the partnership blend between the two. Meanwhile, Spectrum Protect simplifies management by enabling the use of retention sets that govern both backups used for recovery of production data and longer-term retention, such as for archiving. It also now offers support for Microsoft Exchange 2019.

IBM’s storage portfolio supports IBM’s cyber resiliency initiatives

The need for cybersecurity does not require a lengthy discussion, as even the general public is aware of such issues as illustrated by the numerous, continuing tip-of-the-iceberg data breaches that have permeated the media. A tremendous amount of work is being performed to deal with these issues, though much more needs to be done in what appears to be a never-ending battle. IBM has long been a white-hat vendor combatting the black-hat bad guys. The latest of its efforts goes under the label of cyber resiliency, which it applies to its entire storage portfolio to combat potential negative cybersecurity events.

In discussing its cyber resiliency storage portfolio, IBM shows how its work follows the NIST (National Institute of Standards and Technology, a part of the U.S. Department of Commerce) Cybersecurity Framework Version 1.1 (April 16, 2018). This standard framework aids enterprises in how to plan for and recover from a compromising cyber event, such as an identity-stealing data breach. IBM has long espoused openness (such as promoting open source and open systems), support for reference architectures, and adherence to common standards. Even though IBM naturally wants to encourage organizations to acquire its own software and hardware, it does so (and has prospered by so doing) in that openness context. Showing how it provides cyber resiliency for its storage portfolio as it fits within the open NIST Cybersecurity Framework enables organizations to clearly understand and assess what IBM brings to the table.

That is not to say that IBM meets all the framework requirements (as no one can), but organizations can carefully examine the major contributions that IBM delivers. The NIST framework discusses five phases — identify, protect, detect, respond, and recover. IBM addresses these as plan (identify and protect), detect, and recover (respond and recover). Planning relates to what an organization should do to get ready for the inevitable compromising event. Detect is about monitoring for and alerting on abnormal behavior that signals that a negative cyber event is occurring or has already taken place. Recover is about what actions need to take place to mitigate any negative effects following the event.

Touching lightly on what IBM delivers: in the identify phase, IBM Spectrum Control and IBM Storage Insights — two of its storage infrastructure management tools — enable organizations to understand their infrastructure deployment as well as its day-to-day usage. Deployment facilitates understanding of which systems are critical to the business operation as well as where they are located. Day-to-day usage provides the baseline for how those systems are “normally” used. In the detect phase, abnormal usage of storage may show that a compromising event is happening, as well as help isolate the currently impacted systems. IBM Spectrum Protect shows what is normally protected every day plus the attributes of that normal usage, such as the number of changes and volume usage. Spectrum Protect and Spectrum Protect Plus provide key support to the protect and recover phases.

IBM emphasizes the use of “air gap” data protection, which orchestrates the ingestion and automatic creation of copies of critical data onto a secure infrastructure that is isolated from a network-based attack. That could be tape copies removed from a tape library (which is a traditional strength of IBM) or a cloud-based air gap scenario, where the data sent to the cloud is physically isolated from a network. This reduces the risk of corruption, such as from ransomware or malware attacks. IBM also emphasizes the use of universal data encryption – including data-at-rest encryption, encryption of tape, backup data set encryption, and encryption of primary or backup data sets when sent to cloud repositories. These, and other capabilities that IBM provides, help mitigate the risk of cyber destruction, unlawful encryption or modification, and unlawful copying of sensitive data. In combination with the appropriate architecture, infrastructure, and processes, these are just some of the ways in which IBM’s storage portfolio offers cyber resiliency to deal with the inevitable attempts to compromise one’s cybersecurity efforts.

Mesabi musings

The business storage arena is in constant flux. IT infrastructures are being transformed from on-premises infrastructures to a hybrid environment that combines on-premises infrastructures with the cloud. Consider this along with the fact that the bad guys are always trying to compromise organizations’ cybersecurity. This increases the need for the modern data protection that IBM delivers with Spectrum Protect and Spectrum Protect Plus. It also expands the need for strong cyber resiliency efforts to prevent the negative impacts of cybersecurity events. With these latest additions, IBM is focused on providing cyber resiliency across its entire storage portfolio and emphasizes the use of strategies, such as air gapping and universal encryption, to enhance cyber resiliency. There is never a dull moment when it comes to what IBM is doing to strengthen its storage portfolio.