
IBM Driving Storage Revolutions

Business storage continues to be driven by two revolutions: one storage systems-based and the other software-based. The former centers on NVMe (nonvolatile memory express), a protocol that is accelerating the adoption of all-flash storage systems both inside arrays and across storage networks. In the latter case, software-driven innovation has become a driving force among virtually all major storage vendors.

 

One vendor making notable progress in both areas is IBM. On the systems/network side, i.e., NVMe-oF (NVMe over Fabrics), IBM now supports Fibre Channel in addition to InfiniBand. Additionally, the company’s new Storwize V7000 Gen 3 has been architected for NVMe at the storage array level as well, joining the FlashSystem 9100 family (announced in July) in offering NVMe inside the storage array. On the storage software side, IBM has just introduced Spectrum Discover as a new product in its IBM Spectrum Storage portfolio. Let’s examine these additions in a little more detail.

 

IBM continues to push the NVMe revolution

 

NVMe plays two basic roles. NVMe-oF is the network side of the house and improves the performance of moving data between a host and a storage array. IBM initially enabled NVMe-oF for storage networks that use InfiniBand interconnects, but now also supports NVMe-oF over storage networks that use Fibre Channel (FC) to improve application performance and data access. This functionality runs in conjunction with the company’s Spectrum Virtualize through a straightforward, non-disruptive software upgrade. FC NVMe uses existing 16 Gb FC adapters and supports SVC (Model SV1), FlashSystem 9100, FlashSystem V9000 (Model AC3), Storwize V7000F/V7000 Gen 2+ and Gen 3, and VersaStack configurations that use those storage arrays. This is likely to be important for users of those systems, as many of them already have an FC SAN (storage area network).

 

IBM also continues to push NVMe at the storage device level. Recall that the FlashSystem 9100, IBM’s enterprise-class entrée in the virtual storage infrastructure space managed by Spectrum Virtualize, was the first IBM storage system to offer NVMe at the device level. (See https://mesabigroup.com/ibm-flashsystem-9100-the-importance-of-nvme-based-storage-in-a-data-driven-multi-cloud-world/ for more detail.) Now, the new Storwize V7000 Gen 3, also managed by Spectrum Virtualize, offers the same end-to-end NVMe capability. That includes the use of the same version of IBM’s well-accepted FlashCore Modules that the FlashSystem 9100 pioneered.

 

Although the Storwize V7000 Gen 3 is technically not an all-flash solution (users have the option to include some HDDs, such as for non-performance-sensitive data), it can be configured as an all-flash system, and given the notable growth of all-flash arrays over the past few years, Mesabi Group expects a high percentage of deployments to be all-flash configurations. Since only flash (not hard disks) can benefit from NVMe technology at the device level, IT can maximize a Storwize V7000 Gen 3 by placing as much of its storage as feasible on flash modules (the new Storwize V7000 supports both IBM’s FlashCore technology and industry-standard NVMe SSDs) rather than HDDs. Configured that way, Gen 3 offers up to a 2.7x throughput improvement over Gen 2+ as a key benefit.

 

IBM Spectrum Discover drives additional value from oceans of unstructured data

 

IT must get the most out of its investment in its physical architecture. For storage management purposes, that includes how storage arrays work in conjunction with the servers that demand services through a storage network. IBM’s storage management software, Storage Insights, is an AI-based tool offered through IBM Cloud to help users better manage their storage environments. For example, the latest version diagnoses storage network “gridlock” issues, often referred to as “slow drain.” That gridlock occurs when a storage system attempts to send data to a server faster than the server can accept it; this is not a good thing! When Storage Insights’ AI technology identifies the problem, it notifies IBM storage technicians (who can monitor systems on behalf of clients who authorize it). The technicians then review the situation and work with the client to resolve it.
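To make the slow-drain condition concrete, here is a minimal conceptual sketch of flagging ports where the send rate outpaces what the host absorbs. This is not how Storage Insights’ AI-based detection actually works; the port names, rates, and threshold are illustrative assumptions.

```python
# Conceptual sketch only (not IBM Storage Insights logic): flag a potential
# "slow drain" condition by comparing the rate at which a storage port sends
# data with the rate at which the attached host accepts it.
ports = [
    {"port": "fc-1a", "sent_mbps": 800, "accepted_mbps": 790},
    {"port": "fc-2b", "sent_mbps": 950, "accepted_mbps": 610},  # host can't keep up
]

SLOW_DRAIN_GAP = 0.20  # flag when the host absorbs 20%+ less than is being sent

for p in ports:
    gap = (p["sent_mbps"] - p["accepted_mbps"]) / p["sent_mbps"]
    if gap >= SLOW_DRAIN_GAP:
        print(f"{p['port']}: possible slow-drain device, host lagging by {gap:.0%}")
```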

 

Now, while Storage Insights deals with the physical side of storage as a storage management tool, the recently announced IBM Spectrum Discover is an on-premises data management software tool that targets the voluminous and rapidly growing amounts of data created by Internet of Things (IoT), AI, and big data analytics applications. Spectrum Discover works with file data managed by IBM Spectrum Scale or object data managed by IBM Cloud Object Storage, and enables users to get more out of their data for analytical, governance, and storage investment purposes (IBM will also support Dell EMC’s Isilon offerings in 2019).

 

How does it accomplish this? On the analytical side, Spectrum Discover helps users rapidly reach useful, actionable insights within an ocean of unstructured data that would not otherwise be discovered, aided by such things as its ability to orchestrate machine learning and MapReduce processes. On the governance side, mitigating business risk by ensuring that data complies with governance policies, and speeding up investigations into potentially fraudulent activities, can be of great value. On the investment side, the ability to move “colder” data (i.e., less frequently accessed data suitable, say, for archiving) to cheaper storage and to weed out and destroy unnecessary redundant data is financially advantageous.

 

The heart of Spectrum Discover’s power is its metadata management and related processes. Any search and discovery tool needs good data about data (i.e., metadata) to succeed. Spectrum Discover uses both system metadata that is automatically generated when data is created and custom metadata tags that add the extra intelligence needed at analysis time. All of that leads to automatic cataloging and the creation of an index through which large quantities of data can be searched extremely rapidly for discovery purposes, reducing the preparation time and costs borne by data scientists and storage administrators.
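As a purely conceptual illustration of that cataloging-and-indexing idea (this is not Spectrum Discover’s actual interface; every name and value below is hypothetical), consider a toy sketch in Python:

```python
# Conceptual sketch: combine system metadata with custom tags in a catalog,
# then build a simple inverted index so queries avoid a full scan.
from collections import defaultdict

catalog = [
    # System metadata captured at creation time plus custom tags added later.
    {"path": "/gpfs/research/run-001.h5", "size_gb": 120, "owner": "lab-a",
     "tags": {"project": "genomics", "retention": "7y"}},
    {"path": "/cos/archive/run-002.h5", "size_gb": 95, "owner": "lab-b",
     "tags": {"project": "genomics", "retention": "1y"}},
]

index = defaultdict(list)
for record in catalog:
    for key, value in record["tags"].items():
        index[(key, value)].append(record)

def find(key, value):
    """Return catalog records whose custom tag matches (key, value)."""
    return index.get((key, value), [])

for rec in find("project", "genomics"):
    print(rec["path"], rec["size_gb"], "GB")
```

The point of the sketch is simply that tagging at or near creation time turns a later, expensive scan of billions of objects into a fast index lookup.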

 

As an analogy (though for different purposes and with not entirely similar technologies), think of the speed and flexibility with which a public Internet search engine finds publicly available data; Spectrum Discover provides comparable search and discovery capabilities for the private data it manages. Accompanying the search and discovery functions are a number of features and capabilities that greatly facilitate use of the tool, including policy-driven workflows, a drill-down dashboard, and an Action Agent that manages data movement and facilitates content inspection.

 

In essence, IBM Spectrum Discover is designed to significantly simplify and speed the data and storage processes required for analytics and AI. That should provide notable benefits for enterprises that aim to maximize the effectiveness and value of their advanced analytics investments.

 

Mesabi musings

You would think that storage innovations would show signs of slowing down after all these years, but the opposite seems to be true. In fact, IBM continues to be at the forefront of storage progress.

 

As illustrations of its continuing leadership, IBM has introduced the new NVMe-enabled Storwize V7000 Gen 3 on the systems side of storage, introduced Spectrum Discover on the software side as a data management tool, and enhanced Storage Insights as a storage management tool.

 

Overall, IBM customers should be pleased with the progress IBM is making with NVMe, a fundamental underpinning technology on the hardware side, while Spectrum Discover, on the software side, continues the push toward extracting additional value from oceans of unstructured data.

IBM Continues to Deliver New Multicloud Storage Solutions

Organizations with ever-increasing quantities of data that need to be stored, distributed, and managed cost-effectively, securely, and reliably are looking to the multicloud for a solution. IBM recognizes this and is delivering new multicloud storage solutions in response to that need.

Overview of Multicloud and IBM Storage’s Multicloud Solutions

The movement to the multicloud represents a dramatic shift in how the majority of enterprise-class IT organizations will restructure their information infrastructure now and in the coming years. A multicloud comprises at least two cloud environments — say a private and a public cloud — but more typically describes the use of multiple public clouds in conjunction with an on-premises or private cloud. Enterprises want to be able to move applications and data swiftly and easily from place to place to best handle workload requirements (such as performance and availability) while at the same time generating the best possible cost-efficiencies. That demand for agility and flexibility comes with the challenge of providing levels of data protection that not only prevent the loss of data, but also comply with regulations that ensure the necessary privacy.

Doing all this — and much more! — is a real challenge for any IT organization, and there is no one-size-fits-all solution. This is where IBM Storage comes into play, with a broad portfolio of Spectrum Storage software-defined storage solutions, as well as all-flash array and tape solutions, from which an IT organization can select the right mix and combination of products (as well as services) to meet its particular needs. All of IBM’s storage and storage software solutions embrace this move to the multicloud enterprise.

IBM has already done a lot in the multicloud arena. See https://mesabigroup.com/ibm-flashsystem-9100-the-importance-of-nvme-based-storage-in-a-data-driven-multi-cloud-world/ for an illustration. We will further illustrate with three examples of new features, functions, and a new product model that have come out of the recent IBM Storage announcement. These involve three areas that feature prominently in the multicloud world — namely, modern data protection with IBM Spectrum Protect and IBM Spectrum Protect Plus, mainframe storage with the introduction of the DS8882F model, and cloud object storage with enhancements to IBM Cloud Object Storage to better manage large quantities of data.

What IBM Is Doing for Multicloud Data Protection

Data protection is often a prominent use case for the multicloud. For example, storing a backup copy on a public cloud may be more cost-effective than storing it on-premises; however, storing it in a public cloud might negatively impact an RTO (recovery-time objective). IBM Spectrum Protect 8.1.6 resolves that dilemma by creating a tiering option for backup data based upon state. Data in an active state (the most recent backup copy) remains on-premises to help meet RTO needs, while inactive data (previous backup copies) is stored in a cloud to reduce costs.
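A minimal sketch of that state-based tiering idea follows; it assumes nothing about Spectrum Protect’s actual implementation, and the class, pool names, and policy are illustrative only.

```python
# Illustrative sketch, not IBM Spectrum Protect code: keep the active (most
# recent) backup copy on-premises for fast restores, and push inactive copies
# to cloud object storage to reduce cost.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    object_name: str
    version: int
    is_active: bool  # True only for the most recent copy of the object

def placement(copy: BackupCopy) -> str:
    """Decide where a backup copy should live under a state-based policy."""
    return "on-premises-pool" if copy.is_active else "cloud-object-pool"

copies = [
    BackupCopy("db01-full", version=3, is_active=True),
    BackupCopy("db01-full", version=2, is_active=False),
    BackupCopy("db01-full", version=1, is_active=False),
]

for c in copies:
    print(f"{c.object_name} v{c.version} -> {placement(c)}")
```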

IBM had previously announced “solution blueprints” to help IT organizations more easily deploy to the multicloud for a particular purpose, such as modern data protection or reuse of secondary datasets. Now blueprints are available with IBM Spectrum Protect to make it easier to deploy to popular cloud environments, namely IBM Cloud, Amazon AWS, and Microsoft Azure.

While IBM Spectrum Protect focuses on traditional physical environments, IBM Spectrum Protect Plus focuses on data recovery and reuse in virtual machine environments (such as those managed by VMware vSphere). IBM Spectrum Protect Plus has added encryption of vSnap data repositories, as well as support for vSphere 6.7 and DB2.

Mainframe Storage Plays a Key Role in the Multicloud World

IBM has announced the DS8882F, the latest all-flash storage system in the DS8880F mainframe enterprise-class storage family. The DS8882F fits into the same 19-inch industry-standard rack as the IBM mainframe, delivering up to 50% savings in power consumption and physical space compared to deploying an array as a separate item on the data center floor, while providing from 6.4 TB to 368.64 TB of raw capacity. It is the first enterprise-class storage system that can be integrated into the IBM Z model ZR1 or IBM LinuxONE Rockhopper II. It also provides a straightforward and cost-efficient means of upgrading legacy mainframe storage systems, such as the non-flash DS6000, DS8000, and DS8870 systems.

Along with its fellow members of the DS8880F family, the DS8882F plays in the multicloud world through Transparent Cloud Tiering (TCT). See https://mesabigroup.com/ibm-introduces-transparent-cloud-tiering-for-ds8880-storage-systems/ for an introduction to TCT for DS8880 systems. The DS8880 family, in conjunction with TCT, now provides a Safeguarded Copy capability that protects sensitive data copies, as well as enhanced data encryption before a DS8880 family member sends data to a cloud or to IBM Cloud Object Storage.

IBM Cloud Object Storage Support of the Multicloud

IBM Cloud Object Storage (COS) software offers IT organizations a wide range of cost-effective options for storing vast quantities of data, ranging from more active data, such as that required for ongoing analytics, to less frequently used colder data, such as that kept for archiving or backup, on a broad set of multicloud platforms, including on-premises and the public cloud.

Although COS already supports over 30 hardware server configurations from Cisco, Dell, HPE, and Lenovo, among others, in addition to IBM, the hardware verification process for a new server used to take months; that process has now been reduced to weeks. The next verified configuration is a Quanta system that can start as an entry-level solution with 1U at each of three sites, grow online without ever going down (thanks to the very high availability of a COS system), and even scale to exabyte levels (technically, at least, as few if any customers are likely to reach those exalted levels).

COS plays an important role in data protection (such as backup) and lifecycle management (such as archiving) through more than 77 certified applications. These integrations include three other IBM solutions: DS8880 arrays using TCT, IBM Spectrum Scale NAS storage (which also uses TCT), and IBM Spectrum Protect enterprise backup software.

Mesabi musings

Multicloud represents a powerful and flexible evolutionary view of cloud strategy, one that says enterprises need to distribute their applications and data across multiple clouds. Changing an information infrastructure dramatically always presents a number of challenges to IT organizations. IBM brings to the table a broad range of software-defined storage and storage systems solutions to help IT organizations address those challenges.

IBM plays an important role in multicloud, and that was demonstrated in three different areas in this paper. The first is modern data protection, which is critical for multicloud deployments. The second is mainframe storage, where IBM is one of the leading storage providers, and which must not be neglected in a multicloud solution for those organizations using mainframes. The third is IBM Cloud Object Storage, which provides a cost-effective and reliable means of storing data in the cloud. And these are only three of a number of solutions that IBM Storage is delivering to make the move to the multicloud real and viable.

IBM FlashSystem 9100: The Importance of NVMe-based Storage in a Data-Driven Multi-Cloud World

IBM’s newly announced FlashSystem 9100 is its first storage system with NVMe (nonvolatile memory express) at the storage drive level. The FlashSystem 9100 is IBM’s enterprise-class entrée in the virtual storage infrastructure managed by Spectrum Virtualize.

But the announcement is about more than just an array solution. The true value is in how the FlashSystem 9100 makes a major contribution to the multi-cloud world where IT organizations increasingly play. The FlashSystem 9100 includes an extensive set of IBM’s award-winning Spectrum Storage software and leverages that included software to create multi-cloud solution blueprints for IBM clients and channel partners.

The NVMe storage revolution

NVMe is a storage technology that accentuates, accelerates, and revolutionizes the move to all-flash storage systems, as it supports solid state devices (SSDs) rather than hard disk drives (HDDs). For a review of NVMe basics and IBM’s commitment to NVMe, please see http://mesabigroup.com/ibms-strong-commitment-to-the-nvme-storage-revolution/. To summarize, each new generation of a high-technology system typically brings with it price/performance benefits in the form of increased speeds and feeds. NVMe is no exception.

The IBM FlashSystem 9100 speeds and feeds

From simply a speeds and feeds perspective, the FlashSystem 9100 offers 6X more data in the same space, 3X more performance, and 60% less energy consumption than traditional all-flash arrays. The Spectrum Virtualize-managed FlashSystem 9100 uses IBM’s FlashCore architecture at the storage module level.

Worry-Free Capacity Planning with FlashSystem 9100 Capacity Increases

However, there is a major new twist; the FlashCore storage modules in the 9100 have been redesigned to use the industry standard 2.5-inch form factor instead of IBM’s proprietary 10-inch form factor. The 9100 also uses 3D TLC (3-dimensional triple level cell) NAND flash with 64 layers instead of the 32 layers of the previous version. Finally, each FlashCore module offers built-in, performance-neutral hardware compression and data encryption.

These upgrades offer significant practical value. IBM expressly guarantees at least a 2:1 data reduction ratio as standard, without requiring the customer to submit to any testing, and will guarantee up to a 5:1 data reduction ratio if the customer agrees to testing that shows the better ratio will actually apply to the customer’s workloads. Data reduction techniques include not only compression, but also deduplication and thin provisioning.

As a result, a single 2U 9100 system can hold up to 2 PB (petabytes) of data, and a fully populated cluster in a standard 42U data center rack can hold up to 32 PB. That is a mammoth amount of data to store in a small space. Most customers will not have that much data even in the foreseeable future, but the point is that with the FlashSystem 9100, you never have to worry about running out of storage capacity again!
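As a quick back-of-the-envelope illustration of what those reduction ratios and capacity figures imply, here is a small arithmetic sketch; it uses only the 2 PB per 2U and 32 PB per rack figures quoted above, and the workload size is hypothetical.

```python
# Back-of-the-envelope sketch based on the figures quoted above (2 PB of
# effective capacity per 2U system, up to 32 PB per 42U rack).
effective_pb_per_2u = 2.0
rack_pb = 32.0

systems_per_rack = rack_pb / effective_pb_per_2u
print(f"2U systems needed to reach {rack_pb} PB: {systems_per_rack:.0f} "
      f"(occupying {systems_per_rack * 2:.0f}U of a 42U rack)")

# With the guaranteed reduction ratios, a workload's logical data footprint
# maps to physical flash roughly as logical / ratio (hypothetical workload).
logical_data_pb = 1.5
for ratio in (2.0, 5.0):
    print(f"At {ratio:.0f}:1 reduction, {logical_data_pb} PB of data needs "
          f"~{logical_data_pb / ratio:.2f} PB of physical flash")
```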

NVMe-based Acceleration Turbocharges FlashSystem 9100 Performance

All the benefits of NVMe at the device level translate into a 3X performance increase over traditional all-flash products. The latency for a single 2U array and for a 4-way 8U cluster is the same at 100 microseconds, but from one to four systems the IOPS quadruple from 2.5 M/sec to 10 M/sec and the bandwidth quadruples from 34 GB/sec to 136 GB/sec. The 9100 is truly a turbocharged system.

The IBM FlashSystem 9100 Comes in Two Flavors

The IBM FlashSystem 9100 comes in two models — the FS9110 and the FS9150. The former uses dual 8-core processors per controller, while the FS9150 uses dual 14-core processors per controller. Otherwise, the architecture is the same, with up to 24 bays of dual-ported 2.5” NVMe flash-based storage modules in 2U. There is also a minimum of two controller canisters that act in active-active mode with failover/failback capabilities. An IT organization has to decide between the models based upon how heavily the controllers will be used by its specific workloads.

IBM FlashSystem 9100 data-driven, multi-cloud solutions

The FlashSystem 9100 is about much more than speeds and feeds, such as being NVMe-accelerated. IBM is also targeting the rapidly emerging multi-cloud world where businesses are deploying private, hybrid, and public clouds in various and diverse combinations.

IBM offers customers a choice of three IBM-validated “blueprints” that they can use to help deliver a particular multi-cloud solution.

  1. The Data Re-use, Protection, & Efficiency Solution focuses not only on how to back up data in virtual or physical environments, such as by using IBM Spectrum Protect Plus, but also on how to re-use backup and other copies for DevOps, analytics, reporting, and disaster recovery (DR), while also adding in the use of IBM Spectrum Copy Data Management.
  2. The Business Continuity and Data Re-use Solution focuses on how to use storage in the public IBM Cloud as a DR target with easy migration among on-premises, private cloud, and public cloud. IBM Spectrum Virtualize for Public Cloud is used in addition to IBM Spectrum Virtualize and IBM Spectrum Copy Data Management.
  3. The Private Cloud Flexibility and Data Solution focuses on delivering on-premises or private cloud storage with cloud efficiency and flexibility for Docker and Kubernetes environments for new generation applications. IBM Cloud Private and IBM Spectrum Access Blueprint are used in the deployment process.

IBM software-defined storage targets the multi-cloud world

Software is the integrating glue that ties the NVMe-accelerated IBM FlashSystem 9100 to the multi-cloud, as enterprise-class storage systems are not only about hardware. In addition to a wide range of data services, such as snapshots and data replication, IBM includes with each FlashSystem 9100 access to the AI-based IBM Storage Insights and integrates four key members of its Spectrum Storage family of storage software and modern data protection solutions: Spectrum Copy Data Management, Spectrum Protect Plus, Spectrum Virtualize for Public Cloud, and Spectrum Connect.

IBM Storage Insights is a powerful tool for managing storage that in addition to helping with event and problem resolution management also provides infrastructure planning capabilities for forecasting capacity growth, planning purchases and optimizing data placement.

As for the four Spectrum Storage products, Spectrum Copy Data Management provides ongoing visibility into where data lives, how that data is used, and who has access to it through data lifecycle management automation that delivers self-service data access along with the necessary orchestration and visibility features. Spectrum Protect Plus focuses on easy-to-use backup and recovery in virtual environments. Spectrum Virtualize for Public Cloud connects on-premises and cloud storage (private or public) in order to deliver a hybrid cloud storage data replication and disaster recovery solution. Spectrum Connect enables the provisioning, monitoring, automating, and orchestrating of IBM block storage in containerized (Docker and Kubernetes), VMware, and Microsoft PowerShell environments.

Now, what do all of these software products have in common? The answer is that they are integrated with the FlashSystem 9100 storage architecture, which includes support for key capabilities that go beyond traditional block-based applications, such as data portability between private and public clouds, native DevOps capabilities, containerization support, and self-service enablement.

IBM states that modern IT organizations face three major challenges in the multi-cloud world that these software products, in conjunction with the FlashSystem 9100, address. The first is the need to modernize traditional applications in private clouds, which brings the agility, flexibility, and cost-effectiveness of a public cloud while at the same time being able to extend seamlessly to, and leverage, public clouds as appropriate. The second is to successfully adopt new data-driven applications, such as big data and a host of analytically oriented applications. The third is the ability to modernize applications, such as through containerization in private clouds that use agile development approaches, with full portability that leverages public cloud infrastructure. The multi-cloud world is here to stay, and the FlashSystem 9100 has been designed to play effectively in that world.

The IBM FlashSystem 9100 passes the litmus tests of reliability and pricing with flying colors

All of the above attests to the power of the FlashSystem 9100, but IT organizations also want to know about issues, such as reliability and pricing. On the reliability front, IBM guarantees 100% data availability for users of HyperSwap, which is a Spectrum Virtualize capability that is used in a dual-site, active-active environment. In addition, IBM offers a seven-year life on the FlashCore media itself while on warranty or extended maintenance, which should end any concern over read/write endurance.

Pricing involves many factors, such as total cost of ownership. However, IBM believes in retaining current customers and in providing enticements to attract new customers for whom the multi-cloud world presents challenges that the FlashSystem 9100 can solve. Therefore, the prices of a current V9000 and a 9100 are roughly equivalent. For example, if a V9000’s warranty period has expired and an IT organization is willing to buy three years of maintenance support on the system, then it could acquire a new 9100 with its warranty for approximately the same price. IT would have to migrate its data to the new array, but since Spectrum Virtualize is used on both systems, that data migration can be performed non-disruptively. That is what is called a good deal.

Mesabi musings

What’s not to like about IBM’s new FlashSystem 9100? NVMe-accelerated performance, solid data reduction, starting with compression, multi-cloud functionality to deal with the world that IT organizations now must face more and more each day, and multiple PB capabilities to name just a few — and all this in only a 2U box!

As a side note, NVMe at the storage device level in all-flash systems drives the final nail into the coffin of hard disks for Tier 1 production storage. On the positive side, enterprise-class NVMe storage is the way to go in the rapidly growing multi-cloud world, and the IBM FlashSystem 9100 is a clear illustration of why that is the case.

IBM Storage Insights: Here’s To Your Storage’s Health — And More

Storage systems are inherently complex, and IT users need to constantly manage their storage environment’s performance, capacity utilization, and health. Vendors have long helped with Call Home capabilities, whereby a storage system sends storage usage data to a vendor. Now IBM has turbocharged Call Home with Storage Insights, where more data is collected, where users are better able to self-service their needs using a feature-rich dashboard, and where IBM can provide deeper and broader technical support when the user needs that extra level of storage management support. Let’s look more deeply into IBM Storage Insights.

IBM Storage Insights Delivers a Turbocharged Call Home Capability

Call Home has long been a standard and well-accepted feature for many block-based storage systems, whereby metadata (such as on performance and capacity utilization) is transmitted from a customer datacenter to a vendor site for storage monitoring purposes. The data can then be used for diagnostic, analysis, and planning purposes, which can include proactive alerts to avert a potential problem (such as the early detection of a bad batch of disks that are starting to degrade below acceptable levels) or to accelerate the resolution of a problem that has unexpectedly occurred.

Although Call Home capabilities vary among vendors, traditional systems can be limited in a number of ways:

  • They tend to offer only reactive alerts to error conditions, such as hardware failures, because limited metadata prevents broader usage value; among many other concerns, this means that proactive support for configuration optimization may not be available
  • Users do not have an interface to the system at the vendor site that allows them to self-service and self-manage the process as much as possible; that means a greater (and unnecessary) level of reliance on the vendor for support; while necessary support is valuable, you do not want to in effect delegate decision-making to someone who is not as familiar with your storage systems as you are
  • They may focus on individual storage systems rather than on all of the storage systems, so there is no unified pane of glass that lets an IT user view all critical events easily (usually at a single glance); this makes a storage administrator’s life more difficult

The overview of IBM Storage Insights below reveals how IBM has turbocharged the Call Home concept to overcome those limitations and to provide even more features and functionality.

Overview of IBM Storage Insights

IBM Storage Insights is software that runs on the IBM Cloud. A lightweight data collector installed in the user data center streams performance, capacity, asset, and configuration metadata to the IBM Cloud. This is metadata because it is data about data rather than the actual application data, which the data collector cannot touch. Metadata flows in only one direction: from the user data center to the IBM Cloud via HTTPS.

Among the types of metadata provided are:

  • Performance metrics, including I/O rates, read and write data rates, and response times
  • Capacity information, such as used space, unassigned space, and the compression ratio
  • Storage system resource inventory and configuration information, such as volumes, pools, disks, and ports
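The bullets above describe what the collector sends. As a purely illustrative sketch (the endpoint, field names, and payload layout are assumptions, not IBM Storage Insights’ actual protocol or API), a one-way metadata upload over HTTPS might look conceptually like this:

```python
# Illustrative sketch only: a metadata-only snapshot pushed one way over
# HTTPS. No application data is included; all names here are hypothetical.
import json
import urllib.request

payload = {
    "system": "storwize-v7000-lab01",
    "performance": {"read_iops": 18250, "write_iops": 9400, "response_ms": 0.42},
    "capacity": {"used_tb": 310.5, "unassigned_tb": 42.0, "compression_ratio": 2.1},
    "inventory": {"volumes": 184, "pools": 6, "ports": 16},
}

def send_metadata(endpoint: str) -> None:
    """POST the metadata snapshot over HTTPS; collector -> cloud only."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("upload status:", resp.status)

# send_metadata("https://example.com/collector/upload")  # hypothetical endpoint
```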

The metadata is made available to the Storage Operations Center in the IBM Cloud, where it is analyzed and presented in the form of a dashboard. There, those authorized can see the performance (such as I/O rate, data rate, and response time), health, capacity, and Call Home events for all IBM block storage systems from a single, unified pane of glass. Each storage system is displayed on the dashboard in a small rectangular slab called a tile. Numerous tiles (say 28, depending upon screen size) can be displayed on a single screen before scrolling becomes necessary; however, little or no scrolling is likely to be needed, as tiles for systems with a critical event automatically reshuffle to the top of the screen, and the user can also press a filter button to see only those events that require immediate attention. Users can also create custom dashboards showing a subset of storage systems. The user, say a storage administrator, can click on a tile with a critical event and drill down to a pull-out drawer that provides more details to help diagnose the issue. This self-service diagnostic data can often help the user resolve the problem without having to request IBM support.

IBM follows a number of security practices in the IBM Cloud, including physical, organizational, access, and security controls. IBM Cloud is ISO/IEC 27001 Information Security Management certified. Access is restricted to authorized user company personnel, as well as the IBM Cloud team that is responsible for the daily operation and maintenance of IBM Cloud instances and the IBM Support team that is responsible for investigating and closing service tickets.

IBM Storage Insights overcomes the limitations of traditional Call Home systems:

  • Although some problems cannot be prevented and a reactive approach is necessary, Storage Insights provides a trouble ticket process that can speed the resolution of a problem. However, Storage Insights enables proactive insights as well. For example, a health check may show only a single connection to a server from a storage system. Although everything is running fine now, this exposes a potential future problem that could be prevented by taking action now to add an extra connection.
  • The dashboard empowers storage administrators to better control their own destiny through its self-service capabilities.
  • The dashboard also enables a storage administrator to manage all IBM block storage as a whole, which is much easier than trying to manage it on a piecemeal basis.

In addition, IBM offers capabilities, features, and functions far beyond traditional Call Home approaches:

  • In addition to the event and problem resolution management that tends to be the limit of traditional Call Home systems, IT can use infrastructure planning tools to forecast capacity growth, plan purchases, and optimize data placement (see the sketch after this list); this means that IT can save money through better utilization of existing storage, as well as avoid overinvesting in future storage system purchases.
  • IBM Support has a wealth of data, including historical data, and can use data scientists who fish the data in a secure data lake to perform analyses, such as applying best-practice thresholds to help identify anomalies (for example, why an application’s performance suffers at unexpected times).
  • Being cloud-based means that IBM can add features and functions on the fly: everyone has access to the most current version.
  • The data is protected in accordance with GDPR.
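Here is a minimal sketch of the kind of capacity-growth projection such planning tools perform, assuming invented sample data and a simple straight-line trend; it is not the Storage Insights feature itself.

```python
# Minimal sketch: fit a linear trend to historical capacity usage and project
# when a pool would fill up. The sample figures are invented for illustration.
used_tb_by_month = [210, 218, 225, 234, 241, 250]   # hypothetical monthly samples
pool_capacity_tb = 320

n = len(used_tb_by_month)
months = range(n)
# Ordinary least-squares slope and intercept for a straight-line trend.
mean_x = sum(months) / n
mean_y = sum(used_tb_by_month) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(months, used_tb_by_month)) / \
        sum((x - mean_x) ** 2 for x in months)
intercept = mean_y - slope * mean_x

months_until_full = (pool_capacity_tb - intercept) / slope - (n - 1)
print(f"Growth rate: ~{slope:.1f} TB/month; pool full in ~{months_until_full:.0f} months")
```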

IBM Storage Insights is available for free; IBM does this because it strengthens the company’s relationship with its customers. A fee-based Pro version is available, but IT should become familiar with the standard version first before deciding whether the Pro version (which adds capabilities such as capturing longer periods of historical data for planning purposes) is necessary.

Mesabi musings

Managing a large, complex storage infrastructure has always been a difficult task. Not only do applications have to deliver the needed performance (such as response time), but they have to do so without running out of space (capacity) and without unexpected downtime (health). Moreover, IT wants to do this while using their storage resources most efficiently from an infrastructure investment management perspective (as there is no sense in paying more than necessary for desired levels of performance).

For IBM block storage users, the process has just gotten a lot easier with the introduction of IBM Storage Insights. IBM continues to put the IBM Cloud to good use, as Storage Insights gives the IT user an effective self-service capability through a unified view of all of its IBM storage in a single pane of glass, along with a rich and robust set of IBM Support services (such as the use of data scientists) as necessary.

And since the price is free, no one who uses Call Home now should hesitate to adopt IBM Storage Insights. Like all-flash storage systems and software-defined storage, the type of features and functions that IBM Storage Insights provides is likely to see a similarly high adoption rate because of the value it can deliver in managing the health, capacity, and performance of a storage environment.

IBM Continues to Think Ahead Clearly

IBM recently concluded its first IBM THINK conference in Las Vegas. In THINK, IBM combined several former events into one comprehensive conference that covered the breadth and depth of the entire corporation. Although the 40,000 or so attendees could explore particular products or services in depth in a plethora of educational sessions or at the huge exhibition hall, in a sense THINK was a coming-out party that showed how IBM is reinventing itself in what is called the “data-driven” era.

What turns the future into the era of data? The Economist has stated that the most valuable resource is no longer oil, but data, and supported this mammoth assertion in a seminal May 6, 2017 article entitled “Data is giving rise to a new economy.” https://www.economist.com/news/briefing/21721634-how-it-shaping-up-data-giving-rise-new-economy

Therefore, both IBM and the Economist are responding to broader business and societal events.

Chairman’s Address

Ginni Rometty, IBM’s Chairman, President, and CEO, gave the overarching keynote presentation “Putting Smart to Work” in the Mandalay Bay Events Center to a more-than packed house (12,000 capacity).

She pointed out that business has been impacted by rapidly scaling technologies. The first example she cited was Moore’s Law, which has led to an exponential growth in computing power for the last 50 years. The second was Metcalfe’s Law, which explains the rationale for the exponential growth of networking. Rometty then proposed that there needs to be a new law for the data-driven era, in which exponential learning is built into every process. Since this is related to the rise of artificial intelligence (AI), she suggested that the new law be called “Watson’s Law” (after IBM’s Watson platform).

We recall that Moore’s Law (named after Intel co-founder Gordon Moore) says that integrated-circuit transistor density doubles roughly every two years. As Moore himself stated last year, this concept is running out of steam due to limits imposed by the laws of physics and economic concerns.

Now, Metcalfe’s Law (named after Robert Metcalfe, although George Gilder should also be recognized) states that the value of a telecommunications network is proportional to the square of the number of connected users of the system, so doubling the number of users roughly quadruples the network’s value. This network-effects scaling law is still robust, vibrant, and growing. In fact, Rometty’s guest later in her keynote, Lowell C. McAdam, Chairman and CEO of Verizon Communications, asserted that 5G (fifth-generation wireless systems) will usher in a fourth industrial revolution.

The cherry on top is the third law, which relates to the exponential scaling of learning and knowledge through the use of AI. Rometty’s proposed name of Watson for this law may not be acceptable to competitors (although if it is seen as generic rather than tied to IBM, it might not pose a real problem). It is a lot simpler to use, though, than naming the law in honor of one of the AI pioneers, since Watson is more widely known than pioneers such as Edward Feigenbaum and Marvin Minsky.

Rometty also pushed the idea that business, society, and IBM itself all have to be reinvented in the wake of the third law on top of the first two laws.

  • Business — organizations will need to learn exponentially; every process will be affected by having learning put into it; the result will be man and machine working together, not machines simply replacing human workers.
  • Society — 100% of jobs will change; among other things, this will result in the need for midcareer retraining.
  • IBM — the company needs to continue to deliver trust and security to its customers. In the data-driven era, existing enterprises have collected large amounts of data, so they can be incumbent disruptors in a time of change. In this regard, IBM needs to support them through the use of technologies, such as blockchain and the Watson platform.

The idea of man + machine, rather than machine replacing man, is a positive one, and we can only hope that Rometty is correct. As one of my fellow analysts noted, he would bet the under on the statement that 100% of jobs will change. Still, while it may be hyperbole, it would be best if all of us continue to examine in more detail what is likely to happen, in open and reflective dialogue and analysis.

Now as to incumbent disruption, Clayton Christensen is noted for bringing the concept of disruptive innovation to the forefront when the challenge to enterprises came from outside disruptors. However, even though a lot of external datasets have value, enterprises control their own customer and product history information, which they can choose to keep private or share as appropriate with other parties. Thus, disruption may now very well lean toward incumbents doing the disrupting. This should play very well into IBM’s wheelhouse, as the company’s main focus has always been on dealing with established enterprises.

Modern Infrastructure Reiterates the Chairman’s Theses

In the expo area attached to the THINK conference, IBM divided its technologies into four campuses: modern infrastructure, business and AI, cloud and data, and security and resiliency. As an IT infrastructure analyst, I focused on demos and presentations in the modern infrastructure campus.

In his presentation “Driven by Data,” Ed Walsh, General Manager, IBM Storage & Software Defined Infrastructure, focused on rethinking data infrastructure to accelerate innovation and competitiveness in the data-driven, multi-cloud world. His guest, Inderpal Bhandari, IBM’s Global Chief Data Officer, discussed a cognitive enterprise blueprint in which, in the era of data, those who can harness the power of their information assets will have a competitive advantage. As part of the process of reinventing itself, IBM plans to walk the walk and deploy capabilities such as AI and multi-cloud internally. As a case study, the company will start sharing its results perhaps as soon as May of this year.

In his presentation “Storage Innovation to Monetize Your Data,” Eric Herzog, CMO & Vice President, Worldwide Channels, IBM Storage and Software Defined Infrastructure, focused on how the company’s broad storage portfolio helps customers today. He illustrated this with a number of customer examples. For example, ZE PowerGroup Inc. provides a data and analytics platform that enables its clients to outmaneuver their competitors, so speed is of the essence. With the help of IBM storage solutions, the company was able to increase its client base by 30% while at the same time reducing its costs by 25%.

But a discussion of IBM’s modern infrastructure would not be complete without at least touching upon its new POWER9 processors and related server architecture. In his presentation “Accelerating Your Future with POWER9,” Bob Picciano, Senior Vice President, IBM Cognitive Systems, pointed out that the value of computing is changing from automating and scaling processes to scaling knowledge via actionable insight, and that this requires reinventing computing for cloud and AI workloads.

Speed matters because time matters. IBM claims a breakthrough in machine learning where its POWER-based solutions deliver results 46X faster than Google and 3.5X faster than Intel. So it should come as no surprise that at the OpenPOWER Summit, held in parallel with THINK 2018, Google (an OpenPOWER founding member) announced that it has deployed its homegrown POWER9-based “Zaius” platform into its datacenters for production workloads.

Reinvention Will Not Change What IBM Does Well Today

Reinvention is the process of changing so that the results often appear to be entirely new. Although IBM is in the process of reinventing itself in many ways, it also needs to preserve the best hard-learned lessons of the past that have relevance now and in the future. Just some of those include:

  • Continue to THINK clearly — Thomas J. Watson introduced the slogan (i.e., tagline) THINK in December 1911 before International Business Machines received its name to emphasize that “we get paid for working with our heads.” Thinking well never grows old and is even more relevant now that broader forms of artificial intelligence are being coupled with human intelligence in a data-driven world. Naming the conference IBM THINK was a positive reinforcement for the values that IBM has long stood for.
  • Continue to invest in the future — IBM builds its capabilities through significant investments in research and development (R&D), as well as external acquisitions. IBM has always spent heavily on the basic R&D that is necessary to prime the well of innovation and so is one of the worthy successors to Ma Bell in creating technology that transforms both our global economy and society. Yet basic research takes years if not decades and may or may not result in success. Firms that fear failure or are interested only in the short-term bottom line are typically unwilling to take the risks that basic R&D entails, but for over 100 years IBM has proven that approach wrong. AI as demonstrated by Watson is simply the latest success story. The company’s quantum computing developments (via its IBM Q system) represent a technology with a world of promise. It is also one where IBM is willing to accept the risks, challenges, uncertainty, expense, and lengthy development process before the technology starts to bear any potential revenue and profitability fruit.
  • Continue to focus on the customer — Maintaining trust and security for customers was a major theme. However, IBM has long been known for its customer support and satisfaction, so that focus is simply a continuation (or reinforcement) of those traditions.
  • Continue to drive positive relationships — IBM recognizes that it cannot do it all; its support of Open Source technologies, like Linux and Kubernetes, has been very productive. Its far-reaching partnerships, from Cisco with VersaStack to a joint venture with Maersk involving blockchain, benefit IBM, its partners, and customers alike.

Mesabi musings

The world’s economy and society in general (including how we spend our time interfacing with the outside world) have been greatly impacted by the first two exponential scaling laws — Moore’s law for computing and Metcalfe’s law for network effects. Although those two laws continue to work their magic, further transformations will come about through the third exponential scaling law — the one that reflects the need to exploit learning and knowledge in the data-driven era.

At THINK 2018, IBM CEO Ginni Rometty pointed out that this will impact business (as in a more advanced man-machine interface) and society (in that all jobs will be affected in one way or another). Understanding this, IBM is reinventing itself to continue maintaining its leadership role as a broad-based information technology vendor.

Now, IBM faces strong competitive challenges on all facets of its product and services portfolio. That pressure includes the likes of Dell Technologies in enterprise IT, Intel on the computing side, Google and Amazon on the cloud side, and a whole host of other companies large and small here, there, and everywhere. That has never daunted IBM in the past and it will not in the future. 

The IBM THINK 2018 conference gave IBM the opportunity to showcase its products and services, along with its best and brightest people, and ideas that will enable it to continue to serve customers well now and in the future. That should come as no surprise. After all, thinking is all about using your head for the benefit of all, and that “head” now includes artificial, as well as human intelligence.