
IBM Storage Insights: Here’s To Your Storage’s Health — And More

Storage systems are inherently complex, and IT users need to manage their storage environment’s performance, capacity utilization, and health constantly. Vendors have long helped with Call Home capabilities, where a storage system sends storage usage data to a vendor. Now IBM has turbocharged Call Home with Storage Insights: more data is collected, users can better self-service their needs through a feature-rich dashboard, and IBM can provide deeper and broader technical support when the user needs that extra level of storage management help. Let’s look more deeply into IBM Storage Insights.

IBM Storage Insights Delivers a Turbocharged Call Home Capability

Call Home has long been a standard and well-accepted feature for many block-based storage systems, whereby metadata (such as on performance and capacity utilization) is transmitted from a customer datacenter to a vendor site for storage monitoring purposes. The data can then be used for diagnostic, analysis, and planning purposes, including proactive alerts to avert a potential problem (such as early detection of a bad batch of disks that are starting to degrade below acceptable levels) or to accelerate the resolution of a problem that has unexpectedly occurred.

Although Call Home capabilities vary among vendors, traditional systems can be limited in a number of ways:

  • Alerts are reactive and limited to error conditions such as hardware failures, because the limited metadata prevents broader use; among other things, this means that proactive support for configuration optimization may not be available
  • Users have no interface to the system at the vendor site that lets them self-service and self-manage the process as much as possible; that means a greater (and unnecessary) reliance on the vendor for support — while necessary support is valuable, you do not want to in effect delegate decision-making to someone who is not as familiar with your storage systems as you are
  • The focus may be on individual storage systems rather than on the storage environment as a whole, so there is no single pane of glass from which an IT user can view all critical events easily (usually at a single glance); this makes a storage administrator’s life more difficult

The overview of IBM Storage Insights below reveals how IBM turbocharges Call Home to overcome those limitations and to provide even more features and functionality.

Overview of IBM Storage Insights

IBM Storage Insights is software that runs on the IBM Cloud. A lightweight data collector installed in the user data center streams performance, capacity, asset, and configuration metadata to the IBM Cloud. This data is metadata because it is data about data rather than the actual application data, which the data collector cannot touch. Metadata flows in only one direction, from the user data center to the IBM Cloud via HTTPS.

Among the types of metadata provided are:

  • Performance metrics, including I/O rates, read and write data rates, and response times
  • Capacity information, such as used space, unassigned space, and the compression ratio
  • Storage system resource inventory and configuration information, such as volumes, pools, disks, and ports
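
The one-way metadata flow described above can be pictured as a small JSON document that a collector might POST over HTTPS. This is an illustrative sketch only; the field names, values, and record shape are hypothetical, not IBM Storage Insights’ actual schema or API.

```python
import json

# Hypothetical metadata record a data collector might stream to the cloud.
# Field names are illustrative, not IBM Storage Insights' actual schema.
def build_metadata_record(system_id, perf, capacity, config):
    return {
        "system_id": system_id,
        "performance": perf,      # e.g., I/O rate, data rates, response time
        "capacity": capacity,     # e.g., used/unassigned space, compression
        "configuration": config,  # e.g., volumes, pools, disks, ports
    }

record = build_metadata_record(
    "flash-array-01",
    perf={"io_rate_iops": 52000, "read_mbps": 410, "write_mbps": 180,
          "response_time_ms": 0.6},
    capacity={"used_tb": 84.2, "unassigned_tb": 15.8, "compression_ratio": 2.1},
    config={"volumes": 212, "pools": 4, "ports": 16},
)

payload = json.dumps(record)
# In a real collector, this payload would be sent one way, collector to cloud,
# over HTTPS; no application data ever leaves the data center.
print(payload[:40])
```

Note that nothing in the record is application data; everything describes the storage system itself, which is the distinction the collector design turns on.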

The metadata is made available to the Storage Operations Center in the IBM Cloud, where it is analyzed and presented in the form of a dashboard. There, those authorized can see the performance (such as I/O rate, data rate, and response time), health, capacity, and Call Home events for all IBM block storage systems from a single, unified pane of glass. Each storage system is displayed on the dashboard as a small rectangular slab called a tile. Numerous tiles (say, 28, depending upon screen size) can be displayed on a single screen before scrolling becomes necessary; however, little or no scrolling is likely to be needed, as tiles for systems with a critical event automatically reshuffle to the top of the screen, and the user can also press a filter button to see only those events that require immediate attention. Users can also create custom dashboards showing a subset of storage systems. The user, say a storage administrator, can click on a tile with a critical event and drill down to a pull-out drawer that provides more details to help diagnose the issue. This self-service diagnostic data can often help the user resolve the problem without having to request IBM support.
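
The tile behavior described above, where systems with critical events float to the top and a filter shows only the items needing attention, can be sketched in a few lines of Python. The severity levels and tile fields here are assumptions for illustration, not the product’s actual logic.

```python
# Illustrative sketch of dashboard tile ordering: tiles with the most severe
# open event sort to the top; a filter shows only tiles needing attention.
SEVERITY_RANK = {"critical": 0, "warning": 1, "info": 2, "none": 3}  # assumed levels

def order_tiles(tiles):
    return sorted(tiles, key=lambda t: SEVERITY_RANK[t["worst_event"]])

def needs_attention(tiles):
    return [t for t in tiles if t["worst_event"] in ("critical", "warning")]

tiles = [
    {"system": "ds8880-a", "worst_event": "none"},
    {"system": "svc-cluster-2", "worst_event": "critical"},
    {"system": "v7000-b", "worst_event": "warning"},
]
ordered = order_tiles(tiles)
print(ordered[0]["system"])  # the system with the critical event surfaces first
```

The same severity ranking drives both behaviors: sorting for the default view and filtering for the attention-only view.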

IBM follows a number of security practices in the IBM Cloud, including physical, organizational, and access controls. IBM Cloud is ISO/IEC 27001 Information Security Management certified. Access is restricted to authorized user company personnel, the IBM Cloud team responsible for the daily operation and maintenance of IBM Cloud instances, and the IBM Support team responsible for investigating and closing service tickets.

IBM Storage Insights overcomes the limitations of traditional Call Home systems:

  • Although some problems cannot be prevented and a reactive approach is then necessary, Storage Insights provides a trouble-ticket process that can speed the resolution of a problem. Storage Insights enables proactive insights as well. For example, a health check may show only a single connection from a storage system to a server. Although everything is running fine now, this exposes a potential future problem that could be prevented by adding an extra connection today.
  • The dashboard empowers storage administrators to better control their own destiny through its self-service capabilities.
  • The dashboard also enables a storage administrator to manage all IBM block storage as a whole, which is much easier than trying to manage it on a piecemeal basis.

In addition, IBM offers capabilities, features, and functions far beyond traditional Call Home approaches:

  • In addition to the event and problem resolution management tools that tend to be the limit of traditional Call Home systems, IT can use infrastructure planning tools to forecast capacity growth, plan purchases, and optimize data placement; this means IT can save money through better utilization of existing storage as well as by making sure it does not overinvest in future storage system purchases.
  • IBM Support has a wealth of data, including historical data, and can call upon data scientists who fish a secure data lake to run analyses, such as applying best-practice thresholds to identify anomalies, for example why an application’s performance suffers at unexpected times.
  • Being cloud-based means that IBM can add features and functions on the fly: everyone has access to the most current version.
  • The data is protected in accordance with GDPR.

IBM Storage Insights is available for free; IBM does this because it strengthens its relationship with its customers. A fee-based Pro version is available, but IT should become familiar with the standard version before deciding whether the Pro version’s extras (such as capturing longer periods of historical data for planning purposes) are necessary.

Mesabi musings

Managing a large, complex storage infrastructure has always been a difficult task. Not only do applications have to deliver the needed performance (such as response time), but they have to do so without running out of space (capacity) and without unexpected downtime (health). Moreover, IT wants to do this while using their storage resources most efficiently from an infrastructure investment management perspective (as there is no sense in paying more than necessary for desired levels of performance).

For IBM block storage users, the process has just gotten a lot easier with the introduction of IBM Storage Insights. IBM continues to put the IBM Cloud to good use as Storage Insights enables an effective self-service capability for the IT user through a unified view of all of its IBM storage through a single pane of glass along with a rich and robust set of IBM Support (such as the use of data scientists) services as necessary.

And since the price is free, no one who uses Call Home now should hesitate to adopt IBM Storage Insights. Across all-flash storage systems and software-defined storage alike, the features and functions that IBM Storage Insights provides are likely to see a similarly high adoption rate because of the value they can deliver in managing the health, capacity, and performance of a storage environment.

IBM Continues to Think Ahead Clearly

IBM recently concluded its first IBM THINK conference in Las Vegas. In THINK, IBM combined several former events into one comprehensive conference covering the breadth and depth of the entire corporation. Although the some 40,000 attendees could explore particular products or services in depth in a plethora of educational sessions or at the huge exhibition hall, in a sense THINK was a coming-out party that showed how IBM is reinventing itself for what is called the “data-driven” era.

Why will the future be the era of data? The Economist has stated that the world’s most valuable resource is no longer oil, but data, and supported this mammoth assertion in a seminal May 6, 2017 article entitled “Data is giving rise to a new economy.”

Both IBM and The Economist, then, are responding to the same broader business and societal forces.

Chairman’s Address

Ginni Rometty, IBM’s Chairman, President, and CEO, gave the overarching keynote presentation, “Putting Smart to Work,” in the Mandalay Bay Events Center to a more-than-packed house (12,000 capacity).

She pointed out that business has been impacted by rapidly scaling technologies. The first example she cited was Moore’s Law that has led to an exponential growth in computing power for the last 50 years. The second was Metcalfe’s Law which explains the rationale for the exponential growth of networking. Rometty then proposed that there needs to be a new law for the data-driven era where exponential learning is built into every process. Since this is related to the rise of artificial intelligence (AI) she suggested that the new law be called “Watson’s Law” (after IBM’s Watson platform).

We recall that Moore’s Law (named after Intel co-founder Gordon Moore) says that transistor density on an integrated circuit doubles roughly every two years. As Moore himself stated last year, this concept is running out of steam due to limits imposed by the laws of physics and by economic concerns.

Now, Metcalfe’s Law (named after Robert Metcalfe, although George Gilder should also be recognized) states that the value of a telecommunications network is proportional to the square of the number of connected users of the system. This network-effects scaling law is still robust, vibrant, and growing. In fact, Rometty’s guest later in her keynote, Lowell C. McAdam, Chairman and CEO of Verizon Communications, asserted that 5G (fifth-generation wireless systems) will usher in a fourth industrial revolution.
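
Metcalfe’s Law as stated above, with network value proportional to the square of the number of connected users, implies that doubling the user base quadruples the value. A quick sketch in Python:

```python
def metcalfe_value(n, k=1.0):
    """Network value proportional to n^2 (Metcalfe's law); k is an arbitrary constant."""
    return k * n * n

# Doubling the number of connected users quadruples the network's value,
# regardless of the proportionality constant k.
ratio = metcalfe_value(2_000_000) / metcalfe_value(1_000_000)
print(ratio)
```

This quadratic growth in value, against roughly linear growth in cost, is the arithmetic behind the network-effects argument.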

The cherry on top is the third law, which relates to the exponential scaling of learning and knowledge through the use of AI. Rometty’s proposed name of Watson may not be acceptable to competitors (although if it is seen as generic rather than tied to IBM, it might not be a real problem). Still, it is a lot simpler than naming the law in honor of one of the AI pioneers, such as Edward Feigenbaum or Marvin Minsky, since Watson is more widely known.

Rometty also pushed the idea that business, society, and IBM itself all have to be reinvented in the wake of the third law on top of the first two.

  • Business — organizations will need to learn exponentially; every process will be affected by having learning put into it; the result will be man and machine working together, not machines simply replacing human workers.
  • Society — 100% of jobs will change; among other things, this will result in the need for midcareer retraining.
  • IBM — the company needs to continue to deliver trust and security to its customers. In the data-driven era, existing enterprises have collected large amounts of data, so they can be incumbent disruptors in a time of change. In this regard, IBM needs to support them through the use of technologies, such as blockchain and the Watson platform.

The idea of man + machine, rather than machine replacing man, is a positive one, and we can only hope that Rometty is correct. As one of my fellow analysts noted, he would bet the under on the claim that 100% of jobs will change. Still, even if it is hyperbole, it would be best if all of us continued to examine what is likely to happen through open and reflective dialogue and analysis.

Now as to incumbent disruption: Clayton Christensen is noted for bringing the concept of disruptive innovation to the forefront at a time when the challenge to enterprises came from outside disruptors. However, even though many external datasets have value, enterprises control their own customer and product history information, which they can choose to keep private or share as appropriate with other parties. Thus, disruptive innovation may now very well lean toward incumbents doing the disrupting. This should play very well into IBM’s wheelhouse, as the company’s main focus has always been on serving established enterprises.

Modern Infrastructure Reiterates the Chairman’s Theses

In the expo area attached to the THINK conference, IBM divided its technologies into four campuses: modern infrastructure, business and AI, cloud and data, and security and resiliency. As an IT infrastructure analyst, I focused on demos and presentations in the modern infrastructure campus.

In his presentation “Driven by Data,” Ed Walsh, General Manager, IBM Storage & Software Defined Infrastructure, focused on rethinking data infrastructure to accelerate innovation and competitiveness in the data-driven, multi-cloud world. His guest, Inderpal Bhandari, IBM’s Global Chief Data Officer, discussed a cognitive enterprise blueprint: in the era of data, those who can harness the power of their information assets will have a competitive advantage. As part of reinventing itself, IBM plans to walk the walk and deploy capabilities such as AI and multi-cloud internally, and the company will start sharing its results as a case study, perhaps as soon as May of this year.

In his presentation “Storage Innovation to Monetize Your Data,” Eric Herzog, CMO & Vice President, Worldwide Channels, IBM Storage and Software Defined Infrastructure, focused on how the company’s broad storage portfolio helps customers today, illustrating with a number of customer examples. For example, ZE PowerGroup Inc. provides a data and analytics platform that enables its clients to outmaneuver their competitors; speed is of the essence. With the help of IBM storage solutions, the company was able to increase its client base by 30% while at the same time reducing its costs by 25%.

But a discussion of IBM’s modern infrastructure would not be complete without at least touching upon its new POWER9 processors and related server architecture. In his presentation “Accelerating Your Future with POWER9,” Bob Picciano, Senior Vice President, IBM Cognitive Systems, pointed out that the value of computing is changing from automating and scaling processes to scaling knowledge via actionable insight, and that this requires reinventing computing for cloud and AI workloads.

Speed matters because time matters. IBM claims a breakthrough in machine learning in which its POWER-based solutions deliver results 46X faster than Google and 3.5X faster than Intel. So it should come as no surprise that at the OpenPOWER Summit, held in parallel with THINK 2018, Google (an OpenPOWER founding member) announced that it has deployed its homegrown POWER9-based “Zaius” platform into its datacenters for production workloads.

Reinvention Will Not Change What IBM Does Well Today

Reinvention is the process of changing so that the results often appear to be entirely new. Although IBM is in the process of reinventing itself in many ways, it also needs to preserve the best hard-learned lessons of the past that have relevance now and in the future. Just some of those include:

  • Continue to THINK clearly — Thomas J. Watson introduced the slogan (i.e., tagline) THINK in December 1911 before International Business Machines received its name to emphasize that “we get paid for working with our heads.” Thinking well never grows old and is even more relevant now that broader forms of artificial intelligence are being coupled with human intelligence in a data-driven world. Naming the conference IBM THINK was a positive reinforcement for the values that IBM has long stood for.
  • Continue to invest in the future — IBM builds its capabilities through significant investments in research and development (R&D), as well as external acquisitions. IBM has always spent heavily on the basic R&D necessary to prime the pump of innovation, and so is one of the worthy successors to Ma Bell in creating technology that transforms both our global economy and society. Yet basic research takes years if not decades and may or may not succeed. Firms that fear failure or are interested only in the short-term bottom line are typically unwilling to take the risks that basic R&D entails, but for over 100 years IBM has proven that approach wrong. AI as demonstrated by Watson is simply the latest success story. The company’s quantum computing developments (via its IBM Q system) represent a technology with a world of promise. It is also one where IBM is willing to accept the risks, challenges, uncertainty, expense, and lengthy development process before the technology starts to bear any potential revenue and profitability fruit.
  • Continue to focus on the customer — Maintaining trust and security for customers was a major theme. However, IBM has long been known for its customer support and satisfaction, so that focus is simply a continuation (or reinforcement) of that tradition.
  • Continue to drive positive relationships — IBM recognizes that it cannot do it all; its support of Open Source technologies, like Linux and Kubernetes, has been very productive. Its far-reaching partnerships, from VersaStack with Cisco to a blockchain joint venture with Maersk, benefit IBM, its partners, and customers alike.

Mesabi musings

The world’s economy and society in general (including how we spend our time interfacing with the outside world) have been greatly impacted by the first two exponential scaling laws — Moore’s law for computing and Metcalfe’s law for network effects. Although those two laws continue to work their magic, further transformations will come about through the third exponential scaling law — the one that reflects the need to exploit learning and knowledge in the data-driven era.

At THINK 2018, IBM CEO Ginni Rometty pointed out that this will impact business (as in a more advanced man-machine interface) and society (in that all jobs will be affected in one way or another). Understanding this, IBM is reinventing itself to continue maintaining its leadership role as a broad-based information technology vendor.

Now, IBM faces strong competitive challenges on all facets of its product and services portfolio. That pressure includes the likes of Dell Technologies in enterprise IT, Intel on the computing side, Google and Amazon on the cloud side, and a whole host of other companies large and small here, there, and everywhere. That has never daunted IBM in the past and it will not in the future. 

The IBM THINK 2018 conference gave IBM the opportunity to showcase its products and services, along with its best and brightest people, and ideas that will enable it to continue to serve customers well now and in the future. That should come as no surprise. After all, thinking is all about using your head for the benefit of all, and that “head” now includes artificial, as well as human intelligence.

ioFABRIC’s Data Fabric Software Weaves Together the Virtualized Storage Infrastructure

New ideas often take time to percolate into the collective consciousness. So it is with the concept of a “data fabric,” of which ioFABRIC is a chief proponent. A data fabric uses software to create a virtualized storage infrastructure that improves (or even makes possible what was difficult at best) IT’s ability to economically set and manage service levels: capacity, performance, and data protection, not only across multiple storage systems at a single site, but also across multiple sites and even multiple clouds.

The Status Quo for Storage Infrastructures Is Not Acceptable

Traditionally, IT bought storage on a system-by-system basis as the need arose to support a particular application or data set, or to increase capacity or performance (such as improving latency with an all-flash storage array). While each decision may have been “locally” optimal for particular reasons, over time the overall storage architecture is very likely not “globally” (i.e., wholly and geographically) optimized. For example, one array may run out of storage space while plenty of capacity remains available in other arrays.

Planning ahead for future capacity and performance needs is difficult in and of itself, and even decisions that were correct at the time may fail as the dynamic requirements of applications and their associated data change. The result is that multiple storage systems tend to become multiple silos of storage, where storage as a whole (in the global, single-site sense) is not optimized for capacity or performance.

This is further eroded when one or more copies of the data have to be stored or used on a multi-site or even multi-cloud basis, as one-off decisions to move a particular set of data may not be best cost-wise for all the data under management.

In addition, where data is stored can be critical for regulatory reasons, as the storage system where application data was originally created and held may change. That information was typically not required in traditional IT environments, since stored data was presumed to stay with its original system, which did not move from its original location. That situation changes markedly for the many companies utilizing public cloud services.

Enter data fabric software as a turbocharged software-defined storage (SDS) solution capable of dealing with those issues.

Introducing ioFABRIC’s Vicinity 3.0 Software

Data fabric software is essentially a hypervisor for storage that manages information as a unified pool across system boundaries in single-sites and across multiple sites and clouds, as necessary.

ioFABRIC’s Vicinity data fabric software creates automated, quality-of-service (QoS)-driven storage. Administrators define policies for what storage must deliver to the organization in terms of performance, capacity, protection, and cost across the existing storage infrastructure. For example, Vicinity can help increase storage utilization to take advantage of existing capacity without having to add more, and can extract more performance (such as IOPS) from existing storage where possible.

Naturally, ioFABRIC cannot violate the laws of physics. If enough high-performance storage is not available to meet a latency QoS objective, the two choices are to buy more high-performance storage or to change the objective to something feasible with the existing storage environment. Vicinity agents discover all storage media and systems and profile them in terms of their performance, capacity, protection, and cost characteristics. The Vicinity storage fabric then creates QoS-based volumes, which it manages as a single entity.
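
The feasibility point above, that a latency objective can only be met if fast-enough media actually exists in the discovered pool, can be sketched as a simple check over profiled media. The media profiles and field names here are hypothetical; Vicinity’s actual discovery and profiling are more sophisticated.

```python
# Hypothetical media profiles, as a discovery-and-profiling pass might record them.
media_pool = [
    {"name": "nvme-flash", "latency_ms": 0.2, "free_tb": 10},
    {"name": "sas-ssd",    "latency_ms": 0.8, "free_tb": 40},
    {"name": "nl-sas-hdd", "latency_ms": 8.0, "free_tb": 200},
]

def feasible(latency_objective_ms, capacity_needed_tb, pool):
    """Can the pool meet a latency QoS objective with enough free capacity?"""
    fast_enough = [m for m in pool if m["latency_ms"] <= latency_objective_ms]
    return sum(m["free_tb"] for m in fast_enough) >= capacity_needed_tb

print(feasible(1.0, 30, media_pool))  # SSD tiers can cover 30 TB at <= 1 ms
print(feasible(0.1, 5, media_pool))   # no media is fast enough: infeasible
```

When the check fails, the two remedies are exactly those named above: add faster media to the pool, or relax the objective.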

ioFABRIC claims that data in Vicinity environments is always available, always evergreen, and always protected.  With its latest release, Vicinity 3.0, the concept of always-on data availability has been extended from single sites to a multi-site/multi-cloud basis for business continuity. The failure of an entire site will not prevent the data from being available at another site or cloud. Vicinity 3.0 has a self-healing capability that automatically heals and routes around failures across storage and networks.

In addition, ioFABRIC’s “always evergreen” concept means that Vicinity 3.0 supports policy-driven automatic migration of data between sites and clouds. This is very useful as such processes can be major headaches for IT organizations. In addition, Vicinity 3.0 transforms a local storage infrastructure into an on-premises private cloud that can integrate with public clouds to form a hybrid storage cloud. From the always protected perspective, Vicinity 3.0 enables the use of immutable snapshots that cannot be accessed or altered by ransomware, and also creates automatic SnapCopy snapshots for disaster recovery purposes.

Moreover, with Vicinity 3.0, users can use policy management to place data in specific locations based on usage and cost models. ioFABRIC believes that it has a unique method for automatically using least cost media while retaining all requested storage service level agreements (SLAs). The company uses an AI technique called “swarm intelligence” to map cost optimization across the entire data fabric.
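
The least-cost idea, keeping every SLA while preferring the cheapest media that satisfies it, can be illustrated with a greedy selection. This is a toy stand-in with invented costs and latencies; ioFABRIC’s swarm-intelligence optimizer is far more elaborate, optimizing across the entire fabric rather than one volume at a time.

```python
# Toy least-cost placement: choose the cheapest media that meets a volume's
# latency SLA. Names, costs, and latencies are invented for illustration.
media = [
    {"name": "nvme-flash", "latency_ms": 0.2, "cost_per_tb": 400},
    {"name": "sas-ssd",    "latency_ms": 0.8, "cost_per_tb": 150},
    {"name": "nl-sas-hdd", "latency_ms": 8.0, "cost_per_tb": 25},
]

def place(volume_sla_ms, media):
    eligible = [m for m in media if m["latency_ms"] <= volume_sla_ms]
    if not eligible:
        return None  # SLA infeasible: buy faster storage or relax the objective
    return min(eligible, key=lambda m: m["cost_per_tb"])

print(place(1.0, media)["name"])   # cheapest media meeting a 1 ms SLA
print(place(10.0, media)["name"])  # an archive-class SLA lands on HDD
```

The key property is that cost minimization never overrides the SLA filter: only media that meet the objective are ever candidates.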

Easily Meeting GDPR Requirements

The European Union’s General Data Protection Regulation (GDPR), which harmonizes data privacy laws across Europe to protect and empower EU citizens’ privacy, will affect not only European companies. When GDPR takes effect on May 25, 2018, it will generally impact companies anywhere that hold data related to EU citizens. In other words, companies can’t play a game of “Where’s Waldo?” with their customers’ data. They have to know what data they have and where it can legitimately be stored to comply with GDPR guidelines. If not, they run the risk of investigations and possibly substantial fines.

With ioFABRIC Vicinity, policies can be set specifying geographical locations where specific data is allowed to reside and tailoring data migration protection rules as necessary. That should substantially relieve much of the stress and many of the complexities of GDPR compliance.
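
A location policy of the kind described can be modeled as an allow-list of regions per data class, checked before any placement or migration. The region codes, data classes, and policy shape are assumptions for illustration, not Vicinity’s actual policy format.

```python
# Illustrative geo-residency policy: each data class maps to the regions where
# it may legally reside. Region codes and class names are invented.
policy = {
    "eu-citizen-pii": {"eu-west", "eu-central"},
    "general-telemetry": {"eu-west", "us-east", "ap-south"},
}

def placement_allowed(data_class, target_region, policy):
    """Deny by default: unknown data classes may not be placed anywhere."""
    return target_region in policy.get(data_class, set())

print(placement_allowed("eu-citizen-pii", "eu-west", policy))  # permitted
print(placement_allowed("eu-citizen-pii", "us-east", policy))  # migration blocked
```

A deny-by-default check like this, run before every automated migration, is what turns a written compliance policy into an enforced one.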

Mesabi musings

Data may be the new oil, but the storage infrastructure in which it resides tends to have been built incrementally over time, in a manner such that the overall infrastructure may not be fully or optimally used. Moreover, no one could have foreseen all the changes that would take place, such as the movement to a multi-cloud world incorporating both on-premises private clouds and public clouds, or the need to carefully manage where data is located because of governmental regulations.

Enter ioFABRIC’s Vicinity 3.0 data fabric software, which weaves together a storage infrastructure as a seamless whole and enables QoS-driven automation of storage. With Vicinity 3.0, enterprises can get the most bang for the buck out of their storage resources from performance, capacity, and data protection perspectives, both on premises and in the cloud.

dinCloud Continues to Forge a Path in Hosted Workplaces

The “cloud” continues to manifest itself in a very wide range of incarnations and use cases. Specialty clouds in the form of [whatever]-as-a-service address special purpose needs. For example, Los Angeles-based dinCloud plays in the desktop-as-a-service (DaaS) arena as part of its larger focus on hosted workspaces and cloud infrastructure services.

Hosted Workspaces: Offering VDI as DaaS

In essence, DaaS is a virtual desktop infrastructure (VDI) hosted as a cloud service. DaaS has found its greatest success in small to medium businesses (SMBs), so dinCloud targets the mid-market of, say, 100 to 700 users, where the IT staff is typically very small but the business has many of the same requirements as much larger organizations.

With VDI, a desktop operating system is hosted on a virtual machine (VM) that runs on a centralized server where all processes, applications and data reside and run. The primary benefits for customers are in reduced administrative burdens as trying to upgrade, provision and manage a large number of devices — not only desktops, but other devices such as laptops, tablets and smartphones in a BYOD (bring-your-own-device) world — can be a real headache.

The challenges that face VDI from an IT perspective are maintaining security, avoiding downtime, and the general complexities and high initial costs of VDI purchase and deployment. In contrast to on-premises offerings, a cloud-hosted VDI solution can provide the necessary security, high levels of uptime and greatly reduce complexity, while at the same time providing economic benefits.

A roll-your-own VDI infrastructure also tends to be CAPEX (capital expense) heavy whereas a DaaS solution contained within a hosted workspaces cloud is OPEX (operating expense) friendly, with a monthly subscription fee per user model. Organizations can thus easily plan their monthly expenses and alter them to account for unexpected changes in headcount which is always desirable.
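
The budgeting advantage of a per-user subscription is easy to see with a little arithmetic. The per-user fee below is invented for illustration; actual DaaS pricing varies by provider and bundle.

```python
# Monthly OPEX for a per-user DaaS subscription; the $45 fee is hypothetical.
def monthly_cost(users, fee_per_user=45):
    return users * fee_per_user

baseline = monthly_cost(400)      # 400 users today
after_hiring = monthly_cost(430)  # headcount grows by 30: cost scales linearly
print(baseline, after_hiring)
```

Because cost is a simple linear function of headcount, the monthly bill is predictable and adjusts automatically as users are added or removed, with no stranded capital in idle VDI hardware.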

In its hosted workspaces model, dinCloud includes not only DaaS per se, but also the data and the applications — most notably Microsoft applications, such as Office 365 (which is also subscription-based). But dinCloud does not stop there, as it wants to further leverage its cloud-based environments to offer more services to its hosted workspaces customers, as well as provide potential services to non-DaaS customers. It does so under the label of cloud infrastructure, including dinServer (hosted virtual server) and dinSQL (SQL database-as-a-service).

Premier BPO acquires dinCloud

This ability to exploit and leverage its cloud-based services infrastructure may very well have been one of the reasons why Premier BPO acquired dinCloud in February 2018 on undisclosed terms. Clarksville, Tennessee-based Premier BPO is an outsourcing firm that provides back-office processing services to businesses, such as B2B (business-to-business) and B2C (business-to-consumer) collections, billing, and employee benefit processing.

Both companies target the same size customers as well as certain verticals, namely transportation and logistics, financial services, and healthcare. However, a major difference between the two is their go-to-market strategy. Premier BPO uses a direct-sale model without any partners, whereas dinCloud uses a channel model with about 200 value-added resellers (VARs) and managed-service providers (MSPs). This should not be a major issue, however, as the new CEO of dinCloud, Mark Briggs (who is also CEO of Premier BPO), has an extensive channel background.

The announced strategy is to continue within their respective areas of specialization, but with plans to provide a broader portfolio over time that evolves from [whatever]-as-a-service to [everything]-as-a-service. This will pose challenges for the combined companies, but don’t count them out: they are well positioned to fully leverage and extend Premier BPO’s business process outsourcing expertise in concert with dinCloud’s cloud infrastructure experience.

How dinCloud can compete with the big boys

The broader scope of services and the increased scale should stand dinCloud in good stead when competing with three large companies with DaaS offerings: Amazon WorkSpaces, Citrix XenDesktop, and VMware Horizon Air. These all have strong name recognition, as well as relatively enormous marketing muscle. However, history has shown that smaller, nimbler competitors that target niche markets can often compete effectively with larger companies, especially in the SMB space. Having more IT services available to it in conjunction with Premier BPO may help dinCloud get into the bidding discussion with more companies.

Mesabi musings

For most companies, interacting both internally and externally through personal computing devices, either in a fixed position, such as a literal desktop computer, or through a mobile device, such as a laptop or tablet, is a way of life. However, building, implementing, maintaining and running the IT infrastructure necessary to provide consistent service levels to a large pool of users can be a major challenge. That’s especially true for smaller businesses with minimal (or nearly non-existent) IT staffs. Enter dinCloud, whose DaaS eases that burden while providing the required service levels. Plus, as part of Premier BPO, dinCloud should be able to offer even more in the way of services. Small to mid-sized companies should pay close attention.

Dell Technologies Surveys the Digital 2030 Future

As it has for past future-focused studies, Dell teamed up with the well-respected Institute for the Future (IFTF) to forecast how emerging technologies — notably artificial intelligence (AI) and the Internet of Things (IoT) — may change the way we live and work by 2030.

To extend that work, Dell Technologies commissioned Vanson Bourne, an independent UK research firm, to conduct a survey-based research study to gauge business leaders’ predictions and preparedness for the future. The Realizing 2030 survey was quite large and wide in scope and reach, extending to 17 countries across the Americas, Asia Pacific and Japan, and Europe, the Middle East and Africa. It also covered more than 10 industries, including financial services, private healthcare and manufacturing.

Finally, the survey gathered 3,800 complete responses from director- and C-suite-level executives in midsized and enterprise organizations, spanning key functions including finance, sales and R&D in addition to IT. That is an impressive number of respondents, so the results should be considered statistically reliable across a number of dimensions.

The Realizing 2030 survey shows a deep division on many issues, but general agreement on others

On many questions, survey participants were divided into two evenly split camps. For example, on the question of whether or not automated systems will free up our time, 50% agreed and 50% disagreed. Dell ascribes this polarization to opposing perspectives on our future: a pessimistic, anxiety-driven fear of human obsolescence versus an optimistic view that technology will solve our greatest social problems.

Forecasts of how our lives will be affected in 2030 were one area where this dichotomy of opinion occurred, such as whether we will absorb and manage information in completely different ways. Forecasts about our work were another, such as whether we’ll be more productive by collaborating more and whether we’ll have more job satisfaction by offloading the tasks we don’t want to do to intelligent machines. A third area of division was the business forecast for 2030, such as whether the more we depend upon technology, the more we’ll have to lose in the event of a cyberattack, and whether we’ll be part of a globally connected, remote workforce.

Although the survey suggested that we are entering the next phase of human-machine partnerships, it also showed that our business leaders are clearly divided on what this means for them, their business and the world at large. They are also struggling with the pace of change as 42% don’t know whether they will be able to compete over the next decade. More importantly for IT vendors, a huge (93%) majority of respondents said that they’re battling some form of barrier to becoming a digital business. Lack of a digital vision and strategy (61%) and lack of workforce readiness (61%) are the two top challenges they cited to transforming digitally.

The good news for vendors is that despite those obstacles, business leaders are unified on the need for digital transformation. Top tips to accelerate digital transformation include gaining employee buy-in (90%) and making customer experience a boardroom concern (88%). Other notable points included that in the next five years businesses plan to triple their investments in advanced AI. In addition, the number of companies investing in VR/AR (virtual reality/augmented reality) will go from 27% to 78%.

This piece has only touched upon the survey results. For more information, visit Dell Technologies.

Dell Technologies’ pragmatic approach to Realize 2030 is through four transformations

Dell Technologies does not invest in survey-based research simply for the fun of it. No, the company is very pragmatic. The Realizing 2030 study complements and advances its knowledge and understanding of what needs to be done to bring about a future where the company plays a prominent role. The research also helps Dell engage in an ongoing and open dialogue with existing and potential customers, demonstrating the depth of its creative and innovative thinking while simultaneously listening to what those companies have to say. In other words, the company strives for a dynamic, two-way dialogue.

As part of that process, Dell Technologies can bring to the table its understanding and product/service focus on transforming areas of interest to every organization. The company’s goal is to put itself in the best position to act as the IT infrastructure company that best enables businesses to transform themselves across:

  • Workforce — leverage IT solutions to enhance employee productivity, such as through more mobility and more connectedness
  • Digital — deploy innovative technologies to further the human-machine partnership, such as AI and VR/AR as appropriate
  • IT — build a highly scalable infrastructure that uses a software-defined architecture to change/adjust pools of hardware assets dynamically as needs change
  • Security — build a security infrastructure that is resilient, adaptable and unified

Successful transformations will impact all four areas since they are interwoven, not linear.

Mesabi musings

Dell Technologies should be commended for its sponsorship of the Realizing 2030 research, as well as for opening an ongoing dialogue with customers about the results. Although the survey shows that business leaders are deeply divided into pessimistic and optimistic camps regarding the impact of future technology on our life, work, and business, they are united on the need to move forward. The future may not be “ours to see,” but business leaders don’t plan to stand idly by without trying to transform their organizations for the better. And that is a good thing.