
IBM Continues to Think Ahead Clearly

IBM recently concluded its first IBM THINK conference in Las Vegas. In THINK, IBM combined several former events into one comprehensive conference that covered the breadth and depth of the entire corporation. Although its roughly 40,000 attendees could explore particular products or services in depth in a plethora of educational sessions or at the huge exhibition hall, THINK was, in a sense, a coming-out party that showed how IBM is reinventing itself for what is being called the “data-driven” era.

What turns the future into the era of data? The Economist has stated that the world’s most valuable resource is no longer oil, but data, and supported this mammoth assertion in a seminal May 6, 2017 article entitled “Data is giving rise to a new economy.” https://www.economist.com/news/briefing/21721634-how-it-shaping-up-data-giving-rise-new-economy

In other words, both IBM and The Economist are responding to the same broader business and societal shifts.

Chairman’s Address

Ginni Rometty, IBM’s Chairman, President, and CEO, gave the overarching keynote presentation “Putting Smart to Work” in the Mandalay Bay Events Center to a more-than-packed house (12,000 capacity).

She pointed out that business has been impacted by rapidly scaling technologies. The first example she cited was Moore’s Law, which has driven exponential growth in computing power for the last 50 years. The second was Metcalfe’s Law, which explains the rationale for the exponential growth of networking. Rometty then proposed that there needs to be a new law for the data-driven era in which exponential learning is built into every process. Since this is related to the rise of artificial intelligence (AI), she suggested that the new law be called “Watson’s Law” (after IBM’s Watson platform).

We recall that Moore’s Law (named after Intel co-founder Gordon Moore) says that the density of transistors on an integrated circuit doubles roughly every two years. As Moore himself stated last year, this concept is running out of steam due to limits imposed by the laws of physics and economic concerns.

Now, Metcalfe’s Law (named after Robert Metcalfe, although George Gilder should also be recognized) states that the value of a telecommunications network is proportional to the square of the number of connected users of the system. This network-effects scaling law is still robust, vibrant, and growing. In fact, Rometty’s guest later in her keynote, Lowell C. McAdam, Chairman and CEO of Verizon Communications, asserted that 5G (fifth-generation wireless systems) will usher in a fourth industrial revolution.
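
For readers who want to see the scaling spelled out, the sketch below is a simple illustration (mine, not anything presented at THINK) of the two laws: Moore’s Law as a doubling of transistor density every two years, and Metcalfe’s Law as network value growing with the number of possible pairwise connections, roughly the square of the number of users.

```python
# Illustrative only: rough formulas behind the two scaling laws discussed above.

def moore_density(initial_density: float, years: float) -> float:
    """Transistor density after `years`, assuming a doubling every two years."""
    return initial_density * 2 ** (years / 2)

def metcalfe_value(users: int) -> int:
    """Possible pairwise connections among `users` nodes, which grows
    roughly as the square of the number of users."""
    return users * (users - 1) // 2

if __name__ == "__main__":
    print(moore_density(1.0, 10))   # ~32x density after a decade
    print(metcalfe_value(1_000))    # 499,500 possible connections
```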

The cherry on top is the third law, which relates to the exponential scaling of learning and knowledge through the use of AI. Rometty’s proposed name of Watson for this law may not be acceptable to competitors (although if it is seen as generic rather than tied to IBM, it might not be a real problem). It is a lot simpler to use, though, than naming the law in honor of one of the AI pioneers, such as Edward Feigenbaum or Marvin Minsky, since Watson is more widely known.

Rometty also pushed the idea that business, society, and IBM itself all have to be reinvented in the wake of the third law, which comes on top of the first two.

  • Business — organizations will need to learn exponentially; every process will be affected by having learning put into it; the result will be man and machine working together, not machines simply replacing human workers.
  • Society — 100% of jobs will change; among other things, this will result in the need for midcareer retraining.
  • IBM — the company needs to continue to deliver trust and security to its customers. In the data-driven era, existing enterprises have collected large amounts of data, so they can be incumbent disruptors in a time of change. In this regard, IBM needs to support them through the use of technologies, such as blockchain and the Watson platform.

The idea of man + machine, rather than machine replacing man, is a positive one, and we can only hope that Rometty is correct. As one of my fellow analysts noted, he would bet the under on the claim that 100% of jobs will change. Still, even if that claim is hyperbole, it would be best if all of us continued to examine what is likely to happen through open and reflective dialogue and analysis.

Now, as to incumbent disruption: Clayton Christensen is noted for bringing the concept of disruptive innovation to the forefront at a time when the challenge to enterprises came from outside disruptors. However, even though a lot of external datasets have value, enterprises control their own customer and product history information, which they can choose to keep private or share as appropriate with other parties. Thus, disruption may now very well lean toward incumbents doing the disrupting. This should play very well into IBM’s wheelhouse, as the company’s main focus has always been on dealing with established enterprises.

Modern Infrastructure Reiterates the Chairman’s Theses

In the expo area attached to the THINK conference, IBM divided its technologies into four campuses: modern infrastructure, business and AI, cloud and data, and security and resiliency. As an IT infrastructure analyst, I focused on demos and presentations in the modern infrastructure campus.

In his presentation “Driven by Data,” Ed Walsh, General Manager, IBM Storage & Software Defined Infrastructure, focused on rethinking data infrastructure to accelerate innovation and competitiveness in the data-driven, multi-cloud world. His guest, Inderpal Bhandari, IBM’s Global Chief Data Officer, discussed a cognitive enterprise blueprint in which those who can harness the power of their information assets will have a competitive advantage in the era of data. As part of the process of reinventing itself, IBM plans to walk the walk and deploy such capabilities as AI and multi-cloud internally. As a case study, the company will start sharing its results perhaps as soon as May of this year.

In his presentation “Storage Innovation to Monetize Your Data,” Eric Herzog, CMO & Vice President, Worldwide Channels, IBM Storage and Software Defined Infrastructure, focused on how the company’s broad storage portfolio helps customers today, illustrating with a number of customer examples. For example, ZE PowerGroup Inc. provides a data and analytics platform that enables its clients to outmaneuver their competitors, so speed is of the essence. With the help of IBM storage solutions, the company was able to increase its client base by 30% while at the same time reducing its costs by 25%.

But a discussion of IBM’s modern infrastructure would not be complete without at least touching upon its new POWER9 processors and related server architecture. In his presentation “Accelerating Your Future with POWER9,” Bob Picciano, Senior Vice President, IBM Cognitive Systems, pointed out that the value of computing is changing from automating and scaling processes to scaling knowledge via actionable insight, and that requires reinventing computing for cloud and AI workloads.

Speed matters because time matters. IBM claims a breakthrough in machine learning in which its POWER-based solutions deliver results 46X faster than Google and 3.5X faster than Intel. So it should come as no surprise that at the OpenPOWER Summit, held in parallel with THINK 2018, Google (an OpenPOWER founding member) announced that it has deployed its homegrown POWER9-based “Zaius” platform into its datacenters for production workloads.

Reinvention Will Not Change What IBM Does Well Today

Reinvention is the process of changing so much that the results often appear to be entirely new. Although IBM is in the process of reinventing itself in many ways, it also needs to preserve the best hard-learned lessons of the past that have relevance now and in the future. Some of those lessons include:

  • Continue to THINK clearly — Thomas J. Watson introduced the slogan (i.e., tagline) THINK in December 1911 before International Business Machines received its name to emphasize that “we get paid for working with our heads.” Thinking well never grows old and is even more relevant now that broader forms of artificial intelligence are being coupled with human intelligence in a data-driven world. Naming the conference IBM THINK was a positive reinforcement for the values that IBM has long stood for.
  • Continue to invest in the future — IBM builds its capabilities through significant investments in research and development (R&D), as well as external acquisitions. IBM has always spent heavily on the basic R&D that is necessary to prime the well of innovation and so is one of the worthy successors to Ma Bell in creating technology that transforms both our global economy and society. Yet basic research takes years if not decades and may or may not result in success. Firms that fear failure or are interested only in the short-term bottom line are typically unwilling to take the risks that basic R&D entails, but for over 100 years IBM has proven that approach wrong. AI as demonstrated by Watson is simply the latest success story. The company’s quantum computing developments (via its IBM Q system) represent a technology with a world of promise. It is also one where IBM is willing to accept the risks, challenges, uncertainty, expense, and lengthy development process before the technology starts to bear any potential revenue and profitability fruit.
  • Continue to focus on the customer — Maintaining trust and security for customers was a major theme. However, IBM has long been known for its customer support and satisfaction, so that focus is simply a continuation (or reinforcement) of those traditions.
  • Continue to drive positive relationships — IBM recognizes that it cannot do it all; its support of open source technologies, like Linux and Kubernetes, has been very productive. Its far-reaching partnerships, from Cisco with VersaStack to a blockchain joint venture with Maersk, benefit IBM, its partners, and customers alike.

Mesabi musings

The world’s economy and society in general (including how we spend our time interfacing with the outside world) have been greatly impacted by the first two exponential scaling laws — Moore’s law for computing and Metcalfe’s law for network effects. Although those two laws continue to work their magic, further transformations will come about through the third exponential scaling law — the one that reflects the need to exploit learning and knowledge in the data-driven era.

At THINK 2018, IBM CEO Ginni Rometty pointed out that this will impact business (as in a more advanced man-machine interface) and society (in that all jobs will be affected in one way or another). Understanding this, IBM is reinventing itself to continue maintaining its leadership role as a broad-based information technology vendor.

Now, IBM faces strong competitive challenges on all facets of its product and services portfolio. That pressure includes the likes of Dell Technologies in enterprise IT, Intel on the computing side, Google and Amazon on the cloud side, and a whole host of other companies large and small here, there, and everywhere. That has never daunted IBM in the past and it will not in the future. 

The IBM THINK 2018 conference gave IBM the opportunity to showcase its products and services, along with its best and brightest people, and ideas that will enable it to continue to serve customers well now and in the future. That should come as no surprise. After all, thinking is all about using your head for the benefit of all, and that “head” now includes artificial, as well as human intelligence.

ioFABRIC’s Data Fabric Software Weaves Together the Virtualized Storage Infrastructure

New ideas often take time to percolate into the collective consciousness. So it is with the concept of a “data fabric,” of which ioFABRIC is a chief proponent. A data fabric uses software to create a virtualized storage infrastructure that improves (or even makes possible what was previously difficult at best) IT’s ability to economically set and manage service levels. Those service levels cover capacity, performance, and data protection across not only multiple storage systems at a single site, but also multiple sites or even multiple clouds.

The Status Quo for Storage Infrastructures Is Not Acceptable

Traditionally, IT bought storage on a system-by-system basis as the need arose to support a particular application or data set, or to increase capacity or performance (such as improving latency with an all-flash storage array). While each decision may have been “locally” optimal for particular reasons, over time the overall storage architecture is very likely not “globally” (i.e., wholly and geographically) optimized. For example, one array may run out of storage space while there is still plenty of capacity available in other arrays.

Planning ahead for future capacity and performance needs is difficult in and of itself, and even decisions that were correct at the time may fail as the dynamic requirements of applications and their associated data change. The result is that multiple storage systems tend to become multiple silos of storage where, taken as a whole (in the global, single-site sense), storage is not optimized for capacity or performance.

This is further eroded when one or more copies of the data have to be stored or used on a multi-site or even multi-cloud basis, as one-off decisions to move a particular set of data may not be best cost-wise for all the data under management.

In addition, the location where data is stored can be critical for regulatory reasons, and the storage system where application data was originally created and held may be subject to change. That location information was typically not required in traditional IT environments, since the stored data for a system was presumed to stay with the original system, which never moved from its original location. That situation changes markedly for many companies utilizing public cloud services.

Enter data fabric software as a turbocharged software-defined storage (SDS) solution capable of dealing with those issues.

Introducing ioFABRIC’s Vicinity 3.0 Software

Data fabric software is essentially a hypervisor for storage that manages information as a unified pool across system boundaries in single-sites and across multiple sites and clouds, as necessary.

ioFABRIC’s Vicinity data fabric software creates automated, quality of service (QoS)-driven storage. Administrators define policies for what storage must deliver to the organization in terms of performance, capacity, protection, and cost across the existing storage infrastructure. For example, Vicinity can help increase storage utilization to take advantage of existing capacity without having to add more, as well as get more performance, such as IOPS, out of existing storage where possible.

Naturally, ioFABRIC cannot violate the laws of physics. If enough high-performance storage is not available to meet a latency QoS objective, the two choices are to buy more high-performance storage or change the objective to something that is feasible with the existing storage environment. Vicinity agents also discover all storage media and systems and profile them in terms of their performance, capacity, protection, and cost characteristics. The Vicinity storage fabric then creates QoS-based volumes, which it manages as a single entity.
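
As a rough illustration of what such a QoS policy and profile matching might look like, here is a hypothetical sketch; the field names and structure are invented for this article and are not ioFABRIC’s actual configuration syntax or API.

```python
# Hypothetical sketch of a QoS-style storage policy; field names are invented
# for illustration and are not ioFABRIC's actual configuration format.

gold_policy = {
    "name": "gold-database-volumes",
    "max_latency_ms": 2,              # performance objective
    "min_free_tb": 50,                # capacity objective
    "copies": 2,                      # protection objective
    "max_cost_per_gb_month": 0.08,    # cost ceiling
}

def media_meets_policy(media_profile: dict, policy: dict) -> bool:
    """Check whether a discovered storage device could back a volume under the policy."""
    return (
        media_profile["latency_ms"] <= policy["max_latency_ms"]
        and media_profile["free_tb"] >= policy["min_free_tb"]
        and media_profile["cost_per_gb_month"] <= policy["max_cost_per_gb_month"]
    )
```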

ioFABRIC claims that data in Vicinity environments is always available, always evergreen, and always protected.  With its latest release, Vicinity 3.0, the concept of always-on data availability has been extended from single sites to a multi-site/multi-cloud basis for business continuity. The failure of an entire site will not prevent the data from being available at another site or cloud. Vicinity 3.0 has a self-healing capability that automatically heals and routes around failures across storage and networks.

In addition, ioFABRIC’s “always evergreen” concept means that Vicinity 3.0 supports policy-driven automatic migration of data between sites and clouds. This is very useful as such processes can be major headaches for IT organizations. In addition, Vicinity 3.0 transforms a local storage infrastructure into an on-premises private cloud that can integrate with public clouds to form a hybrid storage cloud. From the always protected perspective, Vicinity 3.0 enables the use of immutable snapshots that cannot be accessed or altered by ransomware, and also creates automatic SnapCopy snapshots for disaster recovery purposes.

Moreover, with Vicinity 3.0, users can use policy management to place data in specific locations based on usage and cost models. ioFABRIC believes that it has a unique method for automatically using least cost media while retaining all requested storage service level agreements (SLAs). The company uses an AI technique called “swarm intelligence” to map cost optimization across the entire data fabric.
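
To make that cost-optimization goal concrete, the following is a much-simplified, greedy stand-in for least-cost placement under service-level constraints; ioFABRIC describes using swarm intelligence for this, so the sketch only illustrates the objective, not the company’s actual method.

```python
# Much-simplified stand-in for least-cost data placement under SLA constraints.
# This greedy selection only illustrates the optimization goal; it is not the
# swarm-intelligence approach ioFABRIC describes.

def place_volume(volume: dict, media_pool: list):
    """Pick the cheapest media that still satisfies the volume's SLA, or None."""
    candidates = [
        m for m in media_pool
        if m["latency_ms"] <= volume["max_latency_ms"]
        and m["free_tb"] >= volume["size_tb"]
    ]
    return min(candidates, key=lambda m: m["cost_per_gb_month"], default=None)
```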

Easily Meeting GDPR Requirements

The European Union’s General Data Protection Regulation (GDPR), which harmonizes data privacy laws across Europe to protect and empower EU citizens’ privacy, will affect more than just European companies. When GDPR enforcement begins on May 25, 2018, it will generally impact companies anywhere that hold data related to EU citizens. In other words, companies can’t play a game of “Where’s Waldo?” with that data. They have to know what data they have and where it can legitimately be stored to be in compliance with GDPR guidelines. If not, they run the risk of investigations and possibly substantial fines.

With ioFABRIC Vicinity, policies can be set specifying geographical locations where specific data is allowed to reside and tailoring data migration protection rules as necessary. That should substantially relieve much of the stress and many of the complexities of GDPR compliance.
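
As a hypothetical sketch of what such a geographic placement rule might look like (invented names and fields, not Vicinity’s actual policy format):

```python
# Hypothetical geographic placement policy; names and regions are illustrative only.
EU_PERSONAL_DATA_POLICY = {
    "dataset_tag": "eu-personal-data",
    "allowed_regions": {"eu-west", "eu-central"},   # where copies may reside
}

def placement_allowed(target_region: str, policy: dict) -> bool:
    """Reject any migration or copy that would land data outside permitted regions."""
    return target_region in policy["allowed_regions"]
```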

Mesabi musings

Data may be the new oil, but the storage infrastructure in which it resides tends to have been built incrementally over time in a manner such that the overall infrastructure may not be fully or optimally used. Moreover, no one could have foreseen all the changes that would take place, such as the movement to a multi-cloud world that incorporates both an on-premises private cloud and a public cloud, or the need to carefully manage where data is located because of governmental regulations.

Enter ioFABRIC’s Vicinity 3.0 data fabric software, which weaves together a storage infrastructure as a seamless whole and enables QoS-driven automation of storage. With Vicinity 3.0, enterprises can get the most bang for the buck out of their storage resources from performance, capacity, and data protection perspectives, both on premises and in the cloud.

dinCloud Continues to Forge a Path in Hosted Workplaces

The “cloud” continues to manifest itself in a very wide range of incarnations and use cases. Specialty clouds in the form of [whatever]-as-a-service address special purpose needs. For example, Los Angeles-based dinCloud plays in the desktop-as-a-service (DaaS) arena as part of its larger focus on hosted workspaces and cloud infrastructure services.

Hosted Workspaces: Offering VDI as DaaS

In essence, DaaS is a virtual desktop infrastructure (VDI) hosted as a cloud service. DaaS has found its greatest success in small to medium businesses (SMBs), so dinCloud targets the mid-market of, say, 100 to 700 users, where the IT staff is typically very small but the business has many of the same requirements as much larger organizations.

With VDI, a desktop operating system is hosted on a virtual machine (VM) that runs on a centralized server where all processes, applications and data reside and run. The primary benefits for customers are in reduced administrative burdens as trying to upgrade, provision and manage a large number of devices — not only desktops, but other devices such as laptops, tablets and smartphones in a BYOD (bring-your-own-device) world — can be a real headache.

The challenges that face VDI from an IT perspective are maintaining security, avoiding downtime, and the general complexities and high initial costs of VDI purchase and deployment. In contrast to on-premises offerings, a cloud-hosted VDI solution can provide the necessary security, high levels of uptime and greatly reduce complexity, while at the same time providing economic benefits.

A roll-your-own VDI infrastructure also tends to be CAPEX (capital expense) heavy, whereas a DaaS solution contained within a hosted workspaces cloud is OPEX (operating expense) friendly, with a monthly per-user subscription fee model. Organizations can thus easily plan their monthly expenses and alter them to account for unexpected changes in headcount, which is always desirable.
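
A back-of-the-envelope illustration with a made-up per-user price (dinCloud’s actual pricing is not disclosed here) shows how predictable that budgeting becomes:

```python
# Back-of-the-envelope OPEX budgeting; the per-user price is invented for
# illustration and is not dinCloud's actual pricing.
PRICE_PER_USER_PER_MONTH = 50.0   # hypothetical DaaS subscription rate

def monthly_daas_cost(users: int) -> float:
    return users * PRICE_PER_USER_PER_MONTH

print(monthly_daas_cost(300))   # a 300-user mid-market shop -> $15,000 per month
print(monthly_daas_cost(340))   # adding 40 seats changes the bill predictably
```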

In its hosted workspaces model, dinCloud includes not only DaaS per se, but also the data and the applications — most notably Microsoft applications, such as Office 365 (which is also subscription-based). But dinCloud does not stop there, as it wants to further leverage its cloud-based environments to offer more services to its hosted workspaces customers, as well as provide potential services to non-DaaS customers. It does so under the label of cloud infrastructure, including dinServer (hosted virtual server) and dinSQL (SQL database-as-a-service).

Premier BPO acquires dinCloud

This ability to exploit and leverage its cloud-based services infrastructure may very well have been one of the reasons why Premier BPO acquired dinCloud in February 2018 on undisclosed terms. Clarksville, Tennessee-based Premier BPO is an outsourcing firm that provides back-office processing services to businesses, such as B2B (business-to-business) and B2C (business-to-consumer) collections, billing, and employee benefit processing.

Both companies target the same size customers as well as certain verticals, namely transportation and logistics, financial services, and healthcare. However, a major difference between the two is in their go-to-market strategy. Premier BPO uses a direct-sales model without any partners, whereas dinCloud uses a channel model with about 200 value-added resellers (VARs) and managed-service providers (MSPs). This should not be a major issue, however, as the new CEO of dinCloud, Mark Briggs (who is also CEO of Premier BPO), has an extensive channel background.

The announced strategy is for each company to continue within its respective area of specialization, but with plans to provide a broader portfolio over time that evolves from [whatever]-as-a-service to [everything]-as-a-service. This will pose challenges to the combined companies, but don’t count them out when it comes to fully leveraging and further extending Premier BPO’s business process outsourcing expertise in concert with dinCloud’s cloud infrastructure experience.

How dinCloud can compete with the big boys

The broader scope of services and the increased scale should stand dinCloud in good stead when competing with three large companies with DaaS offerings: Amazon WorkSpaces, Citrix XenDesktop, and VMware Horizon Air. These all have good company name recognition, obviously, as well as relatively enormous marketing muscle. However, history has shown that smaller, nimbler competitors that target niche markets can often compete effectively with larger companies, especially in the SMB space. Having more IT services available to it in conjunction with Premier BPO may help dinCloud get into the bidding discussion with more companies.

Mesabi musings

For most companies, interacting both internally and externally through personal computing devices either in a fixed position, such as a literal desktop computer, or through a mobile device, such as a laptop or tablet, is a way of life. However, building, implementing, maintaining and running the necessary IT infrastructure to provide the consistent levels of services to a large pool of users can be a major challenge. That’s especially true for smaller businesses and their nearly non-existent IT staff. Enter dinCloud with DaaS that eases that burden while at the same time providing the required service levels. Plus, as part of Premier BPO, dinCloud should be able to offer even more in the way of services. Small to mid-sized companies should pay close attention.

Dell Technologies Surveys the Digital 2030 Future

As it has for past future-focused studies, Dell teamed up with the well-respected Institute for the Future (IFTF) to forecast how emerging technologies — notably artificial intelligence (AI) and the Internet of Things (IoT) — may change the way we live and work by 2030.

To extend that work, Dell Technologies commissioned Vanson Bourne, an independent UK research firm, to conduct a survey-based research study to gauge business leaders’ predictions and preparedness for the future. The Realizing 2030 survey was quite large and wide in scope and reach, extending to 17 countries across the Americas, Asia Pacific and Japan, and Europe, the Middle East, and Africa. In addition, more than 10 industries, including financial services, private healthcare, and manufacturing, were covered.

Finally, the survey had 3,800 complete responses from director-level and C-suite executives in midsized and enterprise organizations involved in key functions, including finance, sales, and R&D in addition to IT. That is an impressive number of respondents, and the results should thus be statistically reliable across a number of dimensions.
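
As a rough sanity check on that reliability claim (my own back-of-the-envelope calculation, not part of the study), a simple random sample of 3,800 respondents implies a worst-case margin of error of roughly ±1.6 percentage points at a 95% confidence level, although sub-group breakdowns by country or industry will be wider.

```python
# Back-of-the-envelope margin of error for a surveyed proportion at 95% confidence,
# assuming simple random sampling (a simplification of real survey methodology).
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case (p = 0.5) margin of error for a sample of size n."""
    return z * sqrt(p * (1 - p) / n)

print(f"{margin_of_error(3800):.3f}")  # ~0.016, i.e., about +/-1.6 percentage points
```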

The Realizing 2030 survey shows a deep division on many issues, but general agreement on others

On many questions, survey participants were divided into two evenly split camps. For example, on the question of whether or not automated systems will free up our time, 50% agreed and 50% disagreed. Dell ascribes this polarization to opposing perspectives on our future: a pessimistic, anxiety-driven fear of human obsolescence and an optimistic view that technology will solve our greatest social problems.

Forecasts of how our lives will be impacted in 2030 were one of the areas where this dichotomy of opinion occurred, such as whether we will absorb and manage information in completely different ways. Forecasts about our work were another area, such as whether we will be more productive by collaborating more and whether we will have more job satisfaction by offloading the tasks we don’t want to do to intelligent machines. A third area where the split occurred was the business forecast for 2030, such as whether the more we depend upon technology, the more we will have to lose in the event of a cyberattack, and whether we will be part of a globally connected, remote workforce.

Although the survey suggested that we are entering the next phase of human-machine partnerships, it also showed that our business leaders are clearly divided on what this means for them, their business and the world at large. They are also struggling with the pace of change as 42% don’t know whether they will be able to compete over the next decade. More importantly for IT vendors, a huge (93%) majority of respondents said that they’re battling some form of barrier to becoming a digital business. Lack of a digital vision and strategy (61%) and lack of workforce readiness (61%) are the two top challenges they cited to transforming digitally.

The good news for vendors is that despite those obstacles, business leaders are unified on the need for digital transformation. Top tips to accelerate digital transformation include gaining employee buy-in (90%) and making customer experience a boardroom concern (88%). Other notable points: over the next five years, businesses plan to triple their investments in advanced AI, and the number of companies investing in VR/AR (virtual reality/augmented reality) will grow from 27% to 78%.

This piece has only touched upon the survey results. For more information, visit Dell Technologies at: https://www.delltechnologies.com/en-us/perspectives/realizing-2030.htm

Dell Technologies’ pragmatic approach to Realizing 2030 is through four transformations

Dell Technologies does not invest in survey-based research simply for the fun of it. No, the company is very pragmatic. The Realizing 2030 study complements and advances its knowledge and understanding of what needs to be done to bring about a future in which the company plays a prominent role. The research also helps Dell engage in an ongoing and open dialogue with existing and potential customers, showing the depth of its creative and innovative thinking while simultaneously listening to what those companies have to say. In other words, the company strives for a dynamic, two-way dialogue.

As part of that process, Dell Technologies can bring to the table its understanding and product/service focus on transforming areas of interest to every organization. The company’s goal is to put itself in the best position to act as the IT infrastructure company that best enables businesses to transform themselves across:

  • Workforce — leverage IT solutions to enhance employee productivity, such as through more mobility and more connectedness
  • Digital — deploy innovative technologies to further the human-machine partnership, such as AI and VR/AR as appropriate
  • IT — build a highly scalable infrastructure that uses a software-defined architecture to change/adjust pools of hardware assets dynamically as needs change
  • Security — build a security infrastructure that is resilient, adaptable and unified.

Successful transformations will impact all four areas since they are interwoven, not linear.

Mesabi musings

Dell Technologies should be commended for its sponsorship of the Realizing 2030 research as well as its opening up an ongoing dialogue with customers about the results. Although the survey shows that business leaders are deeply divided into pessimistic and optimistic camps regarding the impact of future technology on our life, work, and business, they are united on the need to move forward. The future may not be “ours to see,” but business leaders don’t plan to stand idly by without trying to transform their organizations for the better. And that is a good thing.

Spectra Logic Stacks Up Well in Tier-1 Storage Offloading

A lot of exciting things are happening in the storage business, notably the strong adoption of all-flash arrays as the replacement for hard disks in active Tier-1 production storage and the move toward software-defined options. Still, most of the explosive growth in data leads to infrequently or never-again-accessed information that needs to be kept for long periods of time in a more cost-effective manner than Tier-1 storage typically delivers.

Those use cases include general backup and archiving, information-intensive vertical industries, such as media and entertainment (M&E), and horizontal information-intensive applications, such as video surveillance. In other words, secondary storage remains very important, and so it is worth considering Spectra Logic, which is a leader in that space.

Spectra Logic targets secondary storage in a number of ways and has done so successfully for nearly 40 years. It would be hard to dispute its claim to be the “setter of the standard” for Tier-1 storage offloading, especially with over 20,000 storage solutions installed worldwide.

Although Spectra Logic is by no means disk-phobic (as, for example, its Spectra Verde® NAS product proves) and is very strong in the important non-Tier-1 trend of object storage (with its BlackPearl® Converged Storage System), its primary business remains tape automation, especially at the high end. In this regard, the company just announced the Spectra T950v Tape Library, which offers the same enterprise-class characteristics as the existing T950 library but at a lower price. But our focus today will be on another new product, the Spectra Stack Tape Library.

The Spectra Stack Tape Library

Spectra Logic’s stackable tape library fits within part or all of a standard rack: 42U (73.5 inches) high by 19 inches wide by 1 meter (39.375 inches) deep. This is in contrast to some other stackable tape libraries that are too deep to fit in a standard rack.

The Spectra Stack Tape Library starts off with a minimum of a 6U-high control module. To this, up to six 6U-high expansion modules can be added (i.e., stacked up) as needed. The advantages of the stackable approach are ease of use: the library is user installable, scalable, and serviceable (a very useful customer self-management approach). Plus, the user can start with a minimal investment and grow the environment as needed.

The supposed disadvantage of a stackable tape library is scalability, but the Spectra Stack Tape Library can grow from a minimum of about 120 TB compressed up to 6.7 PB native (16.75 PB compressed) with a maximum of 560 LTO-8 tapes. This is a remarkable range that can accommodate numerous requirements and use cases, from standard IT backup and archiving to bulk storage requirements, such as video surveillance. In addition, an organization that outgrows the storage capacity offered by the Spectra Stack can “TranScale” to a larger Spectra library, enabling the media and drives from the Spectra Stack to be installed in one of the enterprise libraries (Spectra T950v, T950, or TFinity ExaScale) to provide virtually limitless storage while preserving the initial drive and media investment made in the Spectra Stack library.
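
For those who want to check the math, the arithmetic below uses the published LTO-8 cartridge capacities (12 TB native, 30 TB at the assumed 2.5:1 compression ratio); the small differences from the quoted figures come down to rounding.

```python
# Quick arithmetic behind the quoted maximums, using published LTO-8 capacities
# (12 TB native per cartridge, 30 TB at the assumed 2.5:1 compression ratio).
LTO8_NATIVE_TB = 12
LTO8_COMPRESSED_TB = 30
MAX_SLOTS = 560  # fully stacked Spectra Stack configuration

print(MAX_SLOTS * LTO8_NATIVE_TB / 1000)      # 6.72 -> ~6.7 PB native
print(MAX_SLOTS * LTO8_COMPRESSED_TB / 1000)  # 16.8 -> ~16.8 PB compressed

# Rack math: one 6U control module plus up to six 6U expansion modules
print(6 + 6 * 6)  # 42U, i.e., a full standard rack
```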

Whatever Spectra Logic does tends to be creative and innovative, not simply a copy of what competitors are doing. Consider how often the notion of “flexibility” is overused. In the case of investment protection, flexibility is exactly what Spectra Logic delivers. The Spectra Stack Tape Library uses Linear Tape Open (LTO)-compliant tape drives and tape media that follow the specifications of the LTO Consortium. Tape drives come in half-height (1U) and full-height (2U) versions, and each module in the Spectra Stack Library can use either.

This ability to mix is something that not all competitors offer. Moreover, the system can support LTO-5 through LTO-8 drives and LTO-3 to LTO-8 tape media. What this means is that a user does not necessarily have to buy new drives (a substantial cost in any tape library) or tape media, but can simply retire an older non-Spectra Logic tape library regardless of the height or generation (within reason) of the tape drives.

But the company does not stop there. Spectra Logic has long recognized the importance of integrated library management software and provided it to its larger tape libraries in Spectra Enterprise BlueScale. BlueVision, which is adapted from BlueScale, provides those same management features for the Spectra Stack Tape Library.

Users can access the library either through a color touchscreen or through remote web access. The library can be divided into up to 20 partitions (where each partition acts as if it were a separate library, which means that a different software application — say backup, archiving, or object storage — can have its own partition). Tape encryption can be defined as appropriate. In addition, BlueVision provides library and tape drive diagnostics. For media, it provides an MLM (Media Lifecycle Management) feature that is exclusive to Spectra Logic and is available with the use of Spectra certified media.

Other differentiators from competitors’ stackable libraries include the fact that the Spectra Stack Tape Library is designed for a 100% duty cycle (i.e., 24x7x365) and can work with the company’s BlackPearl product, which enables object storage tape (and object storage on tape can be a big deal because of its ability to scale very cost-effectively).

Mesabi musings

As always, it has to be reiterated that tape is not dead. Nor is innovation in tape libraries a thing of the past. Spectra Logic’s new Spectra Stack Tape Library’s scalability, investment protection flexibility, and customer self-management should be welcomed by those who already use tape and want an upgrade, those who may not currently use tape libraries, but need an inexpensive option for long-term storage of humongous amounts of data, and those who need to start off small and grow incrementally.

Secondary storage may not be in the limelight, but it deserves more attention because of its great ongoing importance. That point highlights why and how Spectra Logic has long prospered by focusing on Tier-1 storage offloads.