2019 IT trend: Enterprises will begin closing their data centers

Is the enterprise data center dying? How dead is it?

Having worked as an IT architect for much of my life, I find this a surprising question to be asking. In truth, the data center within a single enterprise has been yielding to leveraged, multitenant data centers for quite some time. The leverage that can be attained from power, connectivity, larger administration ratios, and high degrees of automation in a (public) cloud data center makes a single-tenant data center seem like bad business. More recently, cloud providers have been standing up massive capacity that can be used at very low prices, and the tools to secure and connect to it have matured to the point that enterprises are no longer afraid to use this “cheap” capacity.

Moving domains of data interaction

Most organizations have been moving workloads to virtual machines (VMs) for years, so leveraged hardware within the data center has become the norm. Most of those VM workloads are just as easily run on a public cloud. One issue is that these workloads talk to each other inside the data center in volumes that are only limited by the core switches. Application architects and designers rarely worry about the size and speed of service calls.

However, when we start to geographically spread workloads out to Azure or AWS, we need to pay far more attention to the possible effects of latency. It becomes a question of moving domains of data interaction one at a time. In other words, find the right groupings of workloads and data to move together to a cloud zone.

Visual modeling techniques that show the frequency and size of interactions let you see the “thin threads” and the “trunks and roots.” Once you find the right grouping of services and data to move together, the resulting performance should be similar to that of a single data center.
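The grouping idea can be sketched in a few lines of Python: treat services as nodes, keep only the heavy-traffic edges (the “trunks and roots”), and take the connected components as candidate migration groups. The service names and traffic volumes below are entirely hypothetical, and real tooling would of course weigh latency sensitivity as well as volume.

```python
from collections import defaultdict

# (service_a, service_b) -> average MB exchanged per minute (made-up numbers)
traffic = {
    ("booking", "inventory"): 900,
    ("booking", "pricing"): 650,
    ("pricing", "inventory"): 700,
    ("booking", "loyalty"): 4,      # a "thin thread": tolerable across a WAN link
    ("loyalty", "crm"): 800,
    ("crm", "email"): 550,
}

HEAVY = 100  # MB/min above which two services should stay in the same cloud zone

# Adjacency list containing only the heavy edges
adj = defaultdict(set)
for (a, b), mb_per_min in traffic.items():
    if mb_per_min >= HEAVY:
        adj[a].add(b)
        adj[b].add(a)

def components(adj):
    """Connected components over the heavy edges = groups to move together."""
    seen, groups = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            group.add(n)
            stack.extend(adj[n] - seen)
        groups.append(group)
    return groups

for group in components(adj):
    print(sorted(group))
```

With these numbers the booking/pricing/inventory trio forms one group and loyalty/crm/email another; only the thin booking-to-loyalty thread is cut, so each group can move to its own cloud zone without chatty cross-zone calls.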

A bigger issue is the parts that don’t move well, such as legacy mainframes. Airlines, credit card processors, banks and hotels still operate IBM z/TPF and z/OS mainframes, platforms that date back to the 1960s. Having seen these my entire career and having worked on countless projects to modernize or eliminate them, I get how “sticky” they are.

The easy one: Inbound traffic

I won’t belabor the question of why these systems are still there, but rather focus on how to deal with the latency requirements of being anchored to them. First, the easy one: inbound traffic to the mainframes, I contend, is not an issue for cloud-based front ends at all. These systems were designed for users connecting over links as slow as 9.6 kbit/s, including satellite links, so the inbound messages are quite small and the responses are a single screen of text. Moving these front ends to cloud providers is an easy use case. In a way, these front ends were built like services.

The dilemma: Outbound traffic

More challenging is the back-end outbound traffic. As other technologies have become available, there have been significant efforts to send transaction events out of these mainframe systems to enable external extensions and data replication. These feeds must move extremely large volumes while absorbing perhaps 50 to 100 milliseconds of additional latency. That can be a challenge, and it may warrant refactoring or even rewriting the downstream systems to deal with latency more efficiently.
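A back-of-envelope calculation shows why the added latency forces refactoring. The event rate, round-trip time, and batch size below are assumptions chosen for illustration; the point is the gap between a serial, ack-per-event feed and a batched one.

```python
# Hypothetical mainframe replication feed moved across a WAN to the cloud.
events_per_sec_needed = 5_000   # assumed outbound transaction-event rate
added_rtt_s = 0.075             # 75 ms of extra round-trip latency (mid-range)

# Serial design (send one event, wait for the ack, send the next):
# throughput is capped at one event per round trip.
serial_throughput = 1 / added_rtt_s
print(f"serial:  {serial_throughput:.0f} events/s")

# Batched/pipelined design: keep enough events in flight to cover the latency.
batch_size = 500
batched_throughput = batch_size / added_rtt_s
print(f"batched: {batched_throughput:.0f} events/s")
```

At 75 ms per round trip, the serial design tops out around 13 events per second, hopelessly short of the assumed 5,000, while batching 500 events per round trip clears it easily. That is the kind of restructuring the downstream systems need in order to tolerate the move.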

Thus the dilemma: cloud projects have always focused on the front-end systems. Because of the customer-centric drivers in digital transformation, any system that interacts with the customer is easily justified for migration to the cloud (along with the development budget to do so), and those systems are also the easiest to “lift and shift.” Conversely, the back-end systems that genuinely need to be migrated to the cloud remain the lower priority. This will change over time. In the meantime, the workaround is to use cloud-ready frameworks within the data center, such as Red Hat OpenShift and Pivotal Cloud Foundry, to allow refactoring and redevelopment “in place,” with disaster recovery or scale-up in the cloud.

More lift and shift

So, yes, the enterprise data center is dying. The question is when it will die.

Over the next few years, the hybrid data center with direct channels to cloud providers will be the norm. The most common targets will be disaster recovery systems in the cloud, along with a great number of new “built for cloud” applications.

As organizations discover that their huge investments in power infrastructure, networking, real estate, and staff are untenable, the move will accelerate. I do see a lot of “lift and shift” in 2019, tipping the scales further toward cloud; Ryanair’s move to AWS is an example. Over the next three to five years, non-cloud private data centers will become virtual compartments in various clouds and be shut down, except for the mainframe workloads. Those mainframe workloads will migrate to specialized, heavily leveraged data centers until they go away (someday).

John Tsucalas is DXC Technology’s account chief technologist at American Airlines, leads several key airline industry projects and is a Distinguished Engineer. For nearly a decade John has been an industry thought leader in cloud and service-based architectures. He pioneered and evolved the use of service-oriented architecture and cloud-based approaches in the airline industry. His work became the foundation for much of DXC Technology’s transportation and hospitality digital transformation. Connect with John on LinkedIn.
