In 2006, AWS launched EC2. Forward-thinking application developers soon saw it as a way to quickly deploy infrastructure and automate its scaling. Then came open source IaaS platforms like Cloud.com and Eucalyptus. Some brave souls ventured out to implement private and public clouds offering basic AWS-like services for internal and external consumption, organizations like Tata Communications and Korea Telecom (I worked with both of these customers).
When OpenStack entered the scene in 2010—largely as an antidote to the perceived shortcomings of the other open source cloud projects—people were skeptical at first. Many derided the entire exercise as misguided or wrongheaded. Open source “science projects” worked fine on laptops and in controlled settings but failed in the data center. Still, the community grew, developers persevered, and soon the big boys joined the party, granting legitimacy to OpenStack.
Today, with the upcoming 13th release of the software, code-named Mitaka, OpenStack is mature, stable, and well on its way to becoming THE choice for enterprises implementing agile private cloud infrastructures.
Open Infrastructure 1.0
The story of Cloud.com, Eucalyptus, and OpenStack is the story of Open Infrastructure 1.0—the basic, infrastructure-as-a-service resources behind the firewall that power a private cloud. Armed with basic functionality comparable to AWS (compute, block and object storage, basic networking, authentication, etc.), these services were soon enhanced with SDN, software-defined storage, some timid attempts at PaaS, and workload orchestration templates, among many others.
Today, the OpenStack “big tent”—the term for the broad governance structure covering all projects—holds more than 50 active projects, including the seven that make up “Core” OpenStack (Nova compute, Cinder block storage, Swift object storage, Neutron networking, Keystone authentication, Horizon dashboard, and Heat orchestration).
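For readers who have not seen one, the orchestration templates mentioned above are declarative YAML documents that Heat turns into running resources. A minimal illustrative sketch (the parameter names and defaults here are our own, not from any particular deployment):

```yaml
heat_template_version: 2015-10-15

description: Minimal example that boots a single Nova server

parameters:
  image:
    type: string
    description: Glance image name or ID
  flavor:
    type: string
    default: m1.small

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }

outputs:
  server_ip:
    description: First IP address of the server
    value: { get_attr: [server, first_address] }
```

A template like this would typically be launched with something along the lines of `openstack stack create -t server.yaml --parameter image=<image-name> my-stack`, after which Heat owns the lifecycle of the resources it created.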
The team at Solinea helped enterprises implement some of the first Open Infrastructure 1.0 platforms. While these clouds leveraged some legacy hardware investments and were behind the firewall, they did not necessarily touch legacy infrastructure, organizations, and processes. These early cloud projects were governed under a philosophy that essentially said, “It’s infrastructure. We have a set of processes that have worked for 20 years. Let’s force fit the cloud into those processes.”
When cloud was new, processes generally remained legacy.
Some of these enterprises implemented successful clouds, among them a top-five automaker, a leading research institute, and a leading global network provider. Solinea worked with all of them.
But we soon saw that something was missing—the benefits of deploying open infrastructure were not as dramatic as we’d all anticipated. Where were the big cost savings? Why wasn’t speed to market for new apps faster?
Provisioning times for infrastructure came down, but not by much. Existing operations teams had a difficult time keeping the cloud running smoothly; outside help was needed. Application workloads were moving onto the new platforms, albeit slowly. The expected deluge of demand did not materialize. Perhaps most concerning was that the development, test, and operations teams were still as disjointed and uncommunicative as before: write code, throw it over the fence to test, send it back with a bug report, with no empowerment on the operations side.
Everyone was still executing the same way they were before. Customers were deploying cloud as a technology. #Fail.
Then, sometime in late 2013, things started to change. “DevOps” was entering the vernacular of our prospects and customers; application migration to the cloud started to become a primary driver; cost was no longer the driving factor, replaced by speed and agility. People started looking beyond the technology: reconfiguring processes, retraining operations teams, restructuring their siloed organizations, and incubating cross-functional cloud teams.
We began to engage with customers to address these challenges. In effect, they needed to go beyond OpenStack as a technology. We expanded our team of smart engineers and architects—pros who had been at the forefront of the DevOps movement and who knew how to architect massively scalable infrastructures, both private and hybrid. In short, we built a team that understood application architectures for cloud and knew how to manage large cross-functional programs.
These changes put Solinea at the forefront of creating what we call Open Infrastructure 2.0. The definition is simple; I gave it at a high level in my prior blog post:

Open Infrastructure 2.0 unlocks the agility, efficiency, and cost advantages of open infrastructure.

Now we can expand the definition to a more granular level.
As one of our customers likes to say: “If the infrastructure is the highway and applications are the cars, we need to get more cars on the highway, faster, to justify the investment and achieve our agility objectives.”
Making the Leap to Open Infrastructure 2.0
We are well on our way to helping enterprises make the leap from Open Infrastructure 1.0 to 2.0. We have worked, and are working, with companies like Deutsche Boerse, Yahoo! Japan America, and a leading US media and entertainment company, both to architect and enable OpenStack clouds AND to ensure that CI/CD automation, containerization of applications, microservices, and the orchestration of these services all happen in a new, frictionless environment: one where barriers between silos dissolve, processes change, and skills develop, all driven by the relentless pursuit of business and technology agility and operational efficiency—leading to more “cars on the highway.”
Of course, change takes time, and this is mostly about cultural change after all (see Seth Fox’s excellent blog post on process change).