OpenStack – Solinea https://solinea.com

Asian Telco Dials in App Performance with CI Pipeline for Deploying Cloud https://solinea.com/resources/case-studies/asian-telco-dials-app-performance-ci-pipeline-deploying-cloud Wed, 29 Mar 2017

Solinea built a container-based development platform to boost application performance for a large telecommunications firm.

Download the case study


See how Solinea is helping this leading Asian telco through digital transformation by designing and integrating end-to-end CI/CD for automated deployment and testing, improving overall infrastructure stability, and simplifying management.

Read this case study to learn how Solinea helped:

  • Reduce time to market and error rates by integrating end-to-end CI/CD for automated deployment and testing.
  • Boost infrastructure resilience and scalability while simplifying management by implementing cloud management tools (OpenStack on Kubernetes).
  • Ensure higher quality product plus end user comfort and adoption with best practices training regarding code acceptance and testing.
  • Engage and integrate client with several Open Source community projects to boost quality and drive new feature sets.

Unitas Global and Solinea Partner to Offer a Holistic OpenStack Hybrid Cloud Solution for Enterprises https://solinea.com/news-events/unitas-global-solinea-partner-offer-holistic-openstack-hybrid-cloud-solution-enterprises Tue, 25 Oct 2016


Offering Eliminates OpenStack Implementation Challenges by Combining Fully Managed Cloud Infrastructure with Enterprise Deployment, Implementation and Adoption Services

LOS ANGELES, CA–(Marketwired – Oct 25, 2016) – Unitas Global, the leading enterprise cloud solutions provider, today announces a new partnership with Solinea, a provider of open infrastructure solutions for deployment and adoption of production clouds, helping enterprises take advantage of the benefits OpenStack provides without the associated challenges. The partnership combines Unitas’ custom cloud architecture, management expertise and global operations with Solinea’s integration and adoption solutions, enabling enterprises to deploy and fully adopt more modern, agile cloud infrastructure and processes.

Unitas Global designs, deploys and operates cloud services for enterprise organizations looking to take advantage of secure, OpenStack hybrid cloud computing without traditional vendor lock-in and the challenges associated with building and operating production-grade cloud environments. Unitas also offers customers 24x7x365 support and full visibility into the health and performance of their OpenStack environment through the Unitas Atlas™ unified monitoring platform and Cloud Management Center.

“Many enterprises are considering OpenStack solutions to take advantage of enhanced agility, flexibility and lack of vendor lock-in, but often find difficulty in implementing the strategy to drive successful adoption of these cloud environments,” comments Grant Kirkwood, Founder and CTO of Unitas Global. “By partnering with Solinea, we are able to offer enterprise businesses not only the infrastructure, guidance and ongoing management of OpenStack clouds, but also the post-deployment integration, training, and culture transformation to ensure seamless adoption within the organization.”

An ideal complement to Unitas’ solutions, Solinea’s adoption services include role and process definition, process and tools implementations, operations runbooks, training and mentorship. These services extend beyond the technology implementation to ensure that the cloud is fully integrated and adopted by the enterprise, while addressing customers’ security, compliance and regulatory needs as well as challenging operating requirements. The joint offering from Unitas and Solinea helps enterprises build step-by-step modernization plans to evolve from legacy infrastructure and processes to modern cloud and open source infrastructure driven by DevOps and Agile processes.

“Partnering with Unitas Global is the right decision for our current and future customers looking to architect and deploy open, vendor-agnostic cloud solutions,” says Solinea CEO, Francesco Paola. “As we work with leading global enterprises and service providers to architect and deploy cloud, container and microservices solutions at scale that drive agility into the organization through DevOps adoption and cut infrastructure costs, it is important for us to work with a leading enterprise cloud solution provider that understands our customers’ needs, has an exceptional team and delivers solid results. Unitas Global is a great choice.”

The Unitas Global and Solinea teams will discuss the partnership and their respective offerings at OpenStack Summit, taking place October 25-28 in Barcelona, Spain. Mr. Kirkwood will also be giving two presentations at the OpenStack Summit. On Tuesday, October 25, he will present one way to learn OpenStack from the inside out, in a session titled “From How-To to POC to Production: Learning by Building.” On Wednesday, October 26, Mr. Kirkwood will present, “OpenStack in the Wild! How to Make OpenStack a Reality in Your Company.” The presentation will arm enterprise IT leaders with tangible action items they can use to successfully deploy and adopt OpenStack within their organizations. Refer to the latest OpenStack Summit schedule for the time and location of these presentations.

To request a meeting with Unitas at the event, please email unitas@imillerpr.com.

To learn more about Unitas Global, visit www.unitasglobal.com.

About Unitas Global

Unitas Global is a leading provider of enterprise cloud solutions. Each solution provides clients with custom, highly secure and dedicated cloud-based IT environments that are easy-to-consume, fully managed and backed by an end-to-end SLA, guaranteeing application uptime. By offloading day-to-day infrastructure operations to Unitas Global, our clients are able to refocus and optimize their internal IT resources toward their business-centric initiatives. Unitas is headquartered in Los Angeles, with clients and locations spanning the globe. For more information, please visit www.unitasglobal.com.

About Solinea

Solinea is the leading software and services company that accelerates adoption of open infrastructure solutions. Through its proven consulting and training offerings, we help enterprises adopt cloud computing solutions and the supporting ecosystems, including Cloud Infrastructure, DevOps & Automation, and Containers & Microservices, driving Application Migration to the cloud. Led by experts who have designed, deployed and operated production open infrastructure at scale for clients worldwide, Solinea is technology agnostic, helping clients select the right infrastructure platforms, design the right agile processes and toolchains, and adopt orchestrated containers in production to achieve their business objectives. For more information, please visit www.solinea.com.

 

 

Solinea Customer Case on Kubernetes.io: How We Architected and Run Kubernetes on OpenStack at Scale at Yahoo! JAPAN https://solinea.com/blog/solinea-kubernetes-io Tue, 25 Oct 2016


Originally posted on kubernetes.io

Editor’s note: today’s post is by the Infrastructure Engineering team at Yahoo! JAPAN, talking about how they run OpenStack on Kubernetes. This post has been translated and edited for context with permission — originally published on the Yahoo! JAPAN engineering blog.


Intro
This post outlines how Yahoo! JAPAN, with help from Google and Solinea, built an automation tool chain for “one-click” code deployment to Kubernetes running on OpenStack. 

We’ll also cover the basic security, networking, storage, and performance needs to ensure production readiness. 

Finally, we will discuss the ecosystem tools used to build the CI/CD pipeline, Kubernetes as a deployment platform on VMs/bare metal, and an overview of Kubernetes architecture to help you architect and deploy your own clusters. 

Preface
Since our company started using OpenStack in 2012, our internal environment has changed quickly. Our initial goal of virtualizing hardware was achieved with OpenStack. However, due to the progress of cloud and container technology, we needed the capability to launch services on various platforms. This post will provide our example of taking applications running on OpenStack and porting them to Kubernetes.

Coding Lifecycle
The goal of this project is to create images for all required platforms from a single application code base and deploy those images onto each platform. For example, when code is changed in the code registry, bare metal images, Docker containers, and VM images are created by CI (continuous integration) tools, pushed into our image registry, and then deployed to each infrastructure platform.

[Figure: CI/CD pipeline]

We use the following products in our CI/CD pipeline:

  • Code registry: GitHub Enterprise
  • CI tools: Jenkins
  • Image registry: Artifactory
  • Bug tracking system: JIRA
  • Bare metal deployment platform: OpenStack Ironic
  • VM deployment platform: OpenStack
  • Container deployment platform: Kubernetes

Image Creation

Each image creation workflow is shown in the diagrams below.

VM Image Creation:

[Figure: VM image creation]

1. Push code to GitHub.
2. GitHub hook triggers the Jenkins master.
3. Launch the job on a Jenkins slave.
4. Check out the Packer repository.
5. Run the service job.
6. The build script executes Packer.
7. Packer starts a VM for OpenStack Glance.
8. Configure the VM and install the required applications.
9. Create a snapshot and register it with Glance.
10. Download the newly created image from Glance.
11. Upload the image to Artifactory.
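The post does not include the build script itself; purely as an illustration, a minimal sketch of steps 6-9 might look like the following (Python, with placeholder names; the real Packer template, base image UUID, and provisioning script are not shown in the post, and OpenStack credentials are assumed to come from the usual OS_* environment variables):

```python
import json
import subprocess

# Hypothetical Packer template using the OpenStack builder: Packer boots a VM
# from a base image in Glance, provisions it, then snapshots the result back
# into Glance.
template = {
    "builders": [{
        "type": "openstack",
        "image_name": "myapp-{{timestamp}}",  # name of the snapshot registered in Glance
        "source_image": "BASE_IMAGE_UUID",    # placeholder: base image UUID in Glance
        "flavor": "m1.small",
        "ssh_username": "centos",
    }],
    "provisioners": [{
        "type": "shell",
        "script": "install_app.sh",           # placeholder: installs required applications
    }],
}

with open("vm-image.json", "w") as f:
    json.dump(template, f, indent=2)

# Packer configures the VM, creates the snapshot, and registers it with Glance.
subprocess.run(["packer", "build", "vm-image.json"], check=True)
```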

 


Bare Metal Image Creation:

[Figure: bare metal image creation]
1. Push code to GitHub.
2. GitHub hook triggers the Jenkins master.
3. Launch the job on a Jenkins slave.
4. Check out the Packer repository.
5. Run the service job.
6. The build script downloads the base bare metal image.
7. The build script executes diskimage-builder with Packer to create the bare metal image.
8. Upload the newly created image to Glance.
9. Upload the image to Artifactory.

Container Image Creation:

[Figure: container image creation]

1. Push code to GitHub.
2. GitHub hook triggers the Jenkins master.
3. Launch the job on a Jenkins slave.
4. Check out the Dockerfile repository.
5. Run the service job.
6. Download the base Docker image from Artifactory.
7. If no Docker image is found in Artifactory, download it from Docker Hub.
8. Execute docker build to create the image.
9. Upload the image to Artifactory.
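As a small illustration of steps 6-9 (a sketch using the Docker SDK for Python; the registry URL and tag are hypothetical, and the base image named in the Dockerfile's FROM line is pulled automatically):

```python
import docker

client = docker.from_env()

# docker build: builds the image from the checked-out Dockerfile repository.
image, build_logs = client.images.build(
    path=".",                                 # checked-out Dockerfile repository
    tag="artifactory.example.com/myapp:1.0",  # hypothetical registry/tag
)

# Push the newly built image to the registry (Artifactory in the post's pipeline).
client.images.push("artifactory.example.com/myapp:1.0")
```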

Platform Architecture

Let’s focus on the container workflow to walk through how we use Kubernetes as a deployment platform. The platform architecture is shown below.
[Figure: platform architecture]

  • Infrastructure Services: OpenStack
  • Container Host: CentOS
  • Container Cluster Manager: Kubernetes
  • Container Networking: Project Calico
  • Container Engine: Docker
  • Container Registry: Artifactory
  • Service Registry: etcd
  • Source Code Management: GitHub Enterprise
  • CI tool: Jenkins
  • Infrastructure Provisioning: Terraform
  • Logging: Fluentd, Elasticsearch, Kibana
  • Metrics: Heapster, InfluxDB, Grafana
  • Service Monitoring: Prometheus

We use CentOS for the Container Host (OpenStack instances) and install Docker, Kubernetes, Calico, etcd, and so on. Of course, it is possible to run various container applications on Kubernetes. In fact, we run OpenStack as one of those applications. That’s right: OpenStack on Kubernetes on OpenStack. We currently have more than 30 OpenStack clusters, which quickly become hard to manage and operate. As such, we wanted to create a simple, base OpenStack cluster to provide the basic functionality needed for Kubernetes and make our OpenStack environment easier to manage.

Kubernetes Architecture

Let me explain the Kubernetes architecture in more detail. The architecture diagram is below.

[Figure: Kubernetes architecture]

  • OpenStack Keystone: Kubernetes authentication and authorization.
  • OpenStack Cinder: external volumes used by Pods (a Pod is a grouping of multiple containers).
  • kube-apiserver: configures and validates objects such as Pods and Services (definitions of access to the services running in containers) through a REST API.
  • kube-scheduler: allocates Pods to each node.
  • kube-controller-manager: performs status management and manages replication controllers.
  • kubelet: runs on each node as an agent and manages Pods.
  • calico: enables inter-Pod connectivity using BGP.
  • kube-proxy: configures iptables NAT rules to provide service IPs and load balancing (ClusterIP).
  • etcd: distributed key-value store holding Kubernetes and Calico state.
  • etcd-proxy: runs on each node and forwards client requests to the etcd cluster.

Tenant Isolation

To enable multi-tenant usage similar to OpenStack’s, we utilize OpenStack Keystone for authentication and authorization.

Authentication

With a Kubernetes plugin, OpenStack Keystone can be used for authentication. By adding Keystone’s auth URL when starting the Kubernetes API server, we can use OpenStack OS_USERNAME and OS_PASSWORD credentials for authentication.

Authorization

We currently use the ABAC (Attribute-Based Access Control) mode of Kubernetes authorization. We worked with a consulting company, Solinea, who helped create a utility that converts OpenStack Keystone user and tenant information into a Kubernetes JSON policy file, mapping Kubernetes ABAC user and namespace information to OpenStack tenants. We then specify that policy file when launching the Kubernetes API server. The utility also creates namespaces from tenant information. These configurations enable Kubernetes to authenticate with OpenStack Keystone and operate in authorized namespaces. A sketch of the conversion appears below.
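The utility itself is not included in the post; purely as an illustration, a minimal sketch of the kind of conversion described might look like the following, assuming the Keystone user/tenant pairs have already been fetched (for example, with python-keystoneclient). A Kubernetes ABAC policy file contains one JSON policy object per line:

```python
import json

# Assumed input: (user, tenant) pairs already fetched from Keystone.
keystone_users = [
    ("alice", "project-a"),
    ("bob", "project-b"),
]

# Emit one ABAC v1beta1 policy object per line, mapping each Keystone user
# to a Kubernetes namespace named after its tenant.
with open("abac-policy.jsonl", "w") as f:
    for user, tenant in keystone_users:
        policy = {
            "apiVersion": "abac.authorization.kubernetes.io/v1beta1",
            "kind": "Policy",
            "spec": {
                "user": user,
                "namespace": tenant,
                "resource": "*",
                "apiGroup": "*",
            },
        }
        f.write(json.dumps(policy) + "\n")
```

The API server would then be launched with this file as its authorization policy, and a namespace created for each tenant.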

Volumes and Data Persistence

Kubernetes provides a “Persistent Volumes” subsystem that serves as persistent storage for Pods. Because Persistent Volumes can be backed by cloud-provider storage, it is possible to use OpenStack cinder-volume by configuring OpenStack as the cloud provider.
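For illustration (a sketch using the Kubernetes Python client; the volume UUID and names are placeholders, and this uses the in-tree cinder volume source of that era), a PersistentVolume backed by an existing Cinder volume could be registered like this:

```python
from kubernetes import client, config

config.load_kube_config()

# Hypothetical PersistentVolume backed by an existing Cinder volume.
pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "cinder-pv"},
    "spec": {
        "capacity": {"storage": "10Gi"},
        "accessModes": ["ReadWriteOnce"],
        "cinder": {
            "volumeID": "CINDER_VOLUME_UUID",  # placeholder: UUID of the Cinder volume
            "fsType": "ext4",
        },
    },
}

client.CoreV1Api().create_persistent_volume(body=pv)
```

A Pod would then claim this storage through an ordinary PersistentVolumeClaim.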

Networking

Flannel and various other networking models exist for Kubernetes; we used Project Calico for this project. Yahoo! JAPAN prefers to build data centers with pure L3 networking, such as redistributed ARP validation or IP CLOS networking, and Project Calico matches this direction. With an overlay model like Flannel, Pod IPs cannot be reached from outside the Kubernetes cluster, but Project Calico makes this possible. We also use Project Calico for the load balancing described later.

Project Calico advertises production IPs via BGP, using BIRD containers (open source routing software) launched on each Kubernetes node. By default, routes are advertised only within the cluster; by peering with routers outside the cluster, it becomes possible to access a Pod from outside the cluster.

[Figure: Calico networking]

External Service Load Balancing

There are multiple choices of external service load balancing (access to services from outside the cluster) for Kubernetes, such as NodePort, LoadBalancer, and Ingress. We could not find a solution that exactly matched our requirements. However, we found one that almost did: broadcasting the ClusterIPs used for internal service load balancing (access to services from inside the cluster) via Project Calico’s BGP, which enables Layer 4 external load balancing from outside the cluster (see the sketch below).
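As a sketch (hypothetical names; the BGP advertisement itself is Calico configuration and is not shown here), the kind of internal ClusterIP Service whose address gets advertised might be created like this with the Kubernetes Python client:

```python
from kubernetes import client, config

config.load_kube_config()

# Hypothetical internal Service; its ClusterIP is what Calico's BGP peering
# would advertise to routers outside the cluster.
svc = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web", "namespace": "default"},
    "spec": {
        "type": "ClusterIP",
        "selector": {"app": "web"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}

client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)
```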

[Figure: external service load balancing]

Service Discovery

Service discovery is possible in Kubernetes by using the SkyDNS add-on. It is provided as a cluster-internal service and is reachable inside the cluster like any ClusterIP; by broadcasting that ClusterIP via BGP, name resolution also works from outside the cluster (an example lookup appears after the figure below). By combining the image creation workflows with Kubernetes, we built the following tool chain, which makes everything from code push to deployment easy.

[Figure: end-to-end workflow from code push to deployment]
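For example (hypothetical service name, assuming the default cluster domain and a resolver pointed at the cluster DNS), resolving a service through SkyDNS is an ordinary DNS lookup:

```python
import socket

# SkyDNS answers for <service>.<namespace>.svc.<cluster-domain>; with the DNS
# ClusterIP advertised over BGP, this also works from outside the cluster.
ip = socket.gethostbyname("web.default.svc.cluster.local")
print(ip)
```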

Summary

In summary, by combining the image creation workflows with Kubernetes, Yahoo! JAPAN, with help from Google and Solinea, successfully built an automated tool chain that makes it easy to go from code push to deployment, while addressing multi-tenancy, authn/authz, storage, networking, service discovery, and the other factors necessary for production deployment. We hope the discussion of the ecosystem tools used to build the CI/CD pipeline, Kubernetes as a deployment platform on VMs/bare metal, and the overview of Kubernetes architecture help you architect and deploy your own clusters. Thank you to all of the people who helped with this project. –Norifumi Matsuya, Hirotaka Ichikawa, Masaharu Miyamoto and Yuta Kinoshita. This post has been translated and edited for context with permission — originally published on the Yahoo! JAPAN engineering blog, where it was one in a series of posts focused on Kubernetes.

YJ America Inc. Partners with Solinea to Implement One of the Largest Production Kubernetes Deployments Outside of Google https://solinea.com/news-events/yj-america-inc-partners-solinea-implement-one-largest-production-kubernetes-deployments-outside-google Mon, 24 Oct 2016


Web-scale Platform Built on 1,000+ Node OpenStack Cluster Demonstrates Open Infrastructure Expertise.

SAN FRANCISCO (Business Wire) // October 24, 2016 // Solinea, a global provider of open infrastructure solutions for deployment and adoption of production clouds, in partnership with ITOCHU Techno-Solutions America, Inc., has deployed a container orchestration platform based on Kubernetes, together with a continuous integration and continuous deployment (CI/CD) set of processes and supporting toolchain, to enable YJ America Inc., a subsidiary of Yahoo! JAPAN, to rapidly deploy applications and services on top of their existing OpenStack cloud platform. The Kubernetes platform and agile CI/CD processes will enable YJ America to achieve its objectives of increased agility (speed to market), efficiency, enhanced customer satisfaction and cost reduction.

Solinea led the architecture, design and implementation of both the container orchestration platform based on Kubernetes and the CI/CD processes and toolchain infrastructure. The latter was crafted using open source solutions including Docker, Packer, Jenkins, jFrog’s Artifactory and GitHub. To solve the challenges of container networking, Solinea incorporated Project Calico into the solution fabric. The resulting orchestration layer, CI/CD toolchain and networking fabric were then integrated into the existing YJ America OpenStack cloud.

“YJ America has very specific objectives in how it expands its capabilities in the US in order to support its customers,” said Norifumi Matsuya, YJ America Inc. VP of Cloud Infrastructure Engineering Division. “One of these objectives is to accelerate our ability to adopt new infrastructure technologies to increase speed to market of new services for our customers, such as big data analytics platforms, in a cost effective manner. Solinea was instrumental in architecting and deploying the resulting DevOps and container orchestration solution, from developing the proof of concept to rolling out the production platform.”

“Solinea’s knowledge of the technologies and expertise at designing the right solution for our needs helped YJ America achieve its objectives in the right timeframe. Through Solinea’s partnership and expertise around open source ecosystem for the hyper scale architecture, YJ America and ITOCHU Techno-Solutions America, Inc. could make this happen. We would like to expand this case study and success to the Japan market with Solinea,” said Hisatomo Tanaka, Director of International Sales and Solution Engineering, ITOCHU Techno-Solutions America, Inc.

“YJ America had very specific targets for the new infrastructure,” said Francesco Paola, Solinea CEO. “It needed to integrate into the existing OpenStack cloud, enable the rapid deployment of new applications and services, scale on demand and it had to be secure. Solinea’s expertise in DevOps, Docker, Kubernetes and OpenStack was the right combination to implement a complex solution in record time.”


About Yahoo! JAPAN

Yahoo Japan Corporation was founded in 1996 and operates Yahoo! JAPAN, one of the largest portal sites in Japan. Offering more than 100 services including search, news and e-commerce via PC and smart devices, Yahoo! JAPAN serves more than 80 million daily unique browsers and 60 billion monthly page views on average. The total number of downloads for iOS and Android applications released by Yahoo Japan Corporation exceeds 270 million. For more information, please visit http://ir.yahoo.co.jp/en/

About Solinea

Solinea is the leading software and services company that accelerates adoption of open infrastructure solutions. Through its proven consulting and training offerings, we help enterprises adopt cloud computing solutions and the supporting ecosystems, including Cloud Infrastructure, DevOps & Automation, and Containers & Microservices, driving Application Migration to the cloud. Led by experts who have designed, deployed and operated production open infrastructure at scale for clients worldwide, Solinea is technology agnostic, helping clients select the right infrastructure platforms, design the right agile processes and toolchains, and adopt orchestrated containers in production to achieve their business objectives. For more information, please visit: www.solinea.com

About ITOCHU Techno-Solutions America, Inc. 

ITOCHU Techno-Solutions America, Inc. is the wholly owned subsidiary of ITOCHU Techno-Solutions Corporation, one of the largest system integrators in Japan. ITOCHU Techno-Solutions Corporation has an extensive track record of bringing US technologies to Japan and other Asian markets and has forged successful business relationships with US-based technology companies.

ITOCHU Techno-Solutions America, Inc. provides international business development and operation capabilities with technology partners and service providers in the US. For more information, please visit: http://www.ctc-america.com/

About Kubernetes 

Kubernetes (commonly referred to as “k8s”) is an open source container cluster manager originally designed by Google and donated to the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. Simply put, it is a project gaining momentum to provide better ways of managing related, distributed components across varied infrastructure. For more information, please visit: www.kubernetes.io

 

Media Contact:

Andreas Ryuta Stenzel

415-513-0703

andreas@solinea.com

Solinea Inc.

Repost: Why Enterprises Want Containers Now — And Why You Should Too https://solinea.com/blog/repost-enterprises-want-containers-now Sat, 03 Sep 2016

Originally posted by The New Stack

 

About Solinea

Solinea services help enterprises build step-by-step modernization plans to evolve from legacy infrastructure and processes to modern cloud and open source infrastructure driven by DevOps and Agile processes.

Better processes and tools equal better customer (and employee) satisfaction, lower IT costs, and easier recruiting, with fewer legacy headaches.


Historically, large organizations have been slow to adopt new technologies. For example, when virtualization became available, it was a massive, quantum leap from the mindset of one physical server, one operating system, one workload. But it took enterprises a decade or more to embrace virtualization.

And virtualization was a relatively easy sell. After all, it offered energy savings, a reduced data center footprint, faster provisioning, increased uptime and more. With virtualization, cost compression was the name of the game, and no one played the game better than VMware.

Later, when the public cloud (AWS, GCE, etc.) and open source private cloud options (OpenStack, CloudStack, etc.) came along, enterprises had to fundamentally change the way they thought about application development and deployment, as well as infrastructure operations. This, too, was a monumental shift, and this, too, took time.

And yet, today, in stark contrast to those two previous examples of plodding progress, enterprises are adopting container technologies at light speed (relatively speaking). Only two years ago enterprises were asking, “What’s this DevOps thing all about?” and now many are asking, “Do we even need VMs?”

That’s a big shift in a short timeframe. Why have containers caught on so quickly? All the conditions are just right for rapid adoption of container technologies:

1. Virtualization Paves the Way

First, the adoption of virtualization itself has facilitated the adoption of container technologies. With virtualization, enterprises started to become accustomed to faster application development and testing time and have seen application deployment times reduced from days or weeks to hours and minutes. Virtualization made possible, for many enterprises, their dramatic IT transformation from waterfall development to agile and DevOps.

Like infants who take their first steps one day and are cruising around the living room the next, enterprises have experienced the benefits of speed and agility, thanks to virtualization; now, there is no stopping their quest for more. Containers offer the next big leap beyond virtualization. You can optimize your server resources further with containers than you can with VMs. Containers are more efficient at maximizing memory and CPU use than VMs running a similar workload, and containers start in milliseconds, as opposed to the minutes it takes for VMs to boot up.

Plus, for app developers, containers deliver “easy” on a platter: containers are portable, consistent environments for development, testing and deployment, enabling developers to build once and deploy anywhere with a single click.

2. Open Source Goes Mainstream

Second, open source technologies are becoming more prevalent in large enterprises. Open source has been making its way into the enterprise at a pretty steady pace. If you go back over 15 years, you would see open source in places like mail relays and web servers. Slowly but surely, the use of open source has migrated into more and more areas of the enterprise.

Some organizations can consume open source from the code base and operate it on their own. There are some clear advantages here, especially if your organization is willing and able to contribute code back to a project. Most organizations, however, consume open source via a standard support and maintenance model, otherwise known as a distribution (or distro, for short). This is very common, and we can point to historical examples ranging from Linux distributions to OpenStack distributions.

Today you will be hard pressed to find a large enterprise that is not using open source for significant parts of their infrastructure. In fact, open source technologies are now recognized and valued by enterprises not only for scalability and avoidance of vendor lock-in but also for quality, adaptability, innovation and feature development. Strength of the development community is also a critical factor. Even the security concerns that some enterprises cited once upon a time have given way to the high value placed on transparency of the code base as well as the software development process.

Because enterprises have now embraced open source (a 2015 Future of Open Source survey reported that 78 percent of respondents were running their businesses on open source software), they are leaning in on Docker and other container technologies at a faster pace than they have leaned in on other technologies in the past.

3. Enterprises Get Cloudy

Third, enterprises have taken the leap to build “cloudy” applications, and this has reduced the intellectual barrier to adopting containers. Not long ago, “Shadow IT” was a big enterprise IT issue; developers secretly stepped out beyond the corporate IT boundaries to find suitable testing grounds for their applications. Enterprises used to be surprised and alarmed when they discovered that their developers were using AWS; now, we’re all surprised if they are not.

As enterprises have embraced the cloud, Shadow IT is going mainstream. Now developers and IT operators are linking arms and asking, “Now that we have cloud, how can we move faster?” You might even say that the cloud has spawned the DevOps mindset: How can I converge development and operational processes to capitalize on the cloud infrastructure that is now available?

These three developments — virtualization, open source and cloud — have transformed the enterprise IT mentality and opened the floodgates to container land.

Should Containers Be in Your Future?

If you are the kind of enterprise that builds your own apps, and these apps are designed to make your business money, containers are one bandwagon you’d be well advised to hop on. Containers are real; the technology has a low barrier for entry, and the economy of running containers at scale will deliver real value.

That said, the answer to “Do we even need VMs?” is yes. Virtual machines will have their place in the enterprise and are not going away in the near term. Not all applications are ready for containers, and yet those applications still make money and add value to your business. So for the time being, we still need VMs to support some of the packaged applications that run our business.

A first project? Why not start by tackling one of the biggest IT constraints enterprises face: developer wait cycles. In most enterprises, developers have to wait inordinately long for everything from getting a laptop to getting a server to test their code. Containers are a great tool for overcoming these wasteful and maddening bottlenecks.

Consider the benefits containers can offer here. With a container framework, developers can test code on their desktops or on shared server hardware. Their deployment code can be used in many locations for testing. The control groups and namespace isolation that are core to container land make this a very reasonable approach for development teams. Not only can developers gain real time savings here, but operators can also achieve a tremendous reduction in idle infrastructure. Plus, if you are developing this way, the leap to test/prod environments is much shorter as well. Provenance and immutability are key tenets of any CI/CD process; this is core to the container framework.
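As a small sketch of that workflow (hypothetical image, source path, and test command), running a test suite in a throwaway container looks like this with the Docker SDK for Python:

```python
import docker

client = docker.from_env()

# Run the test suite in a throwaway container: the same image runs unchanged
# on a laptop, on shared server hardware, or on a CI agent.
output = client.containers.run(
    image="python:3.6",                     # hypothetical build/test image
    command="python -m pytest /src/tests",  # hypothetical test command
    volumes={"/home/dev/myapp": {"bind": "/src", "mode": "ro"}},
    remove=True,                            # container is deleted after the run
)
print(output.decode())
```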

Start with an application that can utilize this new container framework in a sane manner. Find an internally facing application where people can learn by making mistakes and get away with it, without putting external relationships at risk. There are many new constructs from orchestration to operations that will be different on a container platform, so give the whole organization, from the developers to the operators, the time to learn what these new frameworks mean. Find a way for the team to cut their teeth on this stuff before you decide that you should roll out containers to customer facing applications.

Lessons Learned

The rapid adoption of a “new” technology typically comes with numerous toe-stumps. Here are a few boulders to look out for as you journey into container land.

Tooling: Tooling in the container space is, to put it mildly, immature by enterprise standards. The ecosystem has not yet found a unified course and is running in many directions at once. In the month of December, according to GitHub stats, the Kubernetes project alone has seen 592 code commits from 117 contributors. In that same time, Docker had 408 code commits from 97 contributors.

Distributions: As we have seen with open source projects in the past, distributions will form in this space as well. Today, when you are assembling the tools you need to support your container environment, you will be getting components from different sources. A sampling of these is listed here:

  • API Driven Infrastructure Services — AWS, GCE, OpenStack
  • Container Host — CoreOS, RancherOS, CentOS, Ubuntu, RedHat Atomic
  • Container Networking — Project Calico, Flannel, Weave
  • Container Engine — Docker, Rocket (rkt)
  • Container Registry — Docker Trusted Registry, Quay, Artifactory, Nexus (beta at the time of this writing)

Change Management: For most organizations the alignment to a DevOps model is a real change and therefore a real challenge. Getting the operations teams to believe in the magic of containers is a daunting task. These guys have been on the front lines of the enterprise code base for many years and have been key to keeping environments up and running as other technology promises have fallen flat. Container frameworks “tout” new ways to simplify operational life cycles, but, based on their past experiences, most operators will need to see it to believe it. 

 

Are You Ready for Containers?
You’d better be.

If you tell me your enterprise does not build applications that will run in containers, you are looking at the problem from the wrong angle. Perhaps you don’t do it now, but you will be doing it in the future. What I am talking about are applications that are “cloud ready” — that is, stateless, shared-nothing applications that rely on back-end data sets to persist data. If your apps don’t behave that way, they should — and they will over time. And be warned: if you are not building container-based applications and microservices in the next few years, you will not be able to hire the developers you need to take your business forward.

Author: Seth Fox

 

Interested in learning more? Sign up for the September 29 Webinar on this topic today!

 

Solinea specializes in 3 areas: 

  • Containers and Microservices – Now that enterprises are looking for ways to drive even more efficiencies, we help organizations with Docker and Kubernetes implementations – containerizing applications and orchestrating the containers in production.
  • DevOps and CI/CD Automation – Once we build the infrastructure, the challenge is to gain agility from the environment, which is the primary reason people adopt cloud. We work at the process and tool chain levels, meaning that we have engineers who specialize in technologies like Jenkins, Git, Artifactory, and Cliqr, and we build these toolchains and underlying processes so organizations can build and move apps to the cloud more effectively.
  • Cloud Architecture and Infrastructure – We are design and implementation experts, working with a variety of open source and proprietary technologies, and have built numerous private, public, and hybrid cloud platforms for globally recognized enterprises for over three years.

Avoiding Obstacles to Cloud Adoption – Part 2 https://solinea.com/blog/avoiding-obstacles-cloud-adoption-part-2 Sat, 06 Aug 2016


About Solinea

Solinea services help enterprises build step-by-step modernization plans to evolve from legacy infrastructure and processes to modern cloud and open source infrastructure driven by DevOps and Agile processes.

Better processes and tools equals better customer (and employee) satisfaction, lower IT costs, and easier recruiting, with fewer legacy headaches.



Part 2 of a series: Technology and Security


Security Obstacles


Like the infrastructure aspect, cloud security can be viewed as familiar technical measures with a new operational approach. The biggest problems arise from assuming the old familiar approach will fit. Here are some of the common failed approaches to cloud adoption:

  • Security as Police. The security team sees its role as policing established security conventions rather than as contributing to a new cloud security architecture. This leads to a weaker architecture and missed opportunities to improve the security model. It may also result in the entire cloud being boxed in as an ‘Application’ under the existing security model.
  • Failure to adapt policy. The existing security policy extends to specific technical measures (legacy security products), and these do not fit the cloud model. An example might be a specific firewall product with no compatible FWaaS API. The policy may also require a single manual workflow and no automation.
  • Involved too late. The security team does not get fully involved until implementation time, after technology choices and architecture are fixed. At this stage it is too late to create a security architecture that will enable the intended use cases for the cloud.

These approaches can lead to the following problems:

  • Failure to deliver on objectives and improve KPIs. The intended use cases cannot be performed with automation, resulting in a cloud that operates exactly the same as the legacy infrastructure it was intended to replace.
  • Constrained scope. The cloud platform becomes limited to low-value use cases or a small group of users because an enterprise security architecture was never incorporated.

Solution: Bring in Governance and Security Teams Early

The challenge is to make security a part of the whole picture and integral to the success of the project. In many enterprises the security teams only deliver and enforce security solutions. For building cloud platforms, security is an essential aspect of the overall design.

The solution is to bring the security architect role into the development of the overall cloud platform architecture, participating in the same methodology described above alongside governance, development and infrastructure.

For OpenStack clouds there are many resources for IT security teams. The OpenStack project has a strong security focus, with a dedicated security team working on securing the OpenStack cloud platform. They help maintain the OpenStack Security Guide along with vendors and experienced users of the platform.

Check back for our follow-up post, where we will summarize some fundamental security practices.


Solinea specializes in 3 areas: 

  • Cloud architecture and infrastructure design and implementation, with a specific focus on OpenStack – We have been working with OpenStack since its inception, wrote the first book on how to deploy OpenStack, and have built numerous private and public cloud platforms based on the technology.
  • DevOps and CI/CD Automation – Once we build the infrastructure, the challenge is to gain agility from the environment, which is the primary reason people adopt cloud. We work at the process and tool chain levels, meaning that we have engineers who specialize in technologies like Jenkins, Git, Artifactory, and Cliqr, and we build these toolchains and underlying processes so organizations can build and move apps to the cloud more effectively.
  • Containers and Microservices – Now that enterprises are looking for ways to drive even more efficiencies, we help organizations with Docker and Kubernetes implementations – containerizing applications and orchestrating the containers in production.

Avoiding Obstacles to Cloud Adoption – Part 1 https://solinea.com/blog/avoiding-obstacles-cloud-adoption-part-1 Sat, 23 Jul 2016



About Solinea

Solinea services help enterprises build step-by-step modernization plans to evolve from legacy infrastructure and processes to modern cloud and open source infrastructure driven by DevOps and Agile processes.

Better processes and tools equal better customer (and employee) satisfaction, lower IT costs, and easier recruiting, with fewer legacy headaches.

 

The Blog:

 
Avoiding Obstacles to Cloud Adoption

Part 1 of a series: Technology and Security

Problems with adopting cloud computing models have received plenty of coverage in recent years; however, there is also a large and growing number of success stories, so it is definitely possible to do it right. In this series of posts we will look in detail at some of the failure paths and how to recognize and avoid them, with an emphasis on private cloud implementations. This first post focuses on the technology implementation approach and will be followed by a second post on security. Future topics will cover building a successful architecture for your cloud in more detail.

Many of the “Top Reasons Cloud Implementations Fail” stories correctly observe that cloud computing is based on technology that IT people are already familiar with: x86 servers, virtualization, storage and networking. This leads some to mistakenly conclude that cloud implementation projects can be treated exactly the same as any technology project, and this is the problem that we will begin this post with.

For a recap on the prerequisites for a successful cloud implementation take a look at the two-part series on Making the Case for OpenStack: You Would be Surprised What Enterprises Overlook and Critical Success Factors, and the follow-up Five Cloud (or Open Infrastructure) Critical Success Factors – Redux.

Technology Obstacles

Your objectives are clear, executive sponsorship secured, KPIs are defined and agreed and the development team is waiting for their cloud APIs; so what comes next? If you’re planning to run the implementation the same way you would any other technology project, then this is a good time to pause and revisit the project goals. If those goals include reaping the benefits of cloud models for development and operations, or if your KPIs include deployment speed and agility then you might be about to make a wrong decision.

Here are some common mistakes when Technology teams select cloud platforms:

  • Use cases are overlooked. It is often assumed that all use cases can be supported simply by “maxing out” the technical specifications.
  • Over-engineered technical requirements. Teams come up with an exhaustive list of technical must-haves: extreme IOPS, maximum network performance, huge-memory VMs, sub-second live migration, HA everywhere, and an assortment of the latest published features. The result is either impossible to implement or extremely complex and expensive.
  • Product selection by market research reports. Unsure of the technology, the implementation team studies all the available research reports, picks the ‘leaders’ in all the cloud categories, and performs a weighted selection based on local preferences.
  • Designed around a legacy operational model. Non-functional requirements are gathered from how current infrastructure is operated. Automation is viewed only as removing repetitive tasks.
  • Proof of concept only involves infrastructure. The implementation team runs a PoC but does not involve Governance, Development, Security, and Operations teams.

The outcome of any of these decisions could be a cloud platform that does not meet the overall goals and KPIs. This may seem counterintuitive if you believe that OpenStack will smooth over any technology differences.

It won’t; here are three reasons why:

  • Vendor Selection: Not all OpenStack vendors’ offerings are alike. This is not to say some are better than others but that they have different opinions about how things should be implemented and which features and vendor integrations they are prepared to support.
  • Cloud Architecture: Not all cloud architectures are alike. Different use cases, security models, and non-functional requirements can call for very different architectures in both the control plane and the infrastructure components.
  • Technology Components: Your technology selection will indirectly force an overall architecture. If you did not plan a cloud architecture with your development, security, and operations teams then you will have one imposed at implementation time by the technology choices. 

Here is where cracks begin to appear. The security team doesn’t like the CMP because it won’t federate with their IdP. Governance can’t get chargeback data or enforce usage controls. The network team says there’s no way they’ll ever support user-initiated overlay networks. The developer automation tools are unable to use the platform API because of insufficient privilege separation. Security Groups are not considered sufficient, so external manual firewalls must be used. Automation is impossible. All of these things turned out to be essential for making your cloud use cases work.

The reason these issues surface late is that they are not typical infrastructure or system software problems. At this stage of the project it can be very difficult to make rectifying architectural changes because many of the technology decisions constrain the solution. Additionally, other project stakeholders may be reluctant to revisit decisions that have already been made.

Solution: don’t treat cloud just as a technology implementation

The corollary of cloud being based upon standard technology is that the most important aspect of cloud is not about the technology; it’s about how technology is consumed and the processes for supporting and managing that.

For a team accustomed to running traditional virtual infrastructure this is an easy mistake to make. At a single point in time, an application running in a cloud looks exactly like an application running on your legacy hypervisor. The overlooked point is how that application got there.

The solution is to make the technology selection decisions last, after the new cloud operational model is understood and a high-level architecture is established, and then confirm them only after a proof-of-concept implementation.

Our preferred methodology for cloud platform deployment follows this order:

  • From the objectives and strategy, define the use cases that must be supported. These use cases should cover the process for developing/testing/deploying applications as well as governance, security, and operational use cases related to the overall objectives.
  • Architect a solution and technical requirements to support the use cases, reviewing with the development, governance, security and operations teams. 
  • Make a provisional technology selection and plan a PoC or Pilot implementation, selecting a target application to demonstrate use cases supporting project objectives.
  • Based upon PoC conclusion, confirm the technology selection and proceed with implementation strategy. 

This methodology greatly reduces the risk of technology failure and ensures that the cloud platform will deliver on the program objectives and KPIs.

The OpenStack documentation site contains a detailed Architecture Design Guide that covers design considerations for many examples. In the next post in this series we will summarize some of the more common architecture patterns for enterprise cloud deployments.

Check back for our follow-up post, where we will summarize some fundamental security practices.

Solinea specializes in 3 areas: 

  • Cloud architecture and infrastructure design and implementation, with a specific focus on OpenStack – We have been working with OpenStack since its inception, wrote the first book on how to deploy OpenStack, and have built numerous private and public cloud platforms based on the technology.
  • DevOps and CI/CD Automation – Once we build the infrastructure, the challenge is to gain agility from the environment, which is the primary reason people adopt cloud. We work at the process and tool chain levels, meaning that we have engineers who specialize in technologies like Jenkins, Git, Artifactory, and Cliqr, and we build these toolchains and underlying processes so organizations can build and move apps to the cloud more effectively.
  • Containers and Microservices – Now that enterprises are looking for ways to drive even more efficiencies, we help organizations with Docker and Kubernetes implementations – containerizing applications and orchestrating the containers in production.


451 Research: OpenStack-related Business Models to Exceed $4BN by 2019 https://solinea.com/resources/industry-reports/451-research-openstack-related-business-models-exceed-4bn-2019 Mon, 18 Jul 2016


Solinea Launches Enterprise Platform for Management, Audit, and Monitoring for OpenStack

Download the July 2016 451 Research Impact Report on Solinea: “Solinea’s Goldstone Enterprise Targets Enterprise Openstack Users”


451 Research tracks the evolving OpenStack business models through its Market Monitor offering, a market-sizing and forecasting service that offers a bottom-up market size, share and forecast analysis for the rapidly evolving marketplace for OpenStack-related products and services. The service provides detailed information on the 56 vendors in the OpenStack marketplace, including a listing of each vendor and its products, and a view of the competitive landscape. This service tracks key market segments whose vendors support OpenStack or base their services on the OpenStack framework: service providers, IT services, distributions and training. The OpenStack Market Monitor no longer includes PaaS and cloud management vendor revenue associated with OpenStack. Those estimates are fully included in our Cloud Enabling Technologies forecasts.

The 451 Take

Our OpenStack market estimate and forecast was derived using a bottom-up analysis of each vendor’s current revenue generation and growth prospects. Of the 56 companies included within our estimate, eight out of 10 have directly provided revenue guidance. Based on our research, we continue to believe the market is still in the early stages of enterprise use and revenue generation. Our Market Monitor service expects total OpenStack-related revenue to exceed $4bn by 2019. Revenue overwhelmingly comes from the service provider space, with an increasing portion coming from private cloud deployments rather than public IaaS. We expect an uptick in revenue from all sectors, especially from OpenStack products and distributions that are primarily targeting enterprises.

Get insights on the OpenStack market: 

    • The market’s 35% CAGR
    • An overview of OpenStack service providers
    • An overview of IT Services, Training, and Distributions
    • The OpenStack market outlook

Get The Report

DockerCon 16: Takeaways and Must-sees https://solinea.com/blog/dockercon-16-takeaways-must-sees Thu, 30 Jun 2016


About Solinea

Solinea services help enterprises build step-by-step modernization plans to evolve from legacy infrastructure and processes to modern cloud and open source infrastructure driven by DevOps and Agile processes.

Better processes and tools equal better customer (and employee) satisfaction, lower IT costs, and easier recruiting, with fewer legacy headaches.

The Blog:

This is part 3 in a 6-part series on containers, microservices, and container orchestration to help tech and business professionals of all levels understand and embrace this transformative technology and the best-practice processes to maximize your investment in it.

Here is an outline:

  1. Intro: Why Containers and Microservices Matter to Business – Executive thought leadership perspective – Coming Soon!
  2. Getting started with Kubernetes – How to start with a POC, weave k8s into your existing CI/CD pipelines, build a new pipeline
  3. Intermediate level post – Ready to kick the tires? Key Takeaways from Dockercon 2016, K8s, Ansible, Terraform media/entertainment enterprise case study
  4. Advanced tips and tricks – Take things further with “Tapping into Kubernetes Events”, “Posting Kubernetes Events to Slack”, and “Chatbots for ChatOps set up on Gcloud with Container Engine”
  5. Scaling Docker and Kubernetes in production to thousands of nodes in one of the largest consumer web properties – Coming Soon!


Well, here we are. In the afterglow of yet another tech conference. But this time it’s a bit strange for me, because it’s not The OpenStack Summit.  It’s DockerCon. And man, was that just a wildly different vibe and a totally different focus when it came to sessions. I was able to attend DockerCon with another Solinea colleague and I wanted to take some time to document what I thought were some very interesting takeaways, talk about where I feel the container ecosystem is heading, and finally, link to some must-see sessions that sum up the most interesting parts of the show for me.

It was a whirlwind two days in Seattle and, admittedly, between talking to so many folks, browsing the expo floor, and attending sessions, it’s already a bit tough to recall everything we saw. As I refer back to the bulleted list I’ve been keeping on my laptop, I realize just how much went down. Here are some of the key takeaways.

Takeaways:


1. Docker for Mac/Win is ready for prime time: The beta release of Docker for Mac and Docker for Windows has been out for quite a while. And although I was lucky enough to get into the beta early, the software was certainly a little touch and go at first. However, after a constant stream of updates, it seems far more stable for me day to day and others must be feeling the same. As such, during the first day’s keynote, Solomon Hykes announced that the beta period was over and this software would be generally available. This is certainly a blessing for us here at Solinea, since our training classes were previously using Docker Toolbox, which always seemed a bit kludgey and we always had a problem student or two.

2. The new Docker engine is pretty slick: Immediately after announcing the GA of Docker for Mac/Win, Solomon proceeded to change course to something folks have been anticipating for a while: built-in orchestration for Docker. New with the v1.12 release of the Docker engine, orchestrating containers is a built-in (but optional) part of the package. This means that I, as a new user, could take a few machines, create a swarm with them, and launch services against them in a matter of minutes. It also brings easy scaling of those services up and down, along with features like overlay networking and service discovery that have historically been a secondary part of the swarm environment. A quick sketch of the flow is below.
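For illustration, the whole flow fits in a few commands (a minimal sketch against the 1.12 engine; the nginx image and the replica counts here are just placeholders):

# On the first machine: initialize a swarm (this node becomes a manager)
docker swarm init

# Declare a service with a desired state of three replicas, published on port 80
docker service create --name web --replicas 3 -p 80:80 nginx

# Scale the service up (or down) in one command
docker service scale web=5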

3. Seriously, the new Docker engine is pretty slick: As somewhat of an extension to the previous point, the new Docker engine supports DABs (no, not that goofy dance). DABs are Docker Application Bundles. One can now take an existing docker-compose definition and create a bundle out of it. This allows the bundle to be distributed and deployed against the new swarm. Once each tier exists on the swarm, it can then be scaled up and down just like any other service. This certainly offers a pretty interesting deployment path from a dev laptop all the way to a production swarm; a sketch follows.
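Roughly, that path looks like this (a sketch only: bundle support was experimental at the time, so exact commands and flags may vary by release; “myapp” is a placeholder compose project name):

# From a directory containing docker-compose.yml: write out myapp.dab
docker-compose bundle

# On a swarm manager: create one service per compose tier from the bundle
docker deploy myapp

# Each tier is now a service and scales like any other
docker service scale myapp_web=4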

4. There’s always an app store: Finally, one of the more interesting announcements came on the second day of keynotes: the Docker Store. The Docker Store sounds like a single source for known-good Docker deployments and images. It appears to be an extension of the previous “official” images that were in Docker Hub. This seems like a much-needed extension, as the official designations were never really all that clear, and keeping unofficial images out of the Store is a good thing. It also appears to include a paid tier for Docker images, similar to what one may see in the Amazon AWS marketplace. The Docker Store is currently in closed beta, but you can apply for access here.

General Thoughts:


With the takeaways covered, it’s now time for my unsolicited two cents. After spending two days neck deep in Docker-land, I have a few gut feelings/hunches/whatever that may or may not be even remotely true, as well as a some general observations that I thought were interesting at the time.

1. The community is still confused: What I mean here is not that community members do not understand Docker and its benefits. They clearly do and are very excited about them. What I mean is that there’s still a lot of scrambling around regarding the “one true way” to do Docker, what tools to use, which pipeline tool to use to enable deployments, etc. I think some of this stems from the fact that Docker itself never offered some of these tools before now, and doing so definitely threw a grenade of confusion into the crowd. As an extension…

2. The battle for container orchestration is officially here: If you have used Docker on more than your laptop, you’ve realized that orchestrating lots of containers is hard. The de facto answer for a large-scale, production cluster has been Kubernetes up to this point. Now, with the new Docker Swarm, it will be interesting to see what folks decide to do and whether the built-ins are “good enough” for their production use cases. A lot of companies have bet big on Kubernetes, so it will be an exciting time to see where people land and how the K8s team answers.

3. MSFT has bet big on Docker: Microsoft was everywhere at DockerCon. From keynotes, to booths, to sessions. It’s clear that they see Docker as a huge opportunity and it seems that they have put in a lot of work to make Docker on Windows a real contender. As anyone in the enterprise knows, there’s still a lot of Windows Server apps out there in the world. Being able to Dockerize them could be compelling for some companies.

4. Hybrid cloud makes a lot more sense: Hybrid cloud has always been a compelling case for business. However, historically, it’s been easier said than done given the intricacies and differences between, say, OpenStack on-prem and Amazon. That said, a lot of the application-level aspects seem to be getting easier, with the new Docker Swarm allowing any Docker engine to join, as well as the increased interest in the Ubernetes project. I’m looking forward to this being a solved problem for sure! Give me a cluster, allow me to throw containers at it. It should be as easy as that, regardless of where the nodes in the cluster live (see the sketch below).
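That cross-cloud join is just two commands in the 1.12 world (a sketch; the token and manager address are placeholders that the first command prints for you):

# On the swarm manager: print the join command, token included
docker swarm join-token worker

# On any Docker 1.12 engine, in any cloud, run the printed command:
docker swarm join --token <worker-token> <manager-ip>:2377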

Must-sees:

  • The first keynote, of course, is a must-see. The announcements around orchestration and the new and improved Docker Swarm are worth watching:

  • The Day 2 Keynote was a great session, with highlights like container image security scanning, a great example of cross-cloud Swarm (with AzureStack!):

The post DockerCon 16: Takeaways and Must-sees appeared first on Solinea.

]]>
1869
Solinea’s Ken Pepple Presents at OpenStack Day Ireland https://solinea.com/blog/solineas-ken-pepple-presents-openstack-day-ireland Thu, 30 Jun 2016 14:17:05 +0000 https://solinea.com/?p=1903 About Solinea: Solinea services help enterprises build step-by-step modernization plans to evolve from legacy infrastructure and processes to modern cloud and open source infrastructure driven by DevOps and Agile processes. Better processes and tools equal better customer (and employee) satisfaction, lower IT costs, and easier recruiting, with fewer legacy headaches. The Blog: Solinea’s Ken Pepple Presents at OpenStack Day […]

The post Solinea’s Ken Pepple Presents at OpenStack Day Ireland appeared first on Solinea.

]]>

About Solinea

Solinea services help enterprises build step-by-step modernization plans to evolve from legacy infrastructure and processes to modern cloud and open source infrastructure driven by DevOps and Agile processes.

Better processes and tools equal better customer (and employee) satisfaction, lower IT costs, and easier recruiting, with fewer legacy headaches.

The Blog:

Solinea’s Ken Pepple Presents at OpenStack Day Ireland.

Solinea had the great pleasure of attending and presenting at today’s OpenStack Days Ireland, the first major OpenStack event in Dublin. The sold-out affair was held in Dublin’s Silicon Docks district at the Marker Hotel.

The day began with messages from the day’s sponsors: the OpenStack Foundation, Intel, SUSE, and Ammeon. Jonathan Bryce gave the kickoff, presenting the emerging trends in cloud and the state of OpenStack. This was followed by discussions about the OpenStack roadmap, modelling applications, and updates from the latest OpenStack Summit.


[Photo: OpenStack Day Ireland]


Solinea’s CTO, Ken Pepple, spoke to the crowd about the challenges of “Managing OpenStack in the Enterprise”.



The afternoon brought a focus on networking covering NFV, Kubernetes integration, DPDK and performance. In addition, Workday presented the case study of their OpenStack deployment and operations.


[Photo: reference architecture slide, OpenStack Day Ireland]



See Ken’s entire presentation here:




Solinea specializes in 3 areas: 

  • Cloud architecture and infrastructure design and implementation, with a specific focus on OpenStack – We have been working with OpenStack since its inception, wrote the first book on deploying OpenStack, and have built numerous private and public cloud platforms based on the technology.
  • DevOps and CI/CD Automation – Once we build the infrastructure, the challenge is to gain agility from the environment, which is the primary reason people adopt cloud. We work at the process level and the toolchain level, meaning that we have engineers who specialize in technologies like Jenkins, Git, Artifactory, and CliQr, and we build these toolchains and underlying processes so organizations can build and move apps to the cloud more effectively.
  • Containers and Microservices – Now that enterprises are looking for ways to drive even more efficiencies, we help organizations with Docker and Kubernetes implementations – containerizing applications and orchestrating the containers in production. We offer consulting and training services to help customers.

The post Solinea’s Ken Pepple Presents at OpenStack Day Ireland appeared first on Solinea.

]]>
1903
Solinea in Techtarget – OpenStack infrastructure embraces containers, but challenges remain https://solinea.com/news-events/solinea-techtarget-openstack-infrastructure-embraces-containers-challenges-remain Mon, 20 Jun 2016 06:33:23 +0000 https://solinea.com/?p=1849 OpenStack continues to gain modest traction despite its ongoing growing pains.

The post Solinea in Techtarget – OpenStack infrastructure embraces containers, but challenges remain appeared first on Solinea.

]]>

Despite a recent push to embrace container technology, OpenStack still has a number of hurdles to clear before boosting enterprise adoption.

OpenStack continues to gain modest traction despite its ongoing growing pains. Now, users see a lifeline in a technology some view as the death knell for the open source project.

OpenStack leaders push the software as the underlying cloud framework to connect the disparate bits of enterprise IT, and, increasingly, containers are the central piece to that roadmap. The technology has been prominent at each of the last three user conferences and some think it will fulfill OpenStack’s long-held promise to avoid vendor lock-in and seamlessly share public and private cloud resources.

The rise of Docker and containers as a way to package applications is seen by some as offering the same benefits as OpenStack without the headaches and costs of deploying the cloud operating system. But instead of positioning it as an alternative, the OpenStack community has embraced both open source technologies.

Read the entire article on Techtarget.

The post Solinea in Techtarget – OpenStack infrastructure embraces containers, but challenges remain appeared first on Solinea.

]]>
1849
End-to-end Training for Open Infrastructure https://solinea.com/blog/training-critical-modern-open-infrastructure Fri, 17 Jun 2016 23:23:56 +0000 https://solinea.com/?p=1820 Why Training is Critical to Modern Open Infrastructure

Enterprises and service providers need to move faster than ever to stay competitive in today’s markets.

The post End-to-end Training for Open Infrastructure appeared first on Solinea.

]]>
Why Training is Critical to Modern Open Infrastructure

Enterprises and service providers need to move faster than ever to stay competitive in today’s markets. IT agility, efficiency, and repeatability are paramount to companies’ success and are seen as business and revenue drivers by many executive teams and boards.

One way to achieve these goals is by adopting Open Infrastructure, which we define as:

[Graphic: Solinea’s definition of open infrastructure]

The business benefits of open infrastructure are significant: removing technical debt, saving resources and money, and accelerating time to market. The challenges, as we all know, are equally real: the space is early, very complex, and incredibly resource constrained.

That said, we’ve helped organizations adopt open infrastructure, from legacy through 2.0.

[Graphic: open infrastructure adoption, legacy through 2.0]

A few specific examples of moving infrastructure forward include:

  • We’ve worked with one of the largest car companies in the world in an end-to-end engagement to conceive, architect, and implement a production, enterprise-wide private cloud to support a big data initiative to track and model automobile feature usage to inform their product roadmap.
  • We’ve also worked with the leading genomics organization, The Broad Institute, to plan, build, and help them adopt a similar big data analytics infrastructure with a 3-week POC, and 6-week production launch as highlights.
  • We’ve worked with a global stock exchange to transform their legacy IT and build an enterprise hybrid cloud, with real department-level chargeback modeling and an application migration strategy and implementation. Simply building these is one thing; ensuring the organization is ready, with proper training on DevOps processes and OpenStack, is another. We trained departments to manage and scale the infrastructure on their own.

A few CI/CD, Containers and Microservices examples:

  • We’ve worked with a worldwide web property to design, pilot, and deploy in production a very large container orchestration platform (Kubernetes) together with the DevOps and CI/CD toolchain and processes to support it ongoing. We believe this is one of the largest in the world outside of Google.
  • We’ve worked and continue to work with a leading media company to develop the IT strategy that has driven their hybrid cloud solution, supported by DevOps processes and automation to enable agility and embrace a rapidly growing customer base.

Solinea helps organizations adopt DevOps, containers, and microservices through consulting services, where we help enterprises adopt open infrastructure by designing, building, and deploying the solutions, and then through training, so our customers become self-sufficient: technical training for operators and process training for management.

Why Solinea Training?

Open Infrastructure is hard; you need to partner with experts like Solinea to sell the initiative internally (sometimes just to get started), then to define and architect the adoption roadmap, and finally to train your teams to upskill your capabilities.

That is why our training offerings are 50% hands-on lab work and cover rigorous, comprehensive open infrastructure learning paths across Architecture, Administration, Implementation, and Design to build the right skills and facilitate adoption.

We recognize that OpenStack is just one of the fundamental components of Open Infrastructure solutions, which is why you’ll find the Solinea team contributing code to Docker, deploying packages using Ansible or Rundeck, and managing cloud native applications under Kubernetes.

These training offerings span three basic categories:

[Graphic: Solinea training curriculum categories]

Here is a high-level diagram of the training tracks Solinea offers by category…

  • Containers and Microservices Curriculum
    • Our expertise in containers and microservices and extensive first-hand experience architecting and deploying Kubernetes solutions in the enterprise gives you a real-world view on how to architect, install, manage and scale Docker and Kubernetes on your own.
    • Our containers and microservices curriculum provides IT practitioners (Software Developers, System Architects, System Engineers, Operations & System Administrators, and Cloud End-Users) the requisite knowledge and skills to understand, architect, deploy, and use Docker and Kubernetes to manage their containerized applications.
  • Configuration Management & Automation Curriculum
    • These courses introduce your application developers and Cloud IT/Operations teams to the fundamentals of Continuous Integration and Continuous Delivery (CI/CD) used to develop online services, with a comprehensive introduction to Ansible focused on the application of Ansible plays, modules, playbooks, and playbook development. The courses are taught using a combination of interactive lectures leveraging PowerPoint, whiteboards, core references, instructor solo and follow-me demonstrations, and hands-on labs to ensure you know how to deploy Ansible to automate the configuration management of your IT infrastructure.
    • The courses are again taught by Solinea Architects that are active Ansible practitioners and have extensive first-hand experience deploying Ansible at scale in the enterprise.
  • OpenStack Core and Specialty Curriculum
    • As OpenStack pioneers, Solinea understands the specific needs of the enterprise; we know how to architect and integrate OpenStack clouds into existing legacy environments to support vertical (e.g., service provider, telco, automotive/manufacturing, healthcare, internet, and financial services) and horizontal (e.g., Big Data, mobile, HPC, streaming media, dev/test) applications and workloads. We understand the enterprise has security, compliance and regulatory needs as well as challenging operating requirements. That’s why we provide training curriculum that extends beyond the technology implementation, to ensure that the cloud is fully integrated and adopted by the enterprise.
    • The courses are again taught by Solinea Architects who are active OpenStack practitioners and have extensive first-hand experience deploying OpenStack at scale in the enterprise.

In summary, we’ve partnered with the largest network equipment provider in the world, the leading US telco and other global enterprise customers to train their engineers on OpenStack, DevOps, Docker and Kubernetes to ensure their teams are ready for what’s next.

Set up a 20-minute overview with our training team to see how Solinea training offerings can help your organization prepare for open infrastructure.

The post End-to-end Training for Open Infrastructure appeared first on Solinea.

]]>
1820
Disaster Recovery (DR) Options in Cloud Environments https://solinea.com/blog/disaster-recovery-dr-options-cloud-environments Sat, 04 Jun 2016 06:01:33 +0000 https://solinea.com/?p=1707 If more organizations adopted this method of thinking, OpenStack would cease to be the risky adventure many put it out to be and instead be something that could enable an actual shift in the modern data center and how we all consume technology.

The post Disaster Recovery (DR) Options in Cloud Environments appeared first on Solinea.

]]>

At the OpenStack Summit in Austin (2016), several presentations covered Disaster Recovery; the most interesting was “Application Level Consistency Snapshot and Disaster Recovery.” Having spent time in the DR realm of a Fortune 100 financial services company, I am quite passionate about Business Resumption and Business Continuity. With the release of OpenStack Newton in October 2016, it appears application consistency may get baked in. Application-level consistency snapshots are exciting news for those of us who are passionate about Disaster Recovery, and over the coming months I’ll be keeping an eye on this.

As an organization looks to deploy OpenStack, integrators, providers, and operators may find it beneficial to place workloads in various cloud “buckets.” These buckets make workload placement a lot easier when working towards an OpenStack-based cloud. These basic, very broad buckets are:

Non-cloud Friendly

Workloads that require everything to be done manually, including the request for the server (make, model, CPU count, RAM, storage), storage (SAN/NAS/NFS/none), and the network (with supporting cabling diagram), with supporting processes that are well suited to extended deployment times. This process might also use Excel spreadsheets, paper request forms, and maybe an old sneaker-net chat applet.

Cloud Friendly

These are workloads that use distributed services and are virtualized. They use a source code manager with their application code, and some configuration management and automation engines (Ansible, Puppet, Chef, etc.). The support processes are designed to support agility and consistency, more than the legacy processes used to support bare metal applications with really long deployment times.

Cloud Native

Workloads built with OpenStack or another cloud at their core. These workloads are built from the ground up using a mature deployment and development pipeline (e.g., a CI/CD pipeline).

Note: “OpenStack does not equal CI/CD.”

Each workload has several moving parts, in very broad terms:

  • Processing (CPU, CUDA, etc.)
  • Memory (RAM)
  • Storage (NAS, NFS, SAN)
  • Network
  • Application
  • Support teams (storage team, server team, DB team, app team, etc.)
  • Business unit

For recovery, cloud-native workloads just redeploy, assuming a mature deployment pipeline of course. Many DIY OpenStack deployments are designed for cloud-native workloads while becoming the landing place for “non-cloud friendly” workloads. A legacy workload might be an application coming from a 100% virtual environment where the deployment method uses the same process as an application running on a physical bare metal server. When this occurs, these DIY OpenStack operators are quickly overwhelmed by outdated, non-cloud friendly processes and support mechanisms.

In the instance of recovery, in a time long past, there were three dominating methods:

Recovery-by-Rebuild

Backup the data, rebuild the server, re-install the application, re-hydrate the data, cross fingers. This method is often used in conjunction with a recovery runbook: finding the backups (be they tapes, virtual tapes, etc.), plus a runbook on how to install the application (often written when it was first installed), the OS, what the network configuration might look like, and so on. In my experience, this has yielded a 25% success rate with an RTO of “best effort.”

Recovery-by-Re-image

Backup the system image, re-image the server, re-hydrate the data, cross fingers. This method is often used in conjunction with a recovery runbook, finding the images, etc. This method is more predictable than the Recovery-by-Rebuild method. On a per server basis, reimaging yields better results, although, as fortune would have it, I’ve not needed to do this on a large scale.

Recovery-by-Restore

Create a snapshot of the system’s state and data, restore the system to a working restore point, cross fingers. This method yields the best results, especially when it is supported by a GUI of sorts that allows almost anyone to execute a restore with a variable level of training (e.g., VMware Site Recovery Manager, NetBackup, TSM for Virtual Environments, EMC Avamar).

The evolution of cloud models and processes, be it OpenStack, AWS, GCE, CI/CD, etc., has introduced a fourth method into the recovery space:

Recovery-by-Re-Deploy

This is where a recovery is the redeployment of a workload in a very rapid fashion, often fully orchestrated with minimal to no downtime. This method requires a mature deployment and support pipeline. In the context of OpenStack, this is the objective. Using OpenStack doesn’t magically eliminate an organization’s need for Disaster Recovery or Data Protection.

Note: “OpenStack does not equal Recovery-not-Required.”

This fourth method enables an organization to mature how it deploys and builds applications, with rapid re-deployment and quick recovery as a beneficial output. At the OpenStack Summit in Austin, a keynote described how there is more than tooling to change when integrating OpenStack into one’s portfolio of capabilities. As has been my opinion for nearly a decade, the success of driving change into any organization depends on updating four core areas:

  • Processes – These are the processes that define how a workload comes to life, how it is supported during that life, and how it is decommissioned.
  • Culture – Very much like the process component, this is the institution’s organizational culture. An example might be a culture that depends on external companies (outsourcing) to resolve internal problems, or a company whose corporate culture doesn’t learn new things or support new initiatives attempting to adopt OpenStack. Often these companies think OpenStack will magically allow them to take on a “Cloud Culture.”
  • Mindset – Tied closely to the culture, this is an individual’s mindset as set by the organizational culture. Often the two are in lock step; however, a mindset must change and influence the culture. We’ve all met leaders, peers, subordinates, engineers, and architects who were the architects of the legacy methods. In that mindset, OpenStack must accommodate the existing environment; as such, unrealistic requirements get pushed into the project and the whole initiative ends up in the red, mainly because more time is spent trying to retrofit OpenStack to accommodate workloads that are anything but cloud friendly or cloud native.
  • Technology – This is the technology (be it software or hardware); arguably, it is the easiest component to change. Unfortunately, this technology shift often leads to unrealistic visions of becoming Google, Netflix, or AWS. Because it is so easy and has minimal barriers, companies sometimes purchase a product that promises “DevOps” or “Cloud,” all the while only realizing additional support costs.

Note: “OpenStack does not equal DevOps.”

Many organizations still use processes that enable technology silos; these are organizations where different teams “own” a piece of something (think network team, directory team, server team, OS team, DB team, etc.). In these sorts of systems, application deployments require many gates and approvals, so much so that this single mindset/culture hinders rapid deployment.

OpenStack is a product, a product that enables something. What that something is, is up to us. As OpenStack integrators, operators, and contributors, we must look beyond the next shiny object and the next buzzword, and work behind the scenes updating processes, changing restrictive cultural thinking and mindsets, and integrating better technology.

OpenStack works fantastically well when enabling workloads designed to run on OpenStack, leveraging upgraded, mature processes. These are applications where application teams, developers, and operations work together using a cloud-friendly process. If more organizations adopted this way of thinking, OpenStack would cease to be the risky adventure many make it out to be and instead be something that could enable an actual shift in the modern data center and how we all consume technology.

Author: Lindis Webb, Cloud Architect, Solinea

The post Disaster Recovery (DR) Options in Cloud Environments appeared first on Solinea.

]]>
1707
Why Gartner’s Mode 1 / Mode 2 is Dangerous Thinking https://solinea.com/blog/gartners-mode-1-mode-2-dangerous-thinking Fri, 13 May 2016 04:06:59 +0000 https://solinea.com/?p=1668 It’s safe to say that no keynote speech in the past year has generated more conversation and controversy than Donna Scott’s keynote address before a record-breaking crowd of 7,500+ at the OpenStack Summit in Austin.

The post Why Gartner’s Mode 1 / Mode 2 is Dangerous Thinking appeared first on Solinea.

]]>
It’s safe to say that no keynote speech in the past year has generated more conversation and controversy than Donna Scott’s keynote address before a record-breaking crowd of 7,500+ at the OpenStack Summit in Austin.

To be sure, Donna has cred. She’s a Vice President and Distinguished Analyst at Gartner. She’s covered private cloud and OpenStack almost from the beginning. She has a following. People listen to her. Donna and other Gartner analysts (including Alan Waite of Gartner for Technical Professionals) have a balanced, pros-and-cons take on OpenStack.

Donna’s talk was met with raised eyebrows and objections. If you’re interested in reading up on those objections, there are some great threads on Twitter, and Gartner analyst Alan Waite even wrote a blog post in response. OpenStack Foundation exec Lauren Sell also addressed the kerfuffle in a post-Summit wrap up blog.

But I want to focus on Bimodal IT. It’s Gartner’s map of how it advises enterprise IT leadership to think about transformation from legacy infrastructure and app dev models (sequential, emphasizing safety and accuracy) to agile, cloud-first models (exploratory and nonlinear, emphasizing agility and speed). Operationally, it can be thought of as the practice of managing two separate, coherent modes of IT delivery: mode one is focused on stability and mode two on agility. If you know nothing about agile, and your entire world revolves around supporting legacy apps on legacy infrastructure, the concept of Bimodal IT can be useful.

Unfortunately, that’s the only place it’s useful. Here’s why.

Bimodal IT essentially segregates the “good” technology, processes and skill sets (mode 1) from the “experimental” (mode 2). If competitors are out-innovating you with new products and value delivered from agile methodologies, then mode 2 is where you must go. Embrace mode 2 when you have no other alternative.

However, if you’re talking about “bet the company” applications, then mode 1 is where serious IT leaders go to do serious work. Gartner will disagree emphatically with this assessment, but any rational observer must conclude that, regardless of the intent, bimodal IT reads as the difference between serious IT (mode 1) and play time (mode 2).

The result? Bimodal IT alienates people in enterprise IT who see cloud and agile as nothing short of the next generational IT wave, following mainframes, client/server, and e-commerce. Bimodal IT bills itself as a roadmap for technology adoption, but in reality the concept picks winners and losers, causing confusion on priorities and strategies.

Yes, some legacy environments will take time to go away, like low-latency trading systems. However, this doesn’t mean that all aspects of mode 1 remain as they are – for example, cycle times can get shorter, approach can move to agile (remember, you don’t need “cloud” to go agile) etc.

But innovation and speed shouldn’t just happen in an isolated environment. For enterprises to succeed in today’s competitive landscape, they MUST move towards agile deployment of services, they must foster innovation across the organization, and they should move IT decisions closer to the business and developers – leaving a decaying group within the enterprise will only slow it down.

Think about what the definitions of modes 1 and 2 are: the first focuses on safety and accuracy, the second on agility and speed. Last time I checked with a customer, all the new services they were rolling out that require agility and speed also need to be safe and accurate.

We work with many global 1000 organizations to enable them to adopt Open Infrastructure solutions. In fact, we recently postulated that today we are in an Open Infrastructure 2.0 world, where organizations are leveraging new processes, skills and technologies to enable agility and efficiency in the enterprise.

When it comes to the adoption of these new and fast-changing pieces, we recommend that the organization incubate the concept: draw from multiple disciplines within the existing organization, leverage outside experts to accelerate the knowledge gathering, build quick and iterative pilots to demonstrate the success of these concepts and technologies, define metrics (KPIs) that show executives hard, tangible improvements, and then roll out the iterated, proven concepts to the rest of the organization. This will take months; it will not happen overnight.

This approach helps enterprises move towards “mode 2” and also takes “mode 1” along, without leaving it behind, in Gartner parlance. Our opinion—and indeed, that of anyone who has successfully guided an enterprise to agile—is that dividing IT into one bucket labeled “stuff that works” and another labeled “stuff that might be better one day” is dangerous.

I’m not alone in my opinion about the dangers of Bimodal IT. Bernard Golden, Jason Bloomberg, and Mark Campbell each have instructive views on the topic, all of which precede Donna’s keynote in Austin.


While mine is an opinion forged in the furnace of building agile infrastructures and teaching organizations how to use them, you might have a different view. I’d love to hear your take on Bimodal IT.

 

Author: Francesco Paola, CEO, Solinea

The post Why Gartner’s Mode 1 / Mode 2 is Dangerous Thinking appeared first on Solinea.

]]>
1668
Taking Care of Business: Breaking Down Business Barriers to DevOps Transformation (third of three) https://solinea.com/blog/taking-care-business-breaking-business-barriers-devops-transformation-third-three Thu, 28 Apr 2016 15:42:19 +0000 https://solinea.com/?p=1504 At its core, DevOps is a strategy for enhancing the speed and quality of business services to drive competitive differentiation in the marketplace.

The post Taking Care of Business: Breaking Down Business Barriers to DevOps Transformation (third of three) appeared first on Solinea.

]]>

Author’s note: This is the third in a three-part series about why DevOps transformation can be so hard for an enterprise. Be sure to read Part 1: Overcoming Process Barriers to DevOps Transformation: A Herculean Task and Part 2: Winning at Whac-a-Mole: Overcoming Psychological Barriers to DevOps Transformation.

At its core, DevOps is a strategy for enhancing the speed and quality of business services to drive competitive differentiation in the marketplace. This strategy unites application development and IT operations into a cohesive unit focused on delivering the results the business needs.


Ironically, in our work with global enterprises we have found that business obstacles can get in the way of the DevOps mission of “Taking Care of Business.” Here I’ll provide suggestions for overcoming three business obstacles you may face in your DevOps Transformation:


1. Seek Environmental Alignment

Every enterprise has the usual complement of environments to develop, test and deploy their code. These have names that include Dev, DevTest, Integration, Load, Prod, etc.

Each of these environments serves a specific function to meet the needs of the business—from development work to integration to production, and everything in between. Unfortunately, even if these environments were set up the same way in the beginning, the lack of central management will guarantee that there is drift in these environments over time. The resulting discrepancies add complexity to the deployment process.


These differences often affect the non-functional requirements for an application, such as load balancer software / hardware, storage, security, cloud platform, and others. During a manual installation, most of these differences are just “known,” and people will account for them based on their knowledge. However, during a DevOps transformation, these environmental differences will become very evident (almost immediately) to the teams that are automating the deployment of any application.


Remember, one of the goals of a DevOps process transition is to create infrastructure that is immutable. To support this goal, the same infrastructure automation code should be utilized across all environments, starting with the developers. If you are writing code specific to an environment, your infrastructure is no longer immutable. (Granted, your code will need to account for things like IP Address changes and the like, but these fall in the category of configuration management and do not represent environmental differences that should impact the application.)
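As a minimal sketch of what this looks like in practice with a tool like Ansible (the directory names and the vip_address variable here are illustrative, not from any particular engagement), the automation code is identical everywhere and only per-environment inventory variables differ:

# One playbook tree, one inventory per environment; only variables change
ls inventories/
#   dev/hosts    dev/group_vars/all.yml    (e.g., vip_address: 10.0.0.10)
#   prod/hosts   prod/group_vars/all.yml   (e.g., vip_address: 203.0.113.10)

# The exact same automation code runs against every environment
ansible-playbook -i inventories/dev/hosts site.yml
ansible-playbook -i inventories/prod/hosts site.yml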


Getting these environments aligned presents a great challenge for many of our enterprise customers. This is especially true when the environments have different system owners and different budgets and may be deployed in different parts of the world. Fortunately, as we progress further into a world where infrastructure is becoming code—with API-driven infrastructures like AWS, OpenStack, Docker, SDN and others—standardization becomes easier to achieve.


2. Enhance Software Development Skills

If you look at the skill-set requirements of enterprise infrastructure engineering, networking, and operations teams, you’ll notice that software development skills have not been a priority. Certainly, members of these teams often have some scripting ability and basic bash knowledge, but a deeper software development skill set is needed in the modern DevOps support model.

These teams need to learn basic git workflows to succeed in an environment where infrastructure is maintained as code. I have seen a lot of resistance to hiring for these skills and to providing training to develop these skills, and that’s a huge mistake. Software development skills are crucial to the success of enterprise IT now and will only become more critical in the future.
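For teams starting from zero, the baseline workflow is small; a typical feature-branch flow looks something like this (the repository, branch, and file names are illustrative):

git clone git@example.com:infra/ansible-playbooks.git
cd ansible-playbooks
git checkout -b fix-haproxy-timeouts      # work on a branch, never directly on master
# ...edit the automation code...
git add roles/haproxy/templates/haproxy.cfg.j2
git commit -m "Increase HAProxy client timeout to 60s"
git push -u origin fix-haproxy-timeouts
# then open a pull/merge request so a teammate reviews the change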


3. Manage Applications Not Servers

Years ago when the concept of virtualization was just being introduced, many IT operators did not want to give up on bare metal. We called these folks “server huggers.” Most people have since moved past server hugging, but, now that we are transitioning to a cloud world, we have a new generation of IT operators resistant to moving beyond their comfort zone, and we call them “VM huggers.” In most large enterprises today, you will find remnants of both server huggers and VM huggers, still clinging on.

For DevOps transformation to succeed, we have to release those old mindsets and think differently about what our job really is:

The goal should be to transition from managing servers and virtual machines to managing applications.


Let me give you an example. We tend to focus all of our measurements on the host level. While this will give you insight into how the server is running, it is only part of the picture. Understanding how the application is performing and reacting to those details is generally more significant and will greatly improve the business case for the application being monitored.


Consider this: When an application becomes unresponsive, it will take more time to determine that a server has filled up its disk than to simply add more VMs. Once the application is performing to meet the business requirements, details on what went wrong can be evaluated out of band from any single incident. Then you can take the troubling VMs out of service, review the incident, and provide feedback to the development team for remediation. This greatly simplifies the process for everyone and reduces the work being done “off-hours” for the entire team. Businesses care about applications, not virtual machines. The added cost of these “additional” VMs during a “failure” state is minimal compared to the downtime of the service.


Over the course of this three-part series, I’ve identified a few of the common barriers enterprises may face in their DevOps journeys. I’ve given you some things to watch out for and some time-tested strategies for overcoming those obstacles. More importantly, I hope I’ve made three things clear:


1. All of these obstacles are surmountable,
2. The DevOps journey is well worth the effort, and
3. The Solinea team is here to help.


Share your thoughts on these topics with us. We’d love to hear feedback from others walking the path to enterprise DevOps Nirvana.

Author: Seth Fox

The post Taking Care of Business: Breaking Down Business Barriers to DevOps Transformation (third of three) appeared first on Solinea.

]]>
1504
Deploying Kubernetes with Ansible and Terraform https://solinea.com/blog/deploying-kubernetes-ansible-terraform Fri, 22 Apr 2016 16:56:10 +0000 https://solinea.com/?p=1584 This is part 4 in a 6-part series on containers, microservices, and container orchestration to help tech and business professionals of all levels understand and embrace this transformative technology and the best-practice processes to maximize your investment in it. Here is an outline: Intro: Why Containers and Microservices Matters to Business – Executive thought leadership perspective – […]

The post Deploying Kubernetes with Ansible and Terraform appeared first on Solinea.

]]>
This is part 4 in a 6-part series on containers, microservices, and container orchestration to help tech and business professionals of all levels understand and embrace this transformative technology and the best-practice processes needed to maximize your investment in it.

Here is an outline:

  1. Intro: Why Containers and Microservices Matters to Business – Executive thought leadership perspective – Coming Soon!
  2. Getting started with Kubernetes – How to start with a POC, weave k8s into your existing CI/CD pipelines, build a new pipeline
  3. Intermediate level post – Ready to kick the tires? K8s, Ansible, Terraform media/entertainment enterprise case study
  4. Advanced tips and tricks – Take things further with “Tapping into Kubernetes Events“, “Posting Kubernetes Events to Slack“, and “Chatbots for Chatops set up on Gcloud w Container engine
  5. Scaling Docker and Kubernetes in production to thousands of nodes on Kubernetes in one of the largest consumer web properties – Coming Soon!



Let’s talk Kubernetes. I’ve recently had some clients that have been interested in running Docker containers in a production environment and, after some research and requirement gathering, we came to the conclusion that the functionality that they wanted was not easily provided with only the Docker suite of tools. These are things like guaranteeing a number of replicas running at all times, easily creating endpoints and load balancers for the replicas created, and enabling more complex deployment methodologies like blue/green or rolling updates.

As it turns out, all of this stuff is included to some extent or another with Kubernetes, and we were able to recommend that they explore this option to see how it works out for them. Of course, recommending is the easy part, while implementation is decidedly more complex. The desire for the proof of concept was to enable multi-cloud deployments of Kubernetes while also remaining within their pre-chosen set of tools like Amazon AWS, OpenStack, CentOS, and Ansible. To accomplish this, we were able to create a Kubernetes deployment using HashiCorp’s Terraform, Ansible, OpenStack, and Amazon. This post will talk a bit about how to roll your own cluster by adapting what I’ve seen.
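To make “included to some extent or another” concrete, here is roughly what those three requirements look like from the kubectl side (a sketch against a 1.2-era cluster; subcommand details vary by version, and the names and image tags are placeholders):

# Guarantee a number of replicas running at all times
kubectl run web --image=nginx --replicas=3

# Create an endpoint / load balancer for those replicas
kubectl expose deployment web --port=80 --type=LoadBalancer

# Roll out a new image version as a rolling update
kubectl set image deployment/web web=nginx:1.11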

Why Would I Want to do This?

This is totally a valid question. And the answer here is that you don’t… if you can help it. There are easier and more fully featured ways to deploy Kubernetes if you have free rein on the tools to choose. As a recommendation, I would say that using Google Container Engine is by far the most supported and pain-free way to get started with Kubernetes. Following that, I would recommend using Amazon AWS and CoreOS as your operating system. Again, lots of people using these tools means that bugs and gotchas are well documented and easier to deal with. It should also be noted that there are OpenStack built-ins to create Kubernetes clusters, such as Magnum. Again, if you’re a one-cloud shop, this is likely easier than rolling your own.

Alas, here we are and we’ll search for a way to get it done!

What Pieces are in Play?

For the purposes of this walkthrough, there will be four pieces that you’ll need to understand:

  • OpenStack – An infrastructure as a service cloud platform. I’ll be using this in lieu of Amazon.
  • Terraform – Terraform allows for automated creation of servers, external IPs, etc. across a multitude of cloud environments. This was a key choice to allow for a seamless transition to creating resources in both Amazon and OpenStack.
  • Ansible – Ansible is a configuration management platform that automates things like package installation and config file setup. We will use a set of Ansible playbooks called KubeSpray Kargo to setup Kubernetes.
  • Kubernetes – And finally we get to K8s! All of the tools above will come together to give us a fully functioning cluster.

Clone KubeSpray’s Kargo

First we’ll want to pull down the Ansible playbooks we want to use.

  • If you’ve never installed Ansible, it’s quite easy on a Mac with brew install ansible. Other instructions can be found here.

  • Ensure git is also installed with brew install git.

  • Create a directory for all of your deployment files and change into that directory. I called mine ‘terra-spray’.

  • Issue git clone git@github.com:kubespray/kargo.git. A new directory called kargo will be created with the playbooks:

Spencers-MBP:terra-spray spencer$ ls -lah
total 104
drwxr-xr-x  13 spencer  staff   442B Apr  6 12:48 .
drwxr-xr-x  12 spencer  staff   408B Apr  5 16:45 ..
drwxr-xr-x  15 spencer  staff   510B Apr  5 16:55 kargo
  • Note that there are a plethora of different options available with Kargo. I highly recommend spending some time reading up on the project and the different playbooks out there in order to deploy the specific cluster type you may need.

Create Terraform Templates

We want to create two terraform templates, the first will create our OpenStack infrastructure, while the second will create an Ansible inventory file for kargo to use. Additionally, we will create a variable file where we can populate our desired OpenStack variables as needed. The Terraform syntax can look a bit daunting at first, but it starts to make sense as we look at it more and see it in action.

  • Create all files with touch 00-create-k8s-nodes.tf 01-create-inventory.tf terraform.tfvars. The .tf and .tfvars extensions are Terraform-specific extensions.

  • In the variables file, terraform.tfvars, populate with the following information and update the variables to reflect your OpenStack installation:

node-count="2"
internal-ip-pool="private"
floating-ip-pool="public"
image-name="Ubuntu-14.04.2-LTS"
image-flavor="m1.small"
security-groups="default,k8s-cluster"
key-pair="spencer-key"
  • Now we want to create our Kubernetes master and nodes using the variables described above. Open 00-create-k8s-nodes.tf and add the following:
##Setup needed variables
variable "node-count" {}
variable "internal-ip-pool" {}
variable "floating-ip-pool" {}
variable "image-name" {}
variable "image-flavor" {}
variable "security-groups" {}
variable "key-pair" {}

##Create a single master node and floating IP
resource "openstack_compute_floatingip_v2" "master-ip" {
  pool = "${var.floating-ip-pool}"
}

resource "openstack_compute_instance_v2" "k8s-master" {
  name = "k8s-master"
  image_name = "${var.image-name}"
  flavor_name = "${var.image-flavor}"
  key_pair = "${var.key-pair}"
  security_groups = ["${split(",", var.security-groups)}"]
  network {
    name = "${var.internal-ip-pool}"
  }
  floating_ip = "${openstack_compute_floatingip_v2.master-ip.address}"
}

##Create desired number of k8s nodes and floating IPs
resource "openstack_compute_floatingip_v2" "node-ip" {
  pool = "${var.floating-ip-pool}"
  count = "${var.node-count}"
}

resource "openstack_compute_instance_v2" "k8s-node" {
  count = "${var.node-count}"
  name = "k8s-node-${count.index}"
  image_name = "${var.image-name}"
  flavor_name = "${var.image-flavor}"
  key_pair = "${var.key-pair}"
  security_groups = ["${split(",", var.security-groups)}"]
  network {
    name = "${var.internal-ip-pool}"
  }
  floating_ip = "${element(openstack_compute_floatingip_v2.node-ip.*.address, count.index)}"
}
  • Now, with what we have here, our infrastructure is provisioned on OpenStack. However, we want to get the information about our infrastructure into the Kargo playbooks to use as its Ansible inventory. Add the following to 01-create-inventory.tf:
resource "null_resource" "ansible-provision" {
  depends_on = ["openstack_compute_instance_v2.k8s-master","openstack_compute_instance_v2.k8s-node"]
  
  ##Create Masters Inventory
  provisioner "local-exec" {
    command =  "echo \"[kube-master]\n${openstack_compute_instance_v2.k8s-master.name} ansible_ssh_host=${openstack_compute_floatingip_v2.master-ip.address}\" > kargo/inventory/inventory"
  }

  ##Create ETCD Inventory
  provisioner "local-exec" {
    command =  "echo \"\n[etcd]\n${openstack_compute_instance_v2.k8s-master.name} ansible_ssh_host=${openstack_compute_floatingip_v2.master-ip.address}\" >> kargo/inventory/inventory"
  }

  ##Create Nodes Inventory
  provisioner "local-exec" {
    command =  "echo \"\n[kube-node]\" >> kargo/inventory/inventory"
  }
  provisioner "local-exec" {
    command =  "echo \"${join("\n",formatlist("%s ansible_ssh_host=%s", openstack_compute_instance_v2.k8s-node.*.name, openstack_compute_floatingip_v2.node-ip.*.address))}\" >> kargo/inventory/inventory"
  }
  provisioner "local-exec" {
    command =  "echo \"\n[k8s-cluster:children]\nkube-node\nkube-master\" >> kargo/inventory/inventory"
  }
}

This template certainly looks a little confusing, but what is happening is that Terraform is taking the information for the created Kubernetes masters and nodes and outputting the hostnames and IP addresses into the Ansible inventory format at a local path of ./kargo/inventory/inventory. A sample output looks like:

[kube-master]
k8s-master ansible_ssh_host=xxx.xxx.xxx.xxx

[etcd]
k8s-master ansible_ssh_host=xxx.xxx.xxx.xxx

[kube-node]
k8s-node-0 ansible_ssh_host=xxx.xxx.xxx.xxx
k8s-node-1 ansible_ssh_host=xxx.xxx.xxx.xxx

[k8s-cluster:children]
kube-node
kube-master

Setup OpenStack

You may have noticed in the Terraform section that we attached a k8s-cluster security group in our variables file. You will need to set this security group up to allow for the necessary ports used by Kubernetes. Follow this list and enter them into Horizon.
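If you prefer the CLI to Horizon, the same setup can be sketched with the openstack client (the rules below are an abbreviated, illustrative subset; use the full port list referenced above):

openstack security group create k8s-cluster
# e.g., API server, etcd, and kubelet ports:
openstack security group rule create --protocol tcp --dst-port 443 k8s-cluster
openstack security group rule create --protocol tcp --dst-port 2379:2380 k8s-cluster
openstack security group rule create --protocol tcp --dst-port 10250 k8s-cluster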

Let ‘Er Rip!

Now that Terraform is setup, we should be able to launch our cluster and have it provision using the Kargo playbooks we checked out. But first, one small BASH script to ensure things run in the proper order.

  • Create a file called cluster-up.sh and open it for editing. Paste the following:
#!/bin/bash

##Create infrastructure and inventory file
echo "Creating infrastructure"
terraform apply

##Run Ansible playbooks
echo "Quick sleep while instances spin up"
sleep 120
echo "Ansible provisioning"
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook \
-i kargo/inventory/inventory -u ubuntu -b kargo/cluster.yml

You’ll notice I included a two-minute sleep to cover the window when the nodes created by Terraform weren’t quite ready for an SSH session as Ansible started reaching out to them. Finally, update the -u flag in the ansible-playbook command to the user that has SSH access to the OpenStack instances you have created. I used ubuntu because that’s the default SSH user for Ubuntu cloud images.
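One possible refinement, instead of the fixed sleep, is to poll until every node answers an Ansible ping (a sketch; adjust the user and inventory path to your setup):

echo "Waiting for SSH on all nodes"
until ANSIBLE_HOST_KEY_CHECKING=False ansible all \
  -i kargo/inventory/inventory -u ubuntu -m ping > /dev/null 2>&1; do
  sleep 10
done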

  • Source your OpenStack credentials file with source /path/to/credfile.sh

  • Launch the cluster with ./cluster-up.sh. The Ansible deployment will take quite a bit of time as the necessary packages are downloaded and setup.

  • Assuming all goes as planned, SSH into your Kubernetes master and issue kubectl get nodes:

ubuntu@k8s-master:~$ kubectl get nodes
NAME         STATUS    AGE
k8s-node-0   Ready     1m
k8s-node-1   Ready     1m

The post Deploying Kubernetes with Ansible and Terraform appeared first on Solinea.

]]>
1584
Open Infrastructure 2.0 https://solinea.com/blog/open-infrastructure-2-0 Thu, 21 Apr 2016 07:11:06 +0000 https://solinea.com/?p=1590 Making the leap to Open Infrastructure 2.0 - Architect and enable OpenStack clouds AND to ensure that CI/CD automation, containerization of applications, microservices, and the orchestration of these services all happen in a new, frictionless environment where barriers between the siloes dissolve, processes change, and skills develop.

The post Open Infrastructure 2.0 appeared first on Solinea.

]]>


The Beginning


In 2006 there was AWS EC2. It was used by forward-thinking application developers who soon saw it as a way to quickly deploy and automate the scaling of infrastructure. Then came the advent of open source IaaS platforms like Cloud.com and Eucalyptus. Some brave souls ventured out to implement private and public clouds with basic AWS-like services for internal and external consumption, companies like Tata Communications and Korea Telecom (I worked with both of these customers).


When OpenStack entered the scene in 2010—largely as an antidote to the perceived shortcomings of the other open source cloud projects—people were skeptical at first. Many derided the entire exercise as misguided or wrong-headed. Open source “science projects” worked fine on laptops and in controlled settings, but failed in the data center. Still, the community grew, developers persevered, and soon the big boys joined the party, granting legitimacy to OpenStack.


Today, with the upcoming 13th release of the software, code-named Mitaka, OpenStack is mature, stable and well on its way to becoming THE choice for enterprises implementing agile private cloud infrastructures.


Open Infrastructure 1.0


The story of Cloud.com, Eucalyptus, and OpenStack is the story of Open Infrastructure 1.0—the basic, infrastructure-as-a-service resources behind the firewall that power a private cloud. Armed with basic functionality comparable to AWS (compute, block and object storage, basic networking, authentication, etc.), these services were soon enhanced with SDN, software-defined storage, some timid attempts at PaaS, and workload orchestration templates, among many others.


Today, there are seven core projects and 50+ active projects in the OpenStack “big tent,” the term given to the broad governance structure for all projects, including the seven that are part of “Core” OpenStack (Nova compute, Cinder block storage, Swift object storage, Neutron networking, Keystone authentication, Horizon dashboard, and Heat orchestration).


The team at Solinea helped enterprises implement some of the first Open Infrastructure 1.0 platforms. While these clouds leveraged some legacy hardware investments and were behind the firewall, they did not necessarily touch legacy infrastructure, organization and processes. These early cloud projects were governed under a philosophy that essentially said, “It’s infrastructure. We have a set of processes that have worked for 20 years. Let’s force fit the cloud into those processes.”

Open Infrastructure 1.0
When cloud was new, processes generally remained legacy.

Some of these enterprises implemented successful clouds, among them a top 5 auto maker, a leading research institute, and a leading global network provider. Solinea worked with all of them.


But we soon saw that there was something missing—the benefits of deploying open infrastructure were not as drastic as we’d all anticipated. Where were the big cost savings? Why wasn’t speed to market for new apps faster?


Provisioning times for infrastructure came down, but not much. Existing operations teams were having a difficult time keeping the cloud running smoothly; outside help was needed. Application workloads were moving onto the new platforms, albeit slowly. The expected deluge in demand did not materialize. Perhaps most concerning was that the development, test and operations teams were still as disjointed and uncommunicative as before—write code, throw it over the fence to test it, send it back with a bug report, with no empowerment on the operations side.


Everyone was still executing the same way they were before. Customers were deploying cloud as a technology. #Fail.


Then, sometime in late 2013, things started to change. “DevOps” was entering the vernacular of our prospects and customers; application migration to the cloud was becoming a primary driver; cost was no longer the driving factor, but rather speed and agility. People started looking beyond the technology—reconfiguring processes, re-training operations teams, restructuring their siloed organizations, and incubating cross-functional cloud teams.


We began to engage with customers to address these challenges. In effect, they needed to go beyond OpenStack as a technology. We expanded our team of smart engineers and architects—pros that had been at the forefront of the DevOps movement who knew how to architect massively scalable infrastructures, both private and hybrid. In short, we built a team that understood application architectures for cloud, and knew how to manage large cross-functional programs.


These changes put Solinea at the forefront of creating what we call Open Infrastructure 2.0. The definition is simple. We defined it at a high level in my prior blog post:




Now we expand the definition to a more granular level:

Open Infrastructure 2.0
Open Infrastructure 2.0 unlocks the agility, efficiency and cost advantages of open infrastructure.


As one of our customers likes to say: “If the Infrastructure is the highway, and applications are the cars, we need to get more cars on the highway faster to justify the investment and achieve our agility objectives.”


Making the Leap to Open Infrastructure 2.0


We are well on our way to helping enterprises make the leap from Open Infrastructure 1.0 to 2.0. We’ve worked, and are working, with companies like Deutsche Boerse, Yahoo! Japan America, and a leading US media and entertainment company to architect and enable OpenStack clouds AND to ensure that CI/CD automation, containerization of applications, microservices, and the orchestration of those services all happen in a new, frictionless environment. In that environment, barriers between the silos dissolve, processes change, and skills develop, all driven by the relentless pursuit of business and technology agility and operational efficiency—leading to more “cars on the highway.”
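
To ground that list a little, here is a hedged sketch, in Python with the Docker SDK, of the kind of CI/CD pipeline step we mean: build the application image, then push it to a registry so later stages deploy the exact same artifact. The registry host, image name, and tag are hypothetical placeholders; a real pipeline would derive the tag from the commit under test.

    import docker

    def build_and_push(repository, tag):
        """One pipeline step: build the app image and push it to a registry."""
        client = docker.from_env()
        # Build from the Dockerfile in the current directory.
        client.images.build(path=".", tag=f"{repository}:{tag}")
        # Push so downstream stages (test, staging, production) all deploy
        # the same artifact, not a rebuild.
        for line in client.images.push(repository, tag=tag, stream=True, decode=True):
            print(line)

    # Hypothetical registry and tag, for illustration only.
    build_and_push("registry.example.com/myapp", "abc1234")

The point is less the dozen lines of code than the discipline they encode: every change flows through the same automated build, and the silos share one artifact instead of throwing code over the fence.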


Of course, change takes time, and this is mostly about cultural change after all (see Seth Fox’s excellent blog post on process change).

In the coming days and weeks, we will provide examples from across the Open Infrastructure spectrum of the work Solinea is pioneering, so that you can learn more about how enterprises can successfully adopt Open Infrastructure 2.0.

Author: Francesco Paola, CEO, Solinea

The post Open Infrastructure 2.0 appeared first on Solinea.

Webinar: What’s New in the OpenStack Mitaka Release https://solinea.com/blog/webinar-whats-new-openstack-mitaka-release Sat, 09 Apr 2016 02:38:01 +0000 https://solinea.com/?p=1571 Join OpenStack experts to get the latest on: Key release highlights for Mitaka, What Mitaka new features do, How they work, and what they mean to both pros and beginners...

The OpenStack Mitaka release is coming soon, and as it is one of the biggest releases yet, we are excited to share an overview with you!

Join OpenStack experts to get the latest on: 
  • Key release highlights for Mitaka
  • What Mitaka’s new features do, how they work, and what they mean to both pros and beginners
  • Bug updates and links to technical resources to help you move to open infrastructure faster
Bring your questions for interactive Q&A with other open infrastructure, DevOps, and open source experts on April 12th at 11am PT / 2pm ET in this no-frills, engineer-led technical discussion.



The post Webinar: What’s New in the OpenStack Mitaka Release appeared first on Solinea.

It’s the Enterprise’s Time to Shine: OpenStack Summit Austin https://solinea.com/blog/its-the-enterprises-time-to-shine-openstack-summit-austin Wed, 30 Mar 2016 14:00:59 +0000 https://solinea.com/?p=1495 Stackers from around the world are busy buying airfare and polishing up presentations in anticipation of what will be the biggest gathering of OpenStack developers and users yet...


Stackers from around the world are busy buying airfare and polishing up presentations in anticipation of what will be the biggest gathering of OpenStack developers and users yet.

The OpenStack Summit in Austin, April 25-29, will set the roadmap for the Newton and Ocata releases of the software, feature keynotes from users and visionaries, and dig into architectural, deployment, and operations best practices.


The momentum within the OpenStack community is palpable, and enterprises are leading the way. In many respects, this Summit is as much about enterprise users as it is about the large service providers and telecoms who are deploying public and hybrid clouds powered by OpenStack.


The Solinea team is presenting five talks at the Summit. We hope you’ll check out the entire schedule and choose the sessions that will help you move your OpenStack game plan—whether you’re looking at VMs or containers—to the next level.


DEFCON 3: OpenStack meets the Information Security Department – Monday, Apr 25th, 2:50pm
James Clark presents this vendor-neutral look at real obstacles and architectural solutions for Infosec compliance when deploying OpenStack.


The Test Takers’ Guide to the Certified OpenStack Administrator Exam – Tuesday, April 26th, 2:50pm
Seth Fox joins Susan Wu (Midokura), Jeffrey Olson (EMC), and Ron Terry (SUSE) for a must-see session if you’re preparing to take the COA exam. No, the panel won’t divulge test questions, but attendees will get sage advice on what it takes to achieve certification from the group that helped build the test: panelists whose companies offer hands-on OpenStack training.

Seeking Fame and Fortune in OpenStack Startup Land – Tuesday, April 26, 2:50pm
Francesco Paola joins fellow entrepreneurs Jesse Proudman (Blue Box, an IBM company) and Sumeet Singh (AppFormix) to offer a “been there, done that” view from founders who are prepared to tell their stories, show their scars and point out some of the less-obvious landmines along the path to startup nirvana. Special guest appearance by Randy Bias (Cloudscaling, acquired by EMC).

Rolling Your Own Kubernetes Clusters on OpenStack with Ansible and Terraform – Thursday, April 28th, 3:10pm  
Spencer Smith leads this intermediate-level tour of the community resources available for deploying Kubernetes clusters on any group of servers, as well as how to leverage tools like Terraform to automate the creation of necessary infrastructure. The session concludes with a demo of a full end-to-end deployment of a small cluster.
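
For a taste of that approach, here is a minimal, hedged sketch in Python that chains the two tools the session covers: Terraform creates the OpenStack infrastructure described in local .tf files, then Ansible configures the new hosts into a cluster. The working directory, inventory, and site.yml names are hypothetical stand-ins, not the presenters’ actual code.

    import subprocess

    def provision_and_configure(workdir="."):
        """Provision infrastructure with Terraform, then configure it with Ansible."""
        # Create the servers, networks, etc. defined in the local *.tf files.
        subprocess.run(["terraform", "init"], cwd=workdir, check=True)
        subprocess.run(["terraform", "apply", "-auto-approve"], cwd=workdir, check=True)
        # Bootstrap Kubernetes on the freshly provisioned hosts.
        subprocess.run(
            ["ansible-playbook", "-i", "inventory", "site.yml"],
            cwd=workdir, check=True,
        )

    provision_and_configure()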

App Development in a Dockerized Universe – Thursday, April 28th, 5:00pm
Luke Heidecke and John Stanford will present a development and deployment lifecycle approach, developed by Solinea, that uses Docker containers deployed into an OpenStack cloud. They will describe how each stage of the lifecycle improved, the challenges they encountered, and the solutions they found.


We’ll see you in Austin!

Author: Seth Fox

The post It’s the Enterprise’s Time to Shine: OpenStack Summit Austin appeared first on Solinea.

Winning at Whac-a-Mole: Overcoming Psychological Barriers to DevOps Transformation (second of three) https://solinea.com/blog/winning-at-whac-a-mole-overcoming-psychological-barriers-to-devops-transformation Tue, 29 Mar 2016 14:00:24 +0000 https://solinea.com/?p=1483 Resistance to change is one of the largest barriers to successful enterprise adoption of DevOps.


Author’s note: This is the second in a three-part series about why DevOps transformation can be so hard for an enterprise. Be sure to read Part 1: Overcoming Process Barriers to DevOps Transformation: A Herculean Task.

Resistance to change is one of the largest barriers to successful enterprise adoption of DevOps. This can be especially true when you try to scale the concept beyond a tiger team or skunk works to the entire software development and infrastructure ops resource pools. Resistance to change is human nature, and the mere whisper of forthcoming “change” can elicit anything from passive-aggressive behavior to overt resistance from anyone within shouting distance.


Why? Because change disrupts our “equilibrium.” Change forces us to move beyond our comfort zone in the safe status quo and into an unpredictable, uncertain world beyond our control—and this creates fear.


Popping Up Everywhere


In our work with enterprises all over the world, we see this fear of and resistance to change popping up in all sorts of unproductive ways—like ugly Whac-a-Moles—in the midst of DevOps transformation journeys. You can arrange these anecdotal experiences of corporate organ rejection into roughly three categories.


This isn’t easy stuff. Solving problems of this variety is how large management consulting firms stay in business. But if you face these fears rationally, you can begin to see how the rational self-interest of enterprises can win the day, especially among enterprises whose leaders understand the existential imperative to change.


Without further ado, those three categories of fears we commonly encounter are:

  1. Fear of losing control
  2. Fear of failure
  3. Fear of obsolescence

Now, let’s look at some ways to anticipate where these fears may raise their ugly heads in your DevOps journey and how to keep things moving in a positive direction.

Fight Fiefdoms


We fear losing control. This, too, goes to human nature. When you introduce change, people feel it threatens their ability to maintain control of the environment they own. If things don’t change, at least they know how to keep the lights on, even if the process is less than ideal.


In fact, we are so enamored with control that in many large enterprises, we see evidence that some “event” or out-of-control situation has precipitated the knee-jerk addition of a non-productive, non-collaborative policy to ensure that event never happens again. We see this evidence in policies like “no development access to production systems,” and vice versa. I have heard things like “We don’t do blank in that environment,” where blank could be a technology like DHCP or load balancing.


It is easy to see how these fear-based policies come to be. These environments are managed by different people, each of whom is evaluated independently. Therefore, they each have a strong incentive to maintain control by “protecting the fiefdom.” But DevOps requires the elimination of these fiefdoms and the policies that sustain organizational silos.


When you implement a new process, people will need to collaborate in ways they have not collaborated before. They will need to entrust others with things that have been off-limits in the past. Each organization is different, but finding the “thing” that gets people to collaborate is critical to overcoming this psychological barrier.


Give Deployment an “Easy Button”

Processes that are unnecessarily hard and complex are prone to fail. No one likes to fail.


So, when a transition to DevOps workflows begins, you’ll find that the biggest disbelievers generally have been responsible for deployments in some form or fashion. These people have seen the worst of it. Deploying code that has never been deployed before into a production system is scary.


But herein lies the beauty of DevOps. One of the primary benefits of maintaining a code base that can be deployed at any time is that, to get to that state, you deploy it over and over again. When your code base is continuously integrated and deployed, by the time you get to the production system you will have deployed that code several hundred times (the actual number will vary with the size of the team, the size of the code base, and the length of the project).


If you have performed this deployment over and over again, there should be no surprises when it comes time to stand up an application in production. Practice makes perfect. A code deployment to a production environment should be the easiest thing we do, not the hardest. Give deployment an “Easy Button” and watch the fear evaporate.
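
One way to picture the “Easy Button” is a single deploy entry point that is identical for every environment, so the production run is just the N-hundredth repetition of a well-worn path. Everything below (the script name, the environment list, the artifact) is a hypothetical sketch, not a prescribed implementation.

    import subprocess

    ENVIRONMENTS = ["dev", "test", "staging", "production"]

    def deploy(artifact, environment):
        """Deploy the same versioned artifact the same way, whatever the target."""
        if environment not in ENVIRONMENTS:
            raise ValueError(f"unknown environment: {environment}")
        # No special cases: production runs exactly the code path that
        # dev and test have already exercised hundreds of times.
        subprocess.run(["./deploy.sh", artifact, environment], check=True)

    deploy("myapp-1.4.2.tar.gz", "production")

If production needs a flag or a manual step that dev never sees, that flag is where the fear lives.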


Invest in Training


Another common source of resistance to DevOps is a fear of obsolescence—“If this works, they won’t need me anymore.” What your IT team needs to know is that the DevOps transformation can not only make the company more competitive and successful but also make their jobs more important to the organization than ever before. After all, the success of your company’s DevOps transformation depends to a large extent on the expertise of the individuals charged with architecting, implementing, and operating the infrastructure. Plus, DevOps will considerably improve the day-to-day work life of IT professionals by eliminating unproductive processes that generate conflict, dissension, deployment failures, and the like.


One of the best ways to establish trust in this regard is to invest in your people. Provide training and mentoring to ensure they have the necessary knowledge and skills to execute and sustain your DevOps transformation.


The fears associated with change are not always rational, but they are real and potentially destructive to your DevOps journey. Keep these fears—like Whac-a-Moles—out of play by anticipating them and facing them head on.


Read the final post of this series, where we get down to business with tips on taking care of several of the most common business barriers to DevOps transformation.

Author: Seth Fox

The post Winning at Whac-a-Mole: Overcoming Psychological Barriers to DevOps Transformation (second of three) appeared first on Solinea.
