One of the most underestimated and misunderstood elements of Cloud enablement and Business Digital Transformation is that without first embracing and creating an Agile and automated framework for Development & Operations, the rest of Cloud Adoption is simply “window dressing”. Done right, DevOps is THE enabling foundation and integrated framework of process + tool automation (-> CI/CD) that supports many if not most of a company’s Cloud adoption benefits.
(this section is taken from our article : Enterprise Architecture 101 : Part 1 – From Frameworks & Methodologies.. to Agile Cloud Enablement *)
As described in the History of Agile, the Spring of 2000 initiated talks at a ski resort, where several in the SW development community gathered and began converging on a shared emphasis to promote and progress ideas in “Light methodologies, such as Extreme Programming, Adaptive Software Development, Crystal, and SCRUM“. While this initial meeting didn’t result in one formal consensus, it did result in several articles that started conversations.
.. Then on February 11, 2001, at a lodge in Snowbird (a ski resort), seventeen members of this extended community aligned their objectives to become the “Agile Alliance”, and together wrote the Agile Manifesto and the 12 principles of Agile SW Development.
Note that the 12 principles of the Agile Manifesto came before, and are distinctly different from, the 12 Factor Application methodology that several Heroku developers contributed to, published in 2011 by Heroku co-founder Adam Wiggins. The latter (12 Factor App) lends itself toward more prescriptive SW-development-as-a-service or “stateless” microservice guidelines for Application Architecture (stay stateless, don’t hard-code config info but store it in the environment, isolate dependencies, scale horizontally, and build-deploy-run in an iterative model..). The 12FactorApp principles come more into context when discussing the post-DevOps (post-2008) movement, where “Cloud Native” development and concepts flourished (see the next section). Note that while stateless was the initial focus, the reality within the Enterprise space has begun to extend the need toward “Stateful” cloud-native architectures and capabilities, where data locality contributes to enterprise business-critical requirements & SLA’s (latency, availability, regulatory compliance, security, etc).
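As a small illustration of one of those factors (externalized configuration), here is a minimal Python sketch; the variable names are purely hypothetical placeholders:

```python
import os

# 12-Factor style: configuration comes from the environment, not from
# hard-coded values or checked-in config files. The variable names
# (DATABASE_URL, CACHE_TTL_SECONDS) are illustrative placeholders.
DATABASE_URL = os.environ["DATABASE_URL"]                    # fail fast if missing
CACHE_TTL = int(os.environ.get("CACHE_TTL_SECONDS", "300"))  # optional, with a default
```

The same build artifact can then be promoted unchanged from Dev to Test to Prod, with only the environment differing per deployment.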
As you can see from the above, the founders of Agile came from the SW development world, but adopted and incorporated MANY ideas and leveraged the best of several models (from several disciplines beyond SW development) to develop the Agile methodology (Continuous Improvement, small teams, customer-directed discovery/requirements, end-to-end ownership of SW, iterative/regular readouts on progress, etc).
Ironically, several years prior (1994), while leading my company’s end-to-end SW re-architecture and development efforts, I wrote my MBA dissertation on agile SW Development through Continuous Improvement and TQM, incorporating most of these concepts from a real-world vantage point. Though I never published it beyond my University, I was on the right track and got my MBA (since then it’s been a part of required course materials). Since that point in time, these themes and principles have become a part of my DNA 😉 .
Taking a step back and being fair, Agile (more holistically than only within the SW-development-specific context) is neither a Methodology, nor a specific Framework or process, but more so a collection of principles and a philosophy of reducing wasted cycles.. delivering more rapid, continuous value.
*As such, we need to acknowledge that Agile & Continuous Improvement themes came from and built upon earlier movements, as we discuss in our article : Enterprise Architecture 101 : Part 1 – From Frameworks & Methodologies.. to Agile Cloud Enablement **
SDLC with Agile is a more traditional “Enterprise” SW development methodology applying “Agile” methods :
Upon the arrival of tools like Ansible, Chef, Puppet, and the more comprehensive Terraform (one of many HashiCorp tools), the concept of Infrastructure as Code (IaC) was born (aka automation of Operations and Configuration Management tasks). Together these tools enabled developers/sysadmins to script & REUSE the formerly manual, command-line-driven process of maintaining/configuring infrastructure. Through use, re-use, and extension of these tools/techniques, the PROCESS of building & maintaining systems and infrastructure (deployment environments) began to closely resemble the way that SW developers build and maintain their applications and source code. Note in the diagram below from edureka! that these tools and frameworks typically come as either a “push” or “pull” (poll) model.
See the section below on FOSS tools for automation for a deeper walkthrough of these tools & technologies.
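To make the IaC idea concrete, here is a toy Python sketch of the declarative, idempotent model these tools share (describe the desired state, diff against the current state, apply only the difference). Nothing here is any real tool’s API; the host names and settings are made up:

```python
# Desired state: what the environment SHOULD look like.
desired = {"web-01": {"pkg": "nginx", "port": 80},
           "web-02": {"pkg": "nginx", "port": 80}}

# Current state: what actually exists (web-01 has drifted, web-02 is missing).
current = {"web-01": {"pkg": "nginx", "port": 8080}}

def plan(desired, current):
    """Compute the minimal set of changes to converge current -> desired."""
    changes = []
    for host, want in desired.items():
        have = current.get(host)
        if have is None:
            changes.append(("create", host, want))
        elif have != want:
            changes.append(("update", host, want))
    return changes

for action, host, cfg in plan(desired, current):
    # A push-model tool would now ship this change to the node;
    # a pull-model agent would instead poll for it and apply it locally.
    print(action, host, cfg)
```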
At the same time that IaC was gaining traction, many competing efforts, tools, and frameworks to automate and centralize the Management of these many virtualized commodity environments grew rapidly.
Many ad-hoc tools began to fill the void in the Application Container space (today’s Docker and the competing CoreOS rkt), offering a lighter-weight way to virtualize, image, and rapidly deploy services without full-fledged, heavyweight VMs/Hypervisors.
Given that consolidating workloads (or constituent services) within Containers ultimately shares ONE underlying OS Kernel, if/when that OS goes down or fails, all of the Containers that rely upon it will also go DOWN. This is one of the many architectural considerations that must be weighed up-front when doing Cloud-Native Development, or Modernizing/Re-Factoring Legacy Applications. See my other Cloud and Application Architecture articles for an in-depth understanding.
Hence, when deploying Applications/Services to Containers, one must also factor in the RAS (Reliability Availability Scalability) capabilities, in addition to Orchestration, Scheduling, and Monitoring (for faults, etc). This is the realm that Kubernetes has become the standard for managing.
Kubernetes also offers you an additional layer of abstraction to manage, above the underlying hardware (or IaaS, Infrastructure as a Service, layer) that ultimately hosts the VMs or bare-metal infrastructure running the container workloads.
You can think of Kubernetes as analogous to a basic subset of VMware’s vCenter vRA/vRO (vRealize Automation and Orchestration packages for managing VMs), but targeting Container-centric management (and, with super-set components like Knative/Istio, a micro-services ecosystem for serving, building, and eventing). While Kubernetes (also abbreviated K8s) can provide additional structure, orchestration, and management mechanisms for container-based deployments, the amount of time and effort required to manually configure & deploy K8s (and ensure it’s secure and extensible) can be very daunting.
For this reason, many have moved to “canned” K8s frameworks such as PKS, OpenShift, Heptio (acquired by VMware), Rancher, etc.. or considered utilizing integrated Hybrid/Public cloud platform services (eg. AWS EKS managed control plane, Google GCP’s GKE, or Azure’s AKS services).
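Once a cluster is up (managed or self-built), it can also be driven programmatically. A minimal sketch using the official Kubernetes Python client, assuming a valid local kubeconfig:

```python
from kubernetes import client, config  # official Kubernetes Python client (pip install kubernetes)

# Load credentials from the local kubeconfig (~/.kube/config);
# code running inside a cluster would use config.load_incluster_config() instead.
config.load_kube_config()

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```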
While VMware has been very aggressive in extending its local/private customer SDDC/VCF deployments into the public cloud space (eg. VMware Cloud on AWS), it should also be noted that VMware has recently acquired several companies for container/Kubernetes capabilities (Heptio, including 2 of the 3 Kubernetes co-founders), and will be offering Kubernetes and Container management within vCenter out of the box (via VMware’s Project Pacific and Tanzu) in the near future.
Related to VMware’s Kubernetes/Container futures and Dev/Ops orchestration (as presented at VMworld’19), with VMware’s recent acquisition of Bitfusion you can most likely expect to see integrated GPUaaS/FPGAaaS capabilities not too far down the road. Much as you provision and align fractional CPUs (cores) to VMs today (or, as noted above, to containers with K8s coming), you will also be able to provision and spin up fractional GPU/FPGA cores for VMs (and/or containers) within this same managed framework, over the network.
Abstraction is alive and well, continuing the consolidation and simplification of managing several disparate components/systems/environments from fewer and fewer interfaces, where a unified SPOG (single pane of glass) interface will be the preference for most.
*See the Top FOSS Tools Section below for a more detailed look at Containers and Kubernetes, and as importantly, the list of related Security Vulnerabilities described in the Drawbacks and Challenges section below. *
When Patrick Debois and Andrew Clay Shafer met at the Agile 2008 Conference around “Agile Infrastructure”, the connection that became DevOps was born. Patrick later coined the term “DevOps” to represent the merging of Development and Operations.
As a contrast to more traditional Development models, today’s DevOps is intended to bypass the serial hand-off between Development and disjointed Operations.. integrating them together and focusing on “Automation” of Testing/Integration, decreasing the time from Develop -> Deploy.
In terms of the contrast between a traditional “Waterfall” development model vs. Agile SW Dev, see the diagrams below :
At the same time that technology was rapidly advancing, we need to look back and recognize the impact of the economic downturn that simultaneously hit everyone, with several years of Global Recession & recovery (whose remnants we still feel today). We need to realize that, in part, this downturn in Corporate spending led to many of today’s movements toward Efficiency, and doing More with Less :
The following table from an IBM blog shows comparisons between these common configuration management and provisioning tools/frameworks :
From the gruntwork.io blog, you can see how Ansible and (much more dramatically) Terraform have grown MUCH more rapidly, while Chef and Puppet were on the decline between 2016 & 2019 :
Given that GitHub is among the top repositories, no DevOps discussion is complete without an overview of what it offers and some of its key concepts and capabilities :
Containers are not a new technology, but rather one that was ahead of its time and has today found a perfect match in Cloud-Native development/testing and Micro-Service-based deployments. The first “Production Ready” Containers were actually released in 2004 by Sun Microsystems as part of Solaris 10 (and later rebranded as “Zones”), but with few marketed use-cases (and no DevOps community movement at the time to promote them). From 2004-2010 Sun Microsystems open-sourced its Solaris OS code, exposing its Containers, DTrace, and ZFS capabilities for other OS/Linux variants to decipher and reverse-engineer within their subsequent versions.
*Even though Linux containers weren’t released until 2008, and Docker Containers not until 2013, Docker has won the battle as a standard.
Today’s Containers (aka Docker, since it has become the standard) offer lightweight, rapid, non-VM workload virtualization within a multi-tenant environment. Containers share the same underlying OS, requiring only an application and related libraries to be loaded within each container as a “container image”, so startup is very fast compared to loading an entire OS within a VM. This allows running many environments on very little hardware (with no VM overhead), and is well suited for cloud-based or locally isolated, elastically scalable Development/Testing & DevOps (where enterprise requirements such as high performance, high availability, security, and the fine-grained production workload isolation of a VM or even a bare-metal system aren’t as stringent).
In contrast to conventional virtualization using VMs (hypervisors), where VMs are designed to encapsulate and run an entire Operating System along with multiple workloads/services, application containers are designed to package and run a single service or application (along with the required libraries within the container image). These isolated workloads, run in Containers (predominantly Docker), share the same single underlying OS Kernel.
In this sense, a container-based deployment running on bare-metal (without a hypervisor, and not within VMs) can save enormous amounts of memory and CPU capacity & utilization, given that only one copy of the OS is loaded and run on the physical HW, and that only applications/libraries are loaded within individual containers.
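To make the lightweight model concrete, here is a minimal sketch using the Docker SDK for Python (assuming a local Docker daemon is running):

```python
import docker  # Docker SDK for Python (pip install docker)

# Connect to the local Docker daemon via environment defaults.
client = docker.from_env()

# Run a tiny container: only the alpine image's files are loaded and the
# process starts on the host's existing kernel -- no guest OS boots,
# so startup is near-instant compared to powering on a full VM.
output = client.containers.run("alpine:latest", ["echo", "hello from a container"])
print(output.decode())
```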
The other, often overlooked, aspect of performance “overhead” (penalty) with VMs (traditional consolidated IaaS deployments) is that in a hypervisor-“virtualized” deployment, much more P2V (physical-to-virtual) translation occurs, incurring additional latency and performance penalty (translating between physical components and multiple virtual instantiations, along with additional layers of system/library calls and driver stacks to traverse per call). Some have published performance penalties as high as 12-20% PER VM vs. bare-metal deployments.
While a container-based approach can offer much more resource-efficient (and dense) consolidation, it also requires further Reliability/Availability and management considerations, given that if the underlying OS fails.. then all of the Containers relying upon it will also fail. For this reason, additional orchestration and fault monitoring is required (see Kubernetes, below), not to mention SW-architectural availability considerations (minimizing cascading failures in a “Micro-Services”-based deployment, etc).
For more information, see https://www.docker.com/what-docker
In addition to our discussion earlier, Kubernetes can be described (from the Kubernetes.io site) as :
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon a decade and a half of experience that Google has with running production workloads at scale, combined with best-of-breed ideas and practices from the community.
Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence.
Created at Netflix, it has been battle-tested in production by hundreds of teams over millions of deployments. It combines a powerful and flexible pipeline management system with integrations to the major cloud providers.
*See the final 2 sections for examples of Jenkins / Spinnaker within the context of CI / CD.
One Architectural trend that has come about alongside the Cloud movement (first advocated by cloud providers/vendors and hyperscalers) is to develop using a “Cloud Native” or “Cloud First” approach, such as described in the section above relating to The 12Factor App (Heroku). This centers around scaling horizontally and in parallel by decomposing an Application/Workload into its underlying SW components and core “Services”, then further into their constituent modules, aka “Micro-Services”, for optimal scalability and lightweight (rapid) development cycles.
With Agile DevOps, developers or small (“2-pizza”) teams “Own” the entire lifecycle, becoming experts on just those “MicroServices” from Development -> Testing -> Integration/Staging -> Prod Deployment.
Many of the popular tools of today for DevOps center around Docker Containers / Images (and frequently using a bitnami or other repository) & Kubernetes for Container mgmt and Orchestration.
*Lastly, note that Agile DevOps can be in conflict with the methods, processes, and culture of traditional Enterprise SW development (and Enterprise Architecture principles/methods/reference architectures). Historically these generated large monolithic SOA/J2EE/ESB code-bases and libraries which are very slow to extend/test/deploy, and typically run within Application Servers with shared JVMs, which today are not in line with application design and runtime requirements within the Microservice or CNA (Container-Native App) space, not to mention the rapid growth within Serverless/FaaS (Functions as a Service) deployments and architectures. For Java application environments, many have already moved to more agile frameworks such as Spring (an open-source framework).
See our article : Enterprise Architecture 101 : Part 1 – From Frameworks & Methodologies.. to Agile Cloud Enablement for a more comprehensive review.
Wave 1 of Public Cloud deployments today accounts for only a SMALL % of all business workloads, where Enterprise Applications are the last (and most difficult 80) mile(s)..
It needs to be noted that Enterprise Application environments are unique in many ways from the low-hanging fruit (General Purpose, Dev/Test, Web/stateless environments..) that cloud providers rapidly assisted organizations to re-platform onto their respective Public Clouds as the first wave of rapid cloud adoption.
Note that with most enterprise application environments, the inability to use automated migration tools is the least of your challenges, as described below.
*Enterprise Workloads add several distinct characteristics and complexities :
In summary, while “stateless” services have been the focus for “cloud-native” development and refactoring, as noted above, the recent push and Enterprise focus has begun to extend (or somewhat backtrack) toward accommodating these “stateful” application architectures.
Note that much work continues on the Enterprise Workload front, given the stark reminder that MOST workloads today are NOT running in a public cloud footprint.
*This final push for capabilities and movement in the Enterprise space (which has driven rapid growth in Hybrid & Multi-Cloud on-premises deployments) reflects, as a small reminder, the FUD that businesses have been fed over the past few years, given the reality that ONLY ~20% of workloads are running in Public Clouds today !
.. as Google vice president of engineering Eyal Manor noted at Google Cloud Next 2019, ~80% of Enterprise Applications still reside On-Premises and not in the Public cloud space :
“Today, 80 percent of workloads are still not in the cloud.”
Today, DevOps developers might spend half or more of their time on Operational aspects (integrating various CI/CD tools, maintaining integrations, and constantly updating the CI/CD tool chain as new technologies are released). Because of this, many/most do Continuous Integration today, but few are doing Continuous Delivery & Deployment.
Whereas Enterprise Applications can be started, stopped, and/or failed-over as a single entity, Application Environments composed of many Micro-Services need additional external “watchdog” or HA-like monitoring capabilities (startup/shutdown/health probing) in the event that one or several microservices fail.
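A common building block for such probing is a lightweight health endpoint exposed by each service, which the watchdog/orchestrator (eg. a Kubernetes liveness probe) can poll. A minimal Python sketch using only the standard library; the path and port are illustrative:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Answer watchdog/orchestrator health probes for this microservice."""
    def do_GET(self):
        if self.path == "/healthz":   # illustrative probe path
            self.send_response(200)   # 200 = alive; repeated probe failures trigger a restart
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), HealthHandler).serve_forever()  # port is illustrative
```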
In order to prevent cascading failures with MicroServices, architectural design patterns and mechanisms have been developed, such as the “Circuit Breaker” (aka fail fast), among others like the “sidecar” pattern.
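A toy Python sketch of the circuit-breaker idea follows (the thresholds are arbitrary; production systems would use a library or a service-mesh sidecar rather than hand-rolling this):

```python
import time

class CircuitBreaker:
    """After max_failures consecutive failures, 'open' the circuit and fail
    fast for reset_timeout seconds instead of hammering a struggling
    downstream microservice (which is how failures cascade)."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit opened, else None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None          # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            raise
        self.failures = 0                  # success closes the circuit
        return result
```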
(Using standards to “minimize” the number of physical components & configurations/versions/pkgs, patches, FW.. is at the core of reducing IT Service Management/ITIL & Enterprise Architecture COMPLEXITY.)
When application architecture does not depend on the existence of other (re-usable, consistent, tested) libraries/frameworks, it entices the developer to utilize components from potentially disparate (non-validated) repositories/hubs (eg. GitHub, npm, etc).
Today’s developers can readily grab any of the following, without the ability to know completely “what” previous changes, sub-component updates, or “Untested vulnerabilities” might lie lurking within :
By allowing Developers control and ownership over the entire DevOps life-cycle (from Dev -> Test -> Prod..), a lack of “unity” in standards and configurations between teams and environments can result (which in turn breeds future Production-related COMPLEXITY & extended time to diagnose).
The following graph demonstrates how many permutations/combinations you get (from the traditional 7-layer stack : App, DB, OS, Hypervisor, Server, Storage, Network) as you increase the number of unique configurations per layer.. we quickly get exponential RISK, Exposure, and COMPLEXITY to manage ! :
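The arithmetic behind that graph is easy to sketch in Python: with c unique configurations allowed at each of the 7 layers, there are c^7 possible stack permutations:

```python
LAYERS = ["App", "DB", "OS", "Hypervisor", "Server", "Storage", "Network"]

for configs_per_layer in (1, 2, 3, 4):
    combos = configs_per_layer ** len(LAYERS)  # c^7 possible unique stacks
    print(f"{configs_per_layer} config(s) per layer -> {combos} stack permutations")

# Output: 1 -> 1, 2 -> 128, 3 -> 2187, 4 -> 16384. The growth is exponential,
# which is why standardizing on fewer configurations per layer tames complexity.
```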
We have all heard of the Heartbleed (OpenSSL/TLS) & VENOM (Xen/KVM..) security holes and/or attacks, as well as the Equifax (Apache Struts framework bug) breach of personal/SSN records for nearly HALF of ALL Americans (~150 Million people !), including hundreds of thousands of Credit Card account records.
With ANY use of large Open Source Software (FOSS) distributions and frameworks, Security issues will always be possible.
The fundamental reasons being :
A> EVERYONE has access to the source code, as well as public visibility into any/all testing vulnerabilities exposed.. BEFORE the issues are resolved..
and
B> Patches only protect an organization AFTER the affected components have been IDENTIFIED within it. (eg. the Equifax breach related to an Apache Struts framework patch that had been available for several months.. but Struts was only 1 of MANY disparate components deployed in various configurations/versions across many systems & environments, requiring significant effort just to IDENTIFY the patches needed as part of Security Management activities).
A new GitHub-based Security Guide For Developers is a reasonable high-level, dev-centric checklist to start with.
Today’s DevOps world is in reality becoming more of a DevSecOps world, with the realization that Security needs to be factored in at all levels/ layers, especially when dealing with Enterprise class SLA’s/ SLC’s.
Unfortunately, as with all new things that are Open Source, prying eyes abound, including those that monitor “day-1 vulnerabilities” (aka those identified but not yet fixed/patched). Container and Kubernetes deployments have recently undergone a great amount of scrutiny from a Security vantage point, and for GOOD reason.
If you want to take a look at current security issues with each, take a scary peek at these lists :
Yes, > 1000 vulnerabilities exist for Containers !
For this reason, while not always publicized, many (if not most) of today’s managed container platform (PaaS) services actually spin up Containers WITHIN VM’s !
You might ask why in the world they would do so. Beyond container vulnerability issues, it’s also related to the movement toward a “security-first” approach (above and beyond a “least-privilege” or “zero-trust” stance), where complete tenant isolation is better guaranteed from within a VM/hypervisor than within a shared OS kernel, given the current state of loose container security affairs. Any breached container, or process with sudo/root or back-door permissions, can wreak havoc across all containers on that system (and potentially the entire Cluster), since a single underlying OS/kernel is shared among the container tenants on each node.
Beyond the organizational, cultural, and procedural aspects of the DevOps world, completing the full spectrum of Automated Builds, Testing, Integration, and Delivery/Deployment is the pinnacle of agility. This is the area that CI/CD is targeted to fulfill and extend.
Continuous Integration (CI) is the process that Developers (should) utilize across the software life-cycle to continuously integrate all their disparate Source Code builds within a shared repository (as many times a day as the code changes, and typically utilizing automation). This allows for shared visibility, testing, and integration of all the pieces together as early as possible (so that issues/errors can be found and resolved as early in the process as possible). Jenkins is today’s reference toolset “du jour” for CI activities.
Note that with complex “refactoring” projects (moving from a Monolithic to a Microservices-based architecture), one prior application & large block(s) of source code might be decomposed into dozens, hundreds, or even thousands of individual “microservices”, where a separate DevOps team may be individually responsible for a single Microservice or group of them (often with little visibility into the other code they must integrate and function with) to complete the functionality of the refactored application. By utilizing an integrated suite of automation tools, developers can establish a pipelined CI approach where thorough testing/integration phases are automated and results made visible.
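As a conceptual sketch only (a real pipeline would be defined in a Jenkinsfile or equivalent, not ad-hoc Python), a fail-fast CI stage runner might look like this; the stage commands and image tag are illustrative:

```python
import subprocess
import sys

# Illustrative stages: each must pass before the next runs (fail fast),
# so errors surface as early in the process as possible.
STAGES = [
    ("lint",  ["python", "-m", "flake8", "."]),
    ("test",  ["python", "-m", "pytest", "-q"]),
    ("build", ["docker", "build", "-t", "myservice:latest", "."]),  # hypothetical image tag
]

for name, cmd in STAGES:
    print(f"--- stage: {name} ---")
    if subprocess.run(cmd).returncode != 0:
        print(f"stage '{name}' failed; stopping the pipeline")
        sys.exit(1)

print("all stages passed; artifact is ready for delivery/deployment")
```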
The following offers you the progression from a CI framework, to a more inclusive CI/ CD process :
.. and some have gone to great lengths to extend Jenkins beyond CI to also handle Continuous Delivery & Deployment, as depicted in the following examples (note that Spinnaker is typically the CD counterpart to Jenkins, as described in our next section).
Continuous Delivery (CD) is the next aspect of a fully functional CI/CD environment, where developers automate the build, testing, and preparation for MANUAL deployment(s). Note that while Continuous Delivery leaves the final deployment as a manual step, its close cousin “Continuous Deployment” is the automated counterpart, where the successful completion of all tests automatically deploys the application/service components to Production (or other) environments. Today’s CD counterpart toolset to Jenkins (CI) is most often Spinnaker, originally developed by Netflix and more recently backed by Google. The “Concepts” page at the Spinnaker site gives an excellent overview of the depth and breadth the toolset offers. Note that many/most “mature” organizations utilize both Jenkins + Spinnaker, leveraging each of their strengths.
Gartner sums it up as “DevOps as a tool-centric philosophy that supports a Continuous Delivery value chain.”
Again, it should be noted that DevOps without CI/CD (+ oversight and visibility) can be a scenario of automating things without structure and uniformity, which can easily result in greater complexity that will require more time & effort to address at some point. While these toolsets are powerful, they don’t come pre-configured and require some time to set up and customize. See our article and discussion of challenges/considerations for managing DevOps via Enterprise Architecture in a CloudNative World.
For a greater examination of the Jenkins/Spinnaker toolsets, see the article CI/CD- Continuous Integration, Continuous Delivery/Deployment .
Let us know what you think. Please Comment and/or Retweet Below with any Q’s, thoughts, or requests for future content.
Visit our Contact page for further ways to reach out.. or participate by becoming a contributor. 🙂
All content and comments made are that of TechThought.org, the author, and not of current nor past employers.