
DevOps 101 : From Waterfalls & IaC to Agile Cloud-Native Development with CI/CD

Revised from an article originally published @ Oracle.com on October 3, 2017
by: Todd Jobson | Sr. Principal Enterprise Architect; MBA



One of the most underestimated and misunderstood elements of Cloud enablement and Business Digital Transformation is that without first embracing and creating an Agile, automated framework for Development & Operations, the rest of Cloud adoption is simply “window dressing”.    Done right, DevOps is THE enabling foundation: an integrated framework of process plus tool automation (-> CI/CD) that supports many, if not most, of a company’s Cloud adoption benefits.

Outline :

    • Constraints and origins of Traditional SW Development
    • Roots of SW Development Agility
    • An overview of Infrastructure as Code (IaC) & Infrastructure Automation
    • From DevOps to Cloud Native with Microservices & Containers
    • Today’s DevOps Enabling Technology & Benefits (Ansible, Chef, Puppet, Terraform, Docker, Kubernetes)
    • Level setting & challenges of today’s FOSS & DevOps movements
    • Progression towards the Holy Grail of CI/CD (Continuous Integration / Continuous Deployment with Jenkins and Spinnaker)


Constraints and Origins of Traditional SW Development


Most Enterprise Application environments were born under the past, tried-and-true “Waterfall” model of Software Development :

WHAT was MISSING before the late ~1990’s+ ?

(noting that siloed deployments on dedicated HW were considered best practice)

  • No widespread access to the Internet via HTML/Web browsers – these weren’t broadly available until NCSA’s (National Center for Supercomputing Applications) 1993 release of Mosaic, Netscape’s 1994 release of Navigator, and Microsoft’s 1995 release of Internet Explorer.
  • Little or no re-use of assets via large public repositories (Apache, GitHub, JavaScript/npm, container/VM images, etc.) to leverage elsewhere for development.
  • Minimal FOSS (Free / Open Source Software) without SW licensing limitations/restrictions (GNU, pre-Apache).    Don’t forget that the term “Open Source” wasn’t even coined until 1998 !!
  • Many enabling technologies were missing, such as Java’s cross-platform portability and the variety of tools that now offer low-cost, rapid/Agile capabilities (polyglot development with Python/ JavaScript/ Node.js/ JSON, virtualization / OS containers, etc.); their absence restricted and constrained SW development to “siloed” environments bounded within platforms.  (for more details see my article Cloud Architecture 101)
  • Continuous Improvement wasn’t yet in vogue (see my article regarding Enterprise Architecture).
  • Automation of testing and promotion from Dev -> Test -> Prod didn’t exist much beyond C and its “makefiles”.


The roots of SW Development “Agility”

(this section is taken from our article : Enterprise Architecture 101 : Part 1 – From Frameworks & Methodologies.. to Agile Cloud Enablement *)

2001 – The “Agile Manifesto”, aka the “Manifesto for Agile SW Development”

As described in the History of Agile, talks began in the Spring of 2000 at a ski resort, where several in the SW development community gathered and came to a shared emphasis on promoting ideas in “light methodologies, such as Extreme Programming, Adaptive Software Development, Crystal, and SCRUM“.   While this initial meeting didn’t produce one formal consensus, it did result in several articles that started conversations.

.. Then on February 11, 2001 at the lodge in Snowbird (ski resort), seventeen members from this extended community did align their objectives to become the “Agile Alliance”, and ultimately together wrote the Agile Manifesto and the 12 principles of Agile SW Development.

Note that the 12 principles of the Agile Manifesto came before, and are distinctly different from, the 12 Factor App methodology that several Heroku developers contributed to, published in 2011 by Heroku co-founder Adam Wiggins.    The latter (12 Factor App) is a more prescriptive set of guidelines for architecting software as a service or as “stateless” microservices (stay stateless, store config in the environment rather than hard-coding it, isolate dependencies, scale horizontally, and build-deploy-run in an iterative model).   The 12 Factor App principles come more into context when discussing the post-DevOps (post-2008) movement in which “Cloud Native” development and concepts flourished (see the next section).   Note that while stateless services were the initial focus, the reality within the Enterprise space has begun to extend the need toward “stateful” cloud-native architectures and capabilities, where data locality contributes to enterprise business-critical requirements & SLA’s (latency, availability, regulatory compliance, security, etc).

As you can see from the above example, the founders of Agile came from the SW development world, but adopted and incorporated MANY ideas from several disciplines beyond SW development, leveraging the best of several models to develop the Agile methodology (Continuous Improvement, small teams, customer-directed discovery/requirements, end-to-end ownership of SW, iterative/regular readouts on progress, etc).

Ironically, several years prior (1994), while leading my company’s end-to-end SW re-architecture and development efforts, I wrote my MBA dissertation on agile SW development through Continuous Improvement and TQM, incorporating most of these concepts from a real-world vantage point.   Though I never published it beyond my university, I was on the right track and earned my MBA (it has since become part of required course materials).    Since that point in time, these themes and principles have become a part of my DNA 😉 .

Taking a step back and being fair, Agile (viewed holistically, beyond the SW-development-specific context) is neither a methodology nor a specific framework or process, but rather a collection of principles and a philosophy of reducing wasted cycles and delivering more rapid, continuous value.


As such, we need to acknowledge that Agile & Continuous Improvement themes came from, and built upon, earlier movements, as we discuss in our article : Enterprise Architecture 101 : Part 1 – From Frameworks & Methodologies.. to Agile Cloud Enablement.




SDLC with Agile Methods :

SDLC with Agile applies “Agile” methods within a more traditional “Enterprise” SW development lifecycle :


Infrastructure as Code (IaC)

Upon the arrival of tools like Ansible, Chef, Puppet, and the more comprehensive Terraform (one of many HashiCorp tools), the concept of Infrastructure as Code was born (aka automation of Operations and Configuration Management tasks).   Together these tools enabled developers and sysadmins to script & REUSE the formerly manual, command-line-driven process of maintaining/configuring infrastructure.    Through use, re-use, and extension of these tools and techniques, the PROCESS of building & maintaining systems and infrastructure (deployment environments) began to closely resemble the way SW developers build and maintain their applications and source code.    Note in the diagram below from edureka! that these tools and frameworks typically follow either a “push” or a “pull” (poll) model.


See the section below on FOSS tools for automation for a greater walk through of these tools & technologies.


Infrastructure Automation gains acceptance and comes in many forms

At the same time that IaC was gaining traction, advancements and many competing efforts, tools, and frameworks to automate and centralize the management of these many virtualized commodity environments grew rapidly.

Many ad-hoc tools began to fill the void in the Application Container space (today’s Docker and the competing CoreOS rkt), offering a lighter-weight way to virtualize, image, and rapidly deploy services without full-fledged, heavyweight VMs/hypervisors.

Given that workloads (or their constituent services) consolidated within containers ultimately share ONE underlying OS kernel, if/when that OS goes down or fails, all of the containers that rely upon it will also go DOWN.     This is one of the many architectural considerations that must be addressed up-front when doing Cloud-Native development, or modernizing / re-factoring legacy applications.   See my other Cloud and Application Architecture articles for an in-depth understanding.

Hence, when deploying applications/services to containers, one must also factor in RAS (Reliability, Availability, Scalability) capabilities, in addition to orchestration, scheduling, and monitoring (for faults, etc).    This is the realm that Kubernetes has become the standard for managing.

Kubernetes also offers you an additional layer of abstraction to manage, above the underlying hardware (or IaaS, Infrastructure as a Service, layer) that ultimately hosts the VMs or bare-metal infrastructure running the container workloads.

You can think of Kubernetes as analogous to a basic subset of VMware’s vCenter with vRA/vRO (the vRealize Automation and Orchestration packages for managing VMs), but targeting container-centric management (and, with super-set components like Knative/ Istio, a micro-services ecosystem for serving, building, and eventing).    While Kubernetes (also abbreviated K8s) can provide additional structure, orchestration, and management mechanisms for container-based deployments, the amount of time and effort required to manually configure & deploy K8s (and ensure it’s secure and extensible) can be daunting.

For this reason, many have moved to “canned” K8s frameworks such as PKS, OpenShift, Heptio (acquired by VMware), Rancher, etc., or considered utilizing integrated Hybrid / Public cloud platform services (e.g. the AWS EKS managed control plane, Google GCP’s GKE, or Azure’s AKS).


While VMware has been very aggressive in extending its local/private customer SDDC/VCF deployments into the public cloud space (e.g. VMware Cloud on AWS), it should also be noted that VMware has recently acquired several companies for container/Kubernetes capabilities (Heptio, including 2 of the 3 Kubernetes co-founders), and will be offering Kubernetes and container management within vCenter out of the box (via VMware’s Project Pacific and Tanzu) in the near future.

Related to VMware’s Kubernetes/Containers futures and Dev/Ops orchestration (as presented at VMworld ’19), with VMware’s recent acquisition of Bitfusion you can most likely expect to see integrated GPUaaS/FPGAaaS capabilities not too far down the road.    Think of how you provision and align fractional CPUs (cores) to VMs today (or, as noted above, to containers with K8s coming): you will likewise be able to provision and spin up fractional GPU/FPGA cores to VMs (and/or containers) over the network, within this same managed framework.

Abstraction is alive and well, continuing the consolidation and simplification of managing several disparate components/systems/environments from fewer and fewer interfaces, where a unified SPOG (single pane of glass) will be the preference for most.

See the Top FOSS Tools section below for a more detailed look at Containers and Kubernetes and, as importantly, the list of related security vulnerabilities described in the Drawbacks and Challenges section below.



Coining the term “DevOps”

.. Means going from Isolated Developers & Operations teams (separate tool-sets, environments, and systems)  :

… To a UNIFIED approach of Integrating the 2 Environments and Applying “Continuous Improvement” principles


How does Agile “DevOps” Compare to Traditional Development ?

When Patrick Debois and Andrew Clay Shafer met at the Agile 2008 Conference around “Agile Infrastructure”, the connection behind DevOps was born.    Patrick later coined the term “DevOps” to represent the merging of Development and Operations.

In contrast to more traditional development models, today’s DevOps is intended to bypass the serial nature of development and disjointed operations, integrating them together and focusing on the “automation” of testing and integration, decreasing the time from Develop -> Deploy.

In terms of the contrast between a traditional “Waterfall” development model vs. Agile SW Dev, see the diagrams below :



A UNIFIED approach to Integrate Development + Operations, applying “Continuous Improvement” principles :



An Overview of today’s DevOps :  Enabling Technology & BENEFITS :

While technology was rapidly advancing, we also need to look back and recognize the impact of the economic downturn that simultaneously hit everyone with several years of global recession & recovery (whose remnants we still feel today).   This downturn in corporate spending led, in part, to many of today’s movements toward efficiency and doing more with less :

  • FOSS (Free and Open Source Software & Frameworks, as opposed to non-OSS alternatives such as MS .NET-ASP.NET, WebAPI, SQL Server, C#). Stated another way.. FOSS = NO Licen$ing fees nor single-vendor lock-in.
    • LAMP (Linux, Apache Web Server, MySQL DB, and PHP [and/or commonly Python/Perl]), not to mention frameworks built upon these technologies dominating Content Mgmt (WordPress), IoT (Kafka), Big Data (Hadoop), etc..
    • MEAN (MongoDB, Express.js, Angular, Node.js), which offers a framework for dynamic Javascript web app development.
    • + MANY Other OSS Stacks & Frameworks.. such as Pyramid, Django, etc..
  • SW development adoption of lighter-weight “polyglot” (blended multi-language) scripting in Python, JavaScript, etc. (vs. J2EE/C and SOA for apps)


  • Application Container Deployment models/ frameworks (Docker/ Kubernetes).


  • Agile Project Management / SCRUM / Kanban   (doing more with less.. more rapidly, Continuously improving = better products/services getting to market FASTER !)


  • Dev + Ops :   Continuous Improvement & Delivery mechanisms, which have led to “Agile” development tools offering developers more independent “ownership” not only of development, but also of the end-to-end operational lifecycle from Dev -> Test -> Prod  (e.g. integrated DevOps toolsets such as Jenkins, Wercker, etc..).


  • The movement to “Cloud Native” development :  from “monolithic” (heavy-weight) Enterprise application stacks (SOA, ESB, app servers with shared JVMs) to the independent, rapid/automated Dev/Test/Ops (CI/CD) of many smaller “micro-services” that can run as stand-alone services, as well as the leap to Fn / FaaS (Function as a Service) or “Serverless” architecture options (the purest form of utility, low-cost computing.. IF / when / where it can be utilized).


  • API Repositories (beyond FOSS packages/frameworks and Docker Images) enabling more rapid development cycles by re-using already existing “standard” API’s, as shown in the following diagram from Bessemer Venture Partners :




Today’s top FOSS Developer Tools & Frameworks for Automation :

  • Chef is a Ruby-based configuration management tool, and one origin of the term “infrastructure as code”. Chef has the concept of cookbooks, in which you describe your infrastructure in a DSL (domain-specific language) with a little bit of programming. Chef provisions virtual machines and configures them according to the rules in the cookbooks. An agent runs on every server to be configured; it pulls the cookbooks from the Chef master server and applies those configurations to bring the server to its desired state.
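For illustration only (the cookbook path, package, and template names are hypothetical), a minimal Chef recipe describing a desired state might look like this:

```ruby
# Illustrative Chef recipe (e.g. cookbooks/webserver/recipes/default.rb):
# you declare the desired state; the chef-client agent converges the node to it.
package 'nginx'                     # ensure the nginx package is installed

service 'nginx' do
  action [:enable, :start]          # enable at boot and start now
end

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'           # rendered from the cookbook's templates/
  notifies :reload, 'service[nginx]' # reload nginx when the config changes
end
```

Note the declarative style: the recipe states *what* should be true, and the agent works out *how* to get there on each run.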


  • Puppet is also a Ruby-based configuration management tool, like Chef. The configuration code is written using Puppet’s DSL and wrapped in modules. Chef cookbooks are more developer-centric, while Puppet was developed with system administrators in mind. Puppet runs an agent on every server to be configured; the agent pulls the compiled module from the Puppet server and installs the required software packages specified in the module.
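For comparison, a minimal (hypothetical) Puppet module manifest expressing the same desired state might look like:

```puppet
# Illustrative Puppet manifest (e.g. modules/webserver/manifests/init.pp):
# the puppet agent pulls the compiled catalog and enforces this state.
class webserver {
  package { 'nginx':
    ensure => installed,
  }

  service { 'nginx':
    ensure  => running,
    enable  => true,
    require => Package['nginx'],   # ordering: install before starting
  }
}
```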



  • Ansible is another provisioning and configuration management tool (agent-less, using SSH or PowerShell), first released in ~2012 and acquired by Red Hat in 2015.    Ansible takes a light-weight, client-only approach and uses YAML as a uniform configuration/markup language.   It has become a very popular toolset, built around its “playbooks” approach.
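A minimal (illustrative) playbook shows the agent-less, YAML-driven style; the `webservers` inventory group and the use of `apt`/nginx below are assumptions for the example:

```yaml
# Illustrative Ansible playbook (site.yml) -- agent-less: pushed over SSH.
- name: Configure web servers
  hosts: webservers          # inventory group (assumed to exist)
  become: true               # escalate privileges (sudo)
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

You would run this with something like `ansible-playbook -i inventory site.yml`; no agent is installed on the managed hosts.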




  • Terraform (by HashiCorp) is the latest tool (especially where IaaS/PaaS is concerned) to gain popularity in this space, and is more inclusive in nature (offering orchestration and integrations with other services, etc).   It is meant to holistically deploy and configure infrastructure and related services (IaaS), and also follows a client-only architecture.   Terraform is extremely extensible via APIs and integrates with most private/public cloud environments (AWS, GCP, etc), even including some platform services (PaaS, DB..).    Terraform deviates from the other tools listed in being “immutable” (similar to how containers are utilized): it opts for uniformity by provisioning fresh builds of systems, rather than allowing changes to existing configurations (the mutable approach that Chef/Puppet offer).
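A minimal (illustrative) Terraform configuration shows the declarative, plan-then-apply style; the provider, AMI id, and instance type below are placeholder assumptions, not a working deployment:

```hcl
# Illustrative Terraform configuration (main.tf) -- declarative & immutable:
# "terraform plan" shows the diff, "terraform apply" converges to this state.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"   # placeholder AMI id
  instance_type = "t3.micro"

  tags = {
    Name = "devops-101-web"
  }
}
```

Rather than mutating this instance in place later, the typical Terraform workflow changes the configuration and re-applies, replacing resources as needed.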


The following table from an IBM blog shows comparisons between these common configuration management and provisioning tools/frameworks :

Configuration management (CM) tools


From the gruntwork.io blog, you can see how Ansible and, much more dramatically, Terraform grew rapidly, while Chef and Puppet declined between 2016 & 2019 :


Given that GitHub is among the top repositories, no DevOps discussion is complete without an overview of what it offers and some of its key concepts and capabilities :

Container technology isn’t new; rather, it was ahead of its time, and today has found a perfect match in Cloud-Native development/testing and micro-service-based deployments.   The first “production ready” containers were actually released by Sun Microsystems in 2005 as part of Solaris 10 (Solaris Containers, built on “Zones”), but with few marketed use-cases (and no DevOps community movement at the time to promote them) they were a technology before their time.  From 2005-2010 Sun Microsystems open-sourced its Solaris OS code, exposing its Containers, DTrace, and ZFS capabilities for Linux variants to study and adapt within their subsequent OS versions.

Even though Linux containers (LXC) weren’t released until 2008, and Docker containers not until 2013, Docker has won the battle as the standard.

Today’s containers (effectively synonymous with Docker, since it has become the standard) offer light-weight and rapid (non-VM) workload virtualization within a multi-tenant environment.   Sharing the same underlying OS, each container only needs the application and related libraries loaded within its “container image”, so startup time is very fast compared to booting an entire OS within a VM.    This allows running many environments on very little hardware (with no VM overhead), and is well suited for cloud-based or locally isolated, elastically scalable Development/Testing & DevOps (where enterprise requirements such as high performance, high availability, security, and the fine-grained production workload isolation of a VM or even bare-metal system aren’t as stringent).

In contrast to conventional virtualization using VMs (hypervisors), where VMs are designed to encapsulate and run an entire operating system along with multiple workloads/services, application containers are designed to package and run a single service or application (along with the required libraries within the container image).   These isolated workloads running in containers (predominantly Docker) share the same single underlying OS kernel.

In this sense, a container-based deployment running on bare metal (without a hypervisor, and not within VMs) can save enormous amounts of memory and CPU capacity & utilization, given that only one copy of the OS is loaded and run on the physical HW, and only applications / libraries are loaded within individual containers.
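To make the contrast concrete, a minimal (illustrative) Dockerfile might look like this; the base image, file names, and port are placeholder assumptions:

```dockerfile
# Illustrative Dockerfile: only the app and its libraries are packaged;
# the host kernel is shared, so no guest OS boots when the container starts.
FROM python:3.12-slim            # minimal base image (app runtime only)
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]         # one service per container
```

Built with `docker build -t myservice .` and run with `docker run -p 8080:8080 myservice`, the image carries only the application layer, which is why startup is near-instant compared to a full VM boot.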

The other aspect of performance “overhead” (penalty) often overlooked with VMs (traditional consolidated IaaS deployments) is that in a hypervisor-“virtualized” deployment, much more physical-to-virtual translation (aka overhead) occurs, incurring additional latency and performance penalty (translating between physical components and multiple virtual instantiations, along with additional layers of system/library calls and driver stacks to traverse per call).   Some have published performance penalties as high as 12-20% PER VM vs. bare-metal deployments.

While a container-based approach can offer much more resource-efficient (and dense) consolidation, it also requires further reliability/availability and management considerations, given that if the underlying OS fails, then all of the containers relying upon it will also fail.    For this reason, additional orchestration and fault monitoring is required (see Kubernetes, below), not to mention SW-architectural availability considerations (minimizing cascading failures in a “micro-services” based deployment, etc).




For more information, see https://www.docker.com/what-docker


In addition to our discussion earlier, Kubernetes can be described (from the Kubernetes.io site) as :

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon a decade and a half of experience that Google has with running production workloads at scale, combined with best-of-breed ideas and practices from the community.
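As an illustration of that declarative model, a minimal (hypothetical) Kubernetes Deployment manifest might look like the following; the image and port are placeholders:

```yaml
# Illustrative Kubernetes manifest: a Deployment keeps 3 replicas of a
# container running, restarting/rescheduling them if they (or a node) fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25        # placeholder container image
          ports:
            - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, this addresses exactly the RAS and orchestration gap discussed above: the control plane continuously reconciles actual state against the declared 3 replicas.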


CI / CD tools   

    • Jenkins is a Java-based Continuous Integration tool for faster delivery of applications.  Jenkins is typically paired with a version control system like GitHub or SVN.  Whenever new code is pushed to the repository, the Jenkins server can build and test it and notify the team of the results and changes.   While Jenkins is no longer simply a CI build tool, that is the origin of its core strengths and capabilities.   Jenkins is also used as an orchestration tool, building pipelines for application provisioning and deployment, and its newer “pipeline as code” functionality lets you keep entire CI/CD pipelines as code.  (See below for Jenkins integration examples within the CI/CD sections.)
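As an illustration of “pipeline as code”, a minimal declarative Jenkinsfile might look like the following; the `make` targets and branch name are placeholder assumptions:

```groovy
// Illustrative declarative Jenkinsfile, checked into the repository so the
// CI/CD pipeline itself is versioned alongside the application code.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
        stage('Deploy') {
            when { branch 'main' }        // deploy only from the main branch
            steps { sh 'make deploy' }
        }
    }
    post {
        failure { echo 'Notify the team of the broken build.' }
    }
}
```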

Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence.
Created at Netflix, it has been battle-tested in production by hundreds of teams over millions of deployments. It combines a powerful and flexible pipeline management system with integrations to the major cloud providers.


See the final 2 sections for examples of Jenkins / Spinnaker within the context of CI / CD.



Agile “DevOps” & Cloud Native Development (aka “Cloud First”)

One architectural trend that has come about alongside the Cloud movement (first advocated by cloud providers/vendors and hyperscalers) is to develop using a “Cloud Native” or “Cloud First” approach, as described in the section above on the 12 Factor App (Heroku).   This centers around scaling horizontally and in parallel by decomposing an application/workload into its underlying SW components and core “services”, then further into their constituent modules, aka “micro-services”, for optimal scalability and light-weight (rapid) development cycles.

With Agile DevOps, developers or (2 pizza) teams “Own” the entire lifecycle, becoming experts on just those “MicroServices” from Development -> Testing -> Integration/ Staging -> Prod Deployment.

Many of the popular DevOps tools of today center around Docker containers / images (frequently using Bitnami or another repository) & Kubernetes for container management and orchestration.

DevOps & Cloud Native movements can be in conflict with Enterprise methods/procedures

Lastly, note that Agile DevOps can conflict with the methods, processes, and culture of traditional Enterprise SW development (and Enterprise Architecture principles / methods / reference architectures), which historically generated large monolithic SOA/ J2EE/ ESB code-bases and libraries that are very slow to extend/test/deploy and typically run within application servers with shared JVMs.    These are out of line with application design and runtime requirements in the microservice or CNA (Container-Native App) space, not to mention the rapid growth in Serverless / FaaS (Functions as a Service) deployments and architectures.    For Java application environments, many have already moved to more agile frameworks such as Spring (an open-source framework).

See our article : Enterprise Architecture 101 : Part 1 – From Frameworks & Methodologies.. to Agile Cloud Enablement for a more comprehensive review.



Level setting & Challenges of today’s FOSS / DevOps / Cloud-Native movements


Wave 1 of Public Cloud deployments today accounts for only a SMALL % of all business workloads; Enterprise Applications are the last (and most difficult) miles..

It needs to be noted that Enterprise Application environments differ in many ways from the low-hanging fruit (general-purpose, Dev/Test, web/stateless environments) that cloud providers rapidly helped organizations re-platform onto their respective public clouds as the first wave of rapid cloud adoption.

Note that with most enterprise application environments, the inability to use automated migration tools is the least of your challenges, as described below.

Enterprise workloads add several distinct characteristics and complexities :

      • Business or Mission Critical SLA’s & SLO’s :
        • Transaction response-time and/or latency requirements
        • On-premises regulatory/compliance requirements (data locality, etc)
        • Disaster Recovery & Business Continuity requirements (99.999% uptime, time to recovery, RPO/RTO’s, etc)
      • Long-standing organizations and teams trained on prior technology that aren’t familiar with “Cloud” operations.
      • Long-standing cultures, methods, procedures, and operating models/frameworks supporting production environments.. most of which will be incompatible with the future-state target architectures.
      • Many large mission-critical environments are not only monolithic applications (and code-bases), but are often designed to scale vertically (within systems > xTB of memory and dozens of CPU cores) beyond what cloud providers offer, not to mention other stateful design aspects.
      • The number of integration points to other mission-critical environments or key components (ESB’s, etc) can be daunting, and these often integrate with legacy HW or SW (frequently no longer supported by vendors, with no remaining developers within the company, and many times with missing source code).   This in and of itself can make migration or re-platforming very RISKY and very costly, and for most the path to cloud enablement has therefore resulted in a Private or Hybrid/ Multi-cloud realization.   For this reason (among others such as total cost), a movement to repatriate footprints back on-premises from initial public cloud deployments has been quietly growing within the enterprise space.
      • Formal and detailed Enterprise Architecture and assessment is required up-front and throughout the process (not a casual “let’s just migrate your VM’s and optimize things later”.. aka, migrate them over and you’re locked in).

In summary, while “stateless” services have been the focus of “cloud-native” development and refactoring, as noted above, the recent Enterprise push has begun to extend (or somewhat backtrack) toward accommodating these “stateful” application architectures.

Note that much work continues on the Enterprise Workload front, given the stark reminder that MOST workloads today are NOT running in a public cloud footprint.

This final push for capabilities and movement in the Enterprise space (which has driven rapid growth in Hybrid & Multi-Cloud on-premises deployments) is a small reminder of the FUD that businesses have been fed over the past few years, given the reality that ONLY ~20% of workloads are running in public clouds today !

.. as Google vice president of engineering Eyal Manor noted at Google Cloud Next 2019, ~80% of Enterprise Applications still reside on-premises and not in the public cloud space :

“Today, 80 percent of workloads are still not in the cloud.”


Developers spending 50% of their time Configuring, Integrating, and modifying the Ops Tools

Today, DevOps developers might spend half or more of their time on operational aspects (integrating various CI/CD tools, maintaining those integrations, and constantly updating the CI/CD tool chain as new technologies are released).     Because of this, many/most do Continuous Integration today, but few are doing Continuous Delivery & Deployment.


Microservices require more Integration / Orchestration of modules in the event of stand-alone component Failures.

Where Enterprise applications can be started, stopped, and/or failed-over as a single entity, application environments composed of many micro-services need additional external “watchdog” or HA-like monitoring capabilities (startup / shutdown / health probing) in the event that one or several microservices fail.

To prevent cascading failures with microservices, architecture design patterns and mechanisms have been developed such as the “Circuit Breaker” (aka fail fast), among others such as the “sidecar” pattern.
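The circuit-breaker idea can be sketched in a few lines of Python (an illustrative toy, not any particular library’s API): after a threshold of consecutive failures the breaker “opens” and subsequent calls fail fast, until a reset timeout allows one trial call through.

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch (illustrative only): after
    `max_failures` consecutive errors the circuit "opens" and calls fail
    fast; after `reset_timeout` seconds one trial call is allowed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None            # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None        # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()   # trip the breaker
            raise
        self.failures = 0                # success closes the circuit
        return result
```

The point of failing fast is that a caller stops hammering a struggling downstream service, giving it time to recover instead of propagating the failure upstream.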



NOTE:  The following aspects can be impacted by advocating independence and autonomous ownership of environments :

(using standards to “minimize” the number of physical components & configurations/versions/ pkgs, patches, FW.. is at the core of reducing IT Service Management /ITIL & Enterprise Architecture COMPLEXITY )


Independence from Tested / Integrated Frameworks

When application architecture does not depend on the existence of other (re-usable, consistent, tested) libraries/frameworks, it entices the developer to pull components from potentially disparate (non-validated) repositories / hubs (eg. GitHub, npm, etc).

Reduced “Visibility“ and lack of detailed Revision Control / Configuration Management

Today’s developers can readily grab any of the following, without any way to know completely “what” previous changes, sub-component updates, or untested vulnerabilities might lie lurking within :

  • Ability to grab/deploy Images of complete systems (OS+) and/or Application Environments in various states (via Docker Images.. either locally or from bitnami/ other repositories), typically without rigorous tracking/documentation of past history.
  • Ability to grab/deploy Product Stacks (eg. Hadoop,…) either within “images”, packages, etc.
  • The ability to easily grab/deploy smaller code snippets/modules from repositories (vs. using “verified”/validated APIs/libraries) is a double-edged sword (re-use is good, but untested variations are not).

Increased Complexity + Reduced Consistency

=  Potential for > Time to Diagnose + > Risk (due to uncertainty/ variations)

Allowing Developers control and ownership over the entire DevOps life-cycle (from Dev -> Test -> Prod) can lead to a lack of “unity” in standards and configurations between teams and environments, which can result in future Production-related COMPLEXITY and Extended Time to Diagnose.

The following graph demonstrates how many permutations/combinations you get from the traditional 7-layer stack (App, DB, OS, Hypervisor, Server, Storage, Network) as you increase the number of unique configurations per layer; the combinations quickly explode, and with them the RISK, Exposure, and COMPLEXITY to manage !  :
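The arithmetic behind that growth is simple to verify: with `c` unique configurations allowed per layer across the 7-layer stack, the number of distinct end-to-end stack combinations is `c ** 7`:

```python
# Combinatorial growth of the traditional 7-layer stack
# (App, DB, OS, Hypervisor, Server, Storage, Network).

LAYERS = 7

def stack_permutations(configs_per_layer: int, layers: int = LAYERS) -> int:
    """Number of unique end-to-end stack combinations."""
    return configs_per_layer ** layers

for c in (1, 2, 3, 5):
    print(c, stack_permutations(c))
# prints: 1 1, 2 128, 3 2187, 5 78125
```

Even allowing just 3 variants per layer yields over two thousand unique stacks to test, patch, and diagnose, which is why standardizing configurations per layer is at the core of reducing complexity.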



FOSS Security Issues & “Known” Vulnerabilities

We have all heard of the Heartbleed (OpenSSL/ TLS) and Venom (Xen/ KVM..) security holes and/or attacks, as well as the Equifax breach (an Apache Struts Framework bug) exposing personal/SSN records for nearly HALF of ALL Americans (~150 Million people !), including hundreds of thousands of Credit Card account records.

With ANY use of large Open Source Software (FOSS) distributions and frameworks, Security issues will always be possible.

The fundamental reasons being :

A>  EVERYONE has access to the source code, as well as public visibility into any/all vulnerabilities exposed through testing.. BEFORE the issues are resolved..


B>  Patches are only applied by the rest of the world AFTER the vulnerable components have been IDENTIFIED within each organization.  (eg. the Equifax breach related to an Apache Struts framework patch that had been available for several months.. but Struts was only 1 of MANY disparate components deployed in various configurations/versions across many systems & environments, requiring significant effort just to IDENTIFY the Patches needed as part of Security Management activities).
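That identification step can itself be partially automated. As a toy sketch (the component names, versions, and advisory data below are made up for illustration, not a real feed), an inventory of deployed component versions can be matched against a list of known-vulnerable version ranges:

```python
# Hypothetical advisory data: component -> highest known-vulnerable version
# (i.e. fixed in the next release). Illustrative values only.
ADVISORIES = {
    "struts":  (2, 3, 31),
    "openssl": (1, 0, 1),
}

def parse(version: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def needs_patch(inventory: dict) -> list:
    """Return components whose deployed version falls in a vulnerable range."""
    return sorted(
        name for name, ver in inventory.items()
        if name in ADVISORIES and parse(ver) <= ADVISORIES[name]
    )

deployed = {"struts": "2.3.5", "openssl": "1.1.0", "nginx": "1.14.0"}
print(needs_patch(deployed))   # only struts is in the vulnerable range here
```

Real Software Composition Analysis tools do essentially this against CVE feeds, at the scale of every package, image, and framework in the estate.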

A new GitHub-based Security Guide For Developers is a reasonable, high-level, dev-centric checklist to start with.

Container Security Vulnerabilities .. lead to encapsulation within VM’s

Today’s DevOps world is in reality becoming more of a DevSecOps world, with the realization that Security needs to be factored in at all levels/ layers, especially when dealing with Enterprise class SLA’s/ SLC’s.

Unfortunately, as with all new things that are Open Source, prying eyes monitor “day-1 vulnerabilities” (aka, those identified but not yet fixed/patched), and Container and Kubernetes deployments have recently undergone a great amount of scrutiny from a Security vantage point, and for GOOD reason.

If you want to take a look at current security issues with each, take a scary peek at these lists :

Yes, > 1000 vulnerabilities exist for Containers !

For this reason, while not always publicized, many (if not most) of today’s managed container platform (PaaS) services actually spin up Containers WITHIN VM’s ! 

You might ask why in the world they would do so ?    Beyond the container vulnerability issues above, it’s also related to the movement towards a “security-first” approach (above and beyond a “least-privileged” or “zero trust” stance), where (complete) tenant isolation is better guaranteed from within a VM/ hypervisor than within a shared OS kernel.  In the current state of loose container security affairs, any breached container or process with sudo/root or back-door permissions can wreak havoc across all containers on that system, given that a single underlying OS/kernel is shared among the container tenants on each node of the cluster.



Progression towards the DevOps “Holy Grail” : Release Pipeline(s) with CI/CD (Continuous Integration, Delivery, & Deployment)

Beyond the organizational, cultural, and procedural aspects of the DevOps world, completing the full spectrum of Automated Builds, Testing, Integration, and Delivery / Deployment is the pinnacle of agility.   This is the area that CI/CD is targeted to fulfill and extend.

Working towards the extended DevOps “Holy Grail” with CI / CD

Continuous Integration (CI) is the process that Developers (should) utilize across software life-cycles to continuously integrate all of their disparate Source Code builds within a shared repository (as many times a day as the code changes, and typically utilizing automation).   This allows for shared visibility, testing, and integration of all the pieces together as early as possible (so that issues/errors can be found and resolved as early in the process as possible).   Jenkins is today’s reference toolset “du jour” for CI activities.
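Conceptually, what a CI server automates on every commit can be sketched as a sequence of gated stages, where the pipeline stops at the first failure so problems surface as early as possible. This is a toy sketch (stage names and shell commands are hypothetical placeholders), not Jenkins itself:

```python
import subprocess

# Toy CI pipeline runner: each stage runs a shell command; the first
# non-zero exit code halts the pipeline, mimicking a CI server's
# fail-early behavior. Stage names/commands are illustrative only.
PIPELINE = [
    ("checkout", "git pull --ff-only"),
    ("build",    "make all"),
    ("test",     "make test"),
]

def run_pipeline(stages):
    for name, cmd in stages:
        print(f"[CI] stage: {name}")
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            print(f"[CI] FAILED at stage: {name}")
            return False
    print("[CI] all stages passed")
    return True
```

A real CI server adds what this sketch lacks: triggering on every commit, fan-out across agents, and persistent, shared visibility of each stage’s results.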

Note that with complex “refactoring” projects (moving from a Monolithic to a Microservices-based architecture), one prior application and its large block(s) of source code might be decomposed into dozens, hundreds, or even thousands of individual “microservices”, where a separate DevOps team may be individually responsible for a single Microservice or group of Microservices (and many times blind regarding the other code that they need to integrate and function with) to complete the functionality of the refactored application.   By utilizing an integrated suite of automation tools, developers can establish a pipelined CI approach where thorough testing/integration phases are automated and results made visible.

The following offers you the progression from a CI framework, to a more inclusive CI/ CD process :




CI examples using Jenkins:



.. and some have gone to lengths to extend Jenkins to go beyond CI to also handle Continuous Delivery & Deployment as depicted in the following examples (note that Spinnaker is typically the CD counterpart to Jenkins, as described in our next section).




Today’s Fully Integrated DevOps with (CI/ CD) Continuous Integration & Continuous Delivery

Continuous Delivery (CD) is the next aspect of a fully functional CI/CD environment, where developers automate the build, testing, and preparation for a MANUAL deployment decision.   Note that while Continuous Delivery leaves the final deployment step as a manual approval, its close cousin “Continuous Deployment” is the fully automated counterpart, where the successful completion of all tests automatically deploys application/service components to Production (or other) environments.   Today’s CD counterpart toolset to Jenkins (CI) is most often Spinnaker, which was originally developed by Netflix and more recently backed by Google.    The “Concepts” page at the Spinnaker site gives an excellent overview of the depth and breadth that the toolset offers.  Note that many (if not most) “mature” organizations utilize both Jenkins + Spinnaker, leveraging each of their strengths.
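The Delivery-vs-Deployment distinction can be captured in a few lines: both automate build, test, and artifact staging, and they differ only in who triggers the final promotion to Production. A minimal sketch (all names here are illustrative):

```python
def release(tests_passed: bool, mode: str, human_approved: bool = False) -> str:
    """Illustrates the gating difference between the two CD meanings."""
    if not tests_passed:
        return "blocked"                  # nothing ships on red tests, either way
    if mode == "continuous_deployment":
        return "deployed"                 # green tests auto-promote to Production
    if mode == "continuous_delivery":
        # artifact is staged and release-ready; a human makes the go/no-go call
        return "deployed" if human_approved else "staged"
    raise ValueError(f"unknown mode: {mode}")
```

In Spinnaker terms, that final human gate corresponds to a manual-judgment stage in an otherwise automated pipeline.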

Gartner sums up DevOps as “a tool-centric philosophy that supports a Continuous Delivery value chain.”

Again, it should be noted that DevOps without CI/CD (+ oversight and visibility) can become a scenario of automating things without structure and uniformity, which can easily result in greater complexity that will require more time & effort to address at some point.    While these toolsets are powerful, they don’t come pre-configured and require some time to set up and customize.    See our article and discussion of challenges/considerations for managing DevOps via Enterprise Architecture in a Cloud-Native World.

For a greater examination of the Jenkins/Spinnaker toolsets, see the article CI/CD- Continuous Integration, Continuous Delivery/Deployment .




*Continue the Conversation..

Let us know what you think.  Please Comment and/or Retweet Below with any Q’s, thoughts, or requests for future content.

Visit our Contact page for further ways to reach out..  or participate by becoming a contributor. 🙂

All content and comments made are that of TechThought.org, the author, and not of current nor past employers.

*Copyright© 2017-2020 Todd A. Jobson & TechThought.org ; All Rights Reserved*
