
Cloud Architecture 101: The Road to Cloud Services (IaaS/PaaS/SaaS) & Deployment Models (Private, Hybrid, Public)

Revised from an article originally published at Oracle.com in July 2017
by: Todd Jobson | Sr. Principal Enterprise Architect; MBA



A Brief History of the Road to Cloud Computing :

Where we all started: higher CAPEX (Capital Expenditures), dedicating HW to application silos.


Enabling Technology & Fundamental Prerequisites to Cloud Computing

Over the course of the past two decades, technology has advanced dramatically to offer the following capabilities, which were required PRIOR to widespread, general-purpose adoption of cloud computing (each a prerequisite built upon the technology that preceded it).   While this is far from a complete list, it gives you a high-level grasp of how rapidly technology has evolved, and is still evolving :

  • Massive CPU Core/HW-Thread Density.      Multi-core, multi-threaded CPUs (aka chip-level multithreading – CMT), offering the ability to execute MANY parallel (LWP) threads concurrently in parallel execution pipelines within HW threads, across many cores within the same (or many) CPU(s) in a modern computer (beyond prior CMP/HMT/SMP/SMT capabilities).
    • Additionally, dramatic increases in on-die (and within-core) memory as L1 / L2 / L3 cache, each of which helps reduce the latency for waiting processes which would otherwise require going out to a DIMM via a memory controller (cache/TLB misses), or even worse, an I/O storage device such as a spinning hard drive (orders of magnitude in additional latency).


  • Increased Scale and Density of Components and Interconnects.    Previously, in traditional computing, the greatest latency existed in components that run more slowly than the CPUs (which today can run at up to 5 GHz = 5 BILLION cycles per second), causing CPU processing WAIT events.


   The main culprits of this latency are found in :

    • Physically spinning hard drives with moving R/W heads for storage (many orders of magnitude slower than CPUs) vs. the SSD/NVMe flash storage of today, which solves many of these latency issues.
    • Memory speeds and densities requiring more distant memory access (NUMA vs. UMA), where the speed of light does take measurable TIME to travel down conductors, especially when multiplied several BILLION times per second!   (DDR–DDR4/PC4)
    • Intra-system & inter-system network interconnections and related latencies, as well as bandwidth/throughput limitations (PCI, USB, Ethernet, InfiniBand, Fibre Channel, ..).
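To put these latency gaps in perspective, a rough back-of-the-envelope calculation (illustrative, order-of-magnitude numbers only, not any specific platform's specs) shows how many CPU cycles are wasted waiting on each tier of the memory/storage hierarchy, and how far a signal can even physically travel in one 5 GHz clock cycle:

```python
# Rough, order-of-magnitude latency arithmetic (illustrative numbers only).

CPU_HZ = 5_000_000_000           # 5 GHz CPU: 5 billion cycles per second
CYCLE_NS = 1e9 / CPU_HZ          # duration of one cycle in nanoseconds -> 0.2 ns

# Signal propagation in copper is roughly two-thirds the speed of light.
SIGNAL_M_PER_S = 2e8
cm_per_cycle = SIGNAL_M_PER_S * (CYCLE_NS * 1e-9) * 100   # only a few cm per cycle!

# Very rough access latencies (ns); real values vary widely by platform.
latency_ns = {
    "L1 cache": 1,
    "DRAM (DIMM)": 100,
    "NVMe flash": 100_000,
    "spinning disk": 10_000_000,
}

for device, ns in latency_ns.items():
    cycles_wasted = ns / CYCLE_NS
    print(f"{device}: ~{cycles_wasted:,.0f} CPU cycles of wait")
```

With these (illustrative) figures, a single spinning-disk access costs tens of millions of cycles, which is exactly why SSD/NVMe and larger on-die caches mattered so much on the road to cloud-scale consolidation.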


  • CROSS-Platform Application Development/Deployment.   Via the JVM – the Java Virtual Machine, the first widely adopted mechanism for cross-platform/OS application deployment, where you write once and can run anywhere.   This was also the first widely adopted entry point to the further, more pervasive expansion of virtualization (see next bullet).


  • Virtualization Technologies, enabling multiple application “workloads” (and/or services) to run as “isolated” entities, either within the same global shared operating system as a “container” (sharing the same kernel, utilizing resource mgmt/controls), or on top of / within a system “hypervisor” as a “virtual machine” (VM). These can come in several forms :
    • VMs :  Either Type 1, aka hypervisor technology (eg. Xen, Red Hat Linux KVM, VMware ESXi, MS Hyper-V..), which enables virtualization and partitioning of the underlying system HW and its components, so that each VM runs its own isolated OS/kernel space provisioned with a subset of CPU cores and associated physical memory [vs. an OS “container” or Type 2 hypervisor technology that runs on top of a shared, multi-tenant OS, such as Oracle’s VirtualBox, Parallels, or VMware Workstation].    Bare-metal (Type 1) hypervisors typically run an underlying lightweight OS kernel with a “Control Domain” that virtualizes and allocates resource management and access controls for physical reservation/isolation.

*Enterprise Type 1 VMs do offer much better “encapsulated” security attributes, such as disabling services/ports/RBAC.. (vs. containers).    However, it must be considered that since every VM has its own local COPY of an entire OS kernel (eg. RH Linux, Win10, ..), the trade-off is the additional required footprint in capacity overhead (storage space and, more notably, CPU/memory), not to mention the additional performance overhead of running many OS kernels on a shared multi-tenant system with many VMs.

    • SDN (Software Defined Networking) / NFV (Network Function Virtualization), and today’s SD-WAN (SW Defined Wide Area Networking), extend virtualization to the networking components, allowing for configurable QoS (Quality of Service) to optimize the HW utilization/capacity, latency, and bandwidth required, ultimately by class of service over the optimal links/paths/media available.

SDN de-couples the network functions from the physical devices/network, roughly analogous to de-coupling virtual servers (VMs) from physical servers, re-creating the traditional network constructs in a virtual space, including NICs/ports, switches, routers, firewalls, etc.   (eg. VMware’s NSX/NSX-T)
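To make that decoupling concrete, here is a toy sketch (in Python, and emphatically not any vendor's actual API) of a software-defined switch: the forwarding behavior lives entirely in a programmable flow table that a controller can rewrite at will, while the "physical" substrate never changes:

```python
# Toy sketch (not any vendor's API): a software-defined switch where
# forwarding behavior lives in a programmable flow table, decoupled from
# any physical device -- loosely analogous to how a VM decouples the OS
# from the physical server.

class VirtualSwitch:
    def __init__(self):
        self.flow_table = {}   # dst MAC -> output port (control-plane state)

    def program_flow(self, dst_mac, out_port):
        # A controller pushes forwarding rules; the "hardware" never changes.
        self.flow_table[dst_mac] = out_port

    def forward(self, dst_mac):
        # Data plane: look up the rule; return None on a table miss.
        return self.flow_table.get(dst_mac)

vswitch = VirtualSwitch()
vswitch.program_flow("aa:bb:cc:dd:ee:01", out_port=3)
print(vswitch.forward("aa:bb:cc:dd:ee:01"))   # -> 3
```

The design point this illustrates is the control-plane/data-plane split: QoS, routing, and firewalling policy become data that software manages, rather than configuration burned into individual boxes.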

    • SDS (Software Defined Storage) –

From VMware’s (vSAN) page :

SDS software abstracts the underlying storage through a virtual data plane; this makes the VM (and the application) the fundamental unit of storage provisioning and management.   By creating a flexible separation between applications and available resources, the pervasive hypervisor can balance all IT resources—compute, memory, storage, and networking—needed by an application.

Software-defined storage (SDS) is a storage data center architecture that separates the management and provisioning of storage from the underlying physical hardware, similar to server and network virtualization. SDS replaces static and inefficient hardware with dynamic, agile, and automated solutions. VMware’s simple-yet-intelligent SDS technologies deliver a more scalable, flexible, and automated storage solution for today’s business and application demands.

With software-defined storage, applications are dynamically provisioned with the precise mix of capacity, performance, and protection needed—no more, no less.

This storage solution allows businesses to eliminate inefficient, purpose-built hardware and use more agile, transparent, and automated systems instead. It brings to storage the simplicity, efficiency, and cost-savings that server virtualization brought to compute.

Leading examples of SDS within the industry (both from Dell-EMC/ VMware) include :

          • Dell-EMC’s VxFlex OS (formerly ScaleIO) & Dell-EMC’s VxFlex (Ready Nodes, Appliance, or Rack)



    • Containers :  aka OS virtualization, such as the original Solaris Containers (built from BSD jails, later Solaris Zones).. and/or today’s dominant form : Docker and/or CoreOS rkt (rocket) container-based virtualization.    As noted above, containers are much lighter weight given that the underlying OS kernel is shared among the pool of containers running on the same physical system.   Kubernetes is used as a centralized master scheduling/orchestration engine to manage the execution of workloads across the available cluster of container “pods”.

NOTE:  Today’s cloud-native movement toward “MicroServices” advocates (via 12-Factor SW architecture) decoupling individual SW functions/services to run individually/separately in containers (as horizontally scalable, distributed, lightweight, and if possible stateless application components).   For this reason, Production container deployments with MicroServices can result in thousands of containers that need to speak with one another and address cascading failures (via circuit-breaker patterns); the task can quickly become a SIGNIFICANT feat to architect/design/code/configure/test &, most importantly, support according to SLAs.

*Since containers are little more than isolated regions within a shared OS kernel (needing only the app & related libraries), they are extremely lightweight and extremely fast to spin up.   However, it needs to be realized that since root can gain access to all underlying containers/resources, container isolation and security can be a matter of great concern and require significant configuration & oversight (as compared to VMs, which fully encapsulate the entire OS and all applications within).    For this reason, many Public Cloud providers today actually encapsulate container environments within VMs, especially in multi-tenant (shared HW) environments.



  • Massive-Scale Storage : from filesystems to today’s cloud-based distributed object storage.. which embed volume management (precluding the need for a separate volume manager), with self-healing, no-overhead snapshot & secure encryption capabilities (eg. ZFS – the Zettabyte File System, or cloud-based object storage such as AWS S3 or Oracle’s OCI Object Storage, ..).    Note that distributed object storage can result in “eventual consistency”, depending on the technology/configs utilized.
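The "eventual consistency" caveat above can be illustrated with a toy simulation (a deliberately simplified sketch, nothing like a real object store's replication protocol): a write is acknowledged after landing on one replica and propagates asynchronously, so a read served by a stale replica may briefly return the old value.

```python
# Toy simulation of eventual consistency in replicated object storage.
# Illustrative only; real object stores use far more elaborate protocols.

class Replica:
    def __init__(self):
        self.objects = {}

class EventuallyConsistentStore:
    def __init__(self, n_replicas=3):
        self.replicas = [Replica() for _ in range(n_replicas)]
        self.pending = []   # (key, value, replica_index) not yet applied

    def put(self, key, value):
        # Acknowledge after writing ONE replica; queue the rest.
        self.replicas[0].objects[key] = value
        for i in range(1, len(self.replicas)):
            self.pending.append((key, value, i))

    def get(self, key, replica_index):
        # A read may hit any replica, including a stale one.
        return self.replicas[replica_index].objects.get(key)

    def propagate(self):
        # Anti-entropy: apply queued updates, after which reads converge.
        for key, value, i in self.pending:
            self.replicas[i].objects[key] = value
        self.pending.clear()

store = EventuallyConsistentStore()
store.put("photo.jpg", b"v2")
stale = store.get("photo.jpg", replica_index=2)    # stale read: still missing
store.propagate()
fresh = store.get("photo.jpg", replica_index=2)    # converged after propagation
```

This is the trade-off the bullet refers to: by not waiting for every replica on the write path, the store gains availability and latency, at the cost of a window where reads can disagree.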


  • Advances in Horizontal Scalability via application and database clustering (vs. active-passive fail-over, with entire or large portions of systems wasted sitting idle for DR or HA fail-over).


  • Web Services and SERVICE-Oriented Architectures (SOA) – enabling further integration and service delivery across disparate systems/networks/environments (with ESBs – Enterprise Service/message Buses).    Extensibility via REST APIs & HTTP/S.
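A minimal sketch of the REST style mentioned above (HTTP verbs mapped to CRUD operations on a resource collection): the route and resource names here are invented for illustration, not any specific product's API.

```python
# Toy dispatcher showing REST resource semantics: verbs map to CRUD
# operations on an addressable collection. Illustrative names only.

import json

orders = {}        # in-memory "resource collection"
next_id = [1]

def handle(method, path, body=None):
    """Dispatch a (method, path) pair the way a REST endpoint would."""
    parts = path.strip("/").split("/")
    if parts[0] != "orders":
        return 404, None
    if method == "POST" and len(parts) == 1:
        oid = str(next_id[0]); next_id[0] += 1
        orders[oid] = json.loads(body)
        return 201, oid                      # 201 Created
    if method == "GET" and len(parts) == 2:
        item = orders.get(parts[1])
        return (200, item) if item else (404, None)
    if method == "DELETE" and len(parts) == 2:
        return (204, None) if orders.pop(parts[1], None) else (404, None)
    return 405, None                         # 405 Method Not Allowed

status, oid = handle("POST", "/orders", body='{"sku": "A-100"}')
status, order = handle("GET", f"/orders/{oid}")
```

The point is the uniform interface: any client that speaks HTTP/S can create, read, and delete resources without knowing anything about the server's implementation, which is what made REST the lingua franca for extending SOA across disparate environments.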


  • Enterprise-grade Security (physical isolation; perimeter/network : firewalls/port/packet filtering/routing/subnetting..; inclusive authentication / authorization / Role-Based Access Control via policies..; service presentation; centralized LDAP/SSO; public/private key encryption; VPNs; IPsec tunneling routers; .. to Cloud IAM – Identity & Access Mgmt).


  • Centralized / Unified Single-Pane-of-Glass Management (vs. several tools to manage several disparate components/subsystems.. OS, VM, storage, DB, ..).    Oracle’s OEM 13c Cloud Control suite is the best example of this today, while cloud provider “control planes” are catching up to these all-inclusive capabilities.



The Rush to FOSS/ LAMP SW stacks

At the same time that technology was rapidly advancing, we need to look back and recognize the economic downturn that simultaneously hit us all with global recessions, the remnants of which we still feel today.   We need to realize that, in part, this downturn in corporate spending led to many of today’s movements toward :

  • FOSS (Free and Open Source Software) – which has made LAMP (Linux, Apache web server, MySQL DB, and PHP) a set of dominant players.    One small example that is hard to comprehend is how the Android smartphone OS, built on Linux, today powers 9 of 10 cell phones sold (Apple is only ~1 of 10 new phones sold)!    Additionally, FOSS frameworks built from open-source repositories such as Apache/ GitHub/ Bitnami/ npm.. are dominating many, if not most, other areas, such as content mgmt (WordPress), IoT (Kafka), BigData (Hadoop), etc…
  • Agile Project Management / SCRUM / Kanban   (doing more with less.. more rapidly, Continuously improving)
  • Continuous Improvement & Delivery Mechanisms in DevOps (which has led to Cloud Native development and today’s movement from “Monolithic” Applications to more independent, developer “ownership” of end-to-end lifecycle management deploying many smaller MicroServices + Fn/Serverless).
  • SW development adoption of lighter-weight “polyglot” (blended multi-language) scripting in Python, JavaScript,.. (vs. J2EE/C, SOA, for apps), container deployment models, and more fully integrated DevOps toolsets (Jenkins, Kubernetes, Wercker, etc..).
  • ** This section (items above) will be a follow-on Blog topic that I will cover in more depth **


The Impact(s) of Rapidly adopting FOSS, Commodity HW, along with Virtualized Workload Consolidations

This economic downturn, coupled with a decade+ of FOSS (Free and Open Source Software) being promoted/developed within universities, resulted in rapid adoption and deployment within business Production environments.   While this did initially reduce up-front CAPEX “acquisition costs”, it brought forward much of today’s COMPLEXITY, which has since ballooned across the following areas of IT management :

  • Difficulty Managing Standards with ever-growing “best-of-breed” configuration sprawl.   It’s not uncommon to move from dozens of larger, more expensive systems to hundreds of smaller systems.. ultimately virtualized into thousands of VMs or containers.    Add to this the collective variations in components/types (HBAs, NICs, storage devices, CPU types/densities), all with various OS/ BIOS/ FW and device-driver versions.   This doesn’t even account for the additional layers of the stack : app versions, DB, OS, –> HW+.    The following graph demonstrates how many permutations/combinations you get (from the traditional 7-layer stack : App, DB, OS, Hypervisor, Server, Storage, Network) : as we increase the number of unique configurations per layer, we quickly get exponential RISK, Exposure, and COMPLEXITY!  :
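The arithmetic behind that complexity explosion is simple to sketch: with a 7-layer stack, the number of distinct end-to-end configurations grows as (variants per layer) raised to the number of layers.

```python
# The combinatorics behind "configuration sprawl": with a traditional
# 7-layer stack (App, DB, OS, Hypervisor, Server, Storage, Network),
# unique end-to-end configurations grow exponentially in the number
# of variants per layer.

LAYERS = 7

def stack_permutations(variants_per_layer, layers=LAYERS):
    return variants_per_layer ** layers

for v in (1, 2, 3, 4):
    print(f"{v} variant(s) per layer -> {stack_permutations(v):,} unique stacks")
# Even a modest 3 variants per layer yields thousands of distinct
# combinations to certify, patch, and diagnose.
```

One standardized variant per layer yields exactly one stack to support; three variants per layer already yields 2,187.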

  • Lack of Vertical Scalability (where small 2-CPU systems don’t offer headroom for vertical large-memory or large-CPU workloads to scale “within” a single physically isolated system).   This also leads to a dramatic increase in requirements for network and storage connectivity (ports, switches, routers) and/or methods for clustering or sharing distributed resources.   This restriction on vertical scalability, coupled with the increased capacity demands of internet scale, ultimately forced everyone to consider how best to accommodate horizontal scalability (and the resulting distributed/clustered architectures that have arisen to support this shift in design).
  • + Time and complexity to DIAGNOSE root causes when problems arise (and there are SEVERAL vendors to call, all pointing fingers at one another).
  • + Increased time to manage/maintain and patch (hundreds or thousands of VMs across hundreds of small servers).
  • = INCREASES in TOTAL COST of OWNERSHIP.   Even when you have cheap “rip & replace” servers and free operating systems and SW, running several-fold more of them in hundreds or even thousands of VMs ultimately requires support and licensing costs for all those Production OSs, servers/components, and VM(ware?) + app SW licenses.
  • + INTANGIBLE COSTS of potential outage or security exposure / RISK / compliance issues due to not patching all systems/components.. can cost $Millions.



“Converged Infrastructure” (& later HCI) comes to the rescue ..

As the computer industry embraced these paradigm shifts, it also witnessed its customers cutting budgets, while many in the long run (after several years of FOSS & VM sprawl) ultimately struggled with TCO increases and, as significantly (if not more so), escalating complexity to manage/operate their environments.

This reality is what led most vendors to utilize the technology listed above to develop and promote “Converged Infrastructure” (Cisco UCS.. / VBlock, ..) and/or “Engineered Systems” (Oracle’s Hyper-converged) platforms, to REDUCE both COSTS and COMPLEXITY in On-premises IT management.     

Most converged solutions require and leverage SPOG – Single Pane of Glass Management suites (Oracle’s is OEM 13c Cloud Control, while many competing x86 platforms leverage VMware ESX/i).   

Together, these “Single-Stack” platforms do unify HW, OS, VM, & Patch Management.. ultimately REDUCING :

  • ​Complexity
  • Time to Manage/ Operate/ Provision/ Patch
  • Risk/ Exposure (more uniform & easier/ faster to diagnose, with fewer Security patch holes)
  • (In many cases) TCO (eg. Oracle systems come with FREE OS & VM as part of HW support)
  • Single (or fewer) Vendor(s) to call for Support

Many of today’s organizations still have dominant footprints running On-Premises in Private-Cloud Converged/ Hyper-Converged/ SDN Pod architectures.

** See our other articles for a more detailed examination of today’s HYPER-Converged Infrastructure offerings from leading vendors such as Dell Technologies (Dell-EMC, VMware, Pivotal, Bitfusion..+), whose VxRack/VxRail offerings deliver VMware’s SDDC (Software Defined DataCenter – VMware vSphere/ESXi, vSAN, vCenter/vRA/vRO, +) and can even fully integrate VMware’s VCF (VMware Cloud Foundation) stack within a HW solution, tested IN the factory.

At the same time that Converged Infrastructures were being adopted, Amazon (AWS), Microsoft (Azure), Google, Oracle (OCI), and IBM rapidly shifted their focus and R&D efforts to Public Cloud IaaS / PaaS / SaaS offerings …



Infrastructure as Code (IaC)

Upon the arrival of tools like Chef and Puppet, the concept of Infrastructure as Code (configuration-management automation) was born.   Together these tools enabled developers / sysadmins to script & REUSE the formerly manual, command-line-driven process of maintaining/configuring infrastructure.    With the use and reuse/extension of these tools / techniques, the PROCESS of building & maintaining systems and infrastructure (deployment environments) began to closely resemble the way that SW developers build and maintain their applications and source code.
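The core idea these tools share can be sketched in a few lines: declare the DESIRED state, and let an idempotent "converge" step compute and apply only the difference. (This is a toy model of what Chef/Puppet resources do; the resource names below are made up for illustration.)

```python
# Toy model of Infrastructure as Code: declared desired state plus an
# idempotent converge step. Resource names are invented for illustration.

desired = {
    "package:nginx": {"ensure": "installed"},
    "service:nginx": {"ensure": "running"},
    "file:/etc/motd": {"content": "managed by IaC"},
}

def converge(actual, desired):
    """Return the changes needed to reach desired state.
    Running it twice in a row yields no changes the second time."""
    changes = []
    for resource, state in desired.items():
        if actual.get(resource) != state:
            changes.append(("apply", resource))
            actual[resource] = state          # "apply" the change
    return changes

actual = {}
first_run = converge(actual, desired)     # applies all three resources
second_run = converge(actual, desired)    # no-op: already converged
```

Idempotency is the property that makes the desired-state file behave like source code: you can re-run it safely, diff it, review it, and version it, which is exactly why the practice came to resemble SW development.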


Infrastructure Automation gains acceptance and comes in many forms

At the same time that IaC was gaining traction, several advancements and many competing efforts, tools, and frameworks to automate and centralize the management of these many virtualized commodity environments grew rapidly.

Many ad-hoc tools began to fill the void in the application container space (today’s Docker and the competing CoreOS rkt), offering a lighter-weight way to virtualize, image, and rapidly deploy services without full-fledged, heavyweight VMs/hypervisors.

.. which in turn needed an orchestration engine, a role that Kubernetes has taken hold of (for now).


** For a deeper examination re: IaC Tools (Ansible, Chef, Puppet, Terraform w/ Docker/Kubernetes) See our article DevOps 101 : .. to Agile Cloud Native Development (w CI/CD).



Comprehensive, Centralized Infrastructure Management Solutions [eg. Private Cloud Deployment Models] and Platform Frameworks

The ultimate solution for managing any environment is to offer SPOG (Single Pane of Glass) management, unifying all monitoring, provisioning, and lifecycle management capabilities through ONE centralized interface.

*While the following solutions were initially created for private cloud management, most IaaS management solutions today are extensible (beyond their own proprietary APIs), offering open REST (Representational State Transfer) APIs, extending Private-Cloud IaaS / PaaS to Hybrid Cloud models.

Also NOTE that each and every Public Cloud provider has developed or extended similar solutions which allow for operating, managing, and monitoring their respective cloud platforms (eg. the DreamHost compute cloud is built upon OpenStack).

The best known of the full-stack IaaS Cloud Management Platforms (CMPs) are :

  • OpenStack  : 
    • OpenStack was created in 2010 as a joint development effort between NASA and Rackspace.   Released as open source from the start, it has since been described by Mark Collier (chief operating officer of the OpenStack Foundation) as one of the top three open-source projects globally in terms of code contributions, behind the Linux kernel and the Chromium web browser.
    • While predominantly used within companies in a private cloud environment, at the end of 2018 it was noted that OpenStack is the underlying environment powering 75 public clouds worldwide.

Openstack is a combination of open source tools (known as projects) that use pooled virtual resources to build and manage private and public clouds. Six of these projects handle the core cloud-computing services of compute, networking, storage, identity, and image services.

The comprehensive nature of OpenStack is both its strength and its weakness.

While OpenStack became the dominant FOSS player (beating out CloudStack), it has recently lost momentum due to the complex installation & integration required, along with today’s rapid shift to utilize and explore the growing landscape of Public Cloud services (AWS, GCP, OCI, Azure..), where both IaaS & PaaS platform services (DBaaS, JaaS, BDaaS, IoT,..) can be rapidly provisioned and centrally managed remotely via HTTPS at the click of a mouse.

On the positive side, OpenStack today has also embraced and extended its capabilities beyond Bare Metal (BM) and virtualized environments (VMs) to also include containers and Kubernetes.     Its extensibility also offers many variations for deployment, such as incorporating (underneath, or in place of, its own Swift object store) the Ceph distributed object, block, and file storage platform (which has become a very popular alternative distributed object storage platform).


  • VMware  :  the most comprehensive and dominant “for cost” (not free, nor FOSS) framework for managing multi-vendor commodity/converged x86/Linux/virtualized IaaS environments, up to the point of SDDC (Software-Defined DataCenter) multi-site capabilities from a centralized UI.

The VMware vRealize Suite (formerly called vCenter Operations Management Suite), is a software platform designed to help IT administrators build and manage heterogeneous, hybrid clouds.    In addition, VMware has done the best job of integrating Hybrid solution frameworks with nearly all cloud providers, as depicted below.





  • For Oracle HW / SW deployments, the most comprehensive solution is by far Oracle’s OEM 13c suite (covering the entire stack.. from HW, OVM, OS, DB, to apps).    Above and beyond offering fully integrated (app-to-firmware) SPOG centralized management, Oracle’s OEM product offers the lowest TCO (Total Cost of Ownership) of any infrastructure stack, noting that the foundation OEM suite, as well as the OS & OVM virtualization technology, comes free with a hardware support contract.    Additionally, only ONE phone # is needed for ANY / ALL Oracle products (OEL OS, OVM, DB, Apps, HW, Java, ..).

OTHER Platform Service (PaaS) Solutions & Frameworks [re: PaaS+]

Beyond the Public Cloud providers (and as alluded to above), several other targeted solutions/platforms and frameworks exist for less comprehensive management of application workloads in a PaaS model (vs. inclusive of all infrastructure/storage/networking/virtualization – IaaS, as described in the sections above and explained/contrasted further in the next section).

Most of these platforms today leverage the proliferation of open-source components (eg. development – Spring; languages – Java, Python; OS – Linux; containers – Docker; scheduling/orchestration – Kubernetes; etc.) embedded together, extending prior PaaS solutions.

Without offering a comprehensive list, a few of the biggest names in this space are :

Pivotal “Platform” (formerly : PCF – Pivotal Cloud Foundry)

As outlined on Wikipedia : Cloud Foundry :

The Cloud Foundry platform is available from either the Cloud Foundry Foundation as open-source software or from a variety of commercial providers as either a software product or delivered as a service. Cloud Foundry is open-source software, and hence, it is available to anyone. Deploying Cloud Foundry involves interfacing with the underlying infrastructure using the Cloud Foundry BOSH deployment system, another open-source tool governed by the Cloud Foundry Foundation.    Note:  In 2009, VMware initiated the original development, before being transferred to Pivotal.  In April 2013, Pivotal was created from EMC and VMware, to market assets including Cloud Foundry, RabbitMQ and Spring.[7]   In 2019, Pivotal was fully acquired by VMware.

Cloud Foundry is promoted for continuous delivery as it supports the full application development lifecycle, from initial development through all testing stages to deployment. Cloud Foundry’s container-based architecture runs apps in any programming language over a variety of cloud service providers. This multi-cloud environment allows developers to use the cloud platform that suits specific application workloads and move those workloads as necessary within minutes with no changes to the application.

Applications deployed to Cloud Foundry access external resources via an Open Service Broker API, which was launched in December 2016.[15]

In a platform, all external dependencies such as databases, messaging systems, file systems and so on are considered services. Cloud Foundry allows administrators to create a marketplace of services, from which users can provision these services on-demand. When an application is pushed to Cloud Foundry the services it needs may also be specified. This process puts the credentials in an environment variable.
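The environment-variable mechanism described above can be sketched as follows: Cloud Foundry delivers bound-service credentials to the app as JSON in the VCAP_SERVICES environment variable. (The service and credential names below are invented for illustration, and the variable is set manually here to simulate what the platform does.)

```python
# Sketch of reading Cloud Foundry bound-service credentials from the
# VCAP_SERVICES environment variable. Service/credential names invented.

import json
import os

# Simulated environment (in a real CF app the platform sets this).
os.environ["VCAP_SERVICES"] = json.dumps({
    "example-db": [{
        "name": "orders-db",
        "credentials": {"uri": "postgres://user:secret@host:5432/orders"},
    }]
})

def service_credentials(service_name):
    """Find the credentials block for a named service binding."""
    services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for bindings in services.values():
        for binding in bindings:
            if binding.get("name") == service_name:
                return binding["credentials"]
    return None

creds = service_credentials("orders-db")
```

Because the credentials arrive via the environment rather than being baked into the code, the same application artifact can be pushed unchanged against different marketplaces and providers, which is the portability point made above.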


Red Hat (now IBM)  OpenShift :

OpenShift Container Platform (formerly known as OpenShift Enterprise[12]) is Red Hat’s on-premises private platform as a service product, built around a core of application containers powered by Docker, with orchestration and management provided by Kubernetes, on a foundation of Red Hat Enterprise Linux and Red Hat Enterprise Linux CoreOS (RHCOS)[13].

The main differentiator between OpenShift and vanilla Kubernetes is the notion of build related artifacts being first class Kubernetes resources upon which standard Kubernetes operations can apply. The OpenShift client program is “oc” – which offers a superset of capabilities offered by the “kubectl” client program of Kubernetes[11]. Using this client, one can directly interact with the build related resources using sub-commands (such as “new-build” or “start-build”). In addition to this, an OpenShift-native pod build technology called Source-to-Image (S2I) is available out of the box. For the OpenShift platform, this provides capabilities equivalent to what Jenkins can do.

Other differences between OpenShift and vanilla Kubernetes :

      1. The v4 product line uses the CRI-O runtime – which means that docker daemons are not present on the master or worker nodes. This improves the security posture of the cluster.
      2. The out-of-the-box install of OpenShift comes included with an image repository.




The formal NIST Definition of Cloud Computing

In September 2011 (only 6 years ago), the National Institute of Standards and Technology (NIST) published the 16th and final version of its working definition of cloud computing as The NIST Definition of Cloud Computing (NIST Special Publication 800-145).

This defined “cloud computing” as including the following (taken and paraphrased from the publication above) :

NIST Cloud Computing Characteristics  [easily remembered as ‘ONPEM’ (~On-Prem) ] :

  1. On-Demand Self-Service.    A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed, automatically, without requiring human interaction with each service provider.
  2. Broad Network Access.  (Self-explanatory)
  3. Resource Pooling.     The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.
  4. Rapid Elasticity.       Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand.
  5. Measured Service.      Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
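The "Measured Service" characteristic can be made concrete with a toy usage meter (all rates below are invented for illustration, not any provider's actual pricing): usage is sampled per resource, aggregated, and billed transparently.

```python
# Toy meter for NIST's "Measured Service" characteristic: resource usage
# is sampled, aggregated, and billed transparently. Rates are invented.

RATE_PER_UNIT_HOUR = {"vcpu": 0.05, "gb_ram": 0.01, "gb_storage": 0.0002}

def bill(usage_samples):
    """usage_samples: list of (resource, units, hours) tuples."""
    total = 0.0
    for resource, units, hours in usage_samples:
        total += RATE_PER_UNIT_HOUR[resource] * units * hours
    return round(total, 2)

monthly = bill([
    ("vcpu", 4, 730),         # 4 vCPUs for roughly one month of hours
    ("gb_ram", 16, 730),
    ("gb_storage", 500, 730),
])
```

This metering is what enables the pay-for-what-you-use OPEX model discussed later: both provider and consumer can see exactly which resources were consumed, for how long, and at what rate.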

Fundamental Service Models :



  • Infrastructure as a Service (IaaS).     The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls).
  • Platform as a Service (PaaS).     The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.
  • Software as a Service (SaaS).     The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.


Cloud Deployment Models :


    • Private Cloud (typically On-Premises).  The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.
    • Hybrid Cloud (On + Off-Premises).     The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).
    • Public Cloud (Off-Premises).     The cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them.   It exists on the premises of (@) the cloud provider.



The Key Q’s to ask when determining the best Cloud Model(s) for your Business


Before your organization boils the ocean and does detailed Discovery & Requirements Analysis, the 4 key areas below highlight the key questions that you first need to explore more thoroughly in order to determine cloud “viability”, let alone select the appropriate Cloud Service (IaaS, PaaS, SaaS) & Deployment Models (Private, Hybrid, Public) :

  • Security   (Are there significant business or industry regulatory/compliance restrictions that dictate where your data can reside, or specific security needs/controls that you must incorporate? .. such as physically isolated, single-tenant services, or on-premises-only requirements for Production or DR data copies?)
  • Control    (Does your business require strict control over underlying physical network/firewall, storage, and/or other infrastructure configuration details that you might not otherwise be able to manage with a Public Cloud deployment model?)
  • Latency   (Do your service SLAs/SLCs require a very low end-user transaction response time with acceptable latency?    This can dictate a higher-cost dedicated Public Cloud connection.)
  • Cost     (Would your business benefit from having rapid capacity elasticity for peak periods that you wouldn’t have to pay for at other times of the year?    Is CAPEX – up-front Capital Expenditure buying the systems – or OPEX accounting more appropriate, where Operating Expenses are spread across monthly payments and you don’t “own” the equipment?)
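The Cost question above lends itself to a quick back-of-the-envelope comparison (all figures below are invented for illustration, and a real analysis would include power, cooling, staffing, support contracts, and data-transfer costs): CAPEX means buying hardware sized for PEAK load, while OPEX pricing can track average demand thanks to elasticity.

```python
# Back-of-the-envelope CAPEX vs OPEX comparison (all figures invented).
# CAPEX: buy hardware sized for PEAK load; OPEX: pay a cloud rate that
# scales with actual demand (elasticity).

PEAK_SERVERS = 100
AVG_SERVERS = 30                   # average demand outside peak periods
SERVER_PRICE = 8_000               # up-front purchase per server (CAPEX)
CLOUD_RATE_PER_SERVER_MONTH = 250  # subscription per server-month (OPEX)
YEARS = 3

capex_total = PEAK_SERVERS * SERVER_PRICE          # must buy for the peak
opex_total = AVG_SERVERS * CLOUD_RATE_PER_SERVER_MONTH * 12 * YEARS

print(f"CAPEX (own the peak): ${capex_total:,}")
print(f"OPEX  (rent the average): ${opex_total:,}")
```

The gap between peak and average demand is the crux: the spikier the workload, the more an elastic OPEX model can undercut owning peak-sized capacity, and a flat, steady workload narrows or reverses that gap.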



The Cloud Operating Model brings with it a Business Transformation of IT

One critical area that needs to be planned for in advance is how Cloud-enablement and Digital Transformation ultimately shift the Business Operating Model of IT: supporting End-users, Lines of Business, AppDev, Admin & Operations teams (see my other article regarding how this relates to the 4 key areas: Control, Security, Latency, & Cost).

The transition to Cloud requires a Business Transformation of IT to a Cloud Operating Model and typically a radical shift in IT Operations.   For companies that are closing entire datacenters and migrating the majority of Infrastructure + Application Environments to Public Cloud providers, this transition can be VERY disruptive.  The list below specifically reflects impact to areas/environments migrated to Public Cloud providers, and not necessarily for remaining On-Premises Private/Hybrid Cloud deployments :

  • Movement from CAPEX accounting (HW procurement/provisioning) to OPEX (monthly subscription models for IaaS/PaaS/SaaS..), and the many ramifications.
  • Reorganization of IT to transition Roles & Responsibilities (with potential down-sizing or shift to partners/providers for related resources).
  • Transitioning IT from being a central Provider of Technology (Tactics) to a Strategic Partner, where the focus needs to be on enabling Strategic capabilities (higher-level solution Architecture, Multi-Cloud Service Broker, Service Catalogs, ..)
  • Dramatic re-training (Cloud Architecture, Cloud Admin/Operations, ..)
  • Departure of traditional activities: IaaS config/admin (physical HW provisioning, rack/config, OS, network, Storage..) and PaaS Service setup (DBaaS, BigData, etc..) are transitioned from IT to the Cloud Provider, and in many cases to the Lines of Business for Service Configuration and Administration (decentralized Agile DevOps, DBAs, SaaS, ..).
  • A shift of Service / Support (Desk) ownership and transition to more of a Service & Support “Broker” (or proxy between Lines of Business and Cloud Providers and/or Partners).
  • Creating Centers for Enablement (C4E) vs. Centers of Excellence (defining and Governing rigid standards, see next bullet)
  • Loosening the reins somewhat on Centralized “Standards” and Reference Architectures (given DevOps autonomy as self-directed teams with end-to-end ownership of Dev->Test-> Prod deployment).




Summary of Top Cloud Industry Surveys & Future Trends:

Today we’re blessed with a wide depth and breadth of both On-Premises Private Cloud HW/Solution options as well as an incredible pace of expanding (and very robust) Public Cloud IaaS -> PaaS -> SaaS offerings with mature (and secure) capabilities.    While the trends uniformly point to rapid current and future Cloud adoption, stay tuned for future updates as vendors and Cloud providers gain or lose traction and jockey for position as the market dictates its supply & demand for various cloud services and deployment models.

NOTE that the following data is only valid for ~1 Qtr, given that 3 months in Cloud-Years is like a normal Calendar Year (things are changing rapidly, to say the least).   Check online for the latest capabilities and vendor announcements.

Several of the following charts come from a Forbes article, IDC data, and RightScale survey data, aggregating results from various studies and demonstrating a clear #Trend:



The following Charts & Data are from the RightScale 2018 & RightScale 2017 State of the Cloud Reports:



Worldwide IT Spending Forecast (Billions of U.S. Dollars) – Gartner Data

[Table placeholder: Gartner forecast segments (Data Center Systems, Enterprise Software, IT Services, Communications Services, Overall IT) with columns for Spending and Growth (%) through 2019; the underlying figures were not preserved.]

Source: Gartner (April 2018)



** In future articles we will be delving into some of the topics covered here in more detail.



Continue the Conversation..

Let us know what you think and please Comment and/or Retweet Below with any Q’s, thoughts, or requests for future content.

Visit our Contact page for further ways to reach out..  or engage and participate by becoming a contributor 🙂   

To contact the author directly :

All content and comments made are that of the author, and not of current nor past employers.

Copyright© 2017-2019 Todd A. Jobson & TechThought.org ; All Rights Reserved *


