The intention of this series of articles isn’t to delve into the lowest level of detail within any single Enterprise Architecture framework, but rather to provide a high-level overview (as I’ve done more than once before) demonstrating that nearly everything is an evolution of one or more things that came before it. Most importantly, the goal is to walk away with a clear understanding of the fundamental concepts and principles, capable of thinking on your feet in the “real” world (not memorizing data and terminology that can easily be referenced or that changes from version to version.. just to earn a trending certification and forget what it means next week).
Experience comes with time, but you’ll find that understanding the following concepts, terminology, and “use-cases” will be applicable in many areas for years to come (good ideas are resurrected, transformed/extended, and re-applied.. even as paradigms shift and Methodologies, Frameworks, and products come and go). Enterprise Architecture (EA) aligns Business processes, requirements, and strategic objectives with the appropriate underlying Technical Architecture.
EA has grown from several disciplines, borrowing and extending principles and processes from several areas : Software Development (Waterfall, SDLC/RUP, to Agile), traditional Project Management and Business Planning principles (1970s IBM BSP), Total Quality Management characteristics (optimally mapping capabilities and iteratively improving toward future objectives), and IT Service Management. Additionally, much of this ~50-year evolution (as with many areas in technology) originated from Government-funded research (DOD, NIST Enterprise Architecture Model, etc).
While today’s variations were further cultivated by individuals tirelessly working in the trenches to solve REAL-world business problems, we also have to thank a few key organizations (The Open Group ~ TOGAF) and institutions for harvesting and unifying this process to ensure longevity through adaptation and widespread adoption.
This article (Part 1) provides a chronological overview of WHAT came before (and later became leveraged, or adopted within) formal Enterprise Architecture :
Part 2 of this article, continues forward exploring the chronology of formal Enterprise Architecture Frameworks, and their evolution toward today’s agile needs :
Part 3, focuses on the real-world challenges, & realities of implementing EA :
(Waterfall, SDLC, AdaptiveSWDev, RapidAppDev, Agile -> DevOps)
While all facets of Enterprise/IT Architecture rely upon a foundation of underlying technology via Infrastructure (systems, storage, network, virtualization, mgmt..), the origin and “root” objective of all IT (and computing in general) can be defined as enabling business applications, or more broadly, the development of software to extend/optimize human (enterprise) capabilities and productivity.
Given this premise, the Software Development “Waterfall” Model is where most legacy Enterprise Application Environments have been born.
The “Waterfall” model is a logical progression of sequential tasks along the critical path of traditional software development. Given that tasks are conducted in series, any delay in prior steps (aka dependencies) will accumulate as total delay in completing ONE cycle of development.
NOTE: While emphasizing a “thorough” approach, the assumption that all aspects of development must wait on prior dependencies to complete, and the “serial” (non-iterative) nature of the Waterfall process, are among its greatest weaknesses. This is in direct opposition to the latest trends in Agile Dev/Ops, where many work streams proceed in parallel (eg. decomposing monoliths into many services, etc).
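The delay-accumulation point can be illustrated with a small sketch (all stage durations and slips below are hypothetical numbers, purely for illustration): in a serial model every stage’s slip adds to total cycle time, while with parallel work streams only the longest stream gates completion.

```python
# Hypothetical stage durations (weeks) and slips for one Waterfall cycle.
stages = {"Requirements": 4, "Design": 6, "Implementation": 10,
          "Testing": 5, "Deployment": 1}
delays = {"Design": 2, "Testing": 3}  # assumed slips in two stages

# Serial (Waterfall): every stage waits on its predecessor, so delays sum.
serial_total = sum(stages.values()) + sum(delays.values())
print(f"Waterfall cycle time: {serial_total} weeks")

# Parallel work streams (Agile-style decomposition): only the longest
# stream gates completion, so one stream's slip doesn't penalize the rest.
streams = {"service-a": 12, "service-b": 9 + 3, "service-c": 10}  # one stream slips 3
parallel_total = max(streams.values())
print(f"Parallel streams cycle time: {parallel_total} weeks")
```

With these made-up numbers, the serial cycle pays for every slip (31 weeks), while the parallel decomposition finishes with the slowest stream (12 weeks).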
At the same time that computing and the Internet were in their infancy, the CMM was created to characterize and measure an organization’s process “Maturity” Levels (later applied more holistically to configurations and more integrated Enterprise Architectures as CMMI).
The project consisted of members of industry, government and the Carnegie Mellon Software Engineering Institute (SEI). The main sponsors included the Office of the Secretary of Defense (OSD) and the National Defense Industrial Association.
CMMI is the successor of the Capability Maturity Model (CMM), or Software CMM. The CMM was developed from 1987 until 1997. In 2002, version 1.1 was released; version 1.2 followed in August 2006, and version 1.3 in November 2010. Some major changes in CMMI V1.3 are the support of agile software development, improvements to high maturity practices, and alignment of the representations (staged and continuous).
According to the Software Engineering Institute (SEI, 2008), CMMI helps “integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes.”
In March 2016, the CMMI Institute was acquired by ISACA.
** Part 2 of this series will explore how the Maturity Model has evolved as a tool within multiple frameworks, and examine its implementation within TOGAF.
Fast forward.. to today’s SW Development craze that has grown beyond SW into one of today’s most widely adopted business movements :
“Agility” & Continuous Improvement (aka SCRUM, DevOps, etc..)
As described in the History of Agile, talks began in the Spring of 2000 at a ski resort, where several in the SW development community gathered and began converging on a shared emphasis to promote/progress ideas in “Light methodologies, such as Extreme Programming, Adaptive Software Development, Crystal, and SCRUM“. While this initial meeting didn’t result in one formal consensus, it did result in several articles that started conversations.
.. Then on February 11, 2001, at the lodge at Snowbird (a ski resort), seventeen members from this extended community aligned their objectives to become the “Agile Alliance”, and together wrote the Agile Manifesto and the 12 principles of Agile SW Development.
Note that the 12 principles of the Agile Manifesto came before, and are distinctly different from, the 12 Factor Application methodology that several Heroku developers contributed to, published in 2011 by Heroku Co-Founder Adam Wiggins. The latter (12 Factor App) lends itself toward more prescriptive SW-development-as-a-service or “stateless” microservice guidelines for Application Architecture (stay stateless, store config in the environment rather than hard-coding it, isolate dependencies, scale horizontally, and build-deploy-run in an iterative model..). The 12FactorApp principles come more into context when discussing the post-DevOps (post 2008) movement where “Cloud Native” development and concepts flourished (see the next section).
As you can see from the above example, the founders of Agile came from the SW development world, but adopted and incorporated MANY ideas, leveraging the best of several models (from several disciplines beyond SW development) to develop the Agile methodology (Continuous Improvement, small teams, customer-directed discovery/req’s, end-to-end ownership of SW, iterative/regular readouts on progress, etc). Ironically, several years prior (1994), I wrote my MBA dissertation on SW Development through Continuous Improvement and TQM, incorporating most of these concepts, though it was never published beyond my University (as part of required course materials).
Taking a step back and being fair, Agile (more holistically than only within the SW-development-specific context) is neither a Methodology nor a Framework, but rather a collection of principles and a philosophy of reducing wasted cycles.. delivering more rapid, continuous value.
**As such, we need to acknowledge that Agile & Continuous Improvement themes came from and built upon movements that we will discuss later in this article. **
SDLC with Agile is a more traditional “Enterprise” SW development methodology applying “Agile” methods :
When Patrick Debois and Andrew Clay Shafer connected at the 2008 Agile Conference about “Agile Infrastructure”, DevOps was born. Patrick later coined the term “DevOps” to represent the merging of Development and Operations.
In contrast to more traditional Development models, today’s DevOps is intended to bypass the serial nature of Development and disjointed Operations.. integrating them together and focusing on “Automation” of Testing/Integration, decreasing the time from Develop -> Deploy.
While not the topic here, another Architectural trend being advocated by cloud providers is to develop using a “Cloud Native” or “Cloud First” approach, such as described in the section above relating to The 12Factor App. This centers around breaking an Application/Workload into its underlying SW components, aka “Services”, and then further into their constituent modules, aka “Micro-Services”, for optimal scalability and light-weight (rapid) development cycles.
With Agile DevOps, developers, as part of “2 pizza” teams, will typically “Own” the entire life-cycle for specific components/modules and/or “MicroServices”, becoming experts and gatekeepers from Development -> Testing -> (Delivery), potentially encompassing automated CI/CD Integration/Staging -> Prod Deployment.
Hence, extending ownership and capabilities beyond traditional Development, to “Operational” aspects ~= Dev-Ops.
With the additional task and complexity of managing many Containers (grouped in Kubernetes as “pods”), Kubernetes has filled the void and become the standard for Container mgmt and Orchestration.
Containers are a technology that isn’t new, but today is a perfect fit for Cloud-Native development/testing and Micro-Service based deployments. The first “Production Ready” Containers were actually released in 2004 by Sun Microsystems as part of Solaris 10 (aka “Zones”), but were a technology before their time, with few marketed use-cases (and no DevOps community movement at the time to promote them). From 2004-2010 Sun Microsystems Open Sourced its Solaris OS code, exposing its Containers, Dtrace, and ZFS capabilities for other OS communities to study and re-implement within their subsequent versions.
*Note that Linux containers weren’t released until 2008, and Docker Containers until 2013.
Today’s Containers (Docker has won that battle as the standard) offer light-weight workload (non-VM) virtualization within a multi-tenant environment (sharing the same underlying OS). This allows running many environments with very little hardware (and no VM overhead), and is well suited for cloud-based or locally isolated Development/Testing & DevOps (where enterprise requirements such as high performance, high availability, security, and the fine-grained production workload isolation of a VM or even a bare-metal system aren’t as stringent). For this reason, today’s DevOps world is in reality becoming more of a DevSecOps world, with the realization that Security needs to be factored in at all levels/layers, especially when dealing with Enterprise-class SLAs/SLCs.
*See our article on DevOps & Cloud Native Development for a more comprehensive review.
Note that Agile DevOps can be in conflict with the methods, processes, and culture of traditional Enterprise SW development (and Enterprise Architecture principles/methods/reference architectures) that historically have generated large monolithic SOA/J2EE/ESB (JARs, WARs -> EARs) code-bases and libraries which are very slow to extend/test/deploy.
Additionally, many Enterprise Apps (J2EE) running within Java Web/Application Servers with shared JVMs diverge from many of today’s application designs and runtime alternatives (eg. “Polyglot” multi-language development, the Microservice/CNA-Container Native App space, not to mention the rapid growth within Serverless/FaaS-Functions as a Service deployments and architectures).
In terms of the contrast between a traditional “Waterfall” development model vs. Agile SW Dev, see the diagram below :
Today’s “Holy Grail” of DevOps is to Automate the entire process, integrated end-to-end, so that each developer/team can readily control/manage the process via “Pipelines” from Development -> Automated Testing -> Automated Integration & Deployment to Production.
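That pipeline idea can be sketched as a trivially small stage runner. This is purely illustrative (the stage names and canned results are hypothetical; real pipelines use tooling such as Jenkins or GitLab CI), but it captures the core behavior: each stage gates the next, and any failure halts promotion to production.

```python
from typing import Callable, List, Tuple

# Hypothetical pipeline stages; in practice each would invoke real
# build/test/deploy tooling instead of returning a canned result.
def build() -> bool: return True       # compile and package the artifact
def unit_tests() -> bool: return True  # run the automated test suite
def integrate() -> bool: return True   # deploy to staging, run integration tests
def deploy() -> bool: return True      # promote the same artifact to production

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> List[str]:
    """Run stages in order; a failing stage halts promotion (fail fast)."""
    completed = []
    for name, stage in stages:
        if not stage():
            print(f"Pipeline stopped at: {name}")
            break
        completed.append(name)
    return completed

result = run_pipeline([("build", build), ("test", unit_tests),
                       ("integrate", integrate), ("deploy", deploy)])
print(result)
```

The “fail fast” loop is the whole point: a red test stage means nothing downstream of it ever touches production.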
** See our other article for more on SW Development & DevOps & CI/CD **
Now that you can see where things are rapidly moving today, let’s rewind to the origins of all “Continuous Improvement” processes, and start with a few little-known facts regarding where many of the common themes within Agile, Scrum, Kanban, Six Sigma, Lean, and Just-In-Time have all come from (and no, they weren’t invented during the Dot-Com or Tech build-out, nor by tech giants, but rather during the period after World War II).
Note that we’re intentionally leaving out the basic “Project Management” history and disciplines to focus here on the more recent THEMES and Objectives that we find all around us today : Continuous Improvement, Agile/ Rapid methodologies, ..to Scrum teams
Even though little thought or mention is given today, the history of Quality and Continuous Improvement Methodologies is very interesting.
The core of today’s Total Quality Management (TQM) and “Continuous Improvement” movements brings us to ~1950s Japan, where the United States had sent its best Quality Control & Quality Management gurus to assist in the rebuilding of Japan’s economy and manufacturing sectors.
Deming is regarded as one of the inspirations for what has become known as the Japanese post-war economic miracle of 1950 to 1960, when Japan rose from the ashes of war on the road to becoming the second-largest economy in the world, through processes largely influenced by the ideas Deming (& others) taught during this post-war “rebuilding” period :
Best known for Japan’s Quality movements (Quality Circles, etc) is Kaoru Ishikawa, whose leadership is world-renowned. He translated, integrated, and expanded the management concepts of W. Edwards Deming and Joseph M. Juran into the Japanese system.
His Japanese nation-wide movement, “Company Wide Quality Control (CWQC)”, became famous as a bottom-up process that was initiated and led from the Top.. which later became much of the Six Sigma mantra.
When you see a “Fishbone” diagram .. think twice, they are actually called “Ishikawa diagrams” :
So when you ask.. Why have Japanese automobile and electronics manufacturing been 2nd to none? .. remember Deming, Juran, and Ishikawa, and the little-known fact that prior to their assistance, Japan was actually well known for poor quality in manufacturing.
Until Ford reached out for Deming’s assistance in 1981, American companies denied the need and resisted change (after 40 years of American TQM leadership globally), continuing to operate inefficiently until losing Billion$ during the 1980’s.. when Japan had already eaten their lunch (only then did Ford’s motto change to : “Quality is Job 1”).
JIT, or “flow”, has ruled the world of Manufacturing Inventory Control and Supply Chain Management ever since, defining that in a perfect manufacturing process that “pulls” and “flows”, there will be NO over-purchased inventory.
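The “pull” idea can be contrasted with a “push” (build-to-forecast) model in a toy sketch; all quantities here are hypothetical and only illustrate why pull eliminates over-purchased inventory :

```python
# Toy contrast of "push" (build to forecast) vs "pull" (build to demand).
forecast, actual_demand = 150, 100  # hypothetical units for one period

# Push: each stage produces to the forecast, so unsold units pile up
# as inventory whenever the forecast overshoots real demand.
push_excess = forecast - actual_demand  # leftover units per stage

# Pull (JIT): the customer order propagates upstream, and every stage
# produces exactly what the next stage requested -- nothing more.
stages = ["raw material", "parts", "assembly"]
pull_orders = {stage: actual_demand for stage in stages}
pull_excess = sum(qty - actual_demand for qty in pull_orders.values())

print(f"Push leftover per stage: {push_excess} units")
print(f"Pull leftover across stages: {pull_excess} units")
```

With demand “pulling” production, no stage ever builds ahead of the downstream signal, so the excess is zero by construction.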
Jumping forward a decade or two, skipping over the 2000’s craze around TQM and SixSigma.. brings us to :
Summarized by the Scrum Alliance :
Scrum is an Agile framework for completing complex projects. Scrum originally was formalized for software development projects, but it works well for any complex, innovative scope of work. The possibilities are endless. The Scrum framework is deceptively simple.
ITSM frameworks, such as ITIL and COBIT, provide a set of interrelated “Best Practices” offering guidance for developing, delivering, and managing enterprise IT services.
While in the 1990’s and 2000’s IT Service Management Strategy was the “rage”, today’s rapid shift toward Cloud-enabled Services & Deployment Models has placed ITSM on the sideline, with much of its process and best practices being incorporated within Enterprise Architecture & embedded within the latest generation of automated/integrated Infrastructure Operational & platform Management tools (Dell’s HCI – Hyper Converged Infrastructure, VMware’s SDDC – the Software Defined DataCenter, Oracle’s Engineered Systems, HW “Appliances”, and/or Cloud IaaS/PaaS, all of which abstract the control-plane management capabilities and interfaces within a SPOG – Single Pane of Glass Management interface).
For formal Governance and detailed ITSM process/methodology, Enterprise Architecture can be overlapped with ITSM (eg. ITIL), as depicted at the tail end of the EA/TOGAF section, but again, caution related to process “agility” and business/cultural impacts needs to be considered.
In 1989, GTIMM (the Government Infrastructure Management Method) was renamed to ITIL. ITIL is a framework for implementing ITSM, most predominantly centered around managing Enterprise IT Services and Data/Information systems with business/LOBs. It is best known for its library of best practices and abundance of practitioners.
In 2000, ITIL v2 rearranged the 30 volumes into 9 categories. ITIL also saw its adoption grow, with Microsoft adopting it as the foundation for their MOF (Microsoft Operations Framework).
In 2007, ITIL v3 expanded to embrace and incorporate better alignment with LOB’s (Lines of Business) and with the TQM & Six Sigma movements, adding “Continual Improvement”. Additionally, they attempted to make things more accessible by compressing 26 functions and processes into only 5 volumes, but required an update to v3 in 2011 to resolve inconsistencies and errors.
This year, ITIL has started releasing its updated version 4, which includes the following updates to make it more accessible and inclusive to organizations in the 21st century, where Agile/DevOps movements and Enterprise Architecture have become the focus for most serious Digital Transformation efforts (adding Organizational Management, a huge pain-point for enterprises that are closing datacenters and migrating to cloud-enabled future-state architectures). Note that they have also added measurement and reporting (KPI’s) as an important process missing from ITIL version 3.
In 1996, COBIT was released by ISACA (the Information Systems Audit and Control Association) for IT governance and management. While it won’t be discussed here in detail, it is best suited where IT Governance is the central focus, providing short feedback loops for quality, control, risk, and reliability management :
In 2003, Benjamin Treynor @ Google coined the term “SRE”, reflecting the titles of production Engineers (blending administration, operations, systems engineering, and Production support) whose first priority was Production Reliability (aka Up-Time).
Since then, this unique role has grown to become a well-known and recognized title, and it definitely remains a staple within Google.
While SRE doesn’t have a formal methodology nor a specific set of processes (external to Google), expect this position to be treated as such (a role), with a set of roles and responsibilities that need to be woven into the fabric of Enterprise Architecture where applicable.
Areas and disciplines related include the following foundation of activities and knowledge :
Below, Google depicts various aspects of an SRE bootstrapping (getting up to speed) in preparation for the realities of supporting live “on-call” activities :
While a unique role, most organizations handle these tasks within other, more traditional job descriptions such as : Systems Engineering, Systems Administration, Operations, back-line Support, On-Site Engineers.. or, as our discussions above reflected, within the growing emphasis on self-sufficient DevOps teams, where the addition of an SRE for business-critical environments would be recommended.
The team at Google has written 2 books that will give you a window into their world of ensuring production reliability. Another interesting read on this topic is an O’Reilly Ebook titled “Case Studies in Infrastructure Change Management How Google Rebuilds the Jet While Flying It“, where Google’s SREs also define and describe their use of ICM – Infrastructure Change Management – as significant.
It should also be noted that in today’s tech industry, the terms SRE and DevOps have become frequently inter-related and often confused. The distinction is that an SRE role should be related to “reliability” considerations (Production Workloads, Incident Response, root-cause analysis, with an understanding of systems engineering, scalability, availability, redundancy, replication, fail-over, disaster recovery, interpreting system metrics, etc).
This last topic reflects one of the key tools utilized across many IT/ EA and project-oriented disciplines.. a common modeling language.
As with most things in IT, the standard “Unified Modeling Language” (UML) had its roots in Software Architecture and design.
In the 1990’s, as the Object-Oriented craze came into glory, the need arose for a standard notation to use in depicting Object-Oriented diagrams.
In 1994, three of the Object-Oriented pioneers (Grady Booch, Ivar Jacobson, and James Rumbaugh) at Rational Software together developed the “Unified Modeling Language” (UML), which was later adopted as an industry standard by the Object Management Group (OMG), and later still as an ISO standard.
As can be seen below, the evolution of UML came from state/other diagrams before it, and it has since evolved extensively (even beyond this diagram, as noted below with SysML, UPDM, and most recently UAF) :
UML diagrams come in many forms, such as the following (as extracted from Wikipedia Links above/below) :
*Explore the following links, or the page “adoption of UML”, for further examples of how UML has been and is being applied across many different industries and disciplines. Note that the common “Use-Case” diagrams are a take-away commonly applied in the business world.
Between 2005-2013, UPDM (the Unified Profile for DoDAF and MODAF) came from an initiative to develop a modeling standard that supports both the USA Department of Defense Architecture Framework (DoDAF) and the UK Ministry of Defence Architecture Framework (MODAF). *UPDM is based upon UML*. Also note, UPDM v3 became UAF (below).
In Sept. 2013, a request for proposal for a Unified Architecture Framework (UAF) [then requested as UPDM v3] was created in the Object Management Group (OMG). The intent of this framework was to develop a unified and consistent architecture framework from the three military AFs (the IDEAS foundations for DoDAF and MODAF), making it applicable to industrial and commercial applications while still being applicable to defense. The extensions requested were in the areas of Security, IoT, Human Systems Integration, and SoS. The framework developed provides a consistent metamodel and profile (implementable in SysML) that can be used to represent SoS concepts across different levels of abstraction in a standardized manner.
Let us know what you think and please Comment and/or Retweet Below with any Q’s, thoughts, or requests for future content.
To contact the author directly :
All content and comments made are that of the author, and not of current nor past employers.