To provide a basic understanding of enterprise security, the following section offers a high-level landscape of key security areas and “Concepts” that need to be understood and considered PRIOR to embarking on any effort to evaluate or assess an organization’s Security Policies or Future State Architectures (most notably, cloud-enabled).
Each of the following is a key component of any Security Policy, and as with all things in technology, each is an evolution built upon prior models, (hopefully) continuously improving as demands, methods of access, and technology frameworks change. With this said, most aspects of a “Public/Hybrid Cloud Architecture” are a subset of what already exists On-Premises.
However, given that most platforms offer multi-tenant access to shared resources (in many Business or Mission Critical scenarios going against past corporate Compliance/Regulatory standards or requirements), cloud architectures need to ensure that Security is a top priority. Additionally, new capabilities are being refined to more seamlessly integrate Hybrid capabilities between On/Off-Prem environments (such as REST APIs, secure object/file storage, DB replication, and integrated management frameworks).
**NOTE: This section is only an outline, and is not intended to provide a detailed discussion (as such, it can be skipped if these concepts are understood). Also note that, as with most areas in technology, many of the concepts and aspects overlap and apply to more than one area listed (e.g. Intrusion Detection can be applied to: Networks, OS, Physical Security, Access Management, ...).
Facilities – Physical Access Control – *Self Explanatory*
Prevention and deterrents to combat “Phishing” techniques and “Spoofing” attacks (e.g. calling a support center and claiming you forgot a password, or spoofing an email to someone so it looks like the email provider is requesting an account password reset, through more technical GPS, IP, DNS, or ARP spoofing attacks).
NOTE: While the following concepts can be associated with “physical” network devices, modern Private & Public cloud IaaS utilizes underlying hyper-converged Software-Defined Networking (SDN) to virtualize and elastically scale these capabilities “within” IaaS platforms as Services/vDevices (e.g. public VPC/Virtual Cloud Networks, Routers/Gateways, Firewalls/Security Groups, vNICs, VPNs, etc.).
Note that today, TLS (the successor to SSL) is primarily used.
SSL & TLS can be treated as interchangeable for the purposes of these examples.
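As a minimal sketch of the point above, Python’s standard `ssl` module illustrates how “SSL” in practice now means TLS: the default client context negotiates TLS (SSLv2/SSLv3 are disabled) and verifies server certificates and hostnames out of the box.

```python
import ssl

# Modern Python defaults to TLS; the legacy SSLv2/SSLv3 protocols are disabled.
# This sketch just inspects the secure defaults of a client-side context.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # server certificate validation is on
print(ctx.check_hostname)                    # hostname checking is on
print(ctx.minimum_version)                   # minimum negotiated TLS version (e.g. TLSv1.2 on recent Pythons)
```

This context would then be passed to `ssl.SSLContext.wrap_socket()` (or an HTTPS client) to establish an encrypted, authenticated session.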
IPSEC Tunneling
The following is an example of utilizing RBAC within a Unix/Linux based system:
(Application/Device/Directory/File access controls – e.g. Owner/Group/Other : rwxrwxrwx)
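A small sketch of the Owner/Group/Other (rwxrwxrwx) model above, using Python’s standard library: create a scratch file and restrict it to owner read/write only (the equivalent of `chmod 600`).

```python
import os
import stat
import tempfile

# Create a temporary file, then restrict it to owner read/write only:
# Owner: rw-, Group: ---, Other: ---  (i.e. `chmod 600`)
fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
mode = stat.S_IMODE(os.stat(path).st_mode)

print(oct(mode))  # -> 0o600
os.remove(path)   # clean up the scratch file
```

The same bit pattern is what `ls -l` renders as `-rw-------`.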
*Minimizing the number of Base Packages Installed (fewer packages = fewer holes)
*Service Management – One of the key aspects of “Hardening a System”
(aka, Turning OFF the unnecessary Services that expose Sockets & ports externally)
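To make the “exposed Sockets & ports” point concrete, the hypothetical sketch below (loopback only, ephemeral port) shows that any running service which binds a listening socket immediately becomes something a port scan can find and connect to; hardening is largely about eliminating such listeners.

```python
import socket

# Any service that binds a socket exposes a port that clients can probe.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free ephemeral port
srv.listen()
host, port = srv.getsockname()
print(f"'service' now listening on {host}:{port}")

# A port scan amounts to attempting connections like this one:
probe = socket.create_connection((host, port), timeout=1)  # succeeds: port is open
probe.close()
srv.close()  # "turning the service off" closes the exposure
```

On a real host, tools such as `ss -tlnp` or `netstat` enumerate exactly these listening sockets when auditing which services to disable.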
Note that while (Type 1) Hypervisors better isolate workloads via VMs (OS encapsulation), those VMs still run on the same physical server with some shared components. Ultimately, this model consumes MORE resources due to VM overheads (redundant OS kernels consuming CPU cores & memory), plus VM scheduling and emulation/translation inefficiencies, which require “over-provisioning”.
This also raises the issue of RAS (Reliability, Availability, Serviceability) and the patch management required to maintain many VMs and OS versions.
Container-based Virtualization (OS or Application), on the other hand, allows a collection of isolated workloads to run (in Containers) SHARING the same single underlying OS kernel. This model offers much more resource-efficient (and dense) consolidation, but also requires further Reliability/Availability consideration, given that if the underlying OS fails, then all of the Containers relying upon it will also fail. For this reason, additional orchestration and fault monitoring is required (see Kubernetes, below).
** Given the depth and breadth of this topic relating more toward Application Architecture, it’s outside the scope of this conversation and should be explored much more deeply. While most other aspects/topics listed here will apply, some level of Architectural Assessment and Analysis should be conducted, noting the many compliance/regulatory, and points of integration potentially involved. **
NOTE: Industry trends are shifting rapidly re: Application Architecture & Security :
Key areas and Considerations regarding Application Security :
Today, several alternatives exist for Application security, notably when most client traffic is over the internet and securing data as well as session authentication/authorization is the #1 priority.
At the highest level, the Application Architect and Developer needs to understand the various interactions and options, Use Cases, and Limitations for :
The open community organization OWASP has created a high level summarized Authentication Cheat Sheet
A nice summary/comparison of SAML vs. OAuth 2.0 vs. OpenID Connect can be found *HERE* :
| | OAuth2 | OpenID | SAML |
|---|---|---|---|
| Token (or assertion) format | JSON or SAML2 | JSON | XML |
| Authorization? | Yes | No | Yes |
| Authentication? | Pseudo-authentication | Yes | Yes |
| Year created | 2005 | 2006 | 2001 |
| Current version | OAuth2 | OpenID Connect | SAML 2.0 |
| Transport | HTTP | HTTP GET and HTTP POST | HTTP Redirect (GET) binding, SAML SOAP binding, HTTP POST binding, and others |
| Security risks | Phishing. OAuth 2.0 does not support signature, encryption, channel binding, or client verification; it relies completely on TLS for confidentiality. | Phishing. Identity providers have a log of OpenID logins, making a compromised account a bigger privacy breach. | XML Signature Wrapping to impersonate any user |
| Best suited for | API authorization | Single sign-on for consumer apps | Single sign-on for enterprise (note: not well suited for mobile) |
NOTE: The following Oracle ADF Mobile whitepaper discusses several considerations for moving from SOA to Mobile, regarding interfaces, design, strategy, etc.
[Figure: Oracle DB defense-in-depth – Inbound Data: Network Encryption, Strong Authentication, Identity Management Integration; Storage: Transparent Data Encryption, Secure Backup; Monitor: Configuration Scanning, Audit Vault; Access Control: Database Vault, Oracle Label Security, Fusion Security; Outbound Data: Network Encryption]
Oracle DB Security Compliance & Auditing : (this link is a nice overview of Oracle 12c DB Compliance)
As described in the sections above, encryption can be utilized for data both in transit (network) as well as at rest (Files, Filesystems, Storage Volumes, and Databases).
Public / Private Key Encryption
Certificates
Oracle’s TDE is part of the Oracle DB ASO (Advance Security Option), offering very robust DB encryption capabilities :
Note that while DB fields can be “masked” so that the data won’t be visible to Users/Groups, DBAs STILL have access to that data. (The following is an example of how to mask via OEM with Oracle DB 11g/12c.)
Data Redaction is available with Oracle 12c, going beyond “masking” so that data access/visibility can even be restricted to DBA’s without permission.
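As a purely illustrative sketch of the masking/redaction idea (NOT Oracle’s implementation), the hypothetical function below returns a full SSN only to an assumed authorized role and a masked value to everyone else, including DBAs:

```python
# Hypothetical redaction policy: only an assumed "compliance_officer" role
# sees the full SSN; all other roles (DBAs included) get a masked value.
AUTHORIZED_ROLES = {"compliance_officer"}

def redact_ssn(ssn: str, role: str) -> str:
    """Return the full SSN only for explicitly authorized roles."""
    if role in AUTHORIZED_ROLES:
        return ssn
    return "***-**-" + ssn[-4:]  # keep only the last four digits

print(redact_ssn("123-45-6789", "dba"))                 # ***-**-6789
print(redact_ssn("123-45-6789", "compliance_officer"))  # 123-45-6789
```

In Oracle 12c, the analogous policy is declared on the column via the Data Redaction feature rather than coded per-application.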
[Figure: Oracle Database Vault – Controls on privileged users: restrict privileged users from accessing application data, enforce separation of duty. Real-time access controls: control access based on IP address, authentication method, time of day, etc. Transparency: no changes to applications required]
Essentially, it applies Security Policies to hide data and further limit access. Oracle’s overview describes it further as:
Oracle Virtual Private Database (VPD) enables you to create security policies to control database access at the row and column level. Essentially, Oracle Virtual Private Database adds a dynamic WHERE clause to a SQL statement that is issued against the table, view, or synonym to which an Oracle Virtual Private Database security policy was applied.
Oracle Virtual Private Database enforces security, to a fine level of granularity, directly on database tables, views, or synonyms. Because you attach security policies directly to these database objects, and the policies are automatically applied whenever a user accesses data, there is no way to bypass security.
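The “dynamic WHERE clause” idea can be sketched outside Oracle as well. The example below uses Python’s built-in SQLite (the table, users, and policy are hypothetical) to show a policy function appending a predicate to every query so each user only ever sees “their” rows:

```python
import sqlite3

# Illustrative stand-in for a VPD-style policy (SQLite, not Oracle):
# every query against `orders` gets a dynamic WHERE predicate appended.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, owner TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "alice", 10.0), (2, "bob", 20.0), (3, "alice", 30.0)])

def policy_predicate(user):
    # Hypothetical row-level policy: each user sees only rows they own.
    return "owner = ?", (user,)

def query_orders(user):
    pred, params = policy_predicate(user)
    sql = f"SELECT id, total FROM orders WHERE {pred}"  # dynamic WHERE clause
    return conn.execute(sql, params).fetchall()

print(query_orders("alice"))  # [(1, 10.0), (3, 30.0)]
print(query_orders("bob"))    # [(2, 20.0)]
```

The key difference in real VPD is that the policy is attached to the database object itself, so it cannot be bypassed by an application that forgets to call the policy function.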
Digital Identity uses “Attributes” to offer various levels of Role or Policy based Access Control.
These “Attributes” include personal identifying characteristics such as: email, SSN, name, role, driver’s license number, employee ID, DOB, phone number, address, etc.
The goal of Identity Management is to register Users and manage these “Attributes” for differentiating and validating Digital Identity (in databases and centralized repositories via IDM systems).
LDAP (Lightweight Directory Access Protocol) is at the center of all modern Identity Management Solutions, providing “Directory Services” as Wikipedia sums up nicely :
The Lightweight Directory Access Protocol (LDAP; /ˈɛldæp/) is an open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network. Directory services play an important role in developing intranet and Internet applications by allowing the sharing of information about users, systems, networks, services, and applications throughout the network. As examples, directory services may provide any organized set of records, often with a hierarchical structure, such as a corporate email directory. Similarly, a telephone directory is a list of subscribers with an address and a phone number.
LDAP is specified in a series of Internet Engineering Task Force (IETF) Standard Track publications called Request for Comments (RFCs), using the description language ASN.1. The latest specification is Version 3.
A common use of LDAP is to provide a central place to store usernames and passwords. This allows many different applications and services to connect to the LDAP server to validate users.
LDAP is based on a simpler subset of the standards contained within the X.500 standard. Because of this relationship, LDAP is sometimes called X.500-lite.
The most common form of authentication we find is Username – Password validation.
** See the “Application Security” section above for a more comprehensive discussion of Application & Web Service based Authentication / Authorization (AuthN/Z) **
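A minimal sketch of the server side of username/password validation, using only Python’s standard library (the iteration count and salt size shown are illustrative choices, not a standard): store a salted, iterated hash rather than the password itself, and compare in constant time on login.

```python
import hashlib
import hmac
import os

# Store only a salted, iterated hash (PBKDF2-HMAC-SHA256 here), never the password.
def hash_password(password, salt=None):
    salt = salt or os.urandom(16)          # per-user random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

The salt defeats precomputed (rainbow-table) attacks, and the high iteration count slows brute-force attempts against a stolen credential store.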
At the MOST Basic level, this entails determining whether someone (or a transaction) is “authorized”, typically based upon “Attributes” such as ROLE (See RBAC) and/or part of a Security Policy.
** See the “Application Security” section above for a more comprehensive discussion of Application & Web Service based Authentication / Authorization (AuthN/Z) **
Access Control mechanisms based upon “Role”.
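The essence of RBAC can be sketched in a few lines (the roles, users, and permissions below are hypothetical): permissions attach to roles, users map to roles, and an access check asks whether any of the user’s roles grants the requested permission.

```python
# Hypothetical role and user assignments for an illustrative RBAC check.
ROLE_PERMISSIONS = {
    "admin":   {"read", "write", "delete"},
    "analyst": {"read"},
}
USER_ROLES = {"alice": {"admin"}, "bob": {"analyst"}}

def is_authorized(user, permission):
    """True if any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("alice", "delete"))  # True  (admin role grants delete)
print(is_authorized("bob", "delete"))    # False (analyst role is read-only)
```

Centralizing the role-to-permission mapping is what lets a Security Policy change (e.g. revoking “delete” from a role) take effect everywhere at once.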
Access Management builds on top of Identity Management, to offer robust centralized authentication and SSO (Single Sign-On).
According to Gartner, “Access management (AM) applies to technologies that use access control engines to provide centralized authentication, single sign-on (SSO), session management and authorization enforcement for target applications in multiple use cases. Access managers have evolved from early “web access managers.” Target applications may have traditional web application architectures using web browsers and web application servers, and these applications may run on customers’ premises or in the cloud. Capabilities have expanded to support native mobile or hybrid mobile applications. In addition, these applications may run on internet-connected things with or without human operators. Protected target resources may include on-premises or SaaS applications and web services APIs across business-to-employee (B2E), B2B and B2C use cases.”
“This Magic Quadrant focuses on vendors that deliver AM functionality to support multiple common use cases, and that provide solutions in software or hardware appliance form factors, or as a service to meet customer requirements for access control to applications and services on-premises or in private and public clouds.”
Gartner Notes :
By 2019, more than 80% of organizations will use access management software or services, up from 55% today.
By 2021, IDaaS will be the majority access management delivery model for new purchases, up from less than 20% today.
Most of today’s Healthcare, Pharmaceutical, Financial, and Insurance industries have Business / Mission Critical environments with regulatory commitments going above and beyond traditional SLAs (Service Level Agreements).
To be in compliance for your industry, you need to know in depth what additional requirements apply, many of them relating to securing environments & data, along with some very specific isolation/retention needs and protection mechanisms.
Additional considerations should include: extension of Business Continuity & Disaster Recovery plans, auditing mechanisms & logging, and legal/contractual public records-keeping requirements.
Some of the common Regulatory/ Compliance standards that need to be considered and addressed include :
Gartner offers a nice process for managing Security with Continuous Improvement in mind :
Given that using standards to “minimize” the number of physical components & configurations (OS, packages, patches, firmware, ...) is at the core of reducing IT Service Management (ITIL) & Enterprise Architecture COMPLEXITY, the following aspects can be negatively impacted by advocating fully independent, autonomous ownership of environments:
When application architecture does not depend on the existence of centralized, standard (re-usable, consistent, tested) Enterprise libraries/frameworks, it entices developers to utilize components/APIs from potentially disparate (non-validated) repositories/hubs (e.g. GitHub, npm, etc.), where the level of testing, security, and stability should be in question unless first validated.
Today’s developers can readily grab any of the following, without any way of knowing completely “what” previous changes, sub-component updates, or untested vulnerabilities might be lurking within:
By allowing Developers access and control over the entire Dev->Test->Prod life-cycle, this lack of “unity” in standards between teams and environments can result in future Production-related COMPLEXITY and extended time to diagnose issues.
The following chart shows how the addition of components/variations compounds the number of potential Permutations & Combinations to manage:
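The combinatorial growth behind that chart can be sketched directly (the layer counts below are hypothetical): with independent choices at each layer of the stack, the number of distinct configurations is the product of the options per layer.

```python
from math import prod

# Number of distinct stack configurations = product of options at each layer.
def stack_combinations(options_per_layer):
    return prod(options_per_layer)

# Hypothetical environment: 3 OS versions x 4 patch levels
# x 5 framework versions x 2 container runtimes:
print(stack_combinations([3, 4, 5, 2]))     # 120 permutations to manage
# Adding just one more layer with 6 options multiplies the burden:
print(stack_combinations([3, 4, 5, 2, 6]))  # 720
```

This multiplicative effect is why standardizing (reducing options per layer) pays off so disproportionately in operational complexity.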
Over the next several years, the Growth of IoT devices is expected to far exceed 50 Billion devices.
Each of these devices will be sampling, gathering, monitoring, and aggregating activity and #DATA about the world around us, and WITHIN us (via biometrics). Securing access to these devices and the data they produce should be a critical concern, planned for far in advance of deployments.
We have come a LONG way since the state of FOSS & Container security in 2016, but the comment from the FOSS Summit emcee (Bryan Cantrill) was an eye-opener, and almost a premonition of the rapid increase in worldwide security breaches that occurred in 2017:
” The Linux kernel substrate that these things are built on, cgroups and namespaces, were never designed to meet security needs. I think Docker just this week released the first version that really uses user-level namespaces, that enables any kind of security and it’s not necessarily on by default.
… I think that it’s not just a matter of the ability to break out. It’s the ability to impact service as well. What happens if a container exhausts disk space? What happens if a container exhausts DRAM? What happens if a container consumes all the bandwidth? It’s resource management as well.
… I hope that the Linux kernel gets there, from the perspective of all of us.
… So I do think that the multi-tenant security is not there yet. I think it’s a challenge to get there because there is a “you first” attitude. Because you’re one vulnerability away from real devastation (!) … “
The fundamental reasons being :
A> EVERYONE has access to the source code, as well as public visibility of any/all vulnerabilities exposed in testing, BEFORE the issues are resolved...
and
B> Patches are only applied by the rest of the world AFTER the issues have been IDENTIFIED within each organization (e.g. the Equifax breach related to an Apache Struts framework patch that had been available for several months, but was only 1 of MANY disparate components deployed in various configurations/versions across many systems & environments, requiring significant effort in IDENTIFYING needed patches as part of Security Management activities).
While Edward Snowden’s NSA leaks (beginning in 2013) shocked the world, his opening of Pandora’s box not only opened eyes, but also released (likely via the confiscation of his laptops by both the Chinese and Russian governments) asset locations, tools, and vulnerabilities of FOSS and consumer devices. These have become the “Snowden SNOWBALL” fueling the rapid growth in frequency and escalating impact of security breaches.
We have all heard of the Heartbleed (OpenSSL/TLS) & VENOM (Xen/KVM, ...) security holes and/or attacks, but in 2017 the world experienced more FOSS-related HUGE, nation-impacting breaches than in all years prior; two of the largest being:
– WikiLeaks email and other breaches, which have been claimed to impact the US Presidential Election (granted, some of this was from phishing, but most environments hosting this data were FOSS/Linux infrastructures, where several attacks did not stem from users exposing passwords).
– Equifax (Apache Struts Framework bug) breach of personal/SSN records for nearly HALF of ALL Americans (~150 Million people !), including hundreds of thousands of Credit Card account records.
– Uber, Retailers, HealthCare companies, Credit Card companies, … the list grows from 2015/ 2016.. onward
Much of the following section is taken directly from the following Gartner.com Security readout :
Today’s data centers support workloads that typically run in several different places: Physical machines, virtual machines, containers, and private and public cloud. Cloud workload protection platforms provide a single management console and a single way to express security policy, regardless of where the workload runs.
Browser-based attacks are the leading source of attacks on users. Establishing a remote browser by isolating end-user internet browsing sessions for enterprise endpoints keeps malware off an end-user’s system, reducing the surface area for attack and shifting it to server sessions. Server sessions can be reset to a known good state on new browsing sessions, tab opened or URL accessed.
Deception technology can be used to thwart or throw off a potential attacker, allowing enterprises to better detect attacks with a higher level of confidence in the events detected. Current deception technology spans multiple layers within the stack, including endpoint, network, application and data.
Gartner predicts that by 2020, 80% of large enterprises, 25% of midsize organizations and 10% of small organizations will have invested in EDR capabilities. These solutions monitor endpoints for unusual behavior or malicious intent.
Network traffic analysis is a network-based approach to monitor network traffic, flows, connections and objects looking for malicious intent. This solution will identify, monitor and triage these events.
MDR can be a good solution for enterprises that want to improve threat detection, incident response and continuous-monitoring abilities but lack the skill or resources to do so in-house. MDR is particularly popular with small and midsize enterprises due to lack of investment in threat detection.
Microsegmentation means implementing isolation and segmentation for security purposes in the virtual data center. This can stop attackers who are already in the system from moving laterally to other systems.
SDPs define a logical set of disparate, network-connected participants within a secure computing enclave. The resources are typically hidden from public discovery, and access is restricted via a trust broker to the specified participants of the enclave, removing the assets from public visibility and reducing the surface area for attack.
CASBs provide a single point of control over multiple cloud services concurrently for any user or device, offering more control and visibility. They address gaps created by the significant increase in cloud service and mobile usage.
Security architects must be able to incorporate security controls without manual configuration throughout the DevOps process, while being as transparent as possible for developers. Software composition analysis tools analyze the source code, modules, frameworks and libraries to identify and inventory OSS components and known security vulnerabilities or licensing issues.
Because containers use a shared OS model, an attack on a vulnerability in the OS could compromise all the containers. Container security can be a challenge if developers create containers with no input from security experts. Container security solutions protect the entire life of the container from creation into production, and most provide preproduction scanning combined with runtime monitoring and protection.
As the cloud environment reaches maturity, it’s becoming even more of a security target and it will start having security problems. It’s possible cloud will fall victim to a tragedy of the commons wherein a shared cloud service becomes unstable and unsecure based on increased demands by companies. When it comes to cloud, security experts will need to decide who they can trust and who they can’t. Companies should develop security guidelines for private and public cloud use and utilize a cloud decision model to apply rigor to cloud risks.
In light of the MANY security breaches of this past year, it is worth realizing that a LARGE portion of your life exists (whether you like it or not) in the “Cloud”. Whether you own an iPhone or a Samsung smartphone with Google Android, far more than your SSN, DOB, Drivers License, and Address is being tracked, stored, and in many (if not most) scenarios SOLD to the highest bidder (e.g. your location, browsing history, shopping trends, search preferences, friends/contacts, interests, purchases, online sentiment, political viewpoints, ... are ALL being stored in a public cloud platform somewhere).
“Take the money you’re spending on prevention and begin to drive it more equitably to detection and response. The truth is that you won’t be able to stop every threat and you need to get over it.”
A dedicated, well-financed actor who is after something in your enterprise is going to get it, even if they use the weakest link–people–to do so. This means adapting your security setup to focus on detection, response, and remediation. That’s where the cybersecurity fight is today. In the future it will most likely move to prediction of what’s coming before anything happens.
There is a new window of opportunity in application security, but most enterprises don’t take advantage of it because of the expense. It’s time to figure out the right way to evaluate the value of security and the best way to explain that to the business.
Additionally, DevOps should become DevSecOps, with a focus on security. This is a good time to marry development and operations. The time to market has shortened so much, it creates an endless connection between development and operation, which means it’s important to stop running them as isolated units. This is the time to bring security to DevOps, or if the team is not internal, to ask the service provider what kind of security they provide.
In the meantime, let me know what you think, and whether you have any questions or comments for future content.
I can be contacted via :
All content and comments made are that of Todd A. Jobson, Sr., TechThought.org, and not of Oracle Corporation.