Cybersecurity professionals have all heard the buzzword "Zero Trust," but few know what it means. The Zero Trust concept has come a long way in a decade: from the original 2010 Zero Trust model by John Kindervag of Forrester, to the first implementation by Google (BeyondCorp) in 2013, followed by the Continuous Adaptive Risk and Trust Assessment (CARTA) model by Gartner in 2017, the Zero Trust eXtended (ZTX) model by Forrester in 2018, and the Zero Trust Architecture (ZTA) by NIST. In spite of the various proposed models and architectures, and a proliferation of costly, complex proprietary products from vendors who often fail to deliver, there is no single tool to achieve Zero Trust. Instead, Zero Trust calls for a fundamental shift in a firm's security paradigm across many levels. CISOs must make it their mission to provide clear and concise requirements for Zero Trust, along with practical guidance on implementation to secure their enterprise. I have highlighted the fundamental tenets of Zero Trust below, along with commentary on how they may be implemented to provide better security.
All assets inside and outside a perimeter firewall are not to be trusted
Whether users, systems, or services are inside or outside the firewall, all must be treated as untrustworthy assets, which means they must be authenticated and authorized before use. This is enforced for external entities by a perimeter firewall, and for internal entities through higher-level network segmentation into the (internal and external) DMZs, the Extranet, and the Intranet, as well as through application segmentation.
In network micro-segmentation, the intranet is further segmented into smaller segments based on risk profiles or to meet network/application/work-load isolation needs. This concept can be applied to any part of the internal network using a combination of host-based and network-based firewalls. Although physical network micro-segmentation is possible, it is difficult to implement manually and very difficult and complex to maintain, manage, and audit. Instead, it is generally implemented through concepts of VLAN on physically networked compute platforms and components and through implementations like NSX at the hypervisor level on virtual compute platforms like VMware.
Here are guidelines for network-based micro-segmentation:
• Enable distributed stateful firewalling at a per-server or work-load (VM) level, regardless of the underlying physical or logical network overlay.
• Enable and programmatically define logical level micro-segmented networks, regardless of the underlying compute and network overlay.
• Programmatically create, provision, and subsequently manage fine-grained security and access control policies across multiple micro-segments using a single pane of glass.
• Perform full SSL/TLS decryption and network packet inspection, and integrate with advanced intrusion prevention system (IPS) capabilities.
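The guidelines above can be sketched as a small policy model. This is a hypothetical illustration (the class, method, and segment names are invented, not any vendor's API) of programmatically creating and evaluating fine-grained, default-deny rules across micro-segments from a single policy store:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Rule:
    src_segment: str   # logical micro-segment, not a physical subnet
    dst_segment: str
    port: int
    action: str        # "allow" or "deny"

@dataclass
class PolicyStore:
    """A 'single pane of glass' for rules spanning many micro-segments."""
    rules: list = field(default_factory=list)

    def add_rule(self, rule: Rule) -> None:
        # Policies are defined programmatically, independent of the
        # underlying compute or network overlay.
        self.rules.append(rule)

    def evaluate(self, src: str, dst: str, port: int) -> str:
        # First matching rule wins; anything unmatched is denied,
        # which is the default-deny posture Zero Trust requires.
        for r in self.rules:
            if (r.src_segment, r.dst_segment, r.port) == (src, dst, port):
                return r.action
        return "deny"

store = PolicyStore()
store.add_rule(Rule("pci-segment", "logging-segment", 514, "allow"))

print(store.evaluate("pci-segment", "logging-segment", 514))  # allow
print(store.evaluate("dev-segment", "pci-segment", 443))      # deny (no rule)
```

In a real deployment these rules would be pushed to distributed stateful firewalls at each workload (for example, via a hypervisor-level platform such as NSX), but the policy definition and audit surface stays centralized.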
Application segmentation limits the attack surface for a given application by using layer four controls. Here are guidelines to secure applications:
• Perform intra-application segmentation by enforcing a distinct separation between the n tiers of an individual application and using the principle of least privilege to allow only the least amount of access to each tier (e.g., web tier, application tier, database tier).
• Isolate a given application from other applications and systems to restrict vulnerability exploitation and lateral movement from other apps within or outside the network segment in which the app resides.
Network segmentation can further enable application segmentation.
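Intra-application segmentation at layer four reduces to an explicit allow-list of tier-to-tier flows. A minimal sketch (tier names and ports are illustrative assumptions) shows how least privilege blocks lateral movement between tiers by default:

```python
# Hypothetical layer-4 flow allow-list for a three-tier application:
# each tier may reach only the next tier, on exactly one port.
ALLOWED_FLOWS = {
    ("web", "app"): {8443},
    ("app", "db"): {5432},
}

def flow_permitted(src_tier: str, dst_tier: str, port: int) -> bool:
    # Anything not explicitly allowed is denied, so a compromised web
    # tier can never reach the database tier directly.
    return port in ALLOWED_FLOWS.get((src_tier, dst_tier), set())

print(flow_permitted("web", "app", 8443))  # True
print(flow_permitted("web", "db", 5432))   # False
```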
Accurate asset inventory must exist for all systems and services
The capability should exist to create and maintain an accurate asset inventory of all hardware, software, systems, and services within an application (preferably in a CMDB), with API and programmatic access. To be effective, all physical and virtual systems and services must be dynamically detected and inventoried (i.e., added or updated) as they come on or go off the network.
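The dynamic-inventory requirement can be sketched as an upsert-style store driven by join/leave events. This is a hypothetical illustration (class and field names are invented, not a real CMDB API):

```python
import time

class AssetInventory:
    """CMDB-like store with programmatic access, updated dynamically."""

    def __init__(self):
        self._assets = {}

    def upsert(self, asset_id: str, kind: str) -> None:
        # Called when an asset is detected joining the network:
        # new assets are added, known assets are updated in place.
        self._assets[asset_id] = {
            "kind": kind,
            "online": True,
            "last_seen": time.time(),
        }

    def mark_offline(self, asset_id: str) -> None:
        # Called when an asset is detected leaving the network.
        if asset_id in self._assets:
            self._assets[asset_id]["online"] = False

    def get(self, asset_id: str):
        return self._assets.get(asset_id)

inv = AssetInventory()
inv.upsert("vm-042", kind="virtual-machine")  # detected on join
inv.mark_offline("vm-042")                    # detected on leave
```

The key property is that inventory changes are event-driven rather than periodic, so the CMDB reflects the network as it actually is.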
All traffic for all systems and services must be authenticated and authorized
Backhauling all authentication and authorization requests to a traditional data center to comply with a Zero Trust model is achievable, but it is becoming increasingly difficult to manage and maintain with the advent of distributed computing. An alternative means of proper authentication and authorization is to ensure that secure access decisions are made at the entity (user, system, device, service, or location) initiating the connection itself, generally at the edge computing location. This can be achieved using the concepts defined in Secure Access Service Edge (SASE), which extends the existing concept of identity built upon users, groups, and roles to include edge computing and wide area networks (WAN). The SASE paradigm uses a combination of security capabilities defined in software-defined wide area network (SD-WAN), cloud access security broker (CASB), secure web gateway (SWG), next-generation anti-virus (NGAV), virtual private network (VPN), next-generation firewall (NGFW), and data loss prevention (DLP) - all delivered as a single service at the network edge.
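The essence of an edge access decision, in the spirit of SASE, is that the verdict combines identity, device posture, and context rather than network reachability alone. A minimal sketch (the function and its parameters are hypothetical simplifications, not a SASE product API):

```python
def edge_access_decision(user_role: str,
                         device_compliant: bool,
                         geo_allowed: bool,
                         resource_roles: set) -> bool:
    """Decide access at the edge, close to the requesting entity.

    Every factor must pass; failing any single check denies access,
    so a valid user on a non-compliant device is still rejected.
    """
    return (user_role in resource_roles
            and device_compliant
            and geo_allowed)

print(edge_access_decision("engineer", True, True, {"engineer"}))   # True
print(edge_access_decision("engineer", False, True, {"engineer"}))  # False
```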
Access control for users, devices, systems, and services must be provided using the least privilege
Access for all users, devices, systems, and services must be continuously re-assessed and always granted using the principle of least privilege.
All data in transit and at rest must be encrypted end to end
Encrypt data in transit end to end using techniques such as TLS with ESNI or IPsec, with FIPS-compliant handshakes and key management. Data at rest must be encrypted using techniques such as transparent data encryption (TDE) for structured data (e.g., databases) and symmetric encryption algorithms such as AES (128- or 256-bit) for unstructured data (e.g., file shares).
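As one concrete example of the in-transit requirement, Python's standard-library ssl module can build a client context that both verifies the server and refuses downgraded protocol versions (this sketch enforces TLS 1.3 as a floor; ESNI/ECH support is not part of the standard library and is not shown):

```python
import ssl

# Client-side TLS context for encrypting data in transit.
# create_default_context() enables certificate verification and
# hostname checking by default - in Zero Trust terms, the peer is
# authenticated before any data flows.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

# Refuse anything older than TLS 1.3, ruling out legacy handshakes
# and cipher suites.
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```

The context would then be passed to the socket or HTTP client making the connection; servers that cannot negotiate TLS 1.3 fail the handshake outright.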