Defining Security for an AWS EKS Deployment
Defining Security for a Kubernetes Deployment

Kubernetes is an open-source orchestrator for automating the deployment, scaling, and management of containerized distributed applications, and it is staking its claim as the fastest-growing open source project in history. Initially based on a Google initiative to open source portions of Borg, its internal orchestration tool, Kubernetes has taken on a life of its own and has become the de facto standard for deploying microservices and containers in the cloud. Amazon Web Services (AWS) now offers its own managed Kubernetes service (EKS) to simplify developing and deploying Kubernetes-orchestrated applications on its infrastructure.

From a developer's perspective, the advantages of being cloud-native are clear: the ability to roll out new features rapidly, because the business demands them, without worrying about infrastructure complexities. The security team, however, needs to maintain visibility, compliance, and control, which is difficult with infrastructure it doesn't control. Kubernetes is extremely powerful, but it is still young and not simple to secure in production environments. Security breaches and cryptojacking attacks are hitting the news more frequently, with misconfigured Kubernetes deployments as the primary attack vector. These risks are unavoidable without a comprehensive approach to securing Kubernetes and the workloads it orchestrates.

For any Kubernetes deployment, it is very important to understand basic hygiene and follow best practices for securing the Kubernetes cluster itself. Basic security hygiene includes: using Role-Based Access Control (RBAC) to manage the cluster, exposing to the internet only what is required (for instance, never expose the Kubernetes dashboard), and never using default passwords for administrative accounts. EKS makes this security hygiene easier to implement through the following capabilities:

i.
Integrated RBAC functionality with IAM roles, making it easy to assign privileges and access to the cluster.
ii. Clusters always run in a VPC, providing isolation and more granular control over exposure to the internet.

The focus of this whitepaper is not on how to secure a Kubernetes cluster itself but on how to secure the workloads deployed in the cluster. When deploying and running applications in a Kubernetes cluster, several important security requirements warrant particular attention:

Network Access Control
Security Configuration and Automation
Consistent Policy Across Multiple Clusters and Hybrid Environments
API Access Control
Efficient Isolation in Multi-tenant Environments
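Before turning to these requirements, the RBAC hygiene mentioned above can be made concrete with a minimal sketch. A namespaced Role and RoleBinding granting read-only access to pods might look like the following (all names are illustrative; with EKS, the subject would typically be mapped from an IAM identity):

```yaml
# Illustrative only: a Role granting read-only access to pods in one
# namespace, bound to a single (hypothetical) user.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo
  name: pod-reader
rules:
- apiGroups: [""]          # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: demo
  name: read-pods
subjects:
- kind: User
  name: alice              # hypothetical user; on EKS this maps from an IAM role or user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Granting only the verbs a principal needs, in only the namespaces it needs, is the RBAC counterpart of the least-privilege principle applied elsewhere in this paper.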
Network Access Control

Kubernetes network policy resources use labels to select pods and define rules that specify which traffic is allowed. The responsibility for actually enforcing the network policies falls on third-party network plugins. A variety of open source network plugins have been written for Kubernetes, including Aporeto's. However, not all network plugins are alike. Network plugins commonly run their own control plane, make frequent changes to Linux iptables, or carry deep kernel-level dependencies such as Linux BPF (Berkeley Packet Filter) to deliver their functionality. As a result, their performance and scalability vary, as does their ability to keep up with all of the IP routing changes required to track a highly dynamic environment where containers are spun up, shut down, and moved frequently.

Traditional monolithic workloads in an AWS Virtual Private Cloud (VPC) are fairly static, so it was feasible to segment them using AWS security groups. Microservices take away that workload predictability. Kubernetes networking is flat by default: every pod can reach every other pod, pod IP addresses are assigned dynamically and carry no real information, and multiple pods can share the same Kubernetes node. A traditional segmentation approach using AWS security groups tied to a network interface on the host therefore no longer works. Even if one tried to make AWS security groups work, the complexity and sheer scale required to segment every pod would make the approach impractical. Solving this problem requires a zero-trust security approach that authenticates and authorizes all communications between pods.

Security Configuration and Automation

Achieving higher deployment velocity is one of the major benefits of microservices architectures and automation.
Enterprises can have a completely automated continuous deployment pipeline, but traditional manual approaches to defining security policies create obstacles for deployments. If security gates or pauses the application deployment process, many of the benefits of automation are wasted. Kubernetes offers a declarative model for applying policies that can be automated, ensuring applications are deployed with the correct security posture.

Kubernetes network policies are based on a whitelist model; that is, all pod communication is forbidden unless it is explicitly authorized. By default, no network policies are associated with a pod and all traffic to and from it is allowed; once a network policy is applied to a pod, enforcement falls into the whitelist model. These network policies have to be explicitly defined in YAML files. This might work well when the development team is in a position to define the network policy for an application, as in the case of a greenfield application, but networking and security teams have poor understanding of and visibility into application-specific network policies. Managing network access policies as a series of files maintained by developers is a very foreign concept to these teams, so adapting to established operations and security practices without compromising the benefits of automation has to be a goal.

Consistent Policy Across Multiple Clusters & Hybrid Environments

A further challenge with Kubernetes network security policies is that they operate only within the context of a single cluster, so multi-cluster and even multi-cloud deployments quickly become difficult to manage. Cloud-native applications are often deployed across multiple regions, and sometimes across multiple clouds, for high availability and resiliency, and this inevitably translates to a deployment across multiple clusters.
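As background for the discussion that follows, the whitelist model described above is typically bootstrapped with a deny-all policy covering every pod in a namespace; a minimal sketch (the namespace name is illustrative):

```yaml
# Illustrative: once applied, all ingress and egress traffic for every pod
# in the namespace is denied unless another policy explicitly allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: demo
spec:
  podSelector: {}          # empty selector matches all pods in the namespace
  policyTypes:
  - Ingress
  - Egress
```

Any traffic not whitelisted by a further policy is then dropped; such policies, however, stop at the cluster boundary.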
Kubernetes policies do not operate across clusters, so if an application has pods in multiple clusters, enforcing Zero Trust network policies is not possible. For access control outside the cluster, Kubernetes network policies boil down to IP-based ingress and egress rules and, as highlighted earlier, IP-based rules do not work well with dynamic microservices. Consider an example: assume two MongoDB nodes deployed across two Kubernetes clusters in different cloud regions for high availability. The only way to implement a network policy for sanctioned communication between these two nodes is an ingress and egress IP rule, and the IP address would have to be an external IP of the cluster, perhaps the load balancer's address. This is extremely coarse-grained segmentation.

Applications deployed in a cloud-managed Kubernetes cluster often depend on a legacy application deployed on-premises behind a north-south firewall. The only way to implement policies in such scenarios is to open up the originating cluster's IP address in the north-south firewall and to define ingress/egress rules in the Kubernetes cluster that accept any traffic from the on-premises data center. Adding or deleting Kubernetes clusters then means changing the north-south firewall, which becomes very tedious and error-prone. These coarse-grained rules result in an unnecessarily large attack surface.

API Access Control

The discussion thus far has focused on the use of network policies to segment and isolate pods in a Kubernetes cluster. In the microservices world, network access control alone is not sufficient, because the attack surface is more complex than with a monolithic app behind a firewall. APIs are high-value assets exposed by microservices, and they require their own policies to control authentication and authorization at an appropriate level.
A network policy may allow two pods to communicate, but what if a pod should only be able to access a specific API with a specific HTTP method from its peer? This is a very common scenario in microservices: a service implemented by one pod might be allowed only POST access to an API implemented by another. Not controlling access at the API layer leaves room for data exfiltration attacks. The examples so far concern communication between pods, but microservices also have third-party API dependencies, and ensuring proper API access control for those APIs is just as critical. Network policies for Kubernetes clusters must be augmented with API access control for a stronger compliance and security posture.

Efficient Isolation for Hybrid & Multi-cluster Environments

Aporeto offers a Zero Trust security solution for cloud, microservices, and containers, designed to comprehensively secure Kubernetes clusters. Aporeto automates the most complex security requirements for operating workloads in Kubernetes, including network access control, API security, security configuration automation, and policy distribution and enforcement across multiple clusters and multiple clouds, all from a single platform. Aporeto's network security capability ensures that any communication between pods, or with external dependencies, is first authenticated and then authorized, and optionally transparently encrypted, regardless of where the pod or its dependency is deployed. Aporeto's network access control is compatible with Kubernetes network access control but offers a compelling superset of capabilities, as described below. To prevent conflicts, Aporeto can automatically import and apply Kubernetes network policy definitions, giving Kubernetes users a familiar YAML interface and backward compatibility for declaring allowed network connections.
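For instance, a Kubernetes NetworkPolicy of the kind that can be imported this way might whitelist traffic from frontend pods to backend pods on a single port (the labels and port are illustrative):

```yaml
# Illustrative: backend pods accept ingress only from frontend pods,
# and only on TCP port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend         # the policy applies to backend pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend    # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080
```

Note that the selectors resolve only within the local cluster, which is precisely the limitation the cross-cluster enforcement described above addresses.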
Aporeto's multi-attribute workload/service identity is fundamental to any authentication and authorization policy distributed across a Kubernetes cluster, so that policy enforcement isn't dependent on simple labels or brittle network configuration. Aporeto assigns a cryptographically signed and attested service identity to every Kubernetes pod. Context for this service identity can come from numerous sources, including but not limited to:

i. The Kubernetes service account that launched the pod
ii. Metadata from Docker containers or assigned labels
iii. Kubernetes labels

Access control policies apply to both network-level and API-level access control. Policies are defined on Aporeto's SaaS-delivered security orchestrator and enforced in a distributed manner through a daemon-set deployed on all nodes. Because the service identity is tied to the pod, all policies can be enforced independent of infrastructure: Aporeto is capable of enforcing policies across multiple Kubernetes clusters, across hybrid environments, and out to third-party API endpoints. The architecture diagram below illustrates how Aporeto fits into a Kubernetes environment.

How Aporeto Works

In a Kubernetes environment, Aporeto adds a signed-identity-exchange phase to TCP's three-way handshake. This signed identity is then used to implement Kubernetes's native network policies, with added benefits such as enforcing policies across multiple clusters. Aporeto's enforcer, a user-space daemon installed as a daemon-set on each node, piggybacks the identity exchange on the first three packets of TCP's three-way handshake. Identity signatures are authenticated using asymmetric keys distributed through a built-in PKI. Because this authentication is implemented at the node level, it is transparent to the pod; the pod never receives incoming traffic that fails the Aporeto test.
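Running one enforcer per node uses the standard Kubernetes DaemonSet mechanism; a generic skeleton of such a deployment follows (the image name and settings are hypothetical, not Aporeto's actual manifest):

```yaml
# Illustrative skeleton of a per-node enforcement agent.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: enforcer
spec:
  selector:
    matchLabels:
      app: enforcer
  template:
    metadata:
      labels:
        app: enforcer
    spec:
      hostNetwork: true                      # the agent observes node-level traffic
      containers:
      - name: enforcer
        image: example.com/enforcer:latest   # hypothetical image
        securityContext:
          privileged: true                   # typically required to intercept packets
```

The DaemonSet controller guarantees that exactly one copy of the agent runs on every node, including nodes added later, which is what makes node-level, pod-transparent enforcement possible.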
When implementing API policies, in addition to enforcing rules at the TCP layer, the enforcer embeds signed identities in the HTTP layer as bearer tokens to enforce API access control. Aporeto thus provides a significant superset of capabilities for network access control of Kubernetes workloads, as illustrated by the value-added capabilities described here. Some of the advanced features of the Aporeto solution are summarized in the table below.
For more information, visit: www.aporeto.com