Amazon Elastic Kubernetes Service

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy to run Kubernetes on AWS without needing to install and operate your own Kubernetes control plane or worker nodes. Kubernetes is open source software that allows users to deploy and manage containerized applications at scale. Kubernetes groups containers into logical groupings for management and discoverability, then launches them onto clusters of EC2 instances. Using Kubernetes, users can run containerized applications including microservices, batch processing workers, and platforms as a service (PaaS) using the same toolset on premises and in the cloud.

  • It allows customers to run their EKS clusters on AWS Fargate, a serverless compute engine for containers. Fargate removes the need to provision and manage servers and lets customers specify and pay for resources per application.
  • EKS is integrated with services including Amazon CloudWatch, Auto Scaling Groups, AWS Identity and Access Management (IAM), and Amazon Virtual Private Cloud (VPC). These integrations enable a seamless experience to monitor, scale, and load-balance your applications.
  • EKS integrates with AWS App Mesh and provides a Kubernetes native experience to consume service mesh features and bring rich observability, traffic controls and security features to applications.

Amazon EKS Benefits

Amazon EKS provisions and scales the Kubernetes control plane, including the API servers and backend persistence layer, across multiple AWS availability zones for high availability and fault tolerance. Amazon EKS automatically detects and replaces unhealthy control plane nodes and provides patching for the control plane. 

AWS clients can run EKS using AWS Fargate, which is serverless compute for containers. Fargate removes the need to provision and manage servers, lets users specify and pay for resources per application, and improves security through application isolation by design. 

Amazon EKS is integrated with many AWS services to provide scalability and security for users' applications. These services include Elastic Load Balancing for load distribution, IAM for authentication, Amazon VPC for isolation, and AWS CloudTrail for logging.

EKS automatically applies the latest security patches to your cluster’s control plane. AWS works closely with the community to address critical security issues and help ensure that every EKS cluster is secure.

Amazon EKS Features

Amazon EKS has all the performance, scale, reliability, and availability of AWS infrastructure, as well as integrations with AWS networking and security services, such as Application Load Balancers for load distribution, AWS Identity and Access Management (IAM) integration with role-based access control (RBAC), and Amazon Virtual Private Cloud (VPC) for pod networking.

Service Integrations: AWS Controllers for Kubernetes (ACK) lets users directly manage AWS services from Kubernetes. ACK makes it simple to build scalable and highly available Kubernetes applications that utilize AWS services.
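
As an illustration of how ACK is typically consumed, the sketch below declares an Amazon S3 bucket as a Kubernetes custom resource. It assumes the ACK S3 controller is installed in the cluster; the API group/version and bucket name shown are assumptions and may differ by controller release.

    apiVersion: s3.services.k8s.aws/v1alpha1   # assumed ACK S3 controller API group/version
    kind: Bucket
    metadata:
      name: example-bucket
    spec:
      name: example-bucket-1234                # desired S3 bucket name (placeholder)

Applying a manifest like this with kubectl lets the ACK controller create and reconcile the corresponding AWS resource alongside the rest of the application's Kubernetes objects.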

Hosted Kubernetes Console: EKS provides an integrated console for Kubernetes clusters. Cluster operators and application developers can use it as a single place to organize, visualize, and troubleshoot their Kubernetes applications running on Amazon EKS.

EKS Add-ons: EKS add-ons are common operational software that extends the operational functionality of Kubernetes. EKS can install this software and keep it up to date.

Managed node groups: Amazon EKS lets you create, update, scale, and terminate nodes for your cluster with a single command. These nodes can also leverage Amazon EC2 Spot instances to reduce costs. 

Amazon EKS provides security for Kubernetes clusters, with advanced features and integrations with AWS services and technology partner solutions.

Service discovery: AWS Cloud Map is a cloud resource discovery service that lets users define custom names for application resources and maintains the updated location of these dynamically changing resources. Cloud Map works with external-dns, an open-source Kubernetes connector that automatically propagates internal service locations to the Cloud Map service registry as Kubernetes services launch and removes them on termination.

Service mesh: A service mesh helps to build and run complex microservices applications by standardizing how every microservice in the application communicates. The AWS App Mesh controller for Kubernetes enables users to create new services connected to the mesh, define traffic routing, and configure security features like encryption.
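
As a rough sketch of the Kubernetes-native experience, the manifest below declares a mesh as a custom resource. It assumes the AWS App Mesh controller is installed; the API group/version, mesh name, and namespace label are illustrative assumptions.

    apiVersion: appmesh.k8s.aws/v1beta2        # assumed App Mesh controller API group/version
    kind: Mesh
    metadata:
      name: demo-mesh
    spec:
      namespaceSelector:
        matchLabels:
          mesh: demo-mesh                      # namespaces with this label join the mesh

Services in labeled namespaces can then be connected to the mesh with additional VirtualNode and VirtualService resources.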

VPC Native Networking: Since EKS clusters run in an Amazon VPC, it allows users to use their own VPC security groups and network ACLs.

AWS IAM Authenticator: Amazon EKS integrates Kubernetes RBAC (the native role based access control system for Kubernetes) with AWS IAM. Users can assign RBAC roles directly to each IAM entity allowing them to granularly control access permissions to the Kubernetes control plane nodes.

Load balancing: Amazon EKS supports using Elastic Load Balancing including Application Load Balancer (ALB), Network Load Balancer (NLB), and Classic Load Balancer. Users can run standard Kubernetes cluster load balancing or any Kubernetes supported ingress controller with your Amazon EKS cluster.
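
As a sketch of one common setup, the Ingress below is annotated for the AWS Load Balancer Controller so that an Application Load Balancer is provisioned with IP targets. The annotation values and the backing Service name web are assumptions, and the networking.k8s.io/v1 API requires a recent Kubernetes version.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress
      annotations:
        kubernetes.io/ingress.class: alb                  # handled by the AWS Load Balancer Controller
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: ip         # register pod IPs directly as targets
    spec:
      rules:
        - http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web                             # assumed existing Service
                    port:
                      number: 80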

Serverless Compute: EKS supports AWS Fargate to run users' Kubernetes applications on serverless compute. Fargate removes the need to provision and manage servers, lets users specify and pay for resources per application, and improves security through application isolation by design.

Hybrid Deployments: Using EKS on AWS Outposts customers can run containerized applications that require particularly low latencies to on-premises systems. AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any connected site. 

Amazon EKS Distro packages up the same open source Kubernetes software distribution used in EKS on AWS for use on users' own on-premises infrastructure.

Amazon EKS Anywhere (coming 2021) enables users to easily create and operate Kubernetes clusters (built with the software in Amazon EKS Distro) on-premises, including on their own virtual machines (VMs) and bare metal servers. EKS Anywhere provides automation tooling that simplifies cluster creation, administration, and operations on infrastructure such as bare metal, vSphere, and cloud virtual machines, with default configurations for logging, monitoring, networking, and storage.

eksctl is an open source command line tool that gets users up and running with Amazon EKS in minutes. Executing eksctl create cluster creates an Amazon EKS cluster ready to run applications in minutes. Users can use eksctl to simplify management and operations for the cluster, including managing nodes and add-ons.
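
Instead of command line flags, eksctl also accepts a cluster configuration file; a minimal sketch is shown below, where the cluster name, Region, instance type, and node count are placeholder assumptions. It would be applied with eksctl create cluster -f cluster.yaml.

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: demo-cluster          # placeholder cluster name
      region: us-west-2           # placeholder Region
    nodeGroups:
      - name: ng-1
        instanceType: m5.large
        desiredCapacity: 2        # two worker nodes to start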

Windows Support: Amazon EKS supports adding Windows nodes as worker nodes and scheduling Windows containers. EKS supports running Windows worker nodes alongside Linux worker nodes, allowing customers to use the same cluster for managing applications on either operating system.
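
Scheduling onto Windows nodes uses standard Kubernetes node selection. A minimal sketch follows, assuming a mixed cluster with Windows worker nodes; the container image is an illustrative assumption, while kubernetes.io/os is the standard node label.

    apiVersion: v1
    kind: Pod
    metadata:
      name: windows-sample
    spec:
      nodeSelector:
        kubernetes.io/os: windows                            # schedule only onto Windows nodes
      containers:
        - name: iis
          image: mcr.microsoft.com/windows/servercore/iis    # illustrative Windows container image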

ARM Support: AWS Graviton2 processors power Arm-based EC2 instances, delivering a major leap in performance and capabilities as well as significant cost savings. A primary goal of running containers is to improve the cost efficiency of applications; combining the two gives users strong price performance. For example, testing of workloads shows instance types based on Graviton2 processors deliver up to 40% better price performance than their equivalent x86-based M5, C5, and R5 families. Amazon EKS on AWS Graviton2 is generally available in Regions where both services are available.

Managed cluster updates: Amazon EKS makes it easy to update running clusters to the latest Kubernetes version without needing to manage the update process. Kubernetes version updates are done in place, removing the need to create new clusters or migrate applications to a new cluster. 

As new Kubernetes versions are released and validated for use with Amazon EKS, we will support three stable Kubernetes versions as part of the update process at any given time. You can initiate the installation of new versions and get details on the status of in-flight updates via the SDK, CLI or AWS Console.

Compliance: Amazon EKS is certified by multiple compliance programs for regulated and sensitive applications. Amazon EKS is compliant with SOC, PCI, ISO, FedRAMP-Moderate, IRAP, C5, K-ISMS, ENS High, OSPAR, and HITRUST CSF, and is a HIPAA eligible service.

Support for advanced workloads: Amazon EKS provides an optimized Amazon Machine Image (AMI) that includes configured NVIDIA drivers for GPU-enabled P2 and P3 EC2 instances. This makes it easy to use Amazon EKS to run computationally advanced workloads, including machine learning (ML), Kubeflow, deep learning (DL) containers, high performance computing (HPC), financial analytics, and video transcoding.
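
GPU capacity is requested through the extended resource exposed by the NVIDIA device plugin. A minimal sketch, assuming the device plugin DaemonSet has been deployed to the GPU nodes (it is not part of the AMI itself); the image tag is an illustrative assumption.

    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-test
    spec:
      containers:
        - name: cuda
          image: nvidia/cuda:11.4.2-base-ubuntu20.04   # illustrative CUDA base image
          resources:
            limits:
              nvidia.com/gpu: 1                        # request one GPU from the device plugin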

Works with open source tools: Amazon EKS is fully compatible with Kubernetes community tools and supports popular Kubernetes add-ons. These include CoreDNS to create a DNS service for users' clusters, and both the Kubernetes Dashboard web-based UI and the kubectl command line tool to access and manage clusters on Amazon EKS.

Logging: Amazon EKS is integrated with AWS CloudTrail to provide visibility and audit history of EKS management operations. Users can use CloudTrail to view API calls to the Amazon EKS API. Amazon EKS also delivers Kubernetes control plane logs to Amazon CloudWatch for analysis, debugging, and auditing.

Certified conformant: Amazon EKS runs upstream Kubernetes and is certified Kubernetes conformant, so users can use all the existing plugins and tooling from the Kubernetes community. Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether running in on-premises datacenters or public clouds. 

Worker Nodes 


There are several types of Kubernetes autoscaling supported in Amazon EKS:

  • Cluster Autoscaler: The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in your cluster when pods fail to launch due to lack of resources or when nodes in the cluster are underutilized and their pods can be rescheduled onto other nodes in the cluster.
  • Horizontal Pod Autoscaler: The Kubernetes Horizontal Pod Autoscaler automatically scales the number of pods in a deployment, replication controller, or replica set based on that resource’s CPU utilization (a minimal manifest sketch follows this list).
  • Vertical Pod Autoscaler: The Kubernetes Vertical Pod Autoscaler automatically adjusts the CPU and memory reservations for your pods to help “right size” your applications.
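
A minimal Horizontal Pod Autoscaler sketch follows; the target Deployment name web, the replica bounds, and the 70% CPU threshold are placeholder assumptions.

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web                          # assumed existing Deployment
      minReplicas: 2
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70   # add pods when average CPU exceeds 70%

The Horizontal Pod Autoscaler relies on a metrics source such as the Kubernetes Metrics Server being installed in the cluster.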

Customers can deploy one or more worker nodes into a node group. Nodes are worker machines in Kubernetes. Amazon EKS worker nodes run in customers’ AWS accounts and connect to their cluster’s control plane via the cluster API server endpoint. A node group is one or more Amazon EC2 instances that are deployed in an Amazon EC2 Auto Scaling group.

A cluster can contain several node groups, and each node group can contain several worker nodes. Each managed node group is limited to a maximum number of nodes. All instances in a node group must:

  • Be the same instance type
  • Be running the same Amazon Machine Image (AMI)
  • Use the same Amazon EKS Worker Node IAM Role.

Amazon EKS provides a specialized Amazon Machine Image (AMI) called the Amazon EKS-optimized AMI. This AMI is built on top of Amazon Linux 2, and is configured to serve as the base image for Amazon EKS worker nodes.

  • The AMI is configured to work with Amazon EKS out of the box, and it includes Docker, kubelet, and the AWS IAM Authenticator. The AMI also contains a specialized bootstrap script that allows it to discover and connect to the customer’s cluster control plane automatically.
 

Amazon EKS cluster

An Amazon EKS cluster can schedule pods on any combination of self-managed nodes, Amazon EKS managed node groups, and AWS Fargate.

Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters. With Amazon EKS managed node groups, users don’t need to separately provision or register the Amazon EC2 instances that provide compute capacity to run Kubernetes applications. Users can create, update, or terminate nodes for their cluster with a single operation.

  • Nodes run using the latest Amazon EKS optimized AMIs. Node updates and terminations gracefully drain nodes to ensure that users’ applications stay available.

All managed nodes are provisioned as part of an Amazon EC2 Auto Scaling group that’s managed for customers by Amazon EKS. All resources including the instances and Auto Scaling groups run within their AWS account. Each node group uses the Amazon EKS optimized Amazon Linux 2 AMI and can run across multiple Availability Zones that you define.

Managed node groups concepts

  • Amazon EKS managed node groups create and manage Amazon EC2 instances for you.
  • All managed nodes are provisioned as part of an Amazon EC2 Auto Scaling group that’s managed for users by Amazon EKS.
  • A managed node group’s Auto Scaling group spans all of the subnets that you specify when you create the group.
  • Amazon EKS tags managed node group resources so that they are configured to use the Kubernetes Cluster Autoscaler.
  • Amazon EKS follows the shared responsibility model for CVEs and security patches on managed node groups.
  • Amazon EKS managed node groups can be launched in both public and private subnets.
  • Managed node groups can’t be deployed on AWS Outposts or in AWS Wavelength or AWS Local Zones.
  • Amazon EKS adds Kubernetes labels to managed node group instances.
  • Amazon EKS automatically drains nodes using the Kubernetes API during terminations or updates.
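
A sketch of how a managed node group might be declared with eksctl; the cluster name, Region, sizing, and the use of Spot capacity are placeholder assumptions rather than requirements, and the group would be created with eksctl create nodegroup -f nodegroups.yaml (or as part of eksctl create cluster).

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: demo-cluster                        # placeholder cluster name
      region: us-west-2                         # placeholder Region
    managedNodeGroups:
      - name: managed-ng-1
        instanceTypes: ["m5.large", "m5a.large"]
        spot: true                              # optionally use EC2 Spot capacity to reduce cost
        minSize: 2
        maxSize: 5
        desiredCapacity: 3
        privateNetworking: true                 # place nodes in private subnets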
 

#01

Managed node groups

#02

Self-managed nodes

A cluster contains one or more Amazon EC2 nodes that pods are scheduled on. Amazon EKS nodes run in users’ AWS accounts and connect to their cluster’s control plane via the cluster API server endpoint. Users deploy one or more nodes into a node group. A node group is one or more Amazon EC2 instances that are deployed in an Amazon EC2 Auto Scaling group. All instances in a node group must meet the requirements listed earlier (same instance type, same AMI, and same node IAM role).

A cluster can contain several node groups. As long as each node group meets the previous requirements, the cluster can contain node groups that contain different instance types and host operating systems. Each node group can contain several nodes.

Amazon EKS nodes are standard Amazon EC2 instances, and users are billed for them based on normal EC2 prices.

Amazon EKS provides specialized Amazon Machine Images (AMIs) called Amazon EKS optimized AMIs. The AMIs are configured to work with Amazon EKS and include Docker, kubelet, and the AWS IAM Authenticator. The AMIs also contain a specialized bootstrap script that allows nodes to discover and connect to your cluster’s control plane automatically.

AWS Fargate is a technology that provides on-demand, right-sized compute capacity for containers. With AWS Fargate, users no longer have to provision, configure, or scale groups of virtual machines to run containers. This removes the need to choose server types, decide when to scale your node groups, or optimize cluster packing.

Users can control which pods start on Fargate and how they run with Fargate profiles, which are defined as part of their Amazon EKS cluster.
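
A sketch of a Fargate profile expressed in an eksctl configuration file follows; the profile name, namespace, and label selector are illustrative assumptions.

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: demo-cluster            # placeholder cluster name
      region: us-west-2             # placeholder Region
    fargateProfiles:
      - name: fp-default
        selectors:
          - namespace: default      # pods in this namespace...
            labels:
              compute: fargate      # ...carrying this label are scheduled onto Fargate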

Amazon EKS integrates Kubernetes with AWS Fargate by using controllers that are built by AWS using the upstream, extensible model provided by Kubernetes. These controllers run as part of the Amazon EKS managed Kubernetes control plane and are responsible for scheduling native Kubernetes pods onto Fargate.

  • The Fargate controllers include a new scheduler that runs alongside the default Kubernetes scheduler in addition to several mutating and validating admission controllers.
  • When users start a pod that meets the criteria for running on Fargate, the Fargate controllers running in the cluster recognize, update, and schedule the pod onto Fargate.

Each pod running on Fargate has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another pod.

#03

AWS Fargate

Networking

Amazon VPC and subnets 

All Amazon EKS resources are deployed to one Region in an existing subnet in an existing VPC. The VPC and subnets must meet requirements such as the following:

  • VPCs and subnets must be tagged appropriately, so that Kubernetes knows that it can use them for deploying resources, such as load balancers. When users deploy the VPC using an Amazon EKS provided AWS CloudFormation template or using eksctl, then the VPC and subnets are tagged appropriately for them.
  • A subnet may or may not have internet access. If a subnet does not have internet access, the pods deployed within it must be able to access other AWS services, such as Amazon ECR, to pull container images.
  • Any public subnets that users use need to be configured to auto-assign public IP addresses for Amazon EC2 instances launched within them. 
  • The nodes and control plane must be able to communicate over all ports through appropriately tagged security groups.

  • Users can implement network segmentation and tenant isolation with Kubernetes network policies. Network policies are similar to AWS security groups in that users can create network ingress and egress rules. Instead of assigning instances to a security group, users assign network policies to pods using pod selectors and labels (a minimal example follows this list).
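
A minimal network policy sketch, assuming a policy engine such as Calico is installed on the cluster; the namespace, application labels, and port are placeholders.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-backend
      namespace: demo                      # placeholder namespace
    spec:
      podSelector:
        matchLabels:
          app: backend                     # the policy applies to backend pods
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend            # only frontend pods may connect
          ports:
            - protocol: TCP
              port: 8080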

Users can deploy a VPC and subnets that meet the Amazon EKS requirements through manual configuration, or by deploying the VPC and subnets using eksctl, or an Amazon EKS provided AWS CloudFormation template. Both eksctl and the AWS CloudFormation template create the VPC and subnets with the required configuration.

Amazon EKS control plane 

Deployed and managed by Amazon EKS in an Amazon EKS managed VPC. When users create the cluster, Amazon EKS creates and manages requester-managed network interfaces in the VPC that they specify, which is separate from the control plane VPC; these network interfaces allow AWS Fargate and Amazon EC2 instances to communicate with the control plane.

By default, the control plane exposes a public endpoint so that clients and nodes can communicate with the cluster. Users can limit the internet client source IP addresses that can communicate with the public endpoint. Alternatively, users can enable a private endpoint and disable the public endpoint, or enable both the public and private endpoints.
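
A sketch of how endpoint access might be expressed in an eksctl cluster configuration; the cluster name, Region, and CIDR range are placeholders, and the field names reflect eksctl's schema, which may change between releases.

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: demo-cluster
      region: us-west-2
    vpc:
      clusterEndpoints:
        publicAccess: true          # keep the public endpoint enabled
        privateAccess: true         # also enable the private endpoint for in-VPC traffic
      publicAccessCIDRs:
        - "203.0.113.0/24"          # placeholder: limit public endpoint access to this source range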

Fargate pods 

Deployed to private subnets only. Each pod is assigned a private IP address from the CIDR block assigned to the subnet. Fargate does not support all pod networking options. 

  • Classic Load Balancers and Network Load Balancers can be used with IP targets only.  
  • Pods must match a Fargate profile at the time that they are scheduled in order to run on Fargate. Pods which do not match a Fargate profile may be stuck as Pending.
  • Privileged containers are not supported on Fargate.
  • Pods running on Fargate cannot specify HostPort or HostNetwork in the pod manifest.
  • GPUs are currently not available on Fargate.  For more information, see AWS Fargate considerations.
Amazon EC2 nodes 

Each Amazon EC2 node is deployed to one subnet. Each node is assigned a private IP address from a CIDR block assigned to the subnet. If the subnets were created using one of the Amazon EKS provided AWS CloudFormation templates, then nodes deployed to public subnets are automatically assigned a public IP address by the subnet. Each node is deployed with the pod networking (CNI) plugin, which by default assigns each pod a private IP address from the CIDR block assigned to the subnet that the node is in, and adds the IP address as a secondary IP address to one of the network interfaces attached to the instance.

  • Elastic network interfaces are referred to as network interfaces in the AWS Management Console and the Amazon EC2 API. Therefore, this documentation uses “network interface” instead of “elastic network interface”; the term “network interface” here always means “elastic network interface”.

Users can change this behavior by assigning additional CIDR blocks to your VPC and enabling CNI custom networking, which assigns IP addresses to pods from different subnets than the node is deployed to. To use custom networking, users need to enable it when launching the nodes.
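
A rough sketch of CNI custom networking: custom networking is switched on by setting the AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG environment variable to true on the aws-node DaemonSet, and an ENIConfig resource (conventionally named after each Availability Zone) points at the alternate subnet and security groups to use for pod IP addresses. The subnet and security group IDs below are placeholders.

    apiVersion: crd.k8s.amazonaws.com/v1alpha1
    kind: ENIConfig
    metadata:
      name: us-west-2a                       # conventionally named after the Availability Zone
    spec:
      subnet: subnet-0123456789abcdef0       # placeholder: subnet from the additional CIDR block
      securityGroups:
        - sg-0123456789abcdef0               # placeholder security group ID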

  • Users can also associate unique security groups with some of the pods running on many Amazon EC2 instance types. 

By default, the source IP address of each pod that communicates with resources outside of the VPC is translated through network address translation (NAT) to the primary IP address of the primary network interface attached to the node.

  • Users can change this behavior to instead have a NAT device in a private subnet translate each pod’s IP address to the NAT device’s IP address. 

Pod networking (CNI)

Amazon EKS supports native VPC networking with the Amazon VPC Container Network Interface (CNI) plugin for Kubernetes. Using this plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network. The plugin is an open-source project that is maintained on GitHub. When it is created, an Amazon EKS node has one network interface. All Amazon EC2 instance types support more than one network interface.

  • The network interface attached to the instance when the instance is created is called the primary network interface. Any additional network interface attached to the instance is called a secondary network interface. Each network interface can be assigned multiple private IP addresses. One of the private IP addresses is the primary IP address, whereas all other addresses assigned to the network interface are secondary IP addresses

The CNI metrics helper is a tool that users can use to scrape network interface and IP address information, aggregate metrics at the cluster level, and publish the metrics to Amazon CloudWatch.

The CNI metrics helper helps users to:

  • Track these metrics over time
  • Troubleshoot and diagnose issues related to IP assignment and reclamation
  • Provide insights for capacity planning
 

The Amazon VPC Container Network Interface (CNI) plugin for Kubernetes is deployed with each of users’ Amazon EC2 nodes in a DaemonSet with the name aws-node. The plugin consists of two primary components:

  • L-IPAM daemon – Responsible for creating network interfaces and attaching the network interfaces to Amazon EC2 instances, assigning secondary IP addresses to network interfaces, and maintaining a warm pool of IP addresses on each node for assignment to Kubernetes pods when they are scheduled. When the number of pods running on the node exceeds the number of addresses that can be assigned to a single network interface, the plugin starts allocating a new network interface, as long as the maximum number of network interfaces for the instance hasn’t already been reached.

  • CNI plugin – Responsible for wiring the host network (for example, configuring the network interfaces and virtual Ethernet pairs) and adding the correct network interface to the pod namespace.

Amazon EKS runs upstream Kubernetes and is certified Kubernetes conformant, so alternate compatible CNI plugins will also work with Amazon EKS clusters.

Amazon EKS maintains relationships with a network of partners that offer support for alternate compatible CNI plugins, including Tigera, Isovalent, Weaveworks, and VMware.

Security groups for pods

 

Security groups for pods integrate Amazon EC2 security groups with Kubernetes pods. Users can use Amazon EC2 security groups to define rules that allow inbound and outbound network traffic to and from pods that are deployed to nodes running on many Amazon EC2 instance types.
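
The association is expressed with a SecurityGroupPolicy custom resource; the sketch below assumes the feature is enabled on the cluster, and the namespace, pod label, and security group ID are placeholders.

    apiVersion: vpcresources.k8s.aws/v1beta1
    kind: SecurityGroupPolicy
    metadata:
      name: backend-sg-policy
      namespace: demo                        # placeholder namespace
    spec:
      podSelector:
        matchLabels:
          app: backend                       # pods with this label receive the security group
      securityGroups:
        groupIds:
          - sg-0123456789abcdef0             # placeholder security group ID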

Before deploying security groups for pods, users should consider the following limitations and conditions:

  • Users’ Amazon EKS cluster must be running Kubernetes version 1.17 and Amazon EKS platform version eks.3 or later. Users can’t use security groups for pods on Kubernetes clusters that they deployed to Amazon EC2 themselves.
  • Traffic flow to and from pods with associated security groups is not subjected to Calico network policy enforcement and is limited to Amazon EC2 security group enforcement only. Community effort is underway to remove this limitation.
  • Security groups for pods can’t be used with pods deployed to Fargate.
  • Security groups for pods can’t be used with Windows nodes.
  • Security groups for pods are supported by most Nitro-based Amazon EC2 instance families, including the m5, c5, r5, p3, m6g, c6g, and r6g instance families. The t3 instance family is not supported. Users’ nodes must be one of the supported instance types.
  • Source NAT is disabled for outbound traffic from pods with assigned security groups so that outbound security group rules are applied. To access the internet, pods with assigned security groups must be launched on nodes that are deployed in a private subnet configured with a NAT gateway or instance. Pods with assigned security groups deployed to public subnets are not able to access the internet.
  • Kubernetes services of type NodePort and LoadBalancer using instance targets with an externalTrafficPolicy set to Local are not supported with pods that users assign security groups to.
  • If users are using pod security policies to restrict access to pod mutation, then the eks-vpc-resource-controller and vpc-resource-controller Kubernetes service accounts must be specified in the Kubernetes ClusterRoleBinding for the role that their pod security policy is assigned to.
 

Workloads

Users’ workloads are deployed in containers, which are deployed in pods in Kubernetes. A pod includes one or more containers. Typically, one or more pods that provide the same service are deployed in a Kubernetes service. Once multiple pods that provide the same service have been deployed, users can scale and manage them through that service.

 

Security is a shared responsibility between AWS and users. The shared responsibility model describes this as security of the cloud and security in the cloud:

Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. For Amazon EKS, AWS is responsible for the Kubernetes control plane, which includes the control plane nodes and etcd database. Third-party auditors regularly test and verify the effectiveness of our security as part of the AWS compliance programs.

Security in the cloud – Users’ responsibility includes the security configuration of the data plane, including the configuration of the security groups that allow traffic to pass from the Amazon EKS control plane into the customer VPC; the configuration of the nodes and the containers themselves; the nodes’ operating system (including updates and security patches); and other associated application software, such as:

  • Setting up and managing network controls, such as firewall rules
  • Managing platform-level identity and access management, either with or in addition to IAM

Users’ responsibility is also determined by the sensitivity of their data, their company’s requirements, and applicable laws and regulations.

 
 
