Understanding Role Based Access Control (RBAC) with Amazon EKS – Part 3

After assigning your team’s roles and integrating IAM with kubectl to authenticate your IAM principals, there is one final step toward setting up RBAC with Amazon EKS. Joe Keegan, BlueChipTek Lead Cloud Services Architect, discusses how to configure your EKS cluster to segment your teams while still allowing each team to work within its assigned roles.



This is the last installment of a multi-part series covering RBAC with Amazon EKS and showing how IAM integrates with Kubernetes. In Part 1 I covered Kubernetes roles and how to assign those roles to IAM principals. Part 2 covered how IAM integrates with kubectl for authentication of IAM principals. In this part I’ll show how to put it all together to allow multiple teams to coexist on the same cluster without risk of them interfering with each other.

Kubernetes Configuration

We want to allow each team to manage their resources on the cluster but prevent one team from being able to make changes or otherwise manage resources belonging to the other team.

The way we segregate teams on Kubernetes is by using namespaces. So, a namespace will be created for each team.

$ kubectl get namespace
NAME          STATUS    AGE
default       Active    1d
kube-public   Active    1d
kube-system   Active    1d
team1         Active    1d
team2         Active    1d
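
If the namespaces don’t already exist, they can be created with kubectl. A minimal example using the team1 and team2 names from this series:

$ kubectl create namespace team1
$ kubectl create namespace team2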

An IAM role will need to be created for each team to use to access the EKS cluster. These IAM roles do not need any permissions attached. It feels a little strange to create an IAM role with no permissions, but it’s fine for this use case, since the role is only used as an identity to map to Kubernetes groups.
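
For reference, here is a rough sketch of creating one of these roles with the AWS CLI. The trust policy is an assumption (it allows principals in the same account to assume the role); scope it down to the users or groups who should actually get team1 admin access.

# NOTE: example trust policy only; restrict the Principal to your team1 admins
$ cat > team1-admins-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
$ aws iam create-role --role-name eks-team1-admins \
    --assume-role-policy-document file://team1-admins-trust.json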

Once you have the roles for each team, the aws-auth ConfigMap in the kube-system namespace needs to be updated to map them to Kubernetes groups. The manifest for doing this looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/EksWorkers-NodeInstanceRole-FB41US3UY2HG
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::123456789012:role/eks-team1-admins
      groups:
        - team1-admins
    - rolearn: arn:aws:iam::123456789012:role/eks-team2-admins
      groups:
        - team2-admins

This maps the IAM roles to the Kubernetes groups team1-admins and team2-admins.
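
To get the mapping onto the cluster, apply the manifest (the file name below is just an example) or edit the live ConfigMap in place. Take care not to remove the node instance role entry, or your worker nodes will lose access to the cluster.

$ kubectl apply -f aws-auth.yaml

Or edit the ConfigMap directly:

$ kubectl edit configmap aws-auth --namespace kube-system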

Lastly, we need RoleBindings to bind the team1-admins and team2-admins groups to the admin role in each team’s respective namespace. This is done via a manifest that looks like the following:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admins
  namespace: team1
subjects:
- kind: Group
  name: team1-admins
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admins
  namespace: team2
subjects:
- kind: Group
  name: team2-admins
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io

Each RoleBinding in the manifest is created in the specific namespace for the team and binds the team’s group to the admin role.
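
As with the aws-auth ConfigMap, these RoleBindings can be applied from the manifest and then verified per namespace (the file name below is just an example):

$ kubectl apply -f team-rolebindings.yaml
$ kubectl get rolebindings --namespace team1
$ kubectl get rolebindings --namespace team2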

To recap, the IAM role arn:aws:iam::123456789012:role/eks-team1-admins is mapped to the Kubernetes group team1-admins, which is then bound to the Kubernetes role admin in the team1 namespace. Lastly, we need to update our kubectl config to utilize these roles, which looks like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <SNIP>
    server: https://address.sk1.us-west-2.eks.amazonaws.com
  name: arn:aws:eks:us-west-2:123456789012:cluster/eks-cluster
contexts:
- name: cluster-admins
  context:
    cluster: arn:aws:eks:us-west-2:123456789012:cluster/eks-cluster
    user: arn:aws:eks:us-west-2:123456789012:cluster/eks-cluster
- name: team1-admins
  context:
    cluster: arn:aws:eks:us-west-2:123456789012:cluster/eks-cluster
    user: team1-admins
    namespace: team1
- name: team2-admins
  context:
    cluster: arn:aws:eks:us-west-2:123456789012:cluster/eks-cluster
    user: team2-admins
    namespace: team2
current-context: cluster-admins
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-west-2:123456789012:cluster/eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - eks-cluster
      - -r
      - arn:aws:iam::123456789012:role/eks-cluster-admins
      command: aws-iam-authenticator
- name: team1-admins
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - eks-cluster
      - -r
      - arn:aws:iam::123456789012:role/eks-team1-admins
      command: aws-iam-authenticator
- name: team2-admins
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - eks-cluster
      - -r
      - arn:aws:iam::123456789012:role/eks-team2-admins
      command: aws-iam-authenticator

This creates a context for each team that we can use for testing.
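
As an alternative to hand-editing the kubeconfig, the AWS CLI can generate a similar context for you. A sketch for team1 is below; depending on your CLI version it will configure either aws-iam-authenticator or aws eks get-token for authentication, and it does not set a default namespace on the context, so you may still want to add that by hand.

$ aws eks update-kubeconfig --name eks-cluster \
    --role-arn arn:aws:iam::123456789012:role/eks-team1-admins \
    --alias team1-admins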

Validation

We test with the three contexts created in our kubectl config: one for cluster admins and one for each team.

When using the cluster-admins context I can see pods deployed into either of the team namespaces.

$ kubectl get pods --namespace team1 --context cluster-admins
NAME                 READY     STATUS    RESTARTS   AGE
redis-master-8kmfs   1/1       Running   0          1d
$ kubectl get pods --namespace team2 --context cluster-admins
NAME                                READY     STATUS    RESTARTS   AGE
nginx-deployment-75675f5897-q6zm5   1/1       Running   0          1m
nginx-deployment-75675f5897-w4stf   1/1       Running   0          1m

Using the team1-admins context I can only see pods in the team1 namespace. I get a forbidden error when doing anything against the team1 namespace using the team2-admins context.

$ kubectl get pods --namespace team1 --context team1-admins
NAME                 READY     STATUS    RESTARTS   AGE
redis-master-8kmfs   1/1       Running   0          1d
$ kubectl get pods --namespace team1 --context team2-admins
Error from server (Forbidden): pods is forbidden: User "" cannot list pods in the namespace "team1"

And vice versa:

$ kubectl get pods --namespace team2 --context team1-admins
Error from server (Forbidden): pods is forbidden: User "" cannot list pods in the namespace "team2"
$ kubectl get pods --namespace team2 --context team2-admins
NAME                                READY     STATUS    RESTARTS   AGE
nginx-deployment-75675f5897-q6zm5   1/1       Running   0          4m
nginx-deployment-75675f5897-w4stf   1/1       Running   0          4m
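
Another quick check that doesn’t touch any resources is kubectl auth can-i:

$ kubectl auth can-i list pods --namespace team2 --context team1-admins
no
$ kubectl auth can-i list pods --namespace team2 --context team2-admins
yes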

This setup can be replicated for as many teams as you’d like and can also be used as the basis to map IAM roles to the edit and view Kubernetes roles.
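
For example, a read-only group could be handled the same way: map another IAM role to a group such as team1-viewers in aws-auth, then bind that group to the built-in view ClusterRole. The names below are hypothetical.

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: viewers           # hypothetical name
  namespace: team1
subjects:
- kind: Group
  name: team1-viewers     # hypothetical group, mapped via aws-auth
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view              # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io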

If you’d like help tackling these kinds of problems with EKS, check out Jumpstart for Container Orchestration. We can come on site and show you how to get EKS up and running securely and ready for production.