Teleport Kubernetes Access Guide

Teleport can act as a compliance gateway for managing privileged access to Kubernetes clusters, providing a single point of authentication for kubectl, role-based access control, and audit logging with session recording.

Teleport Proxy Service

By default, the Kubernetes integration is turned off in Teleport. To enable it, add the settings shown below to the proxy_service section of the /etc/teleport.yaml config file:

# snippet from /etc/teleport.yaml on the Teleport proxy service:
proxy_service:
    # create the 'kubernetes' section and set 'enabled' to 'yes':
    kubernetes:
        enabled: yes
        public_addr: [teleport.example.com:3026]
        listen_addr: 0.0.0.0:3026
Let's take a closer look at the available Kubernetes settings:

  - enabled: turns the Kubernetes integration on or off.
  - listen_addr: the address on which the proxy process accepts Kubernetes API requests.
  - public_addr: the address clients use after tsh login. If you run a load balancer in front of the proxy, use the balancer's address here; otherwise, use the address of the host running the proxy.
  - kubeconfig_file: the path to a kubeconfig file the proxy uses to connect to the cluster (needed only when running outside of Kubernetes; see Option 2 below).

Connecting the Teleport proxy to Kubernetes

There are two options for setting up Teleport to access Kubernetes:

Option 1: Deploy Inside Kubernetes as a pod

Deploy the Teleport proxy service as a pod inside the Kubernetes cluster you want the proxy to access.

# snippet from /etc/teleport.yaml on the Teleport proxy service:
proxy_service:
    # create the 'kubernetes' section and set 'enabled' to 'yes':
    kubernetes:
        enabled: yes

If you're using Helm, we have a chart that you can use. Run these commands:

$ helm repo add gravitational https://charts.gravitational.io
$ helm install teleport gravitational/teleport

You will still need a correctly configured values.yaml file for this to work. See our Helm Docs for more information.
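
For example, assuming you've already prepared a values.yaml for your environment (the exact keys depend on the chart version; see the Helm Docs), the install and a quick sanity check would look like this:

# Install the chart with your customized values file:
$ helm install teleport gravitational/teleport -f values.yaml

# Check that the Teleport pod came up. The label selector here is an
# assumption; adjust it to match the labels your chart applies.
$ kubectl get pods -l app=teleport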

[Figure: Teleport proxy deployed inside a Kubernetes cluster]

Option 2: Deploy Outside of Kubernetes

Deploy the Teleport proxy service outside of Kubernetes and update the Teleport proxy configuration with Kubernetes credentials.

In this case, we need to update /etc/teleport.yaml for the proxy service as shown below:

# snippet from /etc/teleport.yaml on the Teleport proxy service:
proxy_service:
  # create the 'kubernetes' section and set 'enabled' to 'yes':
  kubernetes:
    enabled: yes
    # The address for the proxy process to accept k8s requests.
    listen_addr: 0.0.0.0:3026
    # The address used by the clients after tsh login. If you run a load balancer
    # in front of this proxy, use the address of that balancer here. Otherwise,
    # use the address of the host running this proxy.
    public_addr: [teleport.example.com:3026]
    kubeconfig_file: /path/to/.kube/config

[Figure: Teleport SSH and Kubernetes integration]

To generate the kubeconfig_file for the Teleport proxy service:

  1. Configure kubectl to point at the Kubernetes cluster with admin-level access.
  2. Use this script to generate kubeconfig:
# Download the script.
$ curl -o get-kubeconfig.sh https://raw.githubusercontent.com/gravitational/teleport/master/examples/k8s-auth/get-kubeconfig.sh

# Make it executable.
$ chmod +x get-kubeconfig.sh

# Run the script, it will write the generated kubeconfig to the current
# directory.
$ ./get-kubeconfig.sh

# Check that the generated kubeconfig has the right permissions.
# The output should look similar to this.
$ kubectl --kubeconfig kubeconfig auth can-i --list
Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
                                                [/api/*]            []               [get]
                                                ...                 []               [...]
groups                                          []                  []               [impersonate]
serviceaccounts                                 []                  []               [impersonate]
users                                           []                  []               [impersonate]
  3. Copy the generated kubeconfig file to the host running the Teleport proxy service, as in the sketch below.
  4. Update the kubeconfig_file path in teleport.yaml to point at the copied kubeconfig.
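
A minimal sketch of steps 3 and 4, assuming SSH access to the proxy host and that Teleport runs under systemd (the hostname and destination path are examples):

# Copy the generated kubeconfig to the proxy host:
$ scp kubeconfig user@proxy.example.com:/etc/teleport/kubeconfig

# On the proxy host, point teleport.yaml at the copied file:
#     kubeconfig_file: /etc/teleport/kubeconfig
# then restart the proxy service to pick up the change:
$ sudo systemctl restart teleport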

Alternatively, you can use your existing local config from ~/.kube/config. However, the Teleport proxy would then use your personal Kubernetes credentials, which is risky: they can expire or be revoked (for example, when you leave your company).

Impersonation

Note

If you used the script from Option 2 above, you can skip this step. The script already configured impersonation permissions.

The next step is to configure the Teleport Proxy to be able to impersonate Kubernetes principals within a given group using Kubernetes Impersonation Headers.

If Teleport is running inside the cluster using a Kubernetes ServiceAccount, here's an example of the permissions that the ServiceAccount will need to be able to use impersonation (change teleport-serviceaccount to the name of the ServiceAccount that's being used):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: teleport-impersonation
rules:
- apiGroups:
  - ""
  resources:
  - users
  - groups
  - serviceaccounts
  verbs:
  - impersonate
- apiGroups:
  - "authorization.k8s.io"
  resources:
  - selfsubjectaccessreviews
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: teleport
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: teleport-impersonation
subjects:
- kind: ServiceAccount
  # this should be changed to the name of the Kubernetes ServiceAccount being used
  name: teleport-serviceaccount
  namespace: default
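
Assuming the manifest above is saved as teleport-impersonation.yaml, you can apply it with kubectl:

# Create the ClusterRole and ClusterRoleBinding:
$ kubectl apply -f teleport-impersonation.yaml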

There is also an example of this usage in the Teleport Helm chart.

If Teleport is running outside of the Kubernetes cluster, you will need to ensure that the principal used to connect to Kubernetes via the kubeconfig file has the same impersonation permissions as are described in the ClusterRole above.
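
To confirm the permissions are in place, you can run a quick check from the host with the kubeconfig (a sketch; adjust the path to your setup):

# Verify that the kubeconfig's principal is allowed to impersonate:
$ kubectl --kubeconfig /path/to/.kube/config auth can-i impersonate users
yes
$ kubectl --kubeconfig /path/to/.kube/config auth can-i impersonate groups
yes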

Kubernetes RBAC

Once you perform the steps above, your Teleport instance should become a fully functional Kubernetes API proxy. The next step is to configure Teleport to assign the correct Kubernetes groups to Teleport users.

Mapping Kubernetes groups to Teleport users depends on how Teleport is configured. In this guide we'll look at two common configurations: local Teleport users managed with tctl, and users who authenticate through an SSO provider such as Github or Okta.

Kubernetes Groups and Users

Teleport supports mapping Teleport users to Kubernetes users and groups.

When adding new local users you have to specify which Kubernetes groups they belong to:

# Adding a Teleport local user to map to a Kubernetes group.
$ tctl users add joe --k8s-groups="system:masters"
# Adding a Teleport local user to map to a Kubernetes user.
$ tctl users add jenkins --k8s-users="jenkins"
# Enterprise users should manage k8s-users and k8s-groups via RBAC, see Okta Auth
# example below
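
Once a new local user completes signup, logging in fetches certificates carrying these Kubernetes principals (a sketch; substitute your own proxy address):

# Log in as the new local user, then use kubectl as usual:
$ tsh login --proxy=teleport.example.com --user=joe
$ kubectl get pods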

Github Auth

When configuring Teleport to authenticate against Github, you have to create a Teleport connector for Github, like the one shown below. Notice the kubernetes_groups setting which assigns Kubernetes groups to a given Github team:

kind: github
version: v3
metadata:
  # connector name that will be used with `tsh --auth=github login`
  name: github
spec:
  # client ID of Github OAuth app
  client_id: <client-id>
  # client secret of Github OAuth app
  client_secret: <client-secret>
  # connector display name that will be shown on web UI login screen
  display: Github
  # callback URL that will be called after successful authentication
  redirect_url: https://teleport.example.com:3080/v1/webapi/github/callback
  # mapping of org/team memberships onto allowed logins and roles
  teams_to_logins:
    - organization: octocats # Github organization name
      team: admin           # Github team name within that organization
      # allowed UNIX logins for team octocats/admin:
      logins:
        - root
      # list of Kubernetes groups this Github team is allowed to connect to
      kubernetes_groups: ["system:masters"]
      # Optional: If not set, users will impersonate themselves.
      # kubernetes_users: ['barent']

To obtain the client ID and client secret from Github, please follow the Github documentation on how to create and register an OAuth app. Be sure to set the "Authorization callback URL" to the same value as redirect_url in the resource spec.

Finally, create the Github connector with the command tctl create -f github.yaml. Now, when Teleport users execute tsh login, they will be prompted to log in through Github SSO, and upon successful authentication they will have access to Kubernetes.

# Login via Github SSO and retrieve SSH+Kubernetes certificates:
$ tsh login --proxy=teleport.example.com --auth=github

# Use Kubernetes API!
$ kubectl exec -ti <pod-name> -- /bin/bash

The kubectl exec request will be routed through the Teleport proxy and Teleport will log the audit record and record the session.

Note

For more information on integrating Teleport with Github SSO, please see the Github section in the Admin Manual.

Okta Auth

With Okta (or any other SAML/OIDC/Active Directory provider), you must update Teleport's roles to include the mapping to Kubernetes groups.

Let's assume you have a Teleport role called "admin". Add the kubernetes_groups setting to it as shown below:

# NOTE: the role definition is edited to remove the unnecessary fields
kind: role
version: v3
metadata:
  name: admin
spec:
  allow:
    # if kubernetes integration is enabled, this setting configures which
    # kubernetes groups the users of this role will be assigned to.
    # note that you can refer to a SAML/OIDC trait via the "external" property bag,
    # this allows you to specify Kubernetes group membership in an identity manager:
    kubernetes_groups: ["system:masters", "{{external.trait_name}}"]

To add the kubernetes_groups setting to an existing Teleport role, you can use either the Web UI or tctl:

# Dump the "admin" role into a file:
$ tctl get roles/admin > admin.yaml
# Edit the file, add kubernetes_groups setting
# and then execute:
$ tctl create -f admin.yaml

Advanced Usage

The {{external.trait_name}} example above demonstrates how to fetch Kubernetes groups dynamically from Okta during login. In this case, you define Kubernetes group membership in Okta (as a trait) and reference that trait name in the Teleport role.

Teleport 4.3 added an option to extract the local part from an email claim. This can be helpful since some operating systems don't support the @ symbol in logins. Using logins: ['{{email.local(external.email)}}'], the resulting login will be dave.smith if the email was dave.smith@example.com.
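
A minimal sketch of how this looks inside a role spec (only the relevant field is shown):

kind: role
version: v3
metadata:
  name: admin
spec:
  allow:
    # Use the local part of the email trait as the UNIX login:
    logins: ['{{email.local(external.email)}}']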

Once setup is complete, when users execute tsh login and go through the usual Okta login sequence, their kubeconfig will be updated with their Kubernetes credentials.

Note

For more information on integrating Teleport with Okta, please see the Okta integration guide.

Using Teleport Kubernetes with Automation

Teleport can integrate with CI/CD tooling to give these tools greater visibility and auditability. For this we recommend creating a local Teleport user and then exporting a kubeconfig using tctl auth sign.

An example setup is below.

# Create a new local user for Jenkins
$ tctl users add jenkins
# Option 1: create a kubeconfig valid for 1 year
$ tctl auth sign --user=jenkins --format=kubernetes --out=kubeconfig --ttl=8760h
# Recommended Option 2: create a kubeconfig valid for 25 hours
$ tctl auth sign --user=jenkins --format=kubernetes --out=kubeconfig --ttl=25h

  The credentials have been written to kubeconfig

$ cat kubeconfig
  apiVersion: v1
  clusters:
  - cluster:
      certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZ....
# This kubeconfig can now be exported and will provide access to the automation tooling.

# Uses kubectl to get pods, using the provided kubeconfig.
$ kubectl --kubeconfig /path/to/kubeconfig get pods

How long should the TTL be?

In the above example we've provided two options: one with a one-year (8760h) time to live and one with just 25 hours. As proponents of short-lived SSH certificates, we recommend the same approach for automation.

Handling secrets is out of scope for our docs, but at a high level we recommend using your provider's secrets manager, such as AWS Secrets Manager or GCP Secret Manager, or a project like Vault when running on-prem. Then run a nightly job on the auth server to sign and publish a new kubeconfig, as sketched below. In our example we've added an extra hour (25h rather than 24h), and during that overlap both kubeconfigs will be valid.
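
A sketch of such a nightly job, assuming the AWS CLI is configured on the auth server and a secret named jenkins-kubeconfig already exists (both the secret name and the output path are examples):

# Sign a fresh kubeconfig valid for 25 hours:
$ tctl auth sign --user=jenkins --format=kubernetes --out=/var/lib/teleport/jenkins-kubeconfig --ttl=25h

# Publish it to the secrets manager so CI jobs can fetch it:
$ aws secretsmanager put-secret-value \
    --secret-id jenkins-kubeconfig \
    --secret-string file:///var/lib/teleport/jenkins-kubeconfig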

Taking this a step further, you could build a system that requests a very short-lived certificate for each CI run. We plan to make this easier for operators to integrate in the future by exposing and documenting more of our API.

AWS EKS

We have a complete guide on setting up Teleport with EKS. Please see the Using Teleport with EKS Guide.

Multiple Kubernetes Clusters

You can take advantage of the Trusted Clusters feature of Teleport to federate trust across multiple Kubernetes clusters.

When multiple trusted clusters are present behind a Teleport proxy, the kubeconfig generated by tsh login will contain the Kubernetes API endpoint determined by the <cluster> argument to tsh login.

For example, consider a setup with two Teleport clusters: the main cluster, whose Kubernetes endpoint is proxy.example.com, and a trusted cluster called "east", whose Kubernetes endpoint is east.proxy.example.com.

In this scenario, users usually log in using this command:

# Using login without arguments
$ tsh --proxy=main.example.com login

# user's `kubeconfig` now contains one entry for the main Kubernetes
# endpoint, i.e. `proxy.example.com` .

# Receive a certificate for "east":
$ tsh --proxy=main.example.com login east

# user's `kubeconfig` now contains the entry for the "east" Kubernetes
# endpoint, i.e. `east.proxy.example.com` .
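
After logging in, you can switch between the clusters' entries with standard kubectl context commands (the context names below are examples; check the actual names in the get-contexts output):

# List the contexts tsh has written to your kubeconfig:
$ kubectl config get-contexts

# Point kubectl at the "east" cluster's entry:
$ kubectl config use-context east.proxy.example.com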
