Trusted Clusters

The design of trusted clusters allows Teleport users to connect to compute infrastructure located behind firewalls without any open TCP ports. Real-world uses of this capability include:

Example of an MSP provider using trusted clusters to obtain access to clients' clusters. MSP Example

If you haven't already looked at the introduction to Trusted Clusters in the Admin Guide, we recommend you review it for an overview before continuing with this guide.

The Trusted Clusters chapter in the Admin Guide offers an example of a simple configuration.

This guide focuses on more in-depth coverage of trusted cluster features.

Teleport Node Tunneling

If you have a large number of devices on different networks, such as managed IoT devices, or just a couple of nodes on a separate network, you can utilize Teleport Node Tunneling.
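As a minimal sketch of how a tunneling node is configured (the hostname and token below are placeholders, not values from this guide), the node dials back through the proxy by pointing auth_servers at the proxy's web port in its /etc/teleport.yaml:

```yaml
# fragment of /etc/teleport.yaml on the remote node (all values are placeholders):
teleport:
  auth_token: "node-join-token"
  auth_servers:
    # pointing a node at the proxy's web port (instead of the auth server's
    # port) makes it establish a reverse tunnel through the proxy:
    - https://proxy.main.example.com:3080
```

Because the node initiates the connection outbound, no inbound ports need to be opened on the node's network.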


As explained in the architecture document, Teleport can partition compute infrastructure into multiple clusters. A cluster is a group of SSH nodes connected to the cluster's auth server acting as a certificate authority (CA) for all users and nodes.

To retrieve an SSH certificate, users must authenticate with a cluster through a proxy server. So, if users want to connect to nodes belonging to different clusters, they would normally have to use different --proxy flags for each cluster. This is not always convenient.

The concept of leaf clusters allows Teleport administrators to connect multiple clusters together and establish trust between them. Trusted clusters allow users of one cluster, the root cluster, to seamlessly SSH into the nodes of another cluster without having to "hop" between proxy servers. Moreover, users don't even need to have a direct connection to other clusters' proxy servers. The user experience looks like this:

# login using the root "main" cluster credentials:
$ tsh login

# SSH into some host inside the "main" cluster:
$ tsh ssh host

# SSH into the host located in another cluster called "east"
# The connection is established through the root cluster's proxy.
$ tsh ssh --cluster=east host

# See what other clusters are available
$ tsh clusters

Leaf clusters also have their own restrictions on user access, i.e. permissions mapping takes place.

Once the connection has been established, it's easy to switch from the "main" root cluster using the web UI. Teleport Cluster Page

Join Tokens

Let's start with a diagram of how the connection between two clusters is established:


The first step in establishing a secure tunnel between two clusters is for the leaf cluster "east" to connect to the root cluster "main". When this happens for the first time, clusters know nothing about each other, thus a shared secret needs to exist in order for "main" to accept the connection from "east".

This shared secret is called a "join token". There are two ways to create join tokens: statically define them in a configuration file, or create them on the fly using the tctl tool.


It is important to realize that join tokens are only used to establish the connection for the first time. The clusters will exchange certificates and won't be using the token to re-establish the connection in the future.

Static Join Tokens

To create a static join token, update the configuration file on the "main" cluster to look like this:

# fragment of /etc/teleport.yaml:
auth_service:
  enabled: true
  tokens:
  - trusted_cluster:join-token

This token can be used an unlimited number of times.

Dynamic Join Tokens

Creating a token dynamically with a CLI tool offers the advantage of applying a time-to-live (TTL) interval to it, i.e. it will be impossible to re-use such a token after the specified period of time.

To create a token using the CLI tool, execute this command on the auth server of cluster "main":

# generates a new token to allow an inbound from a trusting cluster:
$ tctl tokens add --type=trusted_cluster --ttl=5m

The cluster invite token: ba4825847f0378bcdfe18113c4998498
This token will expire in 5 minutes

# you can also list the outstanding non-expired tokens:
$ tctl tokens ls

# ... or delete/revoke an invitation:
$ tctl tokens rm ba4825847f0378bcdfe18113c4998498

Users of Teleport will recognize that this is the same way you would add any node to a cluster. The token created above can be used multiple times and has an expiration time of 5 minutes.

Security Implications

Consider the security implications when deciding which token method to use. Short lived tokens decrease the window for attack but make automation a bit more complicated.
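If you do opt for a static token, you can at least generate it with a cryptographically secure random source rather than choosing a memorable string. A minimal sketch, assuming openssl is available on the auth server:

```shell
# generate a 16-byte (32 hex character) random value suitable for use as
# the static trusted_cluster join token in /etc/teleport.yaml:
TOKEN="$(openssl rand -hex 16)"
echo "trusted_cluster:${TOKEN}"
```

Short-lived tokens created with tctl remain preferable whenever your automation can accommodate them.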


Version Warning

The RBAC section is applicable only to Teleport Enterprise. The open source version does not support SSH roles.

When a trusting cluster "east" from the diagram above establishes trust with the trusted cluster "main", it needs a way to configure which users from "main" should be allowed in and what permissions they should have. Teleport Enterprise uses role mapping to achieve this.

Consider the following example. Let's make a few assumptions:

- The cluster "main" has two roles: user for regular users and admin for local administrators.
- We want administrators from "main" (but not regular users!) to have restricted access to "east".

First, we need to create a special role for main users on "east":

# save this into main-user-role.yaml on the east cluster and execute:
# tctl create main-user-role.yaml
kind: role
version: v3
metadata:
  name: local-admin
spec:
  allow:
    node_labels:
      '*': '*'
  deny:
    # "east" can protect its production nodes from root cluster users:
    node_labels:
      "environment": "production"

Now, we need to establish trust between roles "main:admin" and "east:admin". This is done by creating a trusted cluster resource on "east" which looks like this:

# save this as main-cluster.yaml on the auth server of "east" and then execute:
# tctl create main-cluster.yaml
kind: trusted_cluster
version: v1
metadata:
  name: "name-of-main-cluster"
spec:
  enabled: true
  role_map:
    - remote: admin
      # admin <-> admin works for the community edition. Enterprise users
      # have greater control over RBAC.
      local: [admin]
  token: "join-token-from-main"
  # addresses of the root cluster's proxy; the hostname below is a placeholder:
  tunnel_addr: main.example.com:3024
  web_proxy_addr: main.example.com:3080

What if we wanted to allow any user from "main" to connect to nodes on "east"? In this case we can use a wildcard * in the role_map like this:

  role_map:
    - remote: "*"
      local: [admin]

Glob matching also works; for example, to map any remote role starting with "cluster-" to the local role clusteradmin:

  role_map:
    - remote: 'cluster-*'
      local: [clusteradmin]

You can even use regular expressions to map user roles from one cluster to another; you can also capture parts of the remote role name and reference them when naming the local role:

  # in this example, remote users with remote role called 'remote-one' will be
  # mapped to a local role called 'local-one', and `remote-two` becomes `local-two`, etc:
  - remote: "^remote-(.*)$"
    local: [local-$1]

NOTE: Regexp matching is activated only when the expression starts with ^ and ends with $.
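To illustrate the capture-and-reference behavior, the same substitution can be demonstrated with sed (this is purely an illustration; it is not how Teleport evaluates the mapping internally):

```shell
# remote role "remote-one" matched against ^remote-(.*)$ with local role
# template local-$1 yields "local-one":
echo "remote-one" | sed -E 's/^remote-(.*)$/local-\1/'
# prints: local-one
```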

Trusted Cluster UI

Customers using Teleport Enterprise can easily configure leaf clusters using the Teleport Proxy UI.

Creating trust from the leaf cluster to the root cluster. Tunnels

Updating Trusted Cluster Role Map

In order to update the role map for a trusted cluster, first remove the existing trusted cluster resource by executing:

$ tctl rm tc/main-cluster

Then, after updating the role map, re-create the cluster by executing:

$ tctl create main-user-updated-role.yaml

Using Trusted Clusters

Now an admin from the "main" cluster can see and access the "east" cluster:

# login into the main cluster:
$ tsh login admin
# see the list of available clusters
$ tsh clusters

Cluster Name   Status
------------   ------
main           online
east           online
# see the list of machines (nodes) behind the eastern cluster:
$ tsh ls --cluster=east

Node Name Node ID            Address        Labels
--------- ------------------ -------------- ----------------
db1.east  cf7cc5cd-935e-46f1                role=db-leader
db2.east  3879d133-fe81-3212                role=db-follower
# SSH into any node in "east":
$ tsh ssh --cluster=east [email protected]


Trusted clusters work only one way. So, in the example above users from "east" cannot see or connect to the nodes in "main".

Disabling Trust

To temporarily disable trust between clusters, i.e. to disconnect the "east" cluster from "main", edit the YAML definition of the trusted cluster resource and set enabled to "false", then update it:

$ tctl create --force cluster.yaml
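As a sketch, the updated resource would look like this (field values carried over from the earlier role mapping example):

```yaml
# cluster.yaml: the same trusted_cluster resource with trust disabled
kind: trusted_cluster
version: v1
metadata:
  name: "name-of-main-cluster"
spec:
  # flipping this back to true and re-running `tctl create --force`
  # re-enables the trust relationship:
  enabled: false
  role_map:
    - remote: admin
      local: [admin]
  token: "join-token-from-main"
```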

Sharing Kubernetes groups between Trusted Clusters

Below is an example of how to share a Kubernetes group between trusted clusters.

In this example, we have a root trusted cluster with a role root and Kubernetes groups:

kubernetes_groups: ["system:masters"]

and SSH logins:

logins: ["root"]

The leaf cluster can map this root cluster role to its own admin role in the trusted cluster config:

  role_map:
    - remote: "root"
      local: [admin]
The role admin of the leaf cluster can now be set up to use the root cluster's logins and kubernetes_groups using the following variables:

logins: ["{{internal.logins}}"]
kubernetes_groups: ["{{internal.kubernetes_groups}}"]

In order to pass logins from a root trusted cluster to a leaf cluster, you must use the variable {{internal.logins}}.
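Putting it together, a sketch of the leaf cluster's admin role (the role name and filename are assumptions following the example above, not a verbatim resource):

```yaml
# admin-role.yaml on the leaf cluster:
kind: role
version: v3
metadata:
  name: admin
spec:
  allow:
    # take UNIX logins and Kubernetes groups from the incoming (root) user's
    # certificate rather than hard-coding them:
    logins: ["{{internal.logins}}"]
    kubernetes_groups: ["{{internal.kubernetes_groups}}"]
```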

How does it work?

At a first glance, Trusted Clusters in combination with RBAC may seem complicated. However, it is based on certificate-based SSH authentication which is fairly easy to reason about:

One can think of an SSH certificate as a "permit" issued and time-stamped by a certificate authority. A certificate contains four important pieces of data:

- The list of allowed UNIX logins a user can use. They are called "principals" in the certificate.
- The signature of the certificate authority that issued it (the auth server of "main").
- Metadata (certificate extensions): additional data protected by the signature above. Teleport uses the metadata to store the list of user roles and SSH options like "permit-agent-forwarding".
- The expiration date.

Try executing tsh status right after tsh login to see all these fields in the client certificate.

When a user from "main" (root) tries to connect to a node inside the "east" (leaf) cluster, their certificate is presented to the auth server of "east", which performs the following checks:

- Checks that the certificate signature matches one of its trusted clusters.
- Applies role mapping (as discussed above) to assign the incoming user a valid local role.
- Checks if the local role allows the requested identity (UNIX login).
- Checks that the certificate has not expired.


Troubleshooting

There are three common types of problems Teleport administrators can run into when configuring trust between two clusters:

HTTPS configuration

If the web_proxy_addr endpoint of the main cluster uses a self-signed or invalid HTTPS certificate, you will get an error: "the trusted cluster uses misconfigured HTTP/TLS certificate". For ease of testing, the teleport daemon of "east" can be started with the --insecure CLI flag to accept self-signed certificates. Make sure to configure HTTPS properly and remove the insecure flag for production use.

Connectivity Problems

To troubleshoot connectivity problems, enable verbose output for the auth servers on both clusters. Usually this can be done by adding the --debug flag: teleport start --debug. You can also do this by updating the configuration file for both auth servers:

# snippet from /etc/teleport.yaml
teleport:
  log:
    output: stderr
    severity: DEBUG

On systemd-based distributions you can watch the log output via:

$ sudo journalctl -fu teleport

Most of the time you will find out that either a join token is mismatched/expired, or the network addresses for tunnel_addr or web_proxy_addr cannot be reached due to pre-existing firewall rules or how your network security groups are configured on AWS.

Access Problems

Troubleshooting access denied messages can be challenging. A Teleport administrator should check the following:

- Which roles a user is assigned on "main" when they retrieve their SSH certificate via tsh login. You can inspect the retrieved certificate with the tsh status command on the client.
- Which roles a user is mapped to on "east" when the role mapping takes place. The role mapping result is reflected in the Teleport audit log, which by default is stored in /var/lib/teleport/log on the auth server.
