Teleport Admin Manual

This manual covers the installation and configuration of Teleport and the ongoing management of a Teleport cluster. It assumes that the reader has a good understanding of Linux administration.


Please visit our installation page for instructions on downloading and installing Teleport.


Before diving into configuring and running Teleport, it helps to take a look at the Teleport Architecture and review the key concepts this document will be referring to:

| Concept | Description |
|---------|-------------|
| Node | Synonym for "server" or "computer": something one can "SSH to". A node must be running the teleport daemon with the "node" role/service turned on. |
| Certificate Authority (CA) | A pair of public/private keys Teleport uses to manage access. A CA can sign a public key of a user or node, establishing their cluster membership. |
| Teleport Cluster | A Teleport Auth Service contains two CAs. One is used to sign user keys and the other signs node keys. A collection of nodes connected to the same CA is called a "cluster". |
| Cluster Name | Every Teleport cluster must have a name. If a name is not supplied via the teleport.yaml configuration file, a GUID will be generated. IMPORTANT: renaming a cluster invalidates its keys and all certificates it has created. |
| Trusted Cluster | A Teleport Auth Service can allow 3rd party users or nodes to connect if their public keys are signed by a trusted CA. A "trusted cluster" is a pair of public keys of the trusted CA. It can be configured via the teleport.yaml file. |

Teleport Daemon

The Teleport daemon is called teleport and it supports the following commands:

| Command | Description |
|---------|-------------|
| start | Starts the Teleport daemon. |
| configure | Dumps a sample configuration file in YAML format into standard output. |
| version | Shows the Teleport version. |
| status | Shows the status of a Teleport connection. This command is only available from inside of an active SSH session. |
| help | Shows help. |

When experimenting, you can quickly start teleport with verbose logging by typing teleport start -d.


Teleport stores data in /var/lib/teleport. Make sure that regular/non-admin users do not have access to this folder on the Auth server.

Systemd Unit File

In production, we recommend starting the teleport daemon via an init system like systemd. Here's the recommended Teleport service unit file for systemd:

[Unit]
Description=Teleport SSH Service

[Service]
ExecStart=/usr/local/bin/teleport start --config=/etc/teleport.yaml --pid-file=/run/
ExecReload=/bin/kill -HUP $MAINPID


Graceful Restarts

If using the systemd service unit file above, executing systemctl reload teleport will perform a graceful restart: the Teleport daemon forks a new process to handle new incoming requests, leaving the old daemon process running until existing clients disconnect.

Version warning

Graceful restarts only work if Teleport is deployed using network-based storage like DynamoDB or etcd 3.3+. Future versions of Teleport will not have this limitation.

You can also perform restarts/upgrades by sending kill signals to a Teleport daemon manually.

| Signal | Teleport Daemon Behavior |
|--------|--------------------------|
| USR1 | Dumps diagnostics/debugging information into syslog. |
| TERM, INT or KILL | Immediate non-graceful shutdown. All existing connections will be dropped. |
| USR2 | Forks a new Teleport daemon to serve new connections. |
| HUP | Forks a new Teleport daemon to serve new connections and initiates the graceful shutdown of the existing process once there are no more clients connected to it. |


Ports

Teleport services listen on several ports. This table shows the default port numbers.

| Port | Service | Description |
|------|---------|-------------|
| 3022 | Node | SSH port. This is Teleport's equivalent of port #22 for SSH. |
| 3023 | Proxy | SSH port clients connect to. A proxy will forward this connection to port #3022 on the destination node. |
| 3024 | Proxy | SSH port used to create "reverse SSH tunnels" from behind-firewall environments into a trusted proxy server. |
| 3025 | Auth | SSH port used by the Auth Service to serve its API to other nodes in a cluster. |
| 3080 | Proxy | HTTPS connection to authenticate tsh users and web users into the cluster. The same connection is used to serve a Web UI. |
| 3026 | Kubernetes Proxy | HTTPS Kubernetes proxy (if enabled). |

Filesystem Layout

By default, a Teleport node has the following files present. The location of all of them is configurable.

| Full path | Purpose |
|-----------|---------|
| /etc/teleport.yaml | Teleport configuration file (optional). |
| /usr/local/bin/teleport | Teleport daemon binary. |
| /usr/local/bin/tctl | Teleport admin tool. It is only needed for auth servers. |
| /var/lib/teleport | Teleport data directory. Nodes keep their keys and certificates there. Auth servers store the audit log and the cluster keys there, but the audit log storage can be further configured via the auth_service section in the config file. |


Configuration

You should use a configuration file to configure the teleport daemon. For simple experimentation, you can use command line flags with the teleport start command. Read about all the allowed flags in the CLI Docs or run teleport start --help.

Configuration File

Teleport uses the YAML file format for configuration. By default, the configuration is stored in /etc/teleport.yaml. Below is an expanded and commented version of the sample file produced by teleport configure.

The default path Teleport uses to look for a config file is /etc/teleport.yaml. You can override this path and set it explicitly using the -c or --config flag to teleport start:

$ teleport start --config=/etc/teleport.yaml

For a complete reference, see our Configuration Reference - teleport.yaml


When editing YAML configuration, please pay attention to how your editor handles whitespace: YAML requires consistent indentation and does not allow tab characters.

# Sample Teleport configuration file
# Creates a single proxy, auth and node server.
# Things to update:
#  1. ca_pin: Obtain the CA pin hash for joining more nodes by running 'tctl status'
#     on the auth server once Teleport is running.
#  2. license-if-using-teleport-enterprise.pem: If you are an Enterprise customer,
#     obtain this from
teleport:
  # nodename allows assigning an alternative name this node can be reached by.
  # By default it is equal to the hostname.
  nodename: NODE_NAME
  data_dir: /var/lib/teleport

  # Invitation token used to join a cluster. It is not used on
  # subsequent starts.
  auth_token: xxxx-token-xxxx

  # Optional CA pin of the auth server. This enables a more secure way of adding
  # new nodes to a cluster. See the "Adding Nodes to the Cluster" section below.
  ca_pin: "sha256:ca-pin-hash-goes-here"

  # List of auth servers in a cluster. You will have more than one auth server
  # if you configure teleport auth to run in an HA configuration.
  # If adding a node located behind NAT, use the Proxy URL, e.g.
  #  auth_servers:
  #     -

  # Logging configuration. Possible output values are to disk via
  # '/var/lib/teleport/teleport.log', 'stdout', 'stderr' and 'syslog'.
  # Possible severity values are INFO, WARN and ERROR (default).
  log:
    output: stderr
    severity: INFO

auth_service:
  enabled: "yes"
  # A cluster name is used as part of a signature in certificates
  # generated by this CA.
  # We strongly recommend setting it explicitly to something meaningful, as it
  # becomes important when configuring trust between multiple clusters.
  # By default an automatically generated name is used (not recommended).
  # IMPORTANT: if you change cluster_name, it will invalidate all generated
  # certificates and keys (you may need to wipe out the /var/lib/teleport
  # directory).
  cluster_name: "teleport-aws-us-east-1"

  # IP and the port to bind to. Other Teleport nodes will be connecting to
  # this port (AKA "Auth API" or "Cluster API") to validate client
  # certificates.

  # Static tokens hosts can use to join this cluster:
  tokens:
  - proxy,node:xxxx-token-xxxx
  # license_file: /path/to/license-if-using-teleport-enterprise.pem

  authentication:
    # Default authentication type. Possible values are 'local' and 'github'
    # for OSS and 'oidc', 'saml' and 'false' for Enterprise.
    type: local
    # second_factor can be off, otp, or u2f
    second_factor: otp

ssh_service:
  enabled: "yes"
  labels:
    teleport: static-label-example
  commands:
  - name: hostname
    command: [/usr/bin/hostname]
    period: 1m0s
  - name: arch
    command: [/usr/bin/uname, -p]
    period: 1h0m0s

proxy_service:
  enabled: "yes"

  # The DNS name of the proxy HTTPS endpoint as accessible by cluster users.
  # Defaults to the proxy's hostname if not specified. If running multiple
  # proxies behind a load balancer, this name must point to the load balancer
  # (see the Public Addr section below).
  public_addr: TELEPORT_PUBLIC_DNS_NAME:3080

  # TLS certificate for the HTTPS connection. Configuring these properly is
  # critical for Teleport security.
  https_key_file: /etc/letsencrypt/live/TELEPORT_PUBLIC_DNS_NAME/privkey.pem
  https_cert_file: /etc/letsencrypt/live/TELEPORT_PUBLIC_DNS_NAME/fullchain.pem

Public Addr

Notice that all three Teleport services (proxy, auth, node) have an optional public_addr property. The public address can be an IP or a DNS name. It can also be a list of values:

public_addr: ["", ""]

Specifying a public address for a Teleport service is useful when the address clients use to reach the service differs from the address it binds to, for example behind NAT or a load balancer.
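For example, a proxy reachable both by a DNS name and by an additional IP might be configured like this (the hostnames and addresses below are placeholders, not values from the original sample):

```yaml
proxy_service:
  enabled: "yes"
  # Clients may reach this proxy either by DNS name or by IP:
  public_addr: ["proxy.example.com:3080", "203.0.113.10:3080"]
```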


Authentication

Teleport uses the concept of "authentication connectors" to authenticate users when they execute the tsh login command. The following types of authentication connectors are supported:

Local Connector

Local authentication is used to authenticate against the local Teleport user database, which is managed with the tctl users command. Teleport also supports second factor authentication (2FA) for the local connector. There are three possible values (types) of 2FA: off, otp and u2f.

Here is an example of this setting in teleport.yaml :

auth_service:
  authentication:
    type: local
    second_factor: off

Github OAuth 2.0 Connector

This connector implements Github OAuth 2.0 authentication flow. Please refer to Github documentation on Creating an OAuth App to learn how to create and register an OAuth app.

Here is an example of this setting in teleport.yaml :

auth_service:
  authentication:
    type: github

See Github OAuth 2.0 for details on how to configure it.


SAML Connector

This connector type implements SAML authentication. It can be configured against any external identity manager like Okta or Auth0. This feature is only available in Teleport Enterprise.

Here is an example of this setting in teleport.yaml :

auth_service:
  authentication:
    type: saml


OIDC Connector

Teleport implements OpenID Connect (OIDC) authentication, which is similar to SAML in principle. This feature is only available in Teleport Enterprise.

Here is an example of this setting in teleport.yaml :

auth_service:
  authentication:
    type: oidc

Hardware Keys - YubiKey FIDO U2F

Teleport supports FIDO U2F hardware keys as a second authentication factor. By default U2F is disabled. To start using U2F:

# snippet from /etc/teleport.yaml to show an example configuration of U2F:
auth_service:
  authentication:
    type: local
    second_factor: u2f
    # this section is needed only if second_factor is set to 'u2f'
    u2f:
       # app_id must point to the URL of the Teleport Web UI (proxy) accessible
       # by the end users
       app_id: https://localhost:3080
       # facets must list all proxy servers if there are more than one deployed
       facets:
       - https://localhost:3080

For single-proxy setups, the app_id setting can be equal to the domain name of the proxy, but this will prevent you from adding more proxies without changing the app_id. For multi-proxy setups, the app_id should be an HTTPS URL pointing to a JSON file that mirrors the facets in the auth config.


The app_id must never change during the lifetime of the cluster. If the App ID changes, all existing U2F key registrations become invalid, and all users who use U2F as their second factor will need to re-register. When adding a new proxy server, make sure to add it to the list of facets in the configuration file as well as to the JSON file referenced by app_id.
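The JSON file referenced by app_id follows the FIDO U2F "trusted facets" format. A sketch, with placeholder proxy hostnames:

```json
{
  "trustedFacets": [
    {
      "version": { "major": 1, "minor": 0 },
      "ids": [
        "https://proxy1.example.com:3080",
        "https://proxy2.example.com:3080"
      ]
    }
  ]
}
```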

Logging in with U2F

To log in via the CLI, you must first install u2f-host:

# OSX:
$ brew install libu2f-host

# Ubuntu 16.04 LTS:
$ apt-get install u2f-host

Then invoke tsh ssh as usual to authenticate:

$ tsh --proxy <proxy-addr> ssh <hostname>

Version Warning

External user identities are only supported in Teleport Enterprise.

Please reach out to [email protected] for more information.

Adding and Deleting Users

This section covers internal user identities, i.e. user accounts created and stored in Teleport's internal storage. Most production users of Teleport use external users via Github or Okta or any other SSO provider (Teleport Enterprise supports any SAML or OIDC compliant identity provider).

A user identity in Teleport exists in the scope of a cluster. The member nodes of a cluster have multiple OS users on them. A Teleport administrator creates Teleport user accounts and maps them to the allowed OS user logins they can use.

Let's look at this table:

| Teleport User | Allowed OS Logins | Description |
|---------------|-------------------|-------------|
| joe | joe, root | Teleport user 'joe' can log in to member nodes as OS user 'joe' or 'root'. |
| bob | bob | Teleport user 'bob' can log in to member nodes only as OS user 'bob'. |
| ross | | If no OS login is specified, it defaults to the same name as the Teleport user - 'ross'. |

To add a new user to Teleport, you have to use the tctl tool on the same node where the auth server is running, i.e. where teleport was started with --roles=auth.

$ tctl users add joe joe,root

Teleport generates an auto-expiring token (with a TTL of 1 hour) and prints the token URL which must be used before the TTL expires.

Signup token has been created. Share this URL with the user:

NOTE: make sure the <proxy> host is accessible.

The user completes registration by visiting this URL in their web browser, picking a password and configuring second factor authentication. If the credentials are correct, the auth server generates and signs a new certificate, and the client stores this key and uses it for subsequent logins. The key automatically expires after 12 hours by default, after which the user will need to log in again with their credentials. This TTL can be configured to a different value. Once authenticated, the account will become visible via tctl :

$ tctl users ls

User           Allowed Logins
----           --------------
admin          admin,root
ross           ross
joe            joe,root

Joe would then use the tsh client tool to log in to member node "luna" via bastion "work" as root:

$ tsh --proxy=work --user=joe ssh root@luna

To delete this user:

$ tctl users rm joe

Editing Users

User entries can be manipulated using the generic resource commands via tctl . For example, to see the full list of user records, an administrator can execute:

$ tctl get users

To edit the user "joe":

# dump the user definition into a file:
$ tctl get user/joe > joe.yaml
# ... edit the contents of joe.yaml

# update the user record:
$ tctl create -f joe.yaml

Some fields in the user record are reserved for internal use. Some of them will be finalized and documented in future versions. Fields like is_locked or traits/logins can be used starting in version 2.3.

Adding Nodes to the Cluster

Teleport is a "clustered" system, meaning it only allows access to nodes (servers) that have previously been granted cluster membership.

Cluster membership means that a node receives its own host certificate signed by the cluster's auth server. To receive a host certificate upon joining a cluster, a new Teleport host must present an "invite token". An invite token also defines which role a new host can assume within a cluster: auth, proxy or node.

There are two ways to create invitation tokens:

Static Tokens

Static tokens are defined ahead of time by an administrator and stored in the auth server's config file:

# Config section in `/etc/teleport.yaml` file for the auth server
auth_service:
    enabled: true
    tokens:
    # This static token allows new hosts to join the cluster as "proxy" or "node"
    - "proxy,node:secret-token-value"
    # A token can also be stored in a file. In this example the token for adding
    # new auth servers is stored in /path/to/tokenfile
    - "auth:/path/to/tokenfile"

Short-lived Tokens

A more secure way to add nodes to a cluster is to generate tokens as they are needed. Such a token can be used multiple times until its time to live (TTL) expires.

Use the tctl tool to register a new invitation token (or have it generate one for you). In the following example a new token is created with a TTL of 5 minutes:

$ tctl nodes add --ttl=5m --roles=node,proxy --token=secret-value
The invite token: secret-value

If --token is not provided, tctl will generate one:

# generate a short-lived invitation token for a new node:
$ tctl nodes add --ttl=5m --roles=node,proxy
The invite token: e94d68a8a1e5821dbd79d03a960644f0

# you can also list all generated non-expired tokens:
$ tctl tokens ls
Token                            Type            Expiry Time
---------------                  -----------     ---------------
e94d68a8a1e5821dbd79d03a960644f0 Node            25 Sep 18 00:21 UTC

# ... or revoke an invitation before it's used:
$ tctl tokens rm e94d68a8a1e5821dbd79d03a960644f0

Using Node Invitation Tokens

Both static and short-lived tokens are used the same way. Execute the following command on a new node to add it to a cluster:

# adding a new regular SSH node to the cluster:
$ teleport start --roles=node --token=secret-token-value --auth-server=

# adding a new regular SSH node using Teleport Node Tunneling:
$ teleport start --roles=node --token=secret-token-value

# adding a new proxy service on the cluster:
$ teleport start --roles=proxy --token=secret-token-value --auth-server=

As new nodes come online, they start sending ping requests every few seconds to the CA of the cluster. This allows users to explore cluster membership and size:

$ tctl nodes ls

Node Name     Node ID                                  Address            Labels
---------     -------                                  -------            ------
turing        d52527f9-b260-41d0-bb5a-e23b0cfe0f8f      distro:ubuntu
dijkstra      c9s93fd9-3333-91d3-9999-c9s93fd98f43      distro:debian

Untrusted Auth Servers

Teleport nodes use the HTTPS protocol to offer the join tokens to the auth server. In a zero-trust environment, you must assume that an attacker can hijack the IP address of the auth server.

To prevent this from happening, you need to supply every new node with an additional piece of information about the auth server. This technique is called "CA pinning". It works by asking the auth server to produce a "CA pin": a hash of its public key, which an attacker cannot forge without the corresponding private key.

On the auth server:

$ tctl status
User CA  never updated
Host CA  never updated
CA pin   sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1

The "CA pin" at the bottom needs to be passed to the new nodes when they're starting for the first time, i.e. when they join a cluster:

Via CLI:

$ teleport start \
   --roles=node \
   --token=1ac590d36493acdaa2387bc1c492db1a \
   --ca-pin=sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1

or via /etc/teleport.yaml on a node:

teleport:
  auth_token: "1ac590d36493acdaa2387bc1c492db1a"
  ca_pin: "sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1"
  auth_servers:
    - ""


If a CA pin is not provided, the Teleport node will still join the cluster, but it will print a WARN message to its standard error output.


The CA pin becomes invalid if a Teleport administrator performs a CA rotation by executing tctl auth rotate.

Revoking Invitations

As you have seen above, Teleport uses tokens to invite users to a cluster (sign-up tokens) or to add new nodes to it (provisioning tokens).

Both types of tokens can be revoked before they can be used. To see a list of outstanding tokens, run this command:

$ tctl tokens ls

Token                                Role       Expiry Time (UTC)
-----                                ----       -----------------
eoKoh0caiw6weoGupahgh6Wuo7jaTee2     Proxy      never
696c0471453e75882ff70a761c1a8bfa     Node       17 May 16 03:51 UTC
6fc5545ab78c2ea978caabef9dbd08a5     Signup     17 May 16 04:24 UTC

In this example, the first token has a "never" expiry date because it is a static token configured via the config file.

The second token, with the "Node" role, was generated to invite a new node to this cluster. The third token was generated to invite a new user.

The latter two tokens can be deleted (revoked) via the tctl tokens del command:

$ tctl tokens del 696c0471453e75882ff70a761c1a8bfa
Token 696c0471453e75882ff70a761c1a8bfa has been deleted

Adding a node located behind NAT

With the setup so far, you've only been able to add nodes that have direct access to the auth server and are within the internal IP range of the cluster. We recommend setting up a Trusted Cluster if you have workloads split across different networks or clouds.

Teleport Node Tunneling lets you add a node to an existing Teleport Cluster. This can be useful for IoT applications or for managing a couple of servers in a different network.

Similar to Adding Nodes to the Cluster, use tctl to create a single-use token for a node, but this time replace the auth server IP with the URL of the proxy server. In the example below, the auth server IP has been replaced with the proxy's web endpoint:

$ sudo tctl nodes add

The invite token: n92bb958ce97f761da978d08c35c54a5c
Run this on the new node to join the cluster:
teleport start --roles=node --token=n92bb958ce97f761da978d08c35c54a5c

Using the ports in the default configuration, the node needs to be able to talk to ports 3080 and 3024 on the proxy. Port 3080 is used to initially fetch the credentials (SSH and TLS certificates) and for discovery (where is the reverse tunnel running, in this case 3024). Port 3024 is used to establish a connection to the Auth Server through the proxy.

To enable multiplexing so only one port is used, set tunnel_listen_addr to the same value as web_listen_addr within the proxy_service section. When Teleport sees that both addresses use the same port, it automatically enables multiplexing. If the log setting is set to DEBUG, you will see multiplexing enabled in the server log.

DEBU [PROC:1]    Setup Proxy: Reverse tunnel proxy and web proxy listen on the same port, multiplexing is on. service/service.go:1944
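Assuming a default layout, a proxy_service snippet with multiplexing enabled might look like this (addresses are illustrative, not from the original sample):

```yaml
proxy_service:
  enabled: "yes"
  # Using the same address for both listeners turns multiplexing on:
  web_listen_addr: 0.0.0.0:3080
  tunnel_listen_addr: 0.0.0.0:3080
```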

Load Balancers

The setup above also works even if the cluster uses multiple proxies behind a load balancer (LB) or a DNS entry with multiple values. This works by the node establishing a tunnel to every proxy. This requires that an LB uses round-robin or a similar balancing algorithm. Do not use sticky load balancing algorithms (a.k.a. "session affinity") with Teleport proxies.

Labeling Nodes

In addition to specifying a custom nodename, Teleport also allows for the application of arbitrary key:value pairs to each node, called labels. There are two kinds of labels:

  1. Static labels do not change over time while the teleport process is running. Examples of static labels are the physical location of a node or the name of the environment (staging vs production).

  2. Dynamic labels, also known as "label commands", allow labels to be generated at runtime. Teleport executes an external command on a node at a configurable frequency, and the output of the command becomes the label value. Examples include reporting load averages, the presence of a process, time since last reboot, etc.

There are two ways to configure node labels.

  1. Via command line, by using --labels flag to teleport start command.
  2. Using /etc/teleport.yaml configuration file on the nodes.

To define labels as command line arguments, use the --labels flag as shown below. This method works well for static labels or simple commands:

$ teleport start --labels uptime=[1m:"uptime -p"],kernel=[1h:"uname -r"]

Alternatively, you can update labels via a configuration file:

ssh_service:
  enabled: "yes"
  # Static labels are simple key/value pairs:
  labels:
    environment: test

To configure dynamic labels via a configuration file, define a commands array as shown below:

ssh_service:
  enabled: "yes"
  # Dynamic labels AKA "commands":
  commands:
  - name: hostname
    command: [hostname]
    period: 1m0s
  - name: arch
    command: [uname, -p]
    # this setting tells teleport to execute the command above
    # once an hour. this value cannot be less than one minute.
    period: 1h0m0s

The first element of a command must be a valid executable (i.e. its executable bit must be set); this includes shell scripts with a proper shebang line.

Important: notice that the command setting is an array where the first element is a valid executable and each subsequent element is an argument:

# valid syntax:
command: ["/bin/uname", "-m"]

# INVALID syntax:
command: ["/bin/uname -m"]

# if you want to pipe several bash commands together, here's how to do it:
# notice how ' and " are interchangeable and you can use it for quoting:
command: ["/bin/sh", "-c", "uname -a | egrep -o '[0-9]+\\.[0-9]+\\.[0-9]+'"]

Audit Log

Teleport logs every SSH event into its audit log. The audit log has the following components:

  1. SSH Events: Teleport logs events like successful user logins along with the metadata like remote IP address, time and the session ID.

  2. Recorded Sessions: Every SSH shell session is recorded and can be replayed later. The recording is done by the nodes themselves, by default, but can be configured to be done by the proxy.

  3. Optional: Enhanced Session Recording

Refer to the "Audit Log" chapter in the Teleport Architecture to learn more about how the audit log and session recording are designed.

SSH Events

Teleport supports multiple storage back-ends for storing SSH events. The section below uses the dir backend as an example. The dir backend stores events on the local filesystem of the auth server, under the configurable data_dir directory.

For highly available (HA) configuration, users can refer to our DynamoDB or etcd chapters on how to configure the SSH events and recorded sessions to be stored on network storage. It is even possible to store the audit log in multiple places at the same time, see audit_events_uri setting in the sample configuration file above for how to do that.

Let's examine the Teleport audit log using the dir backend. The event log is stored in data_dir under the log directory, usually /var/lib/teleport/log . Each day is represented as a file:

$ ls -l /var/lib/teleport/log/
total 104
-rw-r----- 1 root root  31638 Jan 22 20:00 2017-01-23.00:00:00.log
-rw-r----- 1 root root  91256 Jan 31 21:00 2017-02-01.00:00:00.log
-rw-r----- 1 root root  15815 Feb  2 22:54 2017-02-03.00:00:00.log

The log files use JSON format. They are human-readable but can also be parsed programmatically. Each line represents an event and has the following format:

{
    // Event type. See below for the list of all possible event types
    "event": "session.start",
    // Teleport user name
    "user": "ekontsevoy",
    // OS login
    "login": "root",
    // Server namespace. This field is reserved for future use.
    "namespace": "default",
    // Unique server ID.
    "server_id": "f84f7386-5e22-45ff-8f7d-b8079742e63f",
    // Session ID. Can be used to replay the session.
    "sid": "8d3895b6-e9dd-11e6-94de-40167e68e931",
    // Address of the SSH node
    "addr.local": "10.5.1.15:3022",
    // Address of the connecting client (user)
    "addr.remote": "",
    // Terminal size
    "size": "80:25",
    // Timestamp
    "time": "2017-02-03T06:54:05Z"
}

The possible event types are:

| Event Type | Description |
|------------|-------------|
| auth | Authentication attempt. Adds the following fields: {"success": "false", "error": "access denied"} |
| session.start | An interactive shell session has started. |
| session.end | An interactive shell session has ended. |
| session.join | A new user has joined an existing interactive shell session. |
| session.leave | A user has left the session. |
| session.disk | A list of files opened during the session. Requires Enhanced Session Recording. |
| session.network | A list of network connections made during the session. Requires Enhanced Session Recording. |
| session.command | A list of commands run during the session. Requires Enhanced Session Recording. |
| exec | A remote command has been executed via SSH, e.g. tsh ssh <user>@<node> ls / . The following fields will be logged: {"command": "ls /", "exitCode": 0, "exitError": ""} |
| scp | A remote file copy has been executed. The following fields will be logged: {"path": "/path/to/file.txt", "len": 32344, "action": "read"} |
| resize | The terminal has been resized. |
| user.login | A user logged into the Web UI or via tsh. The following fields will be logged: {"user": "[email protected]", "method": "local"} |

Recorded Sessions

In addition to logging session.start and session.end events, Teleport also records the entire stream of bytes going to/from standard input and standard output of an SSH session.

Teleport can store the recorded sessions in an AWS S3 bucket or in a local filesystem (including NFS).

The recorded sessions are stored as raw bytes in the sessions directory under log . Each session consists of two files, both are named after the session ID:

  1. The .bytes file (or .chunks.gz in compressed format) contains the raw session bytes. It is somewhat human-readable, although you are better off using tsh play or the Web UI to replay it.

  2. The .log file (or .events.gz in compressed format) contains copies of the event log entries that are related to this session.

$ ls /var/lib/teleport/log/sessions/default
-rw-r----- 1 root root 506192 Feb 4 00:46 4c146ec8-eab6-11e6-b1b3-40167e68e931.session.bytes
-rw-r----- 1 root root  44943 Feb 4 00:46 4c146ec8-eab6-11e6-b1b3-40167e68e931.session.log

To replay this session via CLI:

$ tsh --proxy=proxy play 4c146ec8-eab6-11e6-b1b3-40167e68e931


Resources

A Teleport administrator has two tools to configure a Teleport cluster: the configuration file and the tctl admin tool.

tctl has convenient subcommands for dynamic configuration, like tctl users or tctl nodes . However, for dealing with more advanced topics, like connecting clusters together or troubleshooting trust, tctl offers a more powerful, lower-level CLI interface called resources.

The concept is borrowed from the REST programming pattern. A cluster is composed of different objects (a.k.a. resources), and there are just three common operations that can be performed on them: get, create and remove.

A resource is defined as a YAML file. Every resource in Teleport has three required fields: kind, version and name.

Everything else is resource-specific, and any component of a Teleport cluster can be manipulated with just three CLI commands:

| Command | Description | Examples |
|---------|-------------|----------|
| tctl get | Get one or multiple resources | tctl get users or tctl get user/joe |
| tctl rm | Delete a resource by type/name | tctl rm user/joe |
| tctl create | Create a new resource from a YAML file. Use -f to override / update | tctl create -f joe.yaml |

YAML Format

By default Teleport uses YAML format to describe resources. YAML is a wonderful and very human-readable alternative to JSON or XML, but it's sensitive to white space. Pay attention to spaces vs tabs!

Here's an example of how the YAML resource definition for a user Joe might look. It can be retrieved by executing tctl get user/joe :

kind: user
version: v2
metadata:
  name: joe
spec:
  roles: admin
  status:
    # users can be temporarily locked in a Teleport system, but this
    # functionality is reserved for internal use for now.
    is_locked: false
    lock_expires: 0001-01-01T00:00:00Z
    locked_time: 0001-01-01T00:00:00Z
  traits:
    # these are "allowed logins" which are usually specified as the
    # last argument to `tctl users add`
    logins:
    - joe
    - root
  # any resource in Teleport can automatically expire.
  expires: 0001-01-01T00:00:00Z
  # for internal use only
  created_by:
    time: 0001-01-01T00:00:00Z
    user:
      name: builtin-Admin


Some of the fields you will see when printing resources are used only internally and are not meant to be changed. Others are reserved for future use.

Here's the list of resources currently exposed via tctl :

| Resource Kind | Description |
|---------------|-------------|
| user | A user record in the internal Teleport user DB. |
| node | A registered SSH node. The same record is displayed via tctl nodes ls . |
| cluster | A trusted cluster. See here for more details on connecting clusters together. |
| role | A role assumed by users. Open source Teleport only includes one role, "admin", but Enterprise Teleport users can define their own roles. |
| connector | Authentication connectors for single sign-on (SSO): SAML, OIDC and Github. |


# list all connectors:
$ tctl get connectors

# dump a SAML connector called "okta":
$ tctl get saml/okta

# delete a SAML connector called "okta":
$ tctl rm saml/okta

# delete an OIDC connector called "gsuite":
$ tctl rm oidc/gsuite

# delete a github connector called "myteam":
$ tctl rm github/myteam

# delete a local user called "admin":
$ tctl rm users/admin


Although tctl get connectors will show you every connector, when working with an individual connector you must use the correct kind, such as saml or oidc. You can see each connector's kind at the top of its YAML output from tctl get connectors.

Trusted Clusters

As explained in the architecture document, Teleport can partition compute infrastructure into multiple clusters. A cluster is a group of nodes connected to the cluster's auth server, acting as a certificate authority (CA) for all users and nodes.

To retrieve an SSH certificate, users must authenticate with a cluster through a proxy server. So, if users want to connect to nodes belonging to different clusters, they would normally have to use a different --proxy flag for each cluster. This is not always convenient.

The concept of trusted clusters allows Teleport administrators to connect multiple clusters together and establish trust between them. Trusted clusters allow users of one cluster to seamlessly SSH into the nodes of another cluster without having to "hop" between proxy servers. Moreover, users don't even need to have a direct connection to other clusters' proxy servers. Trusted clusters also have their own restrictions on user access. The user experience looks like this:

# login using the "main" cluster credentials:
$ tsh login

# SSH into some host inside the "main" cluster:
$ tsh ssh host

# SSH into a host located in another cluster called "east";
# the connection is established through the "main" proxy:
$ tsh ssh --cluster=east host

# See what other clusters are available
$ tsh clusters

Selecting the Default Cluster

To avoid using --cluster switch with tsh commands, you can also specify which trusted cluster you want to become the default from the start:

# login into "main" but request "east" to be the default for subsequent
# tsh commands:
$ tsh login east


The design of trusted clusters allows Teleport users to connect to compute infrastructure located behind firewalls without any open TCP ports. The real world usage examples of this capability include:

Let's take a look at how a connection is established between the "main" cluster and the "east" cluster:


This setup works as follows:

  1. The "east" creates an outbound reverse SSH tunnel to "main" and keeps the tunnel open.

  2. Accessibility only works in one direction. The "east" cluster allows users from "main" to access its nodes but users in the "east" cluster can not access the "main" cluster.

  3. When a user tries to connect to a node inside "east" using main's proxy, the reverse tunnel from step 1 is used to establish this connection.

Load Balancers

The scheme above also works even if the "main" cluster uses multiple proxies behind a load balancer (LB) or a DNS entry with multiple values. This works by "east" establishing a tunnel to every proxy in "main". This requires that an LB uses round-robin or a similar balancing algorithm. Do not use sticky load balancing algorithms (a.k.a. "session affinity") with Teleport proxies.

Example Configuration

Connecting two clusters together is similar to adding nodes:

  1. Generate an invitation token on "main" cluster, or use a pre-defined static token.

  2. On the "east" side, create a trusted cluster resource.

Creating a Cluster Join Token

Just like with adding nodes, you can use either a static cluster token defined in /etc/teleport.yaml or you can generate an auto-expiring token:

To define a static cluster join token using the configuration file on "main":

# fragment of /etc/teleport.yaml:
auth_service:
  enabled: true
  tokens:
    - trusted_cluster:secret-token-to-add-new-clusters

If you wish to use auto-expiring cluster tokens, execute this CLI command on the "main" side:

$ tctl tokens add --type=trusted_cluster
The cluster invite token: generated-token-to-add-new-clusters

Using a Cluster Join Token

Now, the administrator of "east" must create the following resource file:

# cluster.yaml
kind: trusted_cluster
version: v2
metadata:
  # the trusted cluster name MUST match the 'cluster_name' setting of the
  # cluster
  name: main
spec:
  # this field allows creating tunnels that are disabled, but can be enabled later.
  enabled: true
  # the token expected by the "main" cluster:
  token: secret-token-to-add-new-clusters
  # the address in 'host:port' form of the reverse tunnel listening port on the
  # "main" proxy server (3024 by default):
  tunnel_addr: main.example.com:3024
  # the address in 'host:port' form of the web listening port on the
  # "main" proxy server (3080 by default):
  web_proxy_addr: main.example.com:3080
  # the role mapping allows mapping user roles from one cluster to another
  # (enterprise editions of Teleport only)
  role_map:
    - remote: "admin"    # users who have "admin" role on "main"
      local: ["auditor"] # will be assigned "auditor" role when logging into "east"

Then, use tctl create to add the file:

$ tctl create cluster.yaml

At this point the users of the main cluster should be able to see "east" in the list of available clusters.

HTTPS configuration

If the web_proxy_addr endpoint of the main cluster uses a self-signed or invalid HTTPS certificate, you will get an error: "the trusted cluster uses misconfigured HTTP/TLS certificate". For ease of testing, the teleport daemon on "east" can be started with the --insecure CLI flag to accept self-signed certificates. Make sure to configure HTTPS properly and remove the insecure flag for production use.

Using Trusted Clusters

As mentioned above, accessibility is only granted in one direction. So, only users from the "main" (trusted cluster) can now access nodes in the "east" (trusting cluster). Users in the "east" cluster will not be able to access the "main" cluster.

# login into the main cluster:
$ tsh --proxy=proxy.main login joe

# see the list of available clusters
$ tsh clusters

Cluster Name   Status
------------   ------
main           online
east           online

# see the list of machines (nodes) behind the eastern cluster:
$ tsh ls --cluster=east

Node Name Node ID            Address        Labels
--------- ------------------ -------------- -----------
db1.east  cf7cc5cd-935e-46f1  role=db-leader
db2.east  3879d133-fe81-3212  role=db-worker

# SSH into any node in "east":
$ tsh ssh --cluster=east [email protected]

Disabling Trust

To temporarily disable trust between clusters and disconnect the "east" cluster from "main", edit the YAML definition of the trusted cluster resource, set enabled to "false", and then update it:
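The edited resource might look like this sketch (keep your existing token and address fields; only the changed field is shown in full):

```yaml
# cluster.yaml -- abbreviated sketch
kind: trusted_cluster
version: v2
metadata:
  name: main
spec:
  # trust stays defined, but the reverse tunnel stays down until re-enabled
  enabled: false
```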

$ tctl create --force cluster.yaml

If you want to permanently disconnect one cluster from the other:

# execute this command on "main" side to disconnect "east":
$ tctl rm tc/east

While accessibility is only granted in one direction, trust is granted in both directions. If you remove "east" from "main", the following will happen:

If you wish to permanently remove all trust relationships and the connections between both clusters:

# execute on "main":
$ tctl rm tc/east
# execute on "east":
$ tctl rm tc/main

Advanced Configuration

Take a look at Trusted Clusters Guide to learn more about advanced topics:

Github OAuth 2.0

Teleport supports authentication and authorization via external identity providers such as Github. You can watch the video for how to configure Github as an SSO provider, or you can follow the documentation below.

First, the Teleport auth service must be configured to use Github for authentication:

# snippet from /etc/teleport.yaml
auth_service:
  authentication:
    type: github

Next step is to define a Github connector:

# Create a file called github.yaml:
kind: github
version: v3
metadata:
  # connector name that will be used with `tsh --auth=github login`
  name: github
spec:
  # client ID of Github OAuth app
  client_id: <client-id>
  # client secret of Github OAuth app
  client_secret: <client-secret>
  # connector display name that will be shown on web UI login screen
  display: Github
  # callback URL that will be called after successful authentication
  redirect_url: https://<proxy-address>/v1/webapi/github/callback
  # mapping of org/team memberships onto allowed logins and roles
  teams_to_logins:
    - organization: octocats # Github organization name
      team: admins # Github team name within that organization
      # allowed logins for users in this org/team
      logins:
        - root
      # List of Kubernetes groups this Github team is allowed to connect to
      # (see Kubernetes integration for more information)
      kubernetes_groups: ["system:masters"]


For open-source Teleport the logins field contains a list of allowed OS logins. For the commercial Teleport Enterprise offering, which supports role-based access control, the same field is treated as a list of roles that users from the matching org/team assume after going through the authorization flow.

To obtain the client ID and client secret, please follow the Github documentation on how to create and register an OAuth app. Be sure to set the "Authorization callback URL" to the same value as redirect_url in the resource spec. Teleport will request only the read:org OAuth scope; you can read more about Github OAuth scopes in Github's documentation.

Finally, create the connector using tctl resource management command:

$ tctl create github.yaml


When going through the Github authentication flow for the first time, the application must be granted access to all organizations that are present in the "teams to logins" mapping, otherwise Teleport will not be able to determine team memberships for these orgs.


HTTP CONNECT Proxies

Some networks funnel all connections through a proxy server where they can be audited and where access control rules are applied. For these scenarios Teleport supports HTTP CONNECT tunneling.

To use HTTP CONNECT tunneling, simply set either the HTTPS_PROXY or HTTP_PROXY environment variable; when Teleport builds and establishes the reverse tunnel to the main cluster, it will funnel all traffic through the proxy. Specifically, if using the default configuration, Teleport will tunnel ports 3024 (SSH, reverse tunnel) and 3080 (HTTPS, establishing trust) through the proxy.

The value of HTTPS_PROXY or HTTP_PROXY should be in the format scheme://host:port , where scheme is either https or http . If the value is just host:port , Teleport will prepend http:// .
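This prepending behavior can be sketched in shell (the proxy address below is a placeholder, not a value from this manual):

```shell
#!/bin/sh
# mimic the documented normalization: a bare host:port value
# is treated as if http:// had been prepended
proxy="proxy.example.com:3128"
case "$proxy" in
  http://*|https://*) url="$proxy" ;;
  *)                  url="http://$proxy" ;;
esac
echo "$url"
```

For the bare host:port input above, this prints http://proxy.example.com:3128.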

It's important to note that in order for Teleport to use HTTP CONNECT tunneling, the HTTP_PROXY and HTTPS_PROXY environment variables must be set within Teleport's environment. You can also optionally set the NO_PROXY environment variable to avoid using the proxy when accessing specified hosts/netmasks. When launching Teleport with systemd, this will probably involve adding some lines to your systemd unit file:
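For example, a systemd drop-in like the following might be used (the proxy address and file path are placeholders, not values from this manual):

```ini
# /etc/systemd/system/teleport.service.d/proxy.conf  (hypothetical path)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080/"
Environment="HTTPS_PROXY=http://proxy.example.com:8080/"
Environment="NO_PROXY=localhost,127.0.0.1"
```

Run systemctl daemon-reload and restart the teleport unit afterwards so the new environment takes effect.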



localhost and 127.0.0.1 are invalid values for the proxy host. If for some reason your proxy runs locally, you'll need to provide some other DNS name or a private IP address for it.

PAM Integration

The Teleport node service can be configured to integrate with PAM. This allows Teleport to create user sessions using PAM session profiles.

To enable PAM on a given Linux machine, update /etc/teleport.yaml with:

# fragment of /etc/teleport.yaml
ssh_service:
  pam:
    # "no" by default
    enabled: yes
    # use /etc/pam.d/sshd configuration (the default)
    service_name: "sshd"

Please note that most Linux distributions come with a number of PAM services in /etc/pam.d and Teleport will use sshd by default, which will be removed if you uninstall the openssh-server package. We recommend creating your own PAM service file, such as /etc/pam.d/teleport , and specifying it as service_name above.
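A minimal custom service file might look like this permissive sketch (the module choices are assumptions; tighten them for production and consult your distribution's PAM documentation):

```
# /etc/pam.d/teleport -- permissive example; adjust modules per your distro
account  required  pam_permit.so
session  required  pam_permit.so
session  optional  pam_motd.so
```

With this file in place, set service_name: "teleport" in the pam section of teleport.yaml.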


Teleport only supports the account and session stack. The auth PAM module is currently not supported with Teleport.

Using Teleport with OpenSSH

Review our dedicated Using Teleport with OpenSSH guide.

Certificate Rotation

Take a look at the Certificates chapter in the architecture document to learn how certificate rotation works. This section will show you how to implement certificate rotation in practice.

The easiest way to start the rotation is to execute this command on a cluster's auth server:

$ tctl auth rotate

This will trigger a rotation process for both hosts and users with a grace period of 48 hours.

This can be customized, e.g.:

# rotate only user certificates with a grace period of 200 hours:
$ tctl auth rotate --type=user --grace-period=200h

# rotate only host certificates with a grace period of 8 hours:
$ tctl auth rotate --type=host --grace-period=8h

The rotation takes time, especially for hosts, because each node in a cluster needs to be notified that a rotation is taking place and request a new certificate for itself before the grace period ends.


Be careful when choosing a grace period when rotating host certificates. The grace period needs to be long enough for all nodes in a cluster to request a new certificate. If some nodes go offline during the rotation and come back only after the grace period has ended, they will be forced to leave the cluster, i.e. users will no longer be allowed to SSH into them.

To check the status of certificate rotation:

$ tctl status

Version Warning

Certificate rotation can only be used with clusters running version 2.6 of Teleport or newer. If trusted clusters are used, make sure all connected clusters are running version 2.6+. If one of the trusted clusters is running an older version of Teleport the trust/connection to that cluster will be lost.

CA Pinning Warning

If you are using CA pinning when adding new nodes, the CA pin will change after the rotation. Make sure you use the new CA pin when adding nodes after rotation.

Ansible Integration

Ansible uses the OpenSSH client by default. This makes it compatible with Teleport without any extra work, except for configuring the OpenSSH client to work with the Teleport Proxy and enabling scp over SSH in ansible.cfg:

# fragment of ansible.cfg:
[ssh_connection]
scp_if_ssh = True

Kubernetes Integration

Teleport can be configured as a compliance gateway for Kubernetes clusters. This allows users to authenticate against a Teleport proxy using the tsh login command to retrieve credentials for both SSH and the Kubernetes API.

Follow our Kubernetes guide which contains some more specific examples and instructions.

High Availability


Before continuing, please make sure to take a look at the Cluster State section in the Teleport Architecture documentation.

Usually there are two ways to achieve high availability. You can "outsource" this function to the infrastructure, for example by using highly available network-based disk volumes (similar to AWS EBS) and migrating a failed VM to a new host. In this scenario, there's nothing Teleport-specific to be done.

If high availability cannot be provided by the infrastructure (perhaps you're running Teleport on a bare metal cluster), you can still configure Teleport to run in a highly available fashion.

Auth Server HA

In order to run multiple instances of Teleport Auth Server, you must switch to a highly available secrets back-end first. Also, you must tell each node in a cluster that there is more than one auth server available. There are two ways to do this:

IMPORTANT: with multiple instances of the auth server running, special attention needs to be paid to keeping their configuration identical. Settings like cluster_name , tokens , storage , etc. must be the same.

Teleport Proxy HA

The Teleport Proxy is stateless which makes running multiple instances trivial. If using the default configuration, configure your load balancer to forward ports 3023 and 3080 to the servers that run the Teleport proxy. If you have configured your proxy to use non-default ports, you will need to configure your load balancer to forward the ports you specified for listen_addr and web_listen_addr in teleport.yaml . The load balancer for web_listen_addr can terminate TLS with your own certificate that is valid for your users, while the remaining ports should do TCP level forwarding, since Teleport will handle its own SSL on top of that with its own certificates.
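As a sketch of TCP-level forwarding (HAProxy syntax; names and addresses are hypothetical):

```
# round-robin TCP forwarding for Teleport's SSH proxy port (3023);
# TLS/SSH stays end-to-end between the client and Teleport
frontend teleport_ssh
    bind *:3023
    mode tcp
    default_backend teleport_proxies

backend teleport_proxies
    mode tcp
    balance roundrobin
    server proxy1 10.0.1.10:3023 check
    server proxy2 10.0.1.11:3023 check
```

The same pattern applies to port 3080, optionally with TLS termination at the frontend.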


If you terminate TLS with your own certificate at a load balancer you'll need to run Teleport with --insecure-no-tls

If your load balancer supports HTTP health checks, configure it to hit the /readyz diagnostics endpoint on machines running Teleport. This endpoint must be enabled with the --diag-addr flag to teleport start: teleport start --diag-addr= . The endpoint will reply {"status":"ok"} if the Teleport service is running without problems.


As the new auth servers get added to the cluster and the old servers get decommissioned, nodes and proxies will refresh the list of available auth servers and store it in their local cache /var/lib/teleport/authservers.json - the values from the cache file will take precedence over the configuration file.

We'll cover how to use etcd and DynamoDB storage back-ends to make Teleport highly available below.

Using etcd

Teleport can use etcd as a storage backend to achieve highly available deployments. You must take steps to protect access to etcd in this configuration because that is where Teleport secrets like keys and user records will be stored.

To configure Teleport for using etcd as a storage back-end:

# fragment of /etc/teleport.yaml
teleport:
  storage:
     type: etcd

     # list of etcd peers to connect to:
     peers: ["", ""]

     # required path to TLS client certificate and key files to connect to etcd
     # to create these, follow
     # or use the etcd-provided script
     tls_cert_file: /var/lib/teleport/etcd-cert.pem
     tls_key_file: /var/lib/teleport/etcd-key.pem

     # optional file with trusted CA authority
     # file to authenticate etcd nodes
     # if you used the script above to generate the client TLS certificate,
     # this CA certificate should be one of the other generated files
     tls_ca_file: /var/lib/teleport/etcd-ca.pem

     # alternative password based authentication, if not using TLS client
     # certificate
     # See for setting
     # up a new user
     username: username
     password_file: /mnt/secrets/etcd-pass

     # etcd key (location) where teleport will be storing its state under.
     # make sure it ends with a '/'!
     prefix: /teleport/

     # NOT RECOMMENDED: enables insecure etcd mode in which self-signed
     # certificate will be accepted
     insecure: false

Using Amazon S3


Before continuing, please make sure to take a look at the cluster state section in Teleport Architecture documentation.

AWS Authentication

The configuration examples below contain AWS access keys and secret keys. They are optional; they exist for your convenience, but we DO NOT RECOMMEND using them in production. If Teleport is running on an AWS instance, it will automatically use the instance IAM role. Teleport will also pick up AWS credentials from the ~/.aws folder, just like the AWS CLI tool.

S3 buckets can only be used as storage for recorded sessions. S3 cannot store the audit log or the cluster state. Below is an example of how to configure a Teleport auth server to store recorded sessions in an S3 bucket.

# fragment of /etc/teleport.yaml
teleport:
  storage:
      # The region setting sets the default AWS region for all AWS services
      # Teleport may consume (DynamoDB, S3)
      region: us-east-1

      # Path to S3 bucket to store the recorded sessions in.
      audit_sessions_uri: "s3://Example_TELEPORT_S3_BUCKET/records"

      # Teleport assumes AWS credentials via the standard provider chain:
      # an assumed IAM role or ~/.aws/credentials in the home folder.

The AWS authentication settings above can be omitted if the machine itself is running on an EC2 instance with an IAM role.

Using DynamoDB


Before continuing, please make sure to take a look at the cluster state section in Teleport Architecture documentation.

If you are running Teleport on AWS, you can use DynamoDB as a storage back-end to achieve high availability. The DynamoDB back-end supports two types of Teleport data:

  1. Cluster state

  2. Audit log events

DynamoDB cannot store the recorded sessions. You are advised to use AWS S3 for that as shown above. To configure Teleport to use DynamoDB:

# fragment of /etc/teleport.yaml
teleport:
  storage:
    type: dynamodb
    # Region location of the DynamoDB instance
    region: us-east-1

    # Name of the DynamoDB table. If it does not exist, Teleport will create it.
    table_name: Example_TELEPORT_DYNAMO_TABLE_NAME

    # This setting configures Teleport to send the audit events to three places:
    # To keep a copy on a local filesystem, in DynamoDB and to Stdout.
    # NOTE: The DynamoDB events table has a different schema to the regular Teleport
    # database table, so attempting to use same table for both will result in errors.
    audit_events_uri:  ['file:///var/lib/teleport/audit/events', 'dynamodb://events_table_name', 'stdout://']

    # This setting configures Teleport to save the recorded sessions in an S3 bucket:
    audit_sessions_uri: s3://Example_TELEPORT_S3_BUCKET/records

Access to DynamoDB

Make sure that the IAM role assigned to Teleport is configured with sufficient access to DynamoDB. Below is an example of an IAM policy you can use:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllAPIActionsOnTeleportAuth",
            "Effect": "Allow",
            "Action": "dynamodb:*",
            "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/prod.teleport.auth"
        },
        {
            "Sid": "AllAPIActionsOnTeleportStreams",
            "Effect": "Allow",
            "Action": "dynamodb:*",
            "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/prod.teleport.auth/stream/*"
        }
    ]
}

Using GCS


Before continuing, please make sure to take a look at the cluster state section in Teleport Architecture documentation.

Google Cloud Storage (GCS) can only be used as storage for recorded sessions. GCS cannot store the audit log or the cluster state. Below is an example of how to configure a Teleport auth server to store recorded sessions in a GCS bucket.

# fragment of /etc/teleport.yaml
teleport:
  storage:
      # Path to GCS to store the recorded sessions in.
      audit_sessions_uri: "gs://Example_TELEPORT_STORAGE/records"
      credentials_path: /var/lib/teleport/gcs_creds

Using Firestore


Before continuing, please make sure to take a look at the cluster state section in Teleport Architecture documentation.

If you are running Teleport on GCP, you can use Firestore as a storage back-end to achieve high availability. The Firestore back-end supports two types of Teleport data:

  1. Cluster state

  2. Audit log events

Firestore cannot store the recorded sessions. You are advised to use Google Cloud Storage (GCS) for that as shown above. To configure Teleport to use Firestore:

# fragment of /etc/teleport.yaml
teleport:
  storage:
    type: firestore
    # Project ID
    project_id: Example_GCP_Project_Name

    # Name of the Firestore table. If it does not exist, Teleport won't start
    collection_name: Example_TELEPORT_FIRESTORE_TABLE_NAME

    credentials_path: /var/lib/teleport/gcs_creds

    # This setting configures Teleport to send the audit events to three places:
    # To keep a copy on a local filesystem, in Firestore and to Stdout.
    # NOTE: The Firestore events table has a different schema to the regular Teleport
    # database table, so attempting to use same table for both will result in errors.
    audit_events_uri:  ['file:///var/lib/teleport/audit/events', 'firestore://Example_TELEPORT_FIRESTORE_EVENTS_TABLE_NAME', 'stdout://']

    # This setting configures Teleport to save the recorded sessions in GCP storage:
    audit_sessions_uri: gs://Example_TELEPORT_S3_BUCKET/records

Upgrading Teleport

Teleport is always a critical component of the infrastructure it runs on. This is why upgrading to a new version must be performed with caution.

Teleport is a much more capable system than a bare bones SSH server. While it offers significant benefits on a cluster level, it also adds some complexity to cluster upgrades. To ensure robust operation Teleport administrators must follow the upgrade rules listed below.

Production Releases

First of all, avoid running pre-releases (release candidates) in production environments. The Teleport development team uses Semantic Versioning, which makes it easy to tell if a specific version is recommended for production use.

Component Compatibility

When running multiple binaries of Teleport within a cluster (nodes, proxies, clients, etc), the following rules apply:

As an extra precaution you might want to back up your application prior to upgrading. We provide more instructions in Backup Before Upgrading.

Upgrading to Teleport 4.0+

Teleport 4.0+ switched to gRPC and HTTP/2 as an API protocol. The HTTP/2 spec bans two previously recommended ciphers: tls-rsa-with-aes-128-gcm-sha256 and tls-rsa-with-aes-256-gcm-sha384. Make sure these are removed from teleport.yaml . Visit our community forum for more details.
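To keep only HTTP/2-compatible suites, the allowed TLS cipher suites can be listed explicitly in the teleport section (a sketch; verify the exact suite names against your Teleport version's configuration reference):

```yaml
# fragment of /etc/teleport.yaml
teleport:
  # ECDHE-based GCM suites are acceptable under HTTP/2; the two banned
  # tls-rsa-* suites are simply omitted from this list
  ciphersuites:
    - tls-ecdhe-rsa-with-aes-128-gcm-sha256
    - tls-ecdhe-rsa-with-aes-256-gcm-sha384
    - tls-ecdhe-ecdsa-with-aes-128-gcm-sha256
    - tls-ecdhe-ecdsa-with-aes-256-gcm-sha384
```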

If upgrading, you might want to consider rotating the CA to SHA-256 or SHA-512 for RSA SSH certificate signatures. The previous default was SHA-1, which is now considered weak against brute-force attacks. SHA-1 certificate signatures are also no longer accepted by OpenSSH versions 8.2 and above. All new Teleport clusters will default to SHA-512 based signatures. To upgrade an existing cluster, set the following in your teleport.yaml:

  ca_signature_algo: "rsa-sha2-512"

After updating to 4.3+ rotate the cluster CA following these docs.

Backup Before Upgrading

As an extra precaution you might want to back up your application prior to upgrading. We have more instructions in Backing Up Teleport.

Upgrade Sequence

When upgrading a single Teleport cluster:

  1. Upgrade the auth server first. The auth server keeps the cluster state, and if the new version introduces data format changes, upgrading it first will perform the necessary migrations.

  2. Then, upgrade the proxy servers. The proxy servers are stateless and can be upgraded in any sequence or at the same time.

  3. Finally, upgrade the SSH nodes in any sequence or at the same time.


If several auth servers are running in HA configuration (for example, in AWS auto-scaling group) you have to shrink the group to just one auth server prior to performing an upgrade. While Teleport will attempt to perform any necessary migrations, we recommend users create a backup of their backend before upgrading the Auth Server, as a precaution. This allows for a safe rollback in case the migration itself fails.

When upgrading multiple clusters:

  1. First, upgrade the main cluster, i.e. the one which other clusters trust.
  2. Upgrade the trusted clusters.

Backing Up Teleport

When planning a backup of Teleport, it's important to know what is where and the importance of each component. Teleport's Proxies and Nodes are stateless, and thus only teleport.yaml should be backed up.

The Auth server is Teleport's brains, and depending on the backend should be backed up regularly.

For example, a customer running Teleport on AWS with DynamoDB would have these key items of data:

What Where ( Example AWS Customer )
Local Users ( not SSO ) DynamoDB
Certificate Authorities DynamoDB
Trusted Clusters DynamoDB
Connectors: SSO DynamoDB / File System
RBAC DynamoDB / File System
teleport.yaml File System
teleport.service File System
license.pem File System
TLS key/certificate ( File System / Outside Scope )
Audit log DynamoDB
Session recordings S3

For this customer, we would recommend using AWS best practices for backing up DynamoDB. If DynamoDB is used for the audit log, logged events have a TTL of 1 year.

Backend Recommended backup strategy
dir ( local filesystem ) Back up the /var/lib/teleport/storage directory and the output of tctl get all.
DynamoDB Follow the AWS guidelines for backup and restore
etcd Follow the etcd guidelines for disaster recovery
Firestore Follow the GCP guidelines for automated backups

Teleport Resources

Teleport uses YAML resources for roles, trusted clusters, local users and auth connectors. These can be created via tctl or the web UI.


If running Teleport at scale, it's important for teams to have an automated way to restore Teleport. At a high level, this is our recommended approach:

Migrating Backends

As of version 4.1, you can quickly export a collection of resources from Teleport. This feature was designed to help customers migrate from local storage to etcd.

Using tctl get all will retrieve the below items:

When migrating backends, you should back up your auth server's data_dir/storage directly.

Example of backing up and restoring a cluster:

# export dynamic configuration state from old cluster
$ tctl get all > state.yaml

# prepare a new uninitialized backend (make sure to port
# any non-default config values from the old config file)
$ mkdir fresh && cat > fresh.yaml << EOF
teleport:
  data_dir: fresh
EOF

# bootstrap fresh server (kill the old one first!)
$ teleport start --config fresh.yaml --bootstrap state.yaml

# from another terminal, verify state transferred correctly
$ tctl --config fresh.yaml get all
# <your state here!>

The --bootstrap flag has no effect except during backend initialization (performed by the auth server on first start), so it is safe to use in supervised/HA contexts.


Daemon Restarts

As covered in the Graceful Restarts section, Teleport supports graceful restarts. To upgrade a host to a newer Teleport version, an administrator must:

  1. Replace the Teleport binaries, usually teleport and tctl

  2. Execute systemctl restart teleport

This will perform a graceful restart, i.e. the Teleport daemon will fork a new process to handle new incoming requests, leaving the old daemon process running until existing clients disconnect.

License File

Commercial Teleport subscriptions require a valid license. The license file can be downloaded from the Teleport Customer Portal.

The Teleport license file contains an X.509 certificate and the corresponding private key in PEM format. Place the downloaded file on Auth servers and set the license_file configuration parameter of your teleport.yaml to point to the file location:

auth_service:
    license_file: /var/lib/teleport/license.pem

The license_file path can be either absolute or relative to the configured data_dir . If the license file path is not set, Teleport will look for the license.pem file in the configured data_dir .
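For instance, assuming license_file sits in the auth_service section as above, a relative path resolves against data_dir (paths here are examples):

```yaml
# fragment of /etc/teleport.yaml
teleport:
    data_dir: /var/lib/teleport
auth_service:
    enabled: true
    # resolves to /var/lib/teleport/license.pem
    license_file: license.pem
```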


Only Auth servers require the license. Proxies and Nodes that do not also have Auth role enabled do not need the license.


Troubleshooting

To diagnose problems you can configure teleport to run with verbose logging enabled by passing it the -d flag.


It is not recommended to run Teleport in production with verbose logging as it generates a substantial amount of data.

Sometimes you may want to reset teleport to a clean state. This can be accomplished by erasing everything under the "data_dir" directory. Assuming the default location, rm -rf /var/lib/teleport/* will do.

Teleport also supports HTTP endpoints for monitoring purposes. They are disabled by default, but you can enable them:

$ teleport start --diag-addr=

Now you can see the monitoring information by visiting several endpoints:

Getting Help

If you need help, please ask on our community forum. You can also open an issue on Github.

For commercial support, you can create a ticket through the customer dashboard.

For more information about custom features, or to try our Enterprise edition of Teleport, please reach out to us at [email protected].

Teleport Enterprise

Teleport Enterprise is built around the open-source core, with premium support and additional, enterprise-grade features. It is for organizations that need to secure critical production infrastructure and meet compliance and audit requirements.


Teleport Community

Teleport Community provides modern SSH best practices out of the box for managing elastic infrastructure. Teleport Community is open-source software that anyone can download and install for free.

