# Teleport User Manual
This User Manual covers usage of the Teleport client tool, `tsh`. In this document you will learn how to:
- Log into an interactive shell on remote cluster nodes.
- Copy files to and from cluster nodes.
- Connect to SSH clusters behind firewalls without any open ports using SSH reverse tunnels.
- Explore a cluster and execute commands on specific nodes in a cluster.
- Share interactive shell sessions with colleagues or join someone else's session.
- Replay recorded interactive sessions.
In addition to this document, you can always simply type `tsh` into your terminal for the CLI reference.
For the impatient, here's an example of how a user would typically use `tsh`:

```bash
# Login into a Teleport cluster. This command retrieves the user's certificates
# and saves them into ~/.tsh/teleport.example.com
$ tsh login --proxy=teleport.example.com

# SSH into a node, as usual:
$ tsh ssh [email protected]

# `tsh ssh` takes the same arguments as the OpenSSH client:
$ tsh ssh -o ForwardAgent=yes [email protected]
$ tsh ssh -o AddKeysToAgent=yes [email protected]

# you can even create a convenient symlink:
$ ln -s /path/to/tsh /path/to/ssh

# ... and now your 'ssh' command is calling Teleport's `tsh ssh`:
$ ssh [email protected]

# This command removes SSH certificates from a user's machine:
$ tsh logout
```
In other words, Teleport was designed to be fully compatible with existing SSH-based workflows and does not require users to learn anything new, other than `tsh login` in the beginning.
## User Identities

A user identity in Teleport exists in the scope of a cluster. The member nodes of a cluster may have multiple OS users on them. A Teleport administrator assigns allowed logins to every Teleport user account.

When logging into a remote node, you will have to specify both logins. The Teleport identity is passed with the `--user` flag, while the node login is specified as `login@host`, using syntax compatible with traditional `ssh`:

```bash
# Authenticate against the "work" cluster as joe and then login into the node
# as root:
$ tsh ssh --proxy=work.example.com --user=joe root@node
```
To retrieve a user's certificate, execute:

```bash
# Full form:
$ tsh login --proxy=proxy_host:<https_proxy_port>,<ssh_proxy_port>

# Using default ports:
$ tsh login --proxy=work.example.com

# Using custom HTTPS port:
$ tsh login --proxy=work.example.com:5000

# Using custom SSH proxy port, which is set on the Auth Server:
$ tsh login --proxy=work.example.com:2002
```

| Port | Description |
|------|-------------|
| `https_proxy_port` | the HTTPS port the proxy host is listening to (defaults to 3080) |
| `ssh_proxy_port` | the SSH port the proxy is listening to (defaults to 3023) |
The login command retrieves a user's certificate and stores it in the `~/.tsh` directory, as well as in the SSH agent if there is one running.

This allows you to authenticate just once, perhaps at the beginning of the day. Subsequent `tsh ssh` commands will run without asking for credentials until the temporary certificate expires. By default, Teleport issues user certificates with a TTL (time to live) of 12 hours.
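A typical login-once workflow looks like this (the proxy and node names below are illustrative):

```bash
# Authenticate once; the certificate is cached in ~/.tsh and in the SSH agent:
$ tsh login --proxy=work.example.com

# For the next 12 hours (the default TTL) these commands reuse the cached
# certificate and will not prompt for credentials:
$ tsh ssh root@node
$ tsh ssh root@node uptime
```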
A Teleport cluster can be configured for multiple user identity sources. For example, a cluster may have a local user called "admin" while regular users should authenticate via Github. In this case, you have to pass the `--auth` flag to `tsh login` to specify which identity storage to use:

```bash
# Login using the local Teleport 'admin' user:
$ tsh --proxy=proxy.example.com --auth=local --user=admin login

# Login using Github as an SSO provider, assuming the Github connector is called "github":
$ tsh --proxy=proxy.example.com --auth=github --user=admin login
```
## Inspecting SSH Certificate

To inspect the SSH certificates in `~/.tsh`, a user may execute the following command:

```bash
$ tsh status

> Profile URL:  https://proxy.example.com:3080
  Logged in as: johndoe
  Roles:        admin*
  Logins:       root, admin, guest
  Valid until:  2017-04-25 15:02:30 -0700 PDT [valid for 1h0m0s]
  Extensions:   permit-agent-forwarding, permit-port-forwarding, permit-pty
```
## SSH Agent Support

If there is an SSH agent running, `tsh login` will store the user certificate in the agent. This can be verified via:

```bash
$ ssh-add -L
```

The SSH agent can be used to feed the certificate to other SSH clients, for example to OpenSSH `ssh`.
`tsh login` can also save the user certificate into a file:

```bash
# Authenticate user against proxy.example.com and save the user
# certificate into the identity file "joe":
$ tsh login --proxy=proxy.example.com --out=joe

# Use the identity file "joe" to login into a server 'db':
$ tsh ssh --proxy=proxy.example.com -i joe joe@db
```
The `--out` flag will create an identity file suitable for `tsh -i`, but if compatibility with OpenSSH is needed, `--format=openssh` must be specified. In this case the identity will be saved into two files:

```bash
$ tsh login --proxy=proxy.example.com --out=joe --format=openssh
$ ls -lh
total 8.0K
-rw------- 1 joe staff 1.7K Aug 10 16:16 joe
-rw------- 1 joe staff 1.5K Aug 10 16:16 joe-cert.pub
```
## SSH Certificates for Automation
Regular users of Teleport must request an auto-expiring SSH certificate, usually every day. This doesn't work for non-interactive scripts, like cron jobs or CI/CD pipelines.

For such automation, it is recommended to create a separate Teleport user for bots and request a certificate for them with a long time to live (TTL).
In this example we are creating a certificate with a TTL of 10 years for the jenkins user and storing it in the jenkins.pem file, which can later be used with the `-i` (identity) flag for `tsh`:

```bash
# To be executed on a Teleport auth server:
$ tctl auth sign --ttl=87600h --user=jenkins --out=jenkins.pem
```
jenkins.pem can be copied to the Jenkins server and passed to the `-i` (identity file) flag of `tsh`. Note that `tctl auth sign` is an admin's equivalent of `tsh login --out` and allows for unrestricted certificate TTL values.
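On the Jenkins server the identity file is then used with the `-i` flag as described above (the proxy and node names below are illustrative):

```bash
# Authenticate as the bot user with the long-lived identity file
# and run a command on a node:
$ tsh ssh --proxy=work.example.com -i jenkins.pem jenkins@node uptime
```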
## Exploring the Cluster

In a Teleport cluster, all nodes periodically ping the cluster's auth server and update their status. This allows Teleport users to see which nodes are online with the `tsh ls` command:

```bash
# This command lists all nodes in the cluster which you previously logged into via "tsh login":
$ tsh ls

Node Name     Node ID              Address            Labels
---------     -------              -------            ------
turing        11111111-dddd-4132   10.1.0.5:3022      os:linux
turing        22222222-cccc-8274   10.1.0.6:3022      os:linux
graviton      33333333-aaaa-1284   10.1.0.7:3022      os:osx
```
`tsh ls` can apply a filter based on the node labels:

```bash
# Only show nodes with the os label set to 'osx':
$ tsh ls os=osx

Node Name     Node ID              Address            Labels
---------     -------              -------            ------
graviton      33333333-aaaa-1284   10.1.0.7:3022      os:osx
```
To launch an interactive shell on a remote node or to execute a command, use `tsh ssh`. `tsh` tries to mimic the `ssh` experience as much as possible, so it supports the most popular `ssh` flags such as `-L`. For example, if you have the alias `alias ssh="tsh ssh"` defined in your ~/.bashrc, you can continue using familiar SSH syntax:

```bash
# Have this alias configured, perhaps via ~/.bashrc:
$ alias ssh=/usr/local/bin/tsh

# Login into a cluster and retrieve your SSH certificate:
$ tsh --proxy=proxy.example.com login

# These commands execute `tsh ssh` under the hood:
$ ssh root@node
$ ssh -p 6122 root@node ls
$ ssh -o ForwardAgent=yes root@node
$ ssh -o AddKeysToAgent=yes root@node
```
A Teleport proxy uses two ports: `3080` for HTTPS and `3023` for proxying SSH connections. The HTTPS port is used to serve the Web UI and also to implement 2nd-factor auth for the `tsh` client.

If a Teleport proxy is configured to listen on non-default ports, they must be specified via the `--proxy` flag as shown:

```bash
tsh --proxy=proxy.example.com:5000,5001 <subcommand>
```
This means use port `5000` for HTTPS and `5001` for SSH.
`tsh ssh` supports the OpenSSH `-L` flag, which allows forwarding incoming connections from localhost to the specified remote host:port. The syntax is `-L [bind_ip]:listen_port:remote_host:remote_port`, where "bind_ip" defaults to `127.0.0.1`:

```bash
$ tsh ssh -L 5000:web.remote:80 node
```
This will connect to the remote server `node` via `proxy.example.com`, then it will open a listening socket on `localhost:5000` and will forward all incoming connections to `web.remote:80` via this SSH tunnel.
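While the session above is open, local clients can reach the remote service through the tunnel:

```bash
# Requests to localhost:5000 are forwarded to web.remote:80:
$ curl http://localhost:5000
```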
It is often convenient to establish port forwarding, execute a local command which uses the connection, and then disconnect. You can do this with the `--local` flag:

```bash
$ tsh ssh -L 5000:google.com:80 --local node curl http://localhost:5000
```

This command:

- Connects to `node`.
- Binds the local port 5000 to port 80 on google.com.
- Executes the `curl` command locally, which results in `curl` hitting google.com:80 via `node`.
While implementing ProxyJump for Teleport, we have extended the feature to `tsh`:

```bash
$ tsh ssh -J proxy.example.com telenode
```

Note the following differences from OpenSSH:

- Only one jump host is supported (`-J` supports chaining, which Teleport does not utilise), and `tsh` will return an error if two jump hosts are specified.
- When `tsh ssh -J proxy.example.com` is used, it overrides the SSH proxy defined in the tsh profile, and port forwarding is used instead of the existing Teleport proxy subsystem.
## Resolving Node Names

`tsh` supports multiple methods to resolve remote node names:

- Traditional: by IP address or via DNS.
- Nodename setting: the teleport daemon supports the `nodename` flag, which allows Teleport administrators to assign alternative node names.
- Labels: you can address a node by a `label=value` pair.

If we have two nodes, one with the `os:linux` label and one with `os:osx`, we can log into the OSX node with:

```bash
$ tsh ssh os=osx
```
This only works if there is only one remote node with the `os:osx` label, but you can still execute commands via SSH on multiple nodes using labels as a selector. This command will update all system packages on machines that run Ubuntu:

```bash
$ tsh ssh os=ubuntu apt-get update -y
```
The default TTL of a Teleport user certificate is 12 hours. This can be modified at login with the `--ttl` flag. This command logs you into the cluster with a very short-lived (1 minute) temporary certificate:

```bash
$ tsh --ttl=1 login
```

You will be logged out after one minute, but if you want to log out immediately, you can always do:

```bash
$ tsh logout
```
To securely copy files to and from cluster nodes, use the `tsh scp` command. It is designed to mimic traditional `scp` as much as possible:

```bash
$ tsh scp example.txt root@node:/path/to/dest
```

Again, you may want to create a bash alias like `alias scp="tsh --proxy=work scp"` and use the familiar syntax:

```bash
$ scp -P 61122 -r files root@node:/path/to/dest
```
Suppose you are trying to troubleshoot a problem on a remote server. Sometimes it makes sense to ask another team member for help. Traditionally, this could be done by letting them know which node you're on, having them SSH in, start a terminal multiplexer like `screen`, and join a session there.

Teleport makes this a bit more convenient. Let's log into a server named "luna" and ask Teleport for our current session status:

```bash
$ tsh ssh luna
>luna $ teleport status

User ID    : joe, logged in as joe from 10.0.10.1 43026 3022
Session ID : 7645d523-60cb-436d-b732-99c5df14b7c4
Session URL: https://work:3080/web/sessions/7645d523-60cb-436d-b732-99c5df14b7c4
```

Now you can invite another user account in the "work" cluster. You can share the URL for access through a web browser, or you can share the session ID and she can join you through her terminal by typing:

```bash
$ tsh join 7645d523-60cb-436d-b732-99c5df14b7c4
```
## Connecting to SSH Clusters behind Firewalls

Teleport supports creating clusters of servers located behind firewalls without any open listening TCP ports. This works by creating reverse SSH tunnels from behind-firewall environments into a Teleport proxy you have access to. This feature is called "Trusted Clusters".

This chapter explains how a user may connect to a trusted cluster. Refer to the admin manual to learn how a trusted cluster can be configured.
Assuming the "work" Teleport proxy server is configured with a few trusted clusters, a user may use the `tsh clusters` command to see a list of them:

```bash
$ tsh --proxy=work clusters

Cluster Name   Status
------------   ------
staging        online
production     offline
```
Now you can use the `--cluster` flag with any `tsh` command. For example, to list SSH nodes that are members of the "production" cluster, simply do:

```bash
$ tsh --proxy=work ls --cluster=production

Node Name     Node ID       Address            Labels
---------     -------       -------            ------
db-1          xxxxxxxxx     10.0.20.31:3022    kernel:4.4
db-2          xxxxxxxxx     10.0.20.41:3022    kernel:4.2
```
Similarly, if you want to SSH into `db-1` inside the "production" cluster:

```bash
$ tsh --proxy=work ssh --cluster=production db-1
```
This is possible even if nodes of the "production" cluster are located behind a firewall without open ports. This works because the "production" cluster establishes a reverse SSH tunnel back into the "work" proxy, and this tunnel is used to establish inbound SSH connections.
## Web UI

The Teleport proxy serves the web UI on `https://proxyhost:3080`. The UI allows you to see the list of online nodes in a cluster, open a web-based terminal to them, and see recorded sessions and replay them. You can also join other users in active sessions.
## Using OpenSSH Client

There are a few differences between Teleport's `tsh` and OpenSSH's `ssh`, but most of them can be mitigated:

- `tsh` always requires the `--proxy` flag because `tsh` needs to know which cluster you are connecting to. But if you execute `tsh --proxy=xxx login`, the current proxy will be saved in your `~/.tsh` profile and won't be needed for other `tsh` commands.
- `tsh ssh` operates with two usernames: one for the cluster and another for the node you are trying to log into. See the User Identities section above. For convenience, `tsh ssh` uses `$USER` for both by default. But again, if you use `tsh login` first, your Teleport username will be stored in `~/.tsh`.
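For example, the proxy only needs to be specified once (the proxy name below is illustrative):

```bash
# One-time login; the proxy and username are saved in the ~/.tsh profile:
$ tsh --proxy=work.example.com --user=joe login

# Subsequent commands need neither --proxy nor --user:
$ tsh ls
$ tsh ssh node
```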
To avoid typing `tsh ssh user@host` when logging into servers, you can create a symlink `ssh -> tsh` and execute the symlink. It will behave exactly like a standard `ssh` command, i.e. `ssh user@host`. This is helpful with other tools that expect `ssh` to just work.
Teleport is built using standard SSH constructs: keys, certificates and protocols. This means that a Teleport system is 100% compatible with both OpenSSH clients and servers.
For an OpenSSH client (`ssh`) to work with a Teleport proxy, two conditions must be met:

- `ssh` must be configured to connect through a Teleport proxy.
- `ssh` needs to be given the SSH certificate issued by the `tsh login` command.
## SSH Proxy Configuration

To configure `ssh` to use a Teleport proxy on `proxy.example.com`, a user must update the `~/.ssh/config` file. A few examples are shown below:

```
# When "ssh db" is executed, OpenSSH will connect to proxy.example.com on port 3023
# and will request a proxied connection to "db" on port 3022 (default Teleport SSH port)
Host db
    Port 3022
    ProxyJump proxy.example.com:3023

# When connecting to a node behind a trusted cluster named "remote-cluster",
# the name of the trusted cluster must be appended to the proxy subsystem
# after '@':
Host *.remote-cluster.example.com
    Port 3022
    ProxyCommand ssh -p 3023 %r@proxy.example.com -s proxy:%h:%p@remote-cluster
```
The configuration above is all you need to run `ssh db`, provided there is an SSH agent running on the client computer. You can verify this by executing `tsh login`: if the SSH agent is running, the cluster certificates will be printed to stdout.

If there is no ssh-agent available, the certificate must be passed to the OpenSSH client explicitly.
When the proxy is in "recording mode", SSH agent forwarding is required. This configuration enables agent forwarding:

```
Host teleport.proxy
    ForwardAgent yes
```
## Passing Teleport SSH Certificate to OpenSSH Client

If a user does not want to use an SSH agent or if the agent is not available, the certificate must be passed to the OpenSSH client via the `IdentityFile` option (see `ssh_config`). Consider this example: the Teleport user "joe" wants to login into the proxy named "lab.example.com" using the `tsh login` command:

```bash
$ tsh --proxy=lab.example.com login --user=joe
```
His identity is now stored in `~/.tsh/keys/lab.example.com`, so his `~/.ssh/config` needs to look like this:

```
# ~/.ssh/config file:
Host *.lab.example.com
    Port 3022
    IdentityFile ~/.tsh/keys/lab.example.com/joe
    ProxyCommand ssh -i ~/.tsh/keys/lab.example.com/joe -p 3023 %r@lab.example.com -s proxy:%h:%p
```
Now he can SSH into any machine behind `lab.example.com` using the OpenSSH client:

```bash
$ ssh jenkins.lab.example.com
```
## Troubleshooting

If you encounter strange behaviour, you may want to try to solve it by enabling verbose logging with the `-d` flag when launching `tsh`. You may also want to reset `tsh` to a clean state by deleting temporary keys and other data from `~/.tsh`.
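For example (the node name below is illustrative):

```bash
# Run any tsh command with verbose logging enabled:
$ tsh -d ssh user@node

# Reset tsh to a clean state by removing cached keys and profiles:
$ rm -rf ~/.tsh
```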