Welcome to Telekube

Welcome to Telekube, by Gravitational. Telekube allows developers to package and deploy their complex multi-node, multi-tier applications into a variety of target infrastructure options, such as their own AWS accounts, 3rd-party AWS accounts, or even air-gapped, bare metal server clusters.

Telekube uses Google's Kubernetes as a portable cluster runtime and allows ops teams to remotely access any application environment via SSH with a single command, even if it's running behind a firewall.

This overview will walk you through the basic concepts of Telekube.

Application Lifecycle

Telekube packages your application together with Kubernetes, forming a self-contained, deployable tarball which can be installed into a variety of infrastructure options.

Every installed instance of an application becomes a standalone server cluster managed by Kubernetes with your application running in it.

The typical application lifecycle using Telekube includes the following:

  1. Prepare your application to run on Kubernetes. If you do not have Kubernetes expertise, our Implementation Services team can help.
  2. Package the application into a deployable tarball.
  3. Publish the application for distribution.
  4. Deploy the application into your own server clusters or into 3rd party private infrastructure.
  5. Securely connect to any server cluster to monitor health and roll out updates of all instances of the application.


Telekube works with Kubernetes applications. This means the following prerequisites must be met in order to use Telekube:

  • The application is packaged into Docker containers.
  • You have Kubernetes resource definitions for application services, pods, etc. Kubernetes resources should be stored in the resources directory.

To prepare a Kubernetes application for distribution via Telekube, you have to write an application manifest. Below is a sample of a minimal application manifest in YAML format. As you can see, it follows Kubernetes configuration conventions:

apiVersion: bundle.gravitational.io/v2
kind: Bundle
metadata:
  name: telekube
  resourceVersion: "1.0.0"
logo: "http://example.com/logo.jpg"
installer:
  flavors:
    prompt: "Select a flavor"
    items:
      - name: "one"
        description: "1 node"
        nodes:
          - profile: node
            count: 1
nodeProfiles:
  - name: node
    description: "Telekube Node"
    requirements:
      cpu:
        min: 1
      ram:
        min: "2GB"

The sample above is intentionally simplistic to illustrate the concept. The manifest's job is to describe the infrastructure requirements and custom steps required for installing and updating the application.

The application manifest works in conjunction with Kubernetes jobs and configuration maps. Together, these tools provide a high degree of flexibility for specifying how applications are installed, updated and configured. You can learn more about the application manifest in the Packaging & Deployment section of the documentation.
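For example, a custom installation step can be expressed as a hook in the manifest that runs a Kubernetes job. The sketch below is illustrative only: the job name and image are hypothetical, and the exact hook schema is covered in the Packaging & Deployment section:

```yaml
hooks:
  install:
    job: |
      apiVersion: batch/v1
      kind: Job
      metadata:
        name: db-migrations          # hypothetical job name
      spec:
        template:
          spec:
            restartPolicy: OnFailure
            containers:
              - name: migrate
                image: example/db-migrations:1.0.0   # hypothetical image
```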


Save the manifest as app.yaml and place it into the same directory where the rest of Kubernetes YAML (or JSON) resources are stored.
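A typical layout might look like this (the resource file names are hypothetical):

```text
resources/
├── app.yaml           # the application manifest
├── deployment.yaml    # Kubernetes deployment definitions
└── service.yaml       # Kubernetes service definitions
```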

As you can see in this diagram, the build machine (often a developer's laptop or a CI/CD server) should contain everything needed to build a self-sufficient portable package to distribute any Kubernetes application:

Telekube Build Diagram

Use Telekube's build command to create an application tarball:

$ tele build -o app-name.tar app.yaml

This will produce a self-sufficient app-name.tar which you can deploy into your own AWS regions or distribute to your customers so they can deploy it onto their bare metal servers or their own AWS account.


Publishing can be as simple as uploading the generated app-name.tar file into an S3 bucket and posting its URL for anyone to download and install.

Another option is to publish the tarball into the Telekube Ops Center, a centralized repository of your applications and their deployed instances. If an application is distributed via the Ops Center, every installed instance of it can optionally "phone home", enabling online updates, remote monitoring and troubleshooting by the application publisher.

The Ops Center allows application developers to oversee how many instances of their application are running and to perform administration and maintenance across all of them in a repeatable, scalable way, even if they are deployed on 3rd-party infrastructure.
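Publishing to the Ops Center is done with the tele CLI. A minimal sketch, assuming a hypothetical Ops Center at opscenter.example.com (exact login flags may vary between Telekube versions):

```shell
# Authenticate against the Ops Center (hostname is a placeholder):
tele login -o opscenter.example.com

# Upload the application tarball built earlier with "tele build":
tele push app-name.tar
```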

Deployment Modes

Telekube supports two modes of application deployment:

  • Online mode: In this mode you publish the resulting tarball into the Telekube Ops Center. The Ops Center generates an Installer URL which can be shared with application users. They will use this URL to install the application into their own infrastructure. In this mode the application can be remotely managed and updated.

  • Offline mode: In this mode the tarball can be simply copied to the target infrastructure which can even be air-gapped and not connected to the Internet. The end users would then unpack it and launch the enclosed installer on their servers.

In both modes the Telekube installer can run as a command-line (CLI) tool or as a browser-based graphical Install Wizard.

Online Installer

The Online Mode of deployment allows you to generate a URL to a web-based installer for your application. The URL can be distributed to the end users for them to install the application.

The diagram below illustrates what the end state looks like. First, the application tarball is published into the Ops Center. Then the application is deployed into private infrastructure by using the installer URL. Once the application is up and running, an SSH tunnel is established for remote maintenance:

Telekube Online Installer

There are two types of online installer URLs:

  • Single-use installers, to be used by a specific customer.

  • Multi-use installers, which can be used many times. This would be suitable for publishing on a web site in order to let potential customers install an evaluation version of an application.


The end users can turn off the SSH tunnel and disconnect their application instances from the Ops center.

Offline Installer

Offline mode allows users to install complex multi-node stacks into air-gapped (offline) server clusters. The application tarball, app-name.tar from the example above, contains everything an application needs to be installed and launched.

To install an application in offline mode using the graphical wizard, you will need a Linux desktop (with a browser) connected to the same network as the target Linux servers.

Expanding the tarball will produce the following:

$ tar -xf app-name.tar
$ ls -l
-rwxr--r-- 1 user staff 679  Oct 24 12:01 install
-rw-r--r-- 1 user staff 1.1K Oct 24 12:01 README
-rwxr--r-- 1 user staff 170  Oct 24 12:01 packages
-rwxr--r-- 1 user staff 170  Oct 24 12:01 upload
-rwxr--r-- 1 user staff 170  Oct 24 12:01 upgrade

Launch the installer by typing ./install. This will print an HTTP URL pointing to an Install Wizard running on localhost.

The Install Wizard will guide the user to install the application onto their servers as long as they are on the same network as the machine the installer is launched on.

Telekube Offline Installer

Automatic Installer

Instead of running a graphical installer, you can deploy an application via the CLI, which is useful for integration with configuration management scripts or other types of infrastructure automation. This method is sometimes called "unattended installation".

For this to work, the information needed to complete the installation has to be supplied via command line flags passed to the installer.

Assume you have 3 nodes onto which you want to install an application:

  1. Copy the application tarball onto all nodes.
  2. Execute ./gravity install on the first node.
  3. Execute ./gravity join on the two other nodes.

On the first node:

$ sudo ./gravity install --advertise-addr=<node-IP> --token=XXX

This will initiate the process of setting up a new cluster for the application. The command accepts the following arguments:

Flag                 Description
--token              Secure token which prevents rogue nodes from joining the cluster during installation. Carefully pick a hard-to-guess value.
--advertise-addr     IP address this node should be visible as. This setting is needed to correctly configure Kubernetes on every machine.
--cluster            (Optional) Name of the cluster. Autogenerated if not set.
--cloud-provider     (Optional) Cloud provider integration, generic or aws. Autodetected if not set.
--flavor             (Optional) Application flavor. See the Application Manifest section for details.
--config             (Optional) File with Kubernetes resources to create in the cluster during installation.
--pod-network-cidr   (Optional) CIDR range Kubernetes will allocate node subnets and pod IPs from. Must be at least a /16 so Kubernetes can allocate a /24 to each node. Defaults to a preset range if not set.
--service-cidr       (Optional) CIDR range Kubernetes will allocate service IPs from. Defaults to a preset range if not set.
--wizard             (Optional) Start the installer in interactive mode.
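Putting the flags together, a fully unattended installation on the first node might look like the sketch below; the IP address, token and cluster name are placeholders, and the flavor matches the "one" flavor from the sample manifest earlier:

```shell
sudo ./gravity install \
    --advertise-addr=10.1.1.5 \
    --token=s3cr3t-t0ken \
    --cluster=example-cluster \
    --flavor=one
```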

Now, with gravity install running on the first node, run the following on the remaining nodes:

$ sudo ./gravity join --advertise-addr=<node-IP> --token=XXX

This tells the new node to join the cluster initiated by gravity install on the first node. Also make sure to correctly set --advertise-addr for every node, and make sure you're using the same value for --token.

The result of running these commands will be a fully functioning Kubernetes cluster with your application running inside!


This method works for provisioning "empty" Kubernetes clusters as well.

Remote Updates

If you are using Telekube on your own AWS environment, you can simply use standard Kubernetes tooling to deploy new builds of your application, for example via kubectl.
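For example, assuming a Deployment and container both named my-app (hypothetical names), a new build can be rolled out with standard kubectl commands:

```shell
# Point the deployment at the new image version:
kubectl set image deployment/my-app my-app=registry.example.com/my-app:2.0.0

# Watch the rolling update until it completes:
kubectl rollout status deployment/my-app
```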

If you are running multiple instances of your application on your customers' infrastructure, you could use the Ops Center to get remote access to the running instances. In this case you can either manually run updates via kubectl or use Telekube's built-in connection.

Finally, if you have customers who are running your application in "offline" mode and you do not have remote access to it, the Telekube updating mechanism is the only way to perform updates.

See more details in the Remote Management section.