The Gravity Hub is a multi-cluster control plane available in the Enterprise version of Gravity. It serves two purposes:

  1. Gravity Hub acts as a central repository for Cluster Images, allowing an organization to share pre-built clusters.
  2. Gravity Hub reduces the operational overhead of managing multiple Kubernetes clusters created from Cluster Images.

Users of Gravity Hub can:

  * Publish Cluster Images and manage their versions.
  * Download Cluster Images and create production-ready clusters from them.
  * Remotely manage the resulting Kubernetes clusters via the CLI or the web interface.

This chapter will guide you through the process of downloading and installing your own instance of Gravity Hub.


Installing Gravity Hub

In this section we'll cover how to install your own instance of Gravity Hub on your own infrastructure. The end result will be an autonomous Kubernetes cluster with Gravity Hub running inside.

Gravity Hub itself is packaged and distributed as a Cluster Image, but you also need the Enterprise version of the tele CLI tool. Please contact us to receive a trial license key.

As with any Gravity Cluster Image, you will also need a Linux server to install Gravity Hub. Assuming you have an enterprise version of tele CLI tool, pull the Cluster Image:

$ tele pull hub:6.0.1
* [1/3] Requesting Cluster Image from
* [2/3] Downloading hub:6.0.1
    Still downloading hub:6.0.1 (10 seconds elapsed)
    Still downloading hub:6.0.1 (20 seconds elapsed)
    Still downloading hub:6.0.1 (30 seconds elapsed)
    Still downloading hub:6.0.1 (40 seconds elapsed)
    Still downloading hub:6.0.1 (50 seconds elapsed)
    Still downloading hub:6.0.1 (1 minute elapsed)
* [3/3] Application hub:6.0.1 downloaded
* [3/3] Download completed in 1 minute

$ ls -lh
-rw-r--r-- 1 user user 1.3G Feb 20 13:02 hub-6.0.1.tar

The name of the image doesn't have to be hub:6.0.1; it will vary based on the version of Gravity you're using, so we'll refer to it simply as gravity-hub.tar below.

Installing Gravity Hub is no different from installing any other Cluster Image, as explained in the Installation chapter.

To establish trust between Gravity Hub and future Kubernetes clusters, a common shared hard-to-guess secret (token) must be generated first, before installing Gravity Hub. You may want to store it in an environment variable named TOKEN so it can be reused later:

# Generate a hard-to-guess token and store in an environment variable:
$ export TOKEN="$(uuidgen)"

# Next, expand the Cluster Image and launch the installer:
$ tar xvf ./gravity-hub.tar
$ ./gravity install --advertise-addr=<node-ip> \
                    --token=$TOKEN \
                    --flavor=standalone

After gravity install from the example above completes, you'll have a single-node Kubernetes cluster running with Gravity Hub inside.

Next, let's apply some minimal configuration on it.


Setting up DNS

After provisioning the Gravity Hub cluster, create DNS A-records pointing either at the provisioned cloud load balancer (if the cluster was created on a cloud account) or at the IP address of the host.

Wildcard DNS name

The Gravity Hub DNS records must include a wildcard entry; both the wildcard name and the base name should point to the public IP address of the Gravity Hub cluster.

Setting up OIDC

After installation, an OIDC provider must be configured in order to log into Gravity Hub. See the Gravity OIDC Connector documentation for details.
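As a rough sketch, an OIDC connector is configured via a resource of kind oidc. The example below assumes Google as the identity provider; the hub.example.com hostname, client credentials, and role mapping are all placeholders:

```yaml
# Hypothetical OIDC connector resource; all values below are placeholders.
kind: oidc
version: v2
metadata:
  name: google
spec:
  redirect_url: "https://hub.example.com/portalapi/v1/oidc/callback"
  client_id: <client-id>
  client_secret: <client-secret>
  issuer_url: https://accounts.google.com
  scope: [email]
  claims_to_roles:
    - {claim: "hd", value: "example.com", roles: ["@teleadmin"]}
```

Save it to a file such as oidc.yaml and create it with gravity resource create oidc.yaml.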

Setting up TLS Key Pair

After installation, a valid TLS key pair must also be configured in order to log into Gravity Hub. Self-signed certificates are currently not supported. See the Gravity Hub Certificates documentation for details.
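As a rough sketch, the key pair is configured via a resource of kind tlskeypair; the PEM bodies below are intentionally omitted and must be replaced with your real key and certificate (including any intermediate chain):

```yaml
# Hypothetical TLS key pair resource; paste the real PEM contents in place of "...".
kind: tlskeypair
version: v2
metadata:
  name: keypair
spec:
  private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
  cert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```

Save it to a file such as tls.yaml and create it with gravity resource create tls.yaml.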

Configuring endpoints

By default, Gravity Hub is configured with a single endpoint set via the --hub-advertise-addr flag during installation. This means that all Gravity Hub clients will use this address to connect to it.

But Gravity Hub can also be configured to advertise different addresses to users and remote Clusters via the endpoints resource. It has the following format:

kind: endpoints
version: v2
metadata:
  name: endpoints
spec:
  public_advertise_addr: "<public-host>:<public-port>"
  agents_advertise_addr: "<agents-host>:<agents-port>"

Create the resource to update Gravity Hub endpoints:

$ gravity resource create endpoints.yaml


Updating the endpoints resource will result in a restart of the gravity-site pods so the changes can take effect.

To view currently configured endpoints, run:

$ gravity resource get endpoints

Let's take a look at how Gravity Hub behavior changes with different endpoint configurations.

Single advertise address

This is the default configuration, when agents_advertise_addr is either not specified or equal to public_advertise_addr:

  public_advertise_addr: ""

With this configuration, Gravity Hub Cluster will provide a single Kubernetes service called gravity-public configured to serve both user and Cluster traffic:

$ kubectl get services -n kube-system -l app=gravity-hub
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                       AGE
gravity-public   LoadBalancer   <pending>     443:31033/TCP,3024:30561/TCP,3023:31043/TCP   40m

Setting up ingress

On cloud installations that support Kubernetes integration, such as AWS, a load balancer will be created automatically, so you will only need to configure DNS to point the advertised hostname to it. For on-prem installations, an ingress should be configured for the appropriate NodePort of the service (31033 in this example).
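For an on-prem setup, the ingress can be as simple as a TCP proxy in front of the NodePort. Below is a minimal sketch using nginx's stream module; the node IP 10.1.1.5 is an assumed value and 31033 is the NodePort from the example above:

```nginx
# Hypothetical nginx TCP passthrough to the gravity-public NodePort.
stream {
    upstream gravity_public {
        server 10.1.1.5:31033;  # <node-ip>:<nodeport>, adjust for your cluster
    }
    server {
        listen 443;
        proxy_pass gravity_public;
    }
}
```

TCP passthrough (rather than HTTP proxying) is used here so the TLS certificate served from inside the cluster reaches clients unchanged.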

Same hostname, different port

In this scenario both user and Cluster traffic should be accessible on the same hostname but on different ports:

  public_advertise_addr: ""
  agents_advertise_addr: ""

With this configuration, Gravity Hub will provide a single Kubernetes service called gravity-public with two different ports for user and Cluster traffic respectively:

$ kubectl get services -n kube-system -l app=gravity-hub
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                      AGE
gravity-public   LoadBalancer   <pending>     443:31265/TCP,4443:30080/TCP,3024:32109/TCP,3023:30716/TCP   54m

Different hostnames

In this scenario user and Cluster traffic have different advertise hostnames:

  public_advertise_addr: ""
  agents_advertise_addr: ""

The ports may be the same or different which does not affect the general behavior, only the respective service configuration.
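For illustration, a hypothetical endpoints resource for this scenario might look like the following (both hostnames and ports are placeholders):

```yaml
# Hypothetical endpoint configuration; hostnames and ports are examples only.
kind: endpoints
version: v2
metadata:
  name: endpoints
spec:
  public_advertise_addr: "hub.example.com:443"
  agents_advertise_addr: "hub-agents.example.com:4443"
```

As before, apply it with gravity resource create endpoints.yaml.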

With this configuration, an additional Kubernetes service called gravity-agents is created to handle the Cluster traffic:

$ kubectl get services -n kube-system -l app=gravity-hub
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
gravity-public   LoadBalancer    <pending>     443:31792/TCP,3023:32083/TCP    59m
gravity-agents   LoadBalancer   <pending>     4443:30873/TCP,3024:30185/TCP   8s


Upgrading Gravity Hub

This section assumes that you have downloaded a newer version of the Gravity Hub Cluster Image, called new-hub.tar below. Log into a root terminal on one of the servers running Gravity Hub and extract the tarball there:

$ tar xvf new-hub.tar

Start the upgrade procedure using the upgrade script:

$ ./upgrade

Read more about the upgrade procedure here.


Users who use an external load balancer may need to update their configuration after the upgrade to reference new port assignments.

Accessing Gravity Hub

You can log into Gravity Hub with the tsh login command:

$ tsh login

Based on the Gravity Hub configuration, the login command will open a web browser, and users will have to complete a single sign-on (SSO) flow with the identity provider of their choice.


Publishing Cluster Images

Once a Cluster Image is built by tele build, it can be deployed and installed by publishing it into Gravity Hub. After logging into Gravity Hub, use the commands below to manage the publishing process.

# Use tele push to upload a Cluster Image to the Gravity Hub:
$ tele push [options] tarball.tar

  --force, -f  Forces overwriting of an already-published application if it exists.

tele pull will download a Cluster Image from the Gravity Hub:

$ tele [options] pull [application]

  -o   Name of the output tarball.

tele rm app deletes a Cluster Image from the Gravity Hub.

$ tele rm app [options] [application]

  --force  Do not return an error if the application cannot be found or removed.

tele ls lists the Cluster Images currently published in the Gravity Hub:

$ tele [options] ls

  --all   Shows all available versions of images instead of only the latest versions.

Remote Cluster Management

Gravity uses Teleport to connect to remote Clusters. Teleport is an open source privileged access management solution for both SSH and Kubernetes, and it comes bundled with Gravity.


To see the list of Gravity Clusters available:

$ tsh clusters
Name                          Status     Cloud Provider     Region
----                          ------     --------------     ------
east                          active     aws                us-east
west                          active     aws                us-west-2

Now you can make one of these Clusters "current":

$ tsh login west

This command will automatically update your local kubeconfig file with Kubernetes credentials, and the kubectl command will automatically connect to the Cluster you've selected.

To see which Cluster is current, execute the tsh status command.

Gravity Hub administrators can limit access to Clusters using where expressions in roles and user traits fetched from identity providers.

Cluster RBAC Using Labels

Sometimes it is necessary to limit users' access to a subset of Clusters via Gravity Hub. For this, use Gravity Hub roles with where expressions in their rules:

kind: role
version: v3
metadata:
  name: developers
spec:
  allow:
    logins:
    - developers
    namespaces:
    - default
    kubernetes_groups:
    - admin
    rules:
    - resources:
      - role
      verbs:
      - read
    - resources:
      - app
      verbs:
      - list
    - resources:
      - cluster
      verbs:
      - connect
      - read
      where: contains(user.spec.traits["roles"], resource.metadata.labels["team"])

The developers role uses the special property user.spec.traits, which contains the user's OIDC claims or SAML attribute statements after they have successfully logged into Gravity Hub.

The property resource.metadata.labels["team"] refers to the cluster label team. Cluster labels can be set when creating Clusters via the UI or CLI.

Finally, the where expression contains(user.spec.traits["roles"], resource.metadata.labels["team"]) grants members with the developers OIDC claim or SAML attribute statement admin Kubernetes access to Clusters marked with the label team:developers.

Cluster RBAC With Deny Rules

Users can use deny rules to limit access to some privileged Clusters:

kind: role
version: v3
metadata:
  name: deny-production
spec:
  allow:
    namespaces:
    - default
    rules:
    - resources:
      - role
      verbs:
      - read
    - resources:
      - app
      verbs:
      - list
  deny:
    rules:
    - resources:
      - cluster
      verbs:
      - connect
      - read
      - list
      where: equals(resource.metadata.labels["env"], "production")

The deny-production role, when assigned to a user, will deny access to all Clusters with the label env:production.

SSH Into Nodes

Users can use the tsh ssh command to SSH into any node inside any remote Cluster. For example:

$ tsh --cluster=east ssh [email protected]

You can also copy files using secure file copy AKA scp:

$ tsh --cluster=east scp example.txt [email protected]:/path/to/dest/

tsh ssh supports all the usual flags that ssh users are accustomed to: you can forward ports, execute commands, and so on. Run tsh help for more information.

Gravity Enterprise

Gravity Enterprise enhances Gravity Community, the open-source Kubernetes packaging solution, to meet security and compliance requirements. It is trusted by some of the largest enterprises in software, finance, healthcare, security, telecom, government, and other industries.


Gravity Community

Gravity Community is an upstream Kubernetes packaging solution that takes the drama out of on-premise deployments. Gravity Community is open-source software that anyone can download and install for free.
