Set up Automatic Agent Updates

On cloud-hosted Teleport Enterprise accounts, users must set up automatic agent updates to ensure that the version of Teleport running on agents remains compatible with the version running on the Auth Service and Proxy Service. If an agent does not maintain version compatibility with your Teleport cluster, connections to those agents will become degraded or lost.

Cloud-hosted Teleport clusters are updated on a weekly basis. Major version updates are performed every 4 months. You can monitor and subscribe to the Teleport Status page to be notified of scheduled updates.

Teleport supports automatic agent updates for systemd-based Linux distributions using apt, yum, and zypper package managers, as well as Kubernetes clusters.

This guide explains how to enable automatic updates for Teleport agents on Teleport Enterprise clusters, including both self-hosted and cloud-hosted clusters.

How it works

When automatic updates are enabled, a Teleport updater is installed alongside each Teleport agent. The updater communicates with the Teleport Proxy Service to determine when an update is available. When an update is available, the updater will update the Teleport agent during the next maintenance window. However, if a critical update is available, the Teleport agent will be updated outside the regular maintenance window.
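
For example, you can see the version an updater will target by querying your cluster's version endpoint; a minimal check, assuming your Proxy Service address is teleport.example.com:443 and your channel is default (cloud-hosted accounts use the stable/cloud channel, as described later in this guide):

curl "https://teleport.example.com:443/v1/webapi/automaticupgrades/channel/default/version"

The response is the Teleport version served by that channel, which agents enrolled in automatic updates install during their next maintenance window.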

Prerequisites

  • A Teleport Enterprise cluster. If you do not have one, sign up for a free trial or consult the Update Reference to read about manual updates.
  • Familiarity with the Upgrading Compatibility Overview guide, which describes the sequence in which to upgrade components in your cluster.
  • Teleport agents that are not yet enrolled in automatic updates.
  • The tctl and tsh client tools, version >= 17.0.0-dev. See Installation for how to install these tools.
  • To check that you can connect to your Teleport cluster, sign in with tsh login, then verify that you can run tctl commands using your current credentials. tctl is supported on macOS and Linux machines. For example:
    tsh login --proxy=teleport.example.com --user=email@example.com
    tctl status

    Cluster  teleport.example.com
    Version  17.0.0-dev
    CA pin   sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

    If you can connect to the cluster and run the tctl status command, you can use your current credentials to run subsequent tctl commands from your workstation. If you host your own Teleport cluster, you can also run tctl commands on the computer that hosts the Teleport Auth Service for full permissions.

Step 1/4. Enable automatic agent upgrades

If you are running a cloud-hosted Teleport Enterprise cluster, skip to Step 2.

Before enabling automatic upgrades in your self-hosted Teleport cluster, you must enable a version server; this section shows you how. Automatic upgrades in self-hosted Teleport clusters require at least Teleport v14.3.7 or v15.1.3.

Configure a maintenance schedule

To enable automatic upgrades in your cluster, you must create a cluster maintenance configuration. This configures a maintenance schedule for the Teleport cluster that agents use to determine when to check for upgrades.

  1. Create a Teleport role that can manage cluster maintenance configurations through the cluster_maintenance_config dynamic resource. No preset Teleport roles provide this ability, so you will need to create one.

    Create a file called cmc-editor.yaml with the following content:

    kind: role
    version: v7
    metadata:
      name: cmc-editor
    spec:
      allow:
        rules:
        - resources: ['cluster_maintenance_config']
          verbs: ['create', 'read', 'update', 'delete']
    

    Create the role resource:

    tctl create cmc-editor.yaml
  2. Add the role to your Teleport user:

Assign the cmc-editor role to your Teleport user by running the appropriate commands for your authentication provider:

If you use local Teleport users:

  1. Retrieve your local user's roles as a comma-separated list:

    ROLES=$(tsh status -f json | jq -r '.active.roles | join(",")')
  2. Edit your local user to add the new role:

    tctl users update $(tsh status -f json | jq -r '.active.username') \
      --set-roles "${ROLES?},cmc-editor"
  3. Sign out of the Teleport cluster and sign in again to assume the new role.

If you use GitHub authentication:

  1. Retrieve your GitHub authentication connector:

    tctl get github/github --with-secrets > github.yaml

    Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the github.yaml file. Because this key contains a sensitive value, you should remove the github.yaml file immediately after updating the resource.

  2. Edit github.yaml, adding cmc-editor to the teams_to_roles section.

    The team you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the team must include your user account and should be the smallest team possible within your organization.

    Here is an example:

      teams_to_roles:
        - organization: octocats
          team: admins
          roles:
            - access
    +       - cmc-editor
    
  3. Apply your changes:

    tctl create -f github.yaml
  4. Sign out of the Teleport cluster and sign in again to assume the new role.

If you use SAML authentication:

  1. Retrieve your SAML configuration resource:

    tctl get --with-secrets saml/mysaml > saml.yaml

    Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the saml.yaml file. Because this key contains a sensitive value, you should remove the saml.yaml file immediately after updating the resource.

  2. Edit saml.yaml, adding cmc-editor to the attributes_to_roles section.

    The attribute you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

    Here is an example:

      attributes_to_roles:
        - name: "groups"
          value: "my-group"
          roles:
            - access
    +       - cmc-editor
    
  3. Apply your changes:

    tctl create -f saml.yaml
  4. Sign out of the Teleport cluster and sign in again to assume the new role.

If you use OIDC authentication:

  1. Retrieve your OIDC configuration resource:

    tctl get oidc/myoidc --with-secrets > oidc.yaml

    Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the oidc.yaml file. Because this key contains a sensitive value, you should remove the oidc.yaml file immediately after updating the resource.

  2. Edit oidc.yaml, adding cmc-editor to the claims_to_roles section.

    The claim you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

    Here is an example:

      claims_to_roles:
        - name: "groups"
          value: "my-group"
          roles:
            - access
    +       - cmc-editor
    
  3. Apply your changes:

    tctl create -f oidc.yaml
  4. Sign out of the Teleport cluster and sign in again to assume the new role.

  1. Create a cluster maintenance config in a file called cmc.yaml. The following example allows maintenance on Monday, Wednesday and Friday between 02:00 and 03:00 UTC:

    kind: cluster_maintenance_config
    spec:
      agent_upgrades:
        # Maintenance window start hour in UTC.
        # The maintenance window lasts 1 hour.
        utc_start_hour: 2
        # Week days when maintenance is allowed
        # Possible values are:
        # - Short names: Sun, Mon, Tue, Wed, Thu, Fri, Sat
        # - Long names: Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday
        weekdays:
          - Mon
          - Wed
          - Fri
    
  2. Apply the manifest using tctl:

    tctl create cmc.yaml
    maintenance window has been updated
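
    To confirm the configuration, you can read back the resource you just created; a quick check, assuming your user holds the cmc-editor role created above:

    tctl get cluster_maintenance_config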

[Optional] Assign the version served by the version server

By default, the version server has a single default channel, serving the version of the Teleport Proxy Service. If you want to override the default version or add other channels you can use the automatic_upgrades_channels field in the Proxy Service configuration file:

proxy_service:
  enabled: "yes"
  automatic_upgrades_channels:
    # Override the default version channel reachable at
    # https://teleport.example.com:443/v1/webapi/automaticupgrades/channel/default/version
    default:
      static_version: v14.2.1
    # Define a new version channel with a static version reachable at
    # https://teleport.example.com:443/v1/webapi/automaticupgrades/channel/my-static-channel/version
    my-static-channel:
      static_version: v14.2.0
    # Define a new version channel forwarding requests to an upstream version server
    my-remote-channel:
      forward_url: https://updates.releases.teleport.dev/v1/stable/cloud

You must ensure that all Proxy Service instances share the same automatic_upgrades_channels configuration. If some Proxy Service instances are configured differently, you will experience agents flickering between versions as the version served is not consistent across instances.

If your Proxy Service public address is teleport.example.com:443, you can query the version server with the following command:

curl "https://teleport.example.com:443/v1/webapi/automaticupgrades/channel/default/version"
17.0.0-dev
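
If you defined additional channels, they are served under the same path with the channel name in place of default; for example, assuming the my-static-channel channel from the configuration above:

curl "https://teleport.example.com:443/v1/webapi/automaticupgrades/channel/my-static-channel/version"

The response should reflect the static_version you configured for that channel.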

Step 2/4. Find agents to enroll in automatic updates

Use the tctl inventory ls command to list connected agents along with their current version. Use the --upgrader=none flag to list agents that are not enrolled in automatic updates.

tctl inventory ls --upgrader=none
Server ID                            Hostname      Services Version Upgrader
------------------------------------ ------------- -------- ------- --------
00000000-0000-0000-0000-000000000000 ip-10-1-6-130 Node     v14.4.5 none
...

Note that the inventory ls command will also list teleport-auth and teleport-proxy services. If you use managed Teleport Enterprise, the Teleport team updates these services automatically.
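
To confirm which agents are already enrolled, you can filter on the upgrader type instead; a quick check, assuming the Linux updater reports itself as unit (and the Kubernetes updater as kube) in the Upgrader column:

tctl inventory ls --upgrader=unit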

Step 3/4. Enroll agents on Linux servers in automatic updates

  1. For each agent ID returned by the tctl inventory ls command, copy the ID and run the following commands to connect to the host via tsh:

    HOST=00000000-0000-0000-0000-000000000000
    USER=root
    tsh ssh "${USER?}@${HOST?}"
  2. Determine the Teleport version to install by querying the Teleport Proxy Service. This way, the Teleport installation has the same major version as the automatic updater.

    Replace example.teleport.sh with the domain name of the Teleport Proxy Service and stable/cloud with the name of your automatic update channel. For cloud-hosted Teleport Enterprise accounts, this is always stable/cloud:

    TELEPORT_VERSION="$(curl https://example.teleport.sh/v1/webapi/automaticupgrades/channel/stable/cloud/version | sed 's/v//')"
  3. Ensure that the Teleport repository is properly configured to use the stable/cloud channel, and install the teleport-ent-updater package. You must install teleport-ent-updater on each agent you would like to enroll into automatic updates.

    For cloud-hosted Teleport Enterprise accounts, run:

    curl https://goteleport.com/static/install.sh | bash -s ${TELEPORT_VERSION?} cloud

    For self-hosted Teleport Enterprise clusters, run:

    curl https://goteleport.com/static/install.sh | bash -s ${TELEPORT_VERSION?} enterprise

    The installation script detects the package manager on your Linux server and uses it to install Teleport binaries. To customize your installation, learn about the Teleport package repositories in the installation guide.

  4. Confirm that the version of the teleport binary is the one you expect:

    teleport version

If you changed the agent user to run as non-root, create /etc/teleport-upgrade.d/schedule and grant ownership to your Teleport user:

sudo mkdir -p /etc/teleport-upgrade.d/
sudo touch /etc/teleport-upgrade.d/schedule
sudo chown <your-teleport-user> /etc/teleport-upgrade.d/schedule
  1. Verify that the upgrader can see your version endpoint by checking for upgrades:

    sudo teleport-upgrade dry-run
  2. You should see one of the following messages, depending on the target version you are currently serving:

    no upgrades available (1.2.3 == 1.2.3)
    an upgrade is available (1.2.3 -> 2.3.4)
    

    teleport-upgrade may display warnings about not having a valid upgrade schedule. This is expected immediately after install as the maintenance schedule might not be exported yet.
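
    Once the agent has exported its maintenance schedule, you can see what the updater will use by inspecting the schedule file; a quick check, assuming the default path shown earlier in this guide:

    cat /etc/teleport-upgrade.d/schedule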

Step 4/4. Enroll Kubernetes agents in automatic updates

This section assumes that the name of your teleport-kube-agent release is teleport-agent, and that you have installed it in the teleport namespace.

  1. Confirm that you are using the Teleport Enterprise edition of the teleport-kube-agent chart. You should see the following when you query your teleport-kube-agent release:

    helm -n teleport get values teleport-agent -o json | jq '.enterprise'
    true

    If another value such as null is returned, update your existing agent values.yaml to use the Enterprise version:

    enterprise: true
    
  2. Add the following chart values to the values file for the teleport-kube-agent chart:

    updater:
      enabled: true
    
  3. Update the Teleport Helm repository to include any new versions of the teleport-kube-agent chart:

    helm repo update teleport
  4. Update the Helm chart release with the new values:

    Run the command whose --version matches your Teleport cluster version:

    helm -n teleport upgrade teleport-agent teleport/teleport-kube-agent \
      --values=values.yaml \
      --version=15.4.4

    helm -n teleport upgrade teleport-agent teleport/teleport-kube-agent \
      --values=values.yaml \
      --version=17.0.0-dev
  5. You can validate the updater is running properly by checking if its pod is ready:

    kubectl -n teleport get pods
    NAME                                          READY STATUS  RESTARTS AGE
    <your-agent-release>-0                        1/1   Running 0        14m
    <your-agent-release>-1                        1/1   Running 0        14m
    <your-agent-release>-2                        1/1   Running 0        14m
    <your-agent-release>-updater-d9f97f5dd-v57g9  1/1   Running 0        16m
  6. Check for any deployment issues by checking the updater logs:

    kubectl -n teleport logs deployment/teleport-agent-updater
    2023-04-28T13:13:30Z INFO StatefulSet is already up-to-date, not updating. {"controller": "statefulset", "controllerGroup": "apps", "controllerKind": "StatefulSet", "StatefulSet": {"name":"my-agent","namespace":"agent"}, "namespace": "agent", "name": "my-agent", "reconcileID": "10419f20-a4c9-45d4-a16f-406866b7fc05", "namespacedname": "agent/my-agent", "kind": "StatefulSet", "err": "no new version (current: \"v12.2.3\", next: \"v12.2.3\")"}

Troubleshooting

Teleport agents are not updated immediately when a new version of Teleport is released, and agent updates can lag behind the cluster by a few days.

If the Teleport agent has not been automatically updating for several weeks, you can consult the updater logs to help troubleshoot the problem:

journalctl -u teleport-upgrade
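
To focus on the period in question, you can combine this with standard journalctl time filters; for example, to review the last two weeks of updater activity:

journalctl -u teleport-upgrade --since "2 weeks ago"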

Troubleshooting automatic agent upgrades on Kubernetes

The updater is a controller that periodically reconciles expected Kubernetes resources with those in the cluster. The updater executes a reconciliation loop every 30 minutes or in response to a Kubernetes event. If you don't want to wait until the next reconciliation, you can trigger an event.

  1. Any update to the agent resource sends an event, so you can trigger the updater by annotating the agent StatefulSet:

    kubectl -n teleport annotate statefulset/teleport-agent 'debug.teleport.dev/trigger-event=1'
  2. To suspend automatic updates for an agent, annotate the agent deployment with teleport.dev/skipreconcile: "true", either by setting the annotations.deployment value in Helm, or by patching the deployment directly with kubectl.
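
    For example, a minimal sketch of setting this annotation on the agent StatefulSet used in this guide (remove the annotation to resume automatic updates):

    kubectl -n teleport annotate statefulset/teleport-agent 'teleport.dev/skipreconcile=true'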

Troubleshooting automatic agent upgrades on Linux

  1. If an agent is not automatically upgraded, you can invoke the upgrader manually and look at its logs:

    sudo teleport-upgrade run
  2. To suspend automatic updates, disable the systemd timer:

    sudo systemctl disable --now teleport-upgrade.timer
  3. To enable and start the systemd timer after suspending:

    sudo systemctl enable --now teleport-upgrade.timer