Kubernetes: major version upgrade of Anthos GKE on-prem from 1.9 to 1.10

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premises datacenters. In this article, I will walk through the steps required to upgrade from Anthos 1.9 to 1.10 on VMware.

The instructions here assume you have used the Ansible scripts and Seed VM described in my previous Anthos 1.9 installation article.

Overview

The proper order for a major-version upgrade is:

  • Download the newer gkeadm tool
  • Upgrade the Admin Workstation
  • Install new full bundle for upgrade
  • Upgrade the User Clusters
  • Upgrade the Admin Cluster
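
Before starting, it is worth recording the versions currently in play. A minimal pre-flight sketch, run from the existing (1.9) Admin Workstation and assuming the kubeconfig location from the original install:

# current bundle and cluster versions before the upgrade
gkectl version --kubeconfig /home/ubuntu/kubeconfig --details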

Login to the Seed VM

The initial seed VM is the guest used to create the Admin Workstation.

cd anthos-nested-esx-manual
export project_path=$(realpath .)

# login to the initial VM used to create the Admin Workstation
ssh -i ./tf-kvm-seedvm/id_rsa ubuntu@192.168.140.220

Download the newer gkeadm tool (from the Seed VM)

Download the newer 1.10 gkeadm tool.

cd ~/seedvm
gsutil cp gs://gke-on-prem-release/gkeadm/1.10.0-gke.194/linux/gkeadm ./gkeadm1100

chmod +x gkeadm1100
./gkeadm1100 version
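
If you want to confirm which gkeadm releases are published before pinning a version, a listing of the public release bucket should work from the Seed VM (assuming gsutil is authenticated the same way as in the original install):

# optional: show the most recent gkeadm versions available in the release bucket
gsutil ls gs://gke-on-prem-release/gkeadm/ | sort -V | tail -5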

Upgrade the Admin Workstation (from the Seed VM)

Use the admin-ws-config.yaml that was used to initially set up the Admin Workstation, along with the generated Admin Workstation info file (which matches the name of the Admin Workstation).

$ ./gkeadm1100 upgrade admin-workstation --config admin-ws-config.yaml --info-file gke-admin-ws

Using config file "admin-ws-config.yaml"...
Running validations...
- Validation Category: Tools
    - [SUCCESS] gcloud
    - [SUCCESS] ssh
    - [SUCCESS] ssh-keygen
    - [SUCCESS] scp

- Validation Category: Config Check
    - [SUCCESS] Config

- Validation Category: Internet Access
    - [SUCCESS] Internet access to required domains

- Validation Category: GCP Access
    - [SUCCESS] Read access to GKE on-prem GCS bucket

- Validation Category: vCenter
    - [SUCCESS] Credentials
    - [SUCCESS] vCenter Version
    - [SUCCESS] ESXi Version
    - [SUCCESS] Datacenter
    - [SUCCESS] Datastore
    - [SUCCESS] Resource Pool
    - [SUCCESS] Folder
    - [SUCCESS] Network

All validation results were SUCCESS.
Upgrading admin workstation "gke-admin-ws" from version "1.9.2-gke.4" to version "1.10.0-gke.194"...
Generating local backup of admin workstation VM "gke-admin-ws"...  DONE
Reusing VM template "gke-on-prem-admin-appliance-vsphere-1.10.0-gke.194" that already exists in vSphere.
Do not cancel (double ctrl-c) while the admin workstation "gke-admin-ws" is being decommissioned. Doing so may result in an unrecoverable state.
Decommissioning original admin workstation VM "gke-admin-ws"...  DONE
Do not cancel (double ctrl-c) once the new admin workstation VM has been created. Doing so may result in an unrecoverable state.
Creating admin workstation VM "gke-admin-ws-1-10-0-gke-194-1641581626"...  
DONE
Waiting for admin workstation VM "gke-admin-ws-1-10-0-gke-194-1641581626" to be 
assigned an IP....  DONE

******************************************
Admin workstation VM successfully created:
- Name:    gke-admin-ws-1-10-0-gke-194-1641581626
- IP:      192.168.140.221
- SSH Key: /home/ubuntu/.ssh/gke-admin-workstation
******************************************
Deleting admin workstation VM "gke-admin-ws"...  DONE
Renaming new admin workstation "gke-admin-ws-1-10-0-gke-194-1641581626" to "gke-admin-ws"
Printing gkectl and docker versions on admin workstation...
gkectl version
gkectl 1.10.0-gke.194 (git-764a8477a)
Add --kubeconfig to get more version information.

docker version
Client:
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.8
 Git commit:        20.10.7-0ubuntu5~20.04.2~anthos1
 Built:             Fri Nov 12 18:37:13 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.8
  Git commit:       20.10.7-0ubuntu5~20.04.2~anthos1
  Built:            Fri Nov 12 16:42:06 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.5.8-0ubuntu0~20.04.1~anthos1
  GitCommit:        
 runc:
  Version:          1.0.0~rc95-0ubuntu1~20.04.1~anthos1
  GitCommit:        
 docker-init:
  Version:          0.19.0
  GitCommit:        


Checking NTP server on admin workstation...
timedatectl
               Local time: Fri 2022-01-07 19:05:41 UTC
           Universal time: Fri 2022-01-07 19:05:41 UTC
                 RTC time: Fri 2022-01-07 19:05:41    
                Time zone: Etc/UTC (UTC, +0000)       
System clock synchronized: yes                        
              NTP service: active                     
          RTC in local TZ: no                         

Getting component access service account...

Preparing "credential.yaml" for gkectl...

Copying files to admin workstation...
    - vcenter.ca.pem
    - anthos-allowlisted.json
    - /tmp/gke-on-prem-vcenter-credentials346635737/credential.yaml

Updating admin-cluster.yaml for gkectl...

********************************************************************
Admin workstation is ready to use.

WARNING: file already exists at "/home/ubuntu/seedvm/gke-admin-ws". Overwriting.
Admin workstation information saved to /home/ubuntu/seedvm/gke-admin-ws
This file is required for future upgrades
SSH into the admin workstation with the following command:
ssh -i /home/ubuntu/.ssh/gke-admin-workstation ubuntu@192.168.140.221
********************************************************************

This command backs up the files on your current Admin Workstation (kubeconfig, root certs, and JSON files), creates a newer Admin Workstation, and then copies those files back onto it.

As shown in the command output, you will temporarily see a VM with a new name in vCenter; it is renamed back to the original Admin WS name once the older VM is deleted.

The backing vmdk disk for the AdminWS (‘dataDiskName’ in admin-ws-config.yaml) is re-attached to this new VM.
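
If you have the govc CLI configured against your vCenter (GOVC_URL, GOVC_USERNAME, GOVC_PASSWORD), a quick sketch for verifying the renamed VM and its attached disks; the govc usage here is an assumption and not part of the original workflow:

# list any admin workstation VMs by name pattern
govc find / -type m -name 'gke-admin-ws*'

# show devices (including the re-attached data disk) on the new Admin WS
govc device.ls -vm gke-admin-ws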

Test the connection by checking the uptime of the new Admin WS.

# will be low uptime, since AdminWS was just created
ssh -i /home/ubuntu/.ssh/gke-admin-workstation ubuntu@192.168.140.221 "uptime"

# get AdminWS private key from seed, then exit back to host
cat /home/ubuntu/.ssh/gke-admin-workstation
exit

Paste the gke-admin-workstation private key we just displayed into a file back on the host, so we can log in to the new Admin WS.

cd $project_path/needed_on_adminws

# paste in AdminWS private key
vi gke-admin-workstation

# clear old fingerprint to AdminWS
ssh-keygen -f ~/.ssh/known_hosts -R 192.168.140.221

# login to new Admin WS
ssh -i $project_path/needed_on_adminws/gke-admin-workstation ubuntu@192.168.140.221

# reset the ssh server timeout (destroyed during rebuild)
./adminws_ssh_increase_timeout.sh

# back to host
exit
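
As an alternative to manually pasting the key, you could pull it straight off the Seed VM with scp; a sketch assuming the same paths used earlier:

# copy the Admin WS private key from the Seed VM back to the host
scp -i $project_path/tf-kvm-seedvm/id_rsa ubuntu@192.168.140.220:/home/ubuntu/.ssh/gke-admin-workstation $project_path/needed_on_adminws/gke-admin-workstation
chmod 600 $project_path/needed_on_adminws/gke-admin-workstation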

Install full bundle for upgrade (from the Admin WS)

To do a major upgrade, the newer full bundle needs to be downloaded and prepared.

# login to new Admin WS
ssh -i $project_path/needed_on_adminws/gke-admin-workstation ubuntu@192.168.140.221

# view current bundles already downloaded locally
$ ls /var/lib/gke/bundles
gke-onprem-vsphere-1.10.0-gke.194-full.tgz gke-onprem-vsphere-1.10.0-gke.194.tgz

# view bundles currently in use by admin and user clusters
# will show older versions
$ gkectl version --kubeconfig /home/ubuntu/kubeconfig --details

gkectl version: 1.10.0-gke.194 (git-764a8477a)
onprem user cluster controller version: 1.9.2-gke.4
current admin cluster version: 1.9.2-gke.4
current user cluster versions (VERSION: CLUSTER_NAMES):
- 1.9.2-gke.4: user1
available admin cluster versions:
- 1.9.2-gke.4
available user cluster versions:
- 1.9.2-gke.4
Info: The admin workstation and gkectl is NOT ready to upgrade to "1.11" yet, because there are "1.9" clusters.
Info: The admin cluster can't be upgraded to "1.10", because there are still "1.9" user clusters.

This shows that the Admin and User clusters are still running 1.9.2-gke.4. Now we need to prepare the 1.10.0-gke.194 full bundle.

# full bundle already found locally in /var/lib/gke/bundles
# assign appropriate permissions
$ sudo chmod ugo+r /var/lib/gke/bundles/*.tgz

$ gkectl prepare --bundle-path /var/lib/gke/bundles/gke-onprem-vsphere-1.10.0-gke.194-full.tgz --kubeconfig /home/ubuntu/kubeconfig

- Validation Category: Config Check
    - [SUCCESS] Config

- Validation Category: Internet Access
    - [SUCCESS] Internet access to required domains

- Validation Category: GCP
    - [SUCCESS] GCP service
    - [SUCCESS] GCP service account

- Validation Category: Container Registry
    - [SUCCESS] Docker registry access

- Validation Category: VCenter
    - [SUCCESS] Credentials
    - [SUCCESS] vCenter Version
    - [SUCCESS] ESXi Version
    - [SUCCESS] Datacenter
    - [SUCCESS] Datastore
    - [SUCCESS] Resource pool
    - [SUCCESS] Folder
    - [SUCCESS] Network

All validation results were SUCCESS.
Logging in to gcr.io/gke-on-prem-release
Finished preparing the container images.
Reusing VM template "gke-on-prem-ubuntu-1.10.0-gke.194" that already exists in vSphere.
Reusing VM template "gke-on-prem-cos-1.10.0-gke.194" that already exists in vSphere.
Finished preparing the OS images.
    Applying Bundle CRD YAML...  DONE
    Applying Bundle CRs...  DONE
Applied bundle in the admin cluster.
Successfully prepared the environment.

We are using the full bundle, which contains the large binary images needed for a major upgrade. For minor (patch) upgrades, the image binaries do not change, so the regular bundle is all that is needed (drop the “-full” suffix).

You can start a major upgrade with the regular bundle, but gkectl will then have to download the full bundle itself as part of its upgrade processing in later steps. So, for major upgrades you may as well prepare the full bundle now.
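
For reference, if the full bundle were not already sitting in /var/lib/gke/bundles (gkeadm copies it there during the workstation upgrade), it can be pulled from the release bucket; a sketch, double-check the exact object path against the Anthos 1.10 downloads page:

# only needed if the full bundle is missing locally
gsutil cp gs://gke-on-prem-release/gke-onprem-bundle/1.10.0-gke.194/gke-onprem-vsphere-1.10.0-gke.194-full.tgz ~/
sudo mv ~/gke-onprem-vsphere-1.10.0-gke.194-full.tgz /var/lib/gke/bundles/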

Upgrade the User Clusters (from the Admin WS)

Before upgrading, the cluster must be registered under Anthos > Clusters in the Cloud Console (https://console.cloud.google.com). There also needs to be at least one free IP address in user-block.yaml to accommodate the serial creation of a new worker node.
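
A quick way to verify the registration from the Admin WS, assuming the cluster was registered as a hub membership in the same GCP project used for the install:

# the user cluster's membership should be listed here
gcloud container hub memberships list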

Run the gkectl command as shown below using the Admin Cluster kubeconfig and the config used to originally set up the User Cluster.

$ gkectl upgrade cluster --kubeconfig /home/ubuntu/kubeconfig --config user-cluster.yaml -v 3

Reading config with version "v1"
- Validation Category: Config Check
    - [SUCCESS] Config

- Validation Category: Ingress
    Running validation check for "User cluster Ingress"... |
W1205 20:02:50.435067    4961 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.    - [SUCCESS] User cluster Ingress

- Validation Category: OS Images
    - [SUCCESS] User OS images exist

- Validation Category: Cluster Health
    Running validation check for "Admin cluster health"... |
    - [SUCCESS] Admin cluster health
    - [SUCCESS] Admin PDB
    - [SUCCESS] User cluster health
    - [SUCCESS] User PDB

- Validation Category: Reserved IPs
    - [SUCCESS] Admin cluster reserved IP for upgrading cluster
    - [SUCCESS] User cluster reserved IP for upgrading a user cluster

- Validation Category: GCP
    - [SUCCESS] GCP service
    - [SUCCESS] GCP service account

- Validation Category: Container Registry
    - [SUCCESS] Docker registry access

- Validation Category: VCenter
    - [SUCCESS] Credentials
    - [SUCCESS] VSphere CSI Driver

All validation results were SUCCESS.
Upgrading to bundle version: "1.10.0-gke.194"

Start deleting konnectivity-server deployment in user1 workspace.
Done deleting konnectivity-server deployment in user1 workspace.
Start deleting konnectivity-server service in user1 workspace.
Done deleting konnectivity-server service in user1 workspace.
Updating onprem cluster controller component status in admin package deployment... 
DONE
Upgrading onprem user cluster controller... /

W1205 20:04:39.002722    4961 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
...

Upgrading onprem user cluster controller...  DONE
Reading config with version "v1"

Seesaw Upgrade Summary:
OS Image updated (old -> new): "gke-on-prem-ubuntu-1.9.2-gke.4" -> "gke-on-prem-ubuntu-1.10.0-gke.194"

Upgrading loadbalancer "seesaw-for-user1"
Deleting LB VM:  seesaw-for-user1-vlr4tbccfc-1...  DONE
Creating new LB VMs...  DONE
Saved upgraded Seesaw group information of "seesaw-for-user1" to file: seesaw-for-user1.yaml
Waiting LBs to become ready...  DONE
Updating create-config secret...  DONE
Loadbalancer "seesaw-for-user1" is successfully upgraded.

Skipping admin cluster backup since clusterBackup section is not set in admin cluster seed config
Waiting for user cluster "user1" to be ready... \
Waiting for user cluster "user1" to be ready...  DONE
    Creating or updating user cluster control plane workloads: deploying 
user-kube-apiserver-base, user-control-plane-base, 
user-control-plane-clusterapi-vsphere, user-control-plane-etcddefrag: 0/1 
statefulsets are ready...
    Creating or updating user cluster control plane workloads: deploying 
user-control-plane-base, user-control-plane-clusterapi-vsphere, 
user-control-plane-etcddefrag...
    Creating or updating user cluster control plane workloads...
    Creating or updating user cluster control plane workloads: 11/15 pods are ready...
    Creating or updating user cluster control plane workloads: 12/15 pods are ready...
    Creating or updating user cluster control plane workloads: 13/15 pods are ready...
    Creating or updating user cluster control plane workloads: 14/15 pods are ready...
    Creating or updating node pools: pool-1: 1/3 replicas are updated...
    Creating or updating node pools: pool-1: Creating or updating node pool...
    Creating or updating node pools: pool-1: 2/3 replicas are updated...
    Creating or updating node pools: pool-1: Creating or updating node pool...
    Creating or updating node pools: pool-1: 4/3 replicas show up...
    Creating or updating addon workloads: 3/4 machines are ready...
    Creating or updating addon workloads: 44/50 pods are ready...
    Cluster is running...
Skipping admin cluster backup since clusterBackup section is not set in admin cluster seed config
Done upgrading user cluster user1.
Done upgrade.

This upgrades the load balancers first, then the User Cluster control plane and finally the User Cluster worker nodes.

During this process, in vCenter you will see the newer template being cloned as newer worker nodes are spun up to replace the older versions.

Also, invocations of “kubectl get nodes” will show nodes being replaced serially as new nodes are brought in and older ones deleted. There are small windows of time when there are N+1 worker nodes.

# Anthos 1.9 and 1.10.0 use the same kubelet version, BUT
# 1.9 uses containerd 1.4.11 and 1.10 uses 1.5.8
$ kubectl --kubeconfig kubeconfig get nodes -o=custom-columns=NAME:.metadata.name,kubever:.status.nodeInfo.kubeletVersion,contver:.status.nodeInfo.containerRuntimeVersion
NAME          kubever            contver
user-host1   v1.21.5-gke.1200   containerd://1.5.8
user-host2   v1.21.5-gke.1200   containerd://1.5.8
user-host5   v1.21.5-gke.1200   containerd://1.5.8
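
While the rotation is in progress, a simple watch loop makes the node replacement easier to follow; the user cluster kubeconfig name used here (user1-kubeconfig) is an assumption, adjust it to your environment:

# refresh every 15 seconds while the node pool rolls to the new version
watch -n 15 "kubectl --kubeconfig user1-kubeconfig get nodes -o=custom-columns=NAME:.metadata.name,kubever:.status.nodeInfo.kubeletVersion,contver:.status.nodeInfo.containerRuntimeVersion"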

If this upgrade fails half-way through, invoke the exact same command with the “--skip-validation-all” flag added to resume the upgrade, as shown below.
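
A sketch of that resume invocation, mirroring the original command:

# resume a partially-completed user cluster upgrade, skipping preflight validations
gkectl upgrade cluster --kubeconfig /home/ubuntu/kubeconfig --config user-cluster.yaml --skip-validation-all -v 3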

Checking with kubectl, the Admin Cluster is still at the older version; it will be upgraded in the next section.

# Admin Cluster nodes still show the older containerd (1.4.11) runtime
$ kubectl --kubeconfig kubeconfig get nodes -o=custom-columns=NAME:.metadata.name,kubever:.status.nodeInfo.kubeletVersion,contver:.status.nodeInfo.containerRuntimeVersion
NAME          kubever            contver
admin-host1   v1.21.5-gke.1200   containerd://1.4.11
admin-host2   v1.21.5-gke.1200   containerd://1.4.11
admin-host3   v1.21.5-gke.1200   containerd://1.4.11
admin-host5   v1.21.5-gke.1200   containerd://1.4.11

Upgrade the Admin Cluster (from the Admin WS)

There needs to be at least one free IP address in admin-block.yaml to accommodate the serial creation of new master nodes.
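
A rough way to check that headroom, comparing the static IPs defined for the admin cluster against the nodes currently consuming one (the grep pattern assumes the usual "- ip:" entries in the IP block file):

# static IPs defined for the admin cluster
grep -c ' ip:' admin-block.yaml

# admin cluster nodes currently using an IP
kubectl --kubeconfig kubeconfig get nodes --no-headers | wc -l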

You also need to make sure the certs on the Admin Cluster master have not expired, by running “sudo kubeadm certs check-expiration” on the master node; this is described in detail in the docs and in the NOTES section at the end of this article.

Run the gkectl command as shown below using the Admin Cluster kubeconfig and the config used to originally set up the Admin Cluster.

$ gkectl upgrade admin --kubeconfig kubeconfig --config admin-cluster.yaml -v 3

Reading config with version "v1"
Reading bundle at path: "/var/lib/gke/bundles/gke-onprem-vsphere-1.10.0-gke.194-full.tgz".
- Validation Category: Config Check
    - [SUCCESS] Config

- Validation Category: OS Images
    - [SUCCESS] Admin OS images exist

- Validation Category: Cluster Health
    Running validation check for "Admin cluster health"... |
    - [SUCCESS] Admin cluster health
    - [SUCCESS] Admin PDB
    - [SUCCESS] All user clusters health

- Validation Category: Reserved IPs
    - [SUCCESS] Admin cluster reserved IP for upgrading cluster

- Validation Category: GCP
    - [SUCCESS] GCP service
    - [SUCCESS] GCP service account

- Validation Category: Container Registry
    - [SUCCESS] Docker registry access

- Validation Category: VCenter
    - [SUCCESS] Credentials

All validation results were SUCCESS.
Upgrading to bundle version "1.10.0-gke.194"
Reading config with version "v1"

Seesaw Upgrade Summary:
OS Image updated (old -> new): "seesaw-os-image-v1.8-20211118-3370aa09b5" -> "gke-on-prem-ubuntu-1.9.2-gke.4"

Upgrading loadbalancer "seesaw-for-gke-admin"
Deleting LB VM:  seesaw-for-gke-admin-v88mp6gkgm-1...  DONE
Creating new LB VMs...  DONE
Saved upgraded Seesaw group information of "seesaw-for-gke-admin" to file: seesaw-for-gke-admin.yaml
Waiting LBs to become ready...  DONE
Updating create-config secret...  DONE
Loadbalancer "seesaw-for-gke-admin" is successfully upgraded.

Skipping admin cluster backup since clusterBackup section is not set in admin cluster seed config
Creating cluster "gkectl" ...
DEBUG: docker/images.go:67] Pulling image: gcr.io/gke-on-prem-release/kindest/node:v0.11.1-gke.25-v1.21.5-gke.1200 ...
 ✓ Ensuring node image (gcr.io/gke-on-prem-release/kindest/node:v0.11.1-gke.25-v1.21.5-gke.1200) 🖼 
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Waiting ≤ 5m0s for control-plane = Ready ⏳ 
 • Ready after 43s 💚
    Waiting for external cluster control plane to be healthy... |
    Waiting for external cluster control plane to be healthy...  DONE
Applying admin bundle to external cluster
    Applying Bundle CRD YAML...  DONE
    Applying Bundle CRs...  DONE

...
    Waiting for external cluster cluster-api to be ready...  DONE
Resuming existing create/update internal cluster with existing kubeconfig at /home/ubuntu/kubeconfig. Remove file or specify another path if resume is not desired
    Pivoting existing Cluster API objects from internal to external cluster...  DONE
    Waiting for cluster to be ready for external cluster...  DONE
Provisioning master vm for internal cluster via external cluster
    Creating cluster object gke-admin-t9dpd on external cluster...  DONE
Applying admin master bundle components... /

...
Applying admin master bundle components...  DONE
    Creating master...  DONE
    Updating admin cluster checkpoint...  DONE
    Updating external cluster object with master endpoint...  DONE
Creating internal cluster
    Getting internal cluster kubeconfig...  DONE
    Waiting for internal cluster control plane to be healthy... /
    Waiting for internal cluster control plane to be healthy...  DONE
Applying admin bundle to internal cluster
    Applying Bundle CRD YAML...  DONE
    Applying Bundle CRs... -
...
    Waiting for internal cluster cluster-api to be ready...  DONE
    Pivoting Cluster API objects from external to internal cluster...  DONE
Waiting for admin addon and user master nodes in the internal cluster to become ready...  
DONE
    Waiting for control plane to be ready...  DONE
    Waiting for kube-apiserver VIP to be configured on the internal cluster...  DONE
Applying admin node bundle components... /
Creating node Machines in internal cluster...  DONE
Pruning unwanted admin node bundle components... /

...
Applying admin addon bundle to internal cluster...  DONE
    Waiting for admin cluster system workloads to be ready...  DONE
Waiting for admin cluster machines and pods to be ready...  DONE
Pruning unwanted admin base bundle components... -

...

Pruning unwanted admin addon bundle components...  DONE
    Waiting for admin cluster system workloads to be ready...  DONE
Waiting for admin cluster machines and pods to be ready...  DONE
Cleaning up external cluster...  DONE
Skipping admin cluster backup since clusterBackup section is not set in admin cluster seed config
Trigger reconcile on user cluster 'user1/user1-gke-onprem-mgmt' to upgrade its user master VMs to the same version "1.10.0-gke.194" as the admin cluster
Waiting for reconcile to complete...  DONE
Done upgrading admin cluster.

This upgrades the load balancers first, then the Admin Cluster nodes.

Invocations of “kubectl get nodes” will show the Admin Cluster nodes being replaced serially as new nodes are brought in and older ones deleted. There are small windows of time when there are N+1 nodes.

# called half-way through the upgrade process
# notice 3 nodes at older version, and 2 at newer version
$ kubectl --kubeconfig kubeconfig get nodes -o=custom-columns=NAME:.metadata.name,kubever:.status.nodeInfo.kubeletVersion,contver:.status.nodeInfo.containerRuntimeVersion

NAME          kubever            contver
admin-host1   v1.21.5-gke.1200   containerd://1.5.8
admin-host2   v1.21.5-gke.1200   containerd://1.4.11
admin-host3   v1.21.5-gke.1200   containerd://1.4.11
admin-host4   v1.21.5-gke.1200   containerd://1.5.8
admin-host5   v1.21.5-gke.1200   containerd://1.4.11

Admin Cluster upgrades are resumable (with caveats) starting with Anthos 1.10. See the Anthos documentation for ‘gkectl repair admin-master‘ details, or contact Google Support for assistance.
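
For reference, the repair command follows the same kubeconfig/config pattern as the upgrade; verify the flags against the 1.10 docs before running it:

# attempt to repair/resume a failed admin master (Anthos 1.10+)
gkectl repair admin-master --kubeconfig kubeconfig --config admin-cluster.yaml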


REFERENCES

Anthos 1.10 upgrade guide

Anthos 1.10 downloads

fabianlee.org, install Anthos 1.9

NOTES

Checking for expired certs on master before admin cluster upgrade

KUBECONFIG=$(realpath kubeconfig)

kubectl --kubeconfig "${KUBECONFIG}" get secrets -n kube-system sshkeys \
  -o jsonpath='{.data.vsphere_tmp}' | base64 -d > ~/.ssh/admin-cluster.key && chmod 600 ~/.ssh/admin-cluster.key

export MASTER_NODE_IP=$(kubectl --kubeconfig "${KUBECONFIG}" get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}' --selector='node-role.kubernetes.io/master')

# login and run
ssh -i ~/.ssh/admin-cluster.key ubuntu@"${MASTER_NODE_IP}"
$ sudo kubeadm certs check-expiration
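
If any certificates are reported as expired or close to expiring, kubeadm can renew them in place; this is generic kubeadm usage rather than an Anthos-specific procedure, so confirm against the Anthos docs (or Google Support) before running it on an admin master:

# still on the admin cluster master node
sudo kubeadm certs renew all
# control-plane static pods need to be restarted to pick up renewed certs
sudo kubeadm certs check-expiration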