GCP: pushing GKE images into gcr.io to avoid pull rate limits

Docker Hub now enforces pull rate limits (since November 2020).  Unfortunately, this limit is often reached at critical moments such as upgrades or infrastructure events, when bulk pod recreation is happening.
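
If you want to see where your client currently stands against that limit, the docker blog post in the REFERENCES below describes how to read the rate limit headers from the registry.  A quick check looks something like this (a sketch, assuming curl and jq are installed):

# request an anonymous token scoped to the special 'ratelimitpreview/test' image
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

# read the RateLimit-Limit and RateLimit-Remaining headers
curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit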

One way to avoid this problem is to place your images into an alternate image registry.  This could mean a lot of work building a custom container registry, but if you are on GCP you can simply use Google’s private Container Registry located at “gcr.io/<projectId>”.

Here is the initial setup:

# check if container registry access enabled
gcloud services list | grep containerregistry

# enable access. no harm if run multiple times
gcloud services enable containerregistry.googleapis.com

# show current project
gcloud config list core/project

# put project id into variable
project_name=$(gcloud config list core/project 2>/dev/null | grep -Po "(?<=project = ).*")
project_id=$(gcloud projects list --filter="name ~ ^$project_name" --format="csv(projectId)" | tail -n+2)
echo "project id/name = $project_id/$project_name"

# credentials configured for gcr container registry
gcloud auth configure-docker
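
The ‘gcloud auth configure-docker’ step registers gcloud as a Docker credential helper.  As a quick sanity check, the gcr.io hosts should now show up under “credHelpers” in your Docker client config:

# gcr.io (and regional variants) should be mapped to the 'gcloud' credential helper
cat ~/.docker/config.json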

Then pull the image from Docker Hub, and tag/push it into the private GCR:

# pull image from standard dockerhub at docker.io
docker pull busybox:1.32.1

# tag with private gcr registry name
# 'latest' is not required, adding for testing
docker tag busybox:1.32.1 gcr.io/$project_id/busybox:1.32.1
docker tag busybox:1.32.1 gcr.io/$project_id/busybox:latest

# push to gcr private project repo
# 'latest' is not required, adding for testing
docker push gcr.io/$project_id/busybox:1.32.1
docker push gcr.io/$project_id/busybox:latest
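
You can confirm the image made it into the private registry using gcloud:

# list repositories and tags in the private gcr registry
gcloud container images list --repository=gcr.io/$project_id
gcloud container images list-tags gcr.io/$project_id/busybox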

# run pod using image pulled from gcr private repo
# type 'exit' to quit shell
kubectl run -it --rm busybox-1321-run --image=gcr.io/$project_id/busybox:1.32.1 -n default -- sh

The Google Container Registry is backed by a Cloud Storage bucket, and the ability to pull from this private project container registry is automatically available to GKE clusters deployed in the same project.
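
If you are curious, you can look at that backing bucket directly; for the gcr.io host it is named “artifacts.<projectId>.appspot.com”:

# the gcr.io registry content lives in this Cloud Storage bucket
gsutil ls gs://artifacts.$project_id.appspot.com/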

When you have a GKE cluster in a different project, it becomes necessary to create a docker-registry secret and assign imagePullSecrets to the service account, as I discuss in this article.
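
As a rough sketch of that cross-project scenario (the linked article has the full details), you create a docker-registry secret from the JSON key of a service account that has the “Storage Object Viewer” role on the registry project, then reference it from the Kubernetes service account.  ‘gcr-sa-key.json’ and ‘gcr-pull-secret’ below are placeholder names:

# create the pull secret from a service account key (placeholder file name)
kubectl create secret docker-registry gcr-pull-secret \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat gcr-sa-key.json)" \
  --docker-email=notused@example.com \
  -n default

# make the default service account use it for image pulls
kubectl patch serviceaccount default -n default \
  -p '{"imagePullSecrets":[{"name":"gcr-pull-secret"}]}'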

REFERENCES

docker blog, checking docker pull rate and count

google cloud docs, enable service to containerregistry

google cloud docs, authentication using ‘gcloud auth configure-docker’

google cloud docs, pulling down images and pushing to gcr.io

google cloud docs, assigning “Storage Object Viewer” for service account that needs access to gcr

medium.com Paul Czarkowski, kubectl exec busybox

docker hub, busybox

stackoverflow.com, pulling images from gcr into gke, says no auth needed if gke cluster/registry are in same project

estl.tech, using images from private registry on gke

NOTES

adding user to ‘docker’ group, making sudo unnecessary

sudo usermod -a -G docker ${USER}
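
the group change only takes effect on a new login session; to pick it up in the current shell:

newgrp docker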

get all service accounts in all k8s namespaces

kubectl get ns -o custom-columns=NAME:.metadata.name --no-headers | xargs -L 1 kubectl get serviceaccount -n