GCP: pulling an image from the Container Registry of another project

In a previous article I discussed the advantages of keeping container images in the private Google Container Registry of a project.  If you have a GKE cluster in that same project, image pulls happen seamlessly, with no additional configuration required.

However, if the GKE cluster is in a different project than the source Container Registry, additional configuration is required.  Specifically, you need a service account with read permission on the source project’s Container Registry storage bucket, and its json key must then be added to the pod’s imagePullSecrets.

But instead of setting imagePullSecrets manually on each deployment, it is usually a better experience to assign the secret to the service account the pod runs under.  Every new pod created under that service account then has the imagePullSecrets injected automatically.
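For illustration, the per-pod alternative looks roughly like this; the ‘gcr-io-secret’ name matches the secret we will create later in this article, and ${sourceProjectId} is the id of the source project (set later):

# per-pod alternative: reference the pull secret directly in the pod spec override
kubectl run -it --rm gcr-busybox -n test-sa --image=gcr.io/${sourceProjectId}/busybox:latest --overrides='{ "spec": { "imagePullSecrets": [{"name": "gcr-io-secret"}] } }' -- sh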

Push images to Container Registry in source GCP project

The first step is to have images pushed to the private Container Registry of the source GCP project.  For this testing, we are going to use busybox as an example.

See my previous article on enabling the Container Registry for a project: it covers pulling the busybox image from Docker Hub and then pushing it to a private ‘gcr.io/<projectId>’ registry.
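If you only need the abbreviated version, the push steps look roughly like this (assuming Docker and the gcloud CLI are installed and authenticated, and ${sourceProjectId} holds the id of the source project):

# pull busybox from docker hub
docker pull busybox:latest

# let gcloud credentials be used for gcr.io pushes
gcloud auth configure-docker

# retag and push into the source project's private registry
docker tag busybox:latest gcr.io/${sourceProjectId}/busybox:latest
docker push gcr.io/${sourceProjectId}/busybox:latest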

Service account in source GCP project

Then you need a service account in the source GCP project that has access to the backing storage bucket of the Container Registry.  Although you could use any of your existing service accounts, it makes more sense to have a dedicated service account specifically for this function.

For this example, create a service account named ‘gcr-io’ with the ‘roles/storage.objectViewer’ IAM role, and download the json key.

I provided a script called “create_serviceaccount_cr.sh” on github, if you want to create this service account from the shell.
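As a rough sketch of what that script does (the project-level role grant shown here is an assumption; you could also scope it to just the registry’s backing storage bucket):

# create the dedicated service account in the source project
gcloud iam service-accounts create gcr-io --project=${sourceProjectId} --display-name="gcr.io image puller"

# grant read-only access to storage objects (covers the registry's backing bucket)
gcloud projects add-iam-policy-binding ${sourceProjectId} --member="serviceAccount:gcr-io@${sourceProjectId}.iam.gserviceaccount.com" --role="roles/storage.objectViewer"

# download the json key used later to build the docker-registry secret
gcloud iam service-accounts keys create gcr-io.json --iam-account=gcr-io@${sourceProjectId}.iam.gserviceaccount.com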

Test pull from target GCP project

For our validation, we will use our own namespace (test-sa) and service account (my-sa).  Let’s first explore what happens when we try to pull an image from our source repository without any additional configuration.

# id of source project (not necessarily name)
sourceProjectId="myproject-123"

# create namespace and service account
kubectl create namespace test-sa
kubectl create serviceaccount my-sa -n test-sa

# will have two service accounts (default and my-sa)
kubectl get serviceaccounts -n test-sa

# attempt pull from private gcr registry in another project
kubectl run -it --rm gcr-busybox -n test-sa --image=gcr.io/${sourceProjectId}/busybox:latest --overrides='{ "spec": { "serviceAccount":"default" } }' -- sh

You should see an “error: timed out waiting for condition”.  This is because the service account running this pod does not have permission to pull from a private container registry in a completely different project.
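If you want to see the underlying reason rather than the generic timeout, the namespace events usually show the image pull failing with a permission error (the exact wording varies by GKE version):

# recent events should show ErrImagePull / ImagePullBackOff with a denied/forbidden message
kubectl -n test-sa get events --sort-by=.lastTimestamp | tail -10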

We can address this by creating a secret of type ‘docker-registry’ in the namespace, and then adding that secret to the imagePullSecrets of the service account running the pod, as shown below:

# create secret of type docker-registry in same namespace
# using source project's service account json key
kubectl -n test-sa create secret docker-registry gcr-io-secret --docker-server gcr.io --docker-username _json_key --docker-email anything@gcr --docker-password="$(cat gcr-io.json)"

# validation that key was loaded
kubectl -n test-sa describe secret gcr-io-secret

# assign secret reference to default service account
kubectl -n test-sa patch serviceaccount default -p '{"imagePullSecrets": [{"name": "gcr-io-secret"}]}'
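You can verify the patch took effect by dumping the service account; it should now list ‘gcr-io-secret’ under imagePullSecrets:

# confirm the default service account now carries the pull secret
kubectl -n test-sa get serviceaccount default -o yaml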

With the secret now created and referenced from the pod’s service account, the image can be pulled and the pod starts successfully.

# this should now pull properly, 'exit' to leave shell
kubectl run -it --rm gcr-busybox -n test-sa --image=gcr.io/${sourceProjectId}/busybox:latest --overrides='{ "spec": { "serviceAccount":"default" } }' -- sh

However, take note that the same pull using the “my-sa” service account we created earlier will still fail, because that service account does not have any imagePullSecrets.

# running pod as 'my-sa' will still fail
kubectl run -it --rm gcr-busybox -n test-sa --image=gcr.io/${sourceProjectId}/busybox:latest --overrides='{ "spec": { "serviceAccount":"my-sa" } }' -- sh

If we need pods to run under this non-default service account, then we must also add the secret reference.

# assign secret reference to 'my-sa' service account
kubectl -n test-sa patch serviceaccount my-sa -p '{"imagePullSecrets": [{"name": "gcr-io-secret"}]}'
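After the patch, the same pull running as ‘my-sa’ should succeed:

# confirm my-sa now carries the pull secret
kubectl -n test-sa get serviceaccount my-sa -o yaml

# this run should now pull properly, 'exit' to leave shell
kubectl run -it --rm gcr-busybox -n test-sa --image=gcr.io/${sourceProjectId}/busybox:latest --overrides='{ "spec": { "serviceAccount":"my-sa" } }' -- sh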

 

NOTE: I have purposely used the “latest” tag for this example, because that causes a remote pull to happen every time.  If I had used a tag such as “1.32.1”, then a successful pull by one service account would cache the image on the node and allow another service account (that might not have the secret) to use it.  In a real-world scenario, you can set imagePullPolicy explicitly in the manifest to enforce the behavior you want, without having to rely on the ‘latest’ tag.
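As a minimal sketch, an explicit imagePullPolicy of Always forces re-authentication against the registry on every pod start even with a fixed tag (the 1.32.1 tag is only an illustration and would need to exist in your registry):

# force a pull from the registry on every start, regardless of what is cached on the node
kubectl run -it --rm gcr-busybox -n test-sa --image-pull-policy=Always --image=gcr.io/${sourceProjectId}/busybox:1.32.1 --overrides='{ "spec": { "serviceAccount":"my-sa" } }' -- sh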

NOTE: Given the need to potentially load a docker-registry secret into multiple service accounts across multiple namespaces, the imagepullsecret-serviceaccount-patcher github project by Neutryno may be of assistance.

 

REFERENCES

google cloud docs, enabling the Container Registry service

google cloud docs, authentication using ‘gcloud auth configure-docker’

google cloud docs, pulling down images and pushing to gcr.io

google cloud docs, assigning “Storage Object Viewer” for service account that needs access to gcr

medium.com, Paul Czarkowski, kubectl exec busybox

kubernetes.io, default service accounts and adding imagePullSecrets to it

adamhancock.co.uk, imagepull secrets with Kubernetes

docker hub, busybox

stackoverflow.com, pulling images from gcr into gke, says no auth needed if gke cluster/registry are in same project

estl.tech, using images from private registry on gke

container-solutions.com, using gcr with k8s, describes patching service account with ‘imagePullSecrets’ to be used when pods are created

knative.dev, setting imagePullSecrets on default service account

stackoverflow.com, dirty way of modifying each k8s node .docker/config.json to hit private registry

github.com neutryno, ImagePullSecret service account patcher for multiple private registries

stackoverflow.com, default service account for k8s namespace and setting rbac roles to allow access to cluster api by providing token

devopstales.github.io, manually set imagePullSecrets to default serviceaccount or use imagepullsecret-patcher to apply to all namespaces

github.com, imagepullsecret-patcher tool that can apply across all namespaces and serviceaccounts

medium.com, imagepullsecret-patcher article

google docs, kubectl create secret and decoding with base64

google, default service account for gke (PROJ_NUM-compute@developer.gserviceaccount.com)

faun.pub, Sindhuja Cynixit, Kubernetes image pull policy

ahmet.im, gcr tips

 

NOTES

get all service accounts in all k8s namespaces

kubectl get ns -o custom-columns=NAME:.metadata.name --no-headers | xargs -L 1 kubectl get serviceaccount -n

using override of spec.nodeName to place pod on specific worker node

kubectl run -it --rm gcr-busybox -n test-sa --image=gcr.io/${sourceProjectId}/busybox:latest --overrides='{ "spec": { "serviceAccount":"default", "nodeName":"mynode-2" } }' -- sh

decoding base64 docker-registry secret requires escaping period

kubectl get secret gcr-io-secret -n test-sa -o jsonpath="{.data.\.dockerconfigjson}" | base64 --decode

how much storage are your images taking up

gsutil du -hs gs://artifacts.${projectId}.appspot.com