The GitLab Agent for Kubernetes is an integration with the GitLab CI/CD pipeline that provides kubectl access from pipeline jobs, allowing continuous deployment into a live Kubernetes cluster.
However, a basic Helm install gives this Agent the cluster-admin role by default, which is far too permissive and should be scoped down to only the namespaces and resources the pipeline is meant to deploy.
In this article, I will show how to assign Roles that limit the Service Account permissions of the running Agent.
Create Service Account with limited privileges
First we need to create a new Service Account that has a limited set of permissions.
As in my other article describing the full installation steps for the GitLab Agent for Kubernetes, I will assume that the Agent is deployed to one namespace, “$agent_name”, while the workload it deploys lives in a different namespace, “$project_name”.
Create Service Account and assign Roles
# fetch my example project
git clone https://gitlab.com/gitlab-agent2/gitlab-agent-for-k8s-helm.git && cd $(basename $_ .git)

# namespace where Agent deploys workload (pods, deployments, services, etc.)
project_name=$(basename $PWD)

# namespace where Agent is installed
agent_name=agent-${project_name}

# create service account 'agent-role-limited' with Roles in Agent and workload namespaces
cat svcaccounts/agent-workload-ns-different.yaml | sed "s/AGENT_NAMESPACE/$agent_name/g; s/WORKLOAD_NAMESPACE/$project_name/g; s/MY_SVCACCOUNT/agent-role/g" | kubectl apply -f -

# validate objects in Agent and workload namespaces
kubectl get serviceaccount,roles,rolebindings -n $agent_name
kubectl get rolebindings -n $project_name

# show permissions in Agent and workload namespaces
kubectl auth can-i --list --namespace=$agent_name --as=system:serviceaccount:${agent_name}:agent-role-limited
kubectl auth can-i --list --namespace=$project_name --as=system:serviceaccount:${agent_name}:agent-role-limited
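If you want to see what the sed templating step above does without touching a cluster, here is a minimal sketch that runs the same substitutions against a small inline YAML fragment instead of the real svcaccounts/agent-workload-ns-different.yaml (the fragment and the hard-coded namespace values are illustrative only):

```shell
# Sketch of the placeholder substitution used in the apply step above,
# run against an inline fragment instead of the real template file.
agent_name=agent-myproject    # illustrative value
project_name=myproject        # illustrative value

rendered=$(printf '%s\n' \
  'kind: ServiceAccount' \
  'metadata:' \
  '  name: MY_SVCACCOUNT-limited' \
  '  namespace: AGENT_NAMESPACE' \
  '# RoleBinding target: WORKLOAD_NAMESPACE' \
  | sed "s/AGENT_NAMESPACE/$agent_name/g; s/WORKLOAD_NAMESPACE/$project_name/g; s/MY_SVCACCOUNT/agent-role/g")

# all three placeholders are now replaced
printf '%s\n' "$rendered"
```

The real template expands the same three placeholders into a ServiceAccount plus Role/RoleBinding objects in both namespaces before piping to kubectl apply.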
Permissions of namespaced Roles
The new Role “agent-role” in the Agent namespace assigns the following privileges:
- resources: ["*"]
  apiGroups: [""]
  verbs: ["get","list","watch"]
- resources: ["*"]
  apiGroups: ["apps"]
  verbs: ["get","list","watch"]
This allows only read-only actions in the namespace where the Agent is installed.
The new Role “agent-role” in the workload namespace assigns the following privileges:
- resources: ["namespaces","pods","pods/log","pods/exec","services","secrets","configmaps"]
  apiGroups: [""]
  verbs: ["create","get","list","watch","update","delete","patch"]
- resources: ["deployments"]
  apiGroups: ["apps"]
  verbs: ["create","get","list","watch","update","delete","patch"]
- resources: ["daemonsets"] # read-only; statefulsets intentionally omitted
  apiGroups: ["apps"]
  verbs: ["get","list","watch"]
This allows full control of pods, services, secrets, configmaps, and deployments; read-only access to daemonsets; and no privileges at all on statefulsets.
This is where you would tailor the exact permissions to your real-world use case; the privileges assigned here determine exactly which kube-API calls will succeed from GitLab CI/CD pipeline jobs.
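For example, if your pipeline also deployed StatefulSets, you could extend the workload-namespace Role with an additional rule like the sketch below (this rule is hypothetical and not part of the Role shown above; adjust the verbs to your needs):

```yaml
# Hypothetical extra rule for the workload-namespace Role,
# granting full control of statefulsets.
- resources: ["statefulsets"]
  apiGroups: ["apps"]
  verbs: ["create","get","list","watch","update","delete","patch"]
```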
Update existing Helm Agent release
If you previously installed the Agent for Kubernetes using Helm with only the minimal default settings, then the Agent's service account has a ClusterRoleBinding to the 'cluster-admin' ClusterRole, an excessive set of privileges.
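For reference, the default binding created by the chart looks roughly like the sketch below; the object name, service account name, and namespace are illustrative and depend on your release:

```yaml
# Sketch of the default RBAC produced by a minimal Helm install:
# the Agent's service account bound to the built-in cluster-admin role.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-agent          # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab-agent        # illustrative name
    namespace: agent-myproject
```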
We can update the settings of the Helm release and change the service account of the running Agent deployment to limit its privileges.
# get current service account of running Agent
existing_svcacct=$(kubectl get deployment -n $agent_name --output=jsonpath="{range .items[0]}{.spec.template.spec.serviceAccount}{end}")

# show ClusterRoleBinding where this service account is referenced
kubectl get clusterrolebinding -o jsonpath="{range .items[?(@.subjects[0].name==\"$existing_svcacct\")]}[{.roleRef.kind},{.roleRef.name}]{\"\n\"}{end}"
To update the pre-existing Helm release to run the Agent with the new Service Account:
# add and update local helm repo 'gitlab', which contains chart 'gitlab-agent'
helm repo add gitlab https://charts.gitlab.io && helm repo update gitlab
helm_repo=gitlab

# show history of release we want to update
helm history $agent_name -n $agent_name

# update helm values to use our new limited service account
helm upgrade $agent_name $helm_repo/gitlab-agent --namespace $agent_name --reuse-values --set rbac.create=false --set rbac.useExistingRole=agent-role --set serviceAccount.create=false --set serviceAccount.name=agent-role-limited

# values should reflect new customization to service account
helm get values $agent_name -n $agent_name

# service account of running Agent will be changed to 'agent-role-limited'
kubectl get deployment -n $agent_name --output=jsonpath="{range .items[0]}{.spec.template.spec.serviceAccount}{end}"
The kubectl commands run from the pipeline will now be limited to the privileges assigned to the Service Account, “agent-role-limited”.
Create new Helm Agent release
I have written a full article on the configuration required for installing the Agent for Kubernetes. Follow all those steps as detailed, but when you reach the point of running “helm upgrade”, add the flags below so the Helm chart assigns our custom service account to the Agent deployment.
# install Agent, have it use our service account
helm upgrade --install $agent_name gitlab/gitlab-agent --namespace ${agent_name} --create-namespace --set image.tag=v16.5.0 --set config.token=$token_secret --set config.kasAddress=$KAS_URL --set rbac.create=false --set serviceAccount.create=false --set serviceAccount.name=agent-role-limited

# service account of running Agent will be 'agent-role-limited'
kubectl get deployment -n $agent_name --output=jsonpath="{range .items[0]}{.spec.template.spec.serviceAccount}{end}"
The kubectl commands run from the pipeline are limited to the privileges assigned to the Service Account, “agent-role-limited”.
Validate from GitLab pipeline
When the Agent is connected, the pipeline exposes an additional predefined variable, ‘KUBECONFIG’.
We can run ‘kubectl auth can-i’ commands as a test of the Agent’s privileges in its installed namespace, as well as the workload namespace where it will deploy your custom application objects (pods, deployments, services, etc).
Below is a snippet from my full pipeline definition, .gitlab-ci.yml.
k8s-access-test:
  stage: test
  rules:
    - if: $KUBECONFIG
  image:
    name: bitnami/kubectl:1.27.7-debian-11-r0
    entrypoint: ['']
  script: |
    kubectl config use-context $CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME:agent-$CI_PROJECT_NAME
    set +e

    echo "==============================================="
    ns="agent-$CI_PROJECT_NAME"
    echo "test kubectl privileges in Agent ns '$ns'"
    set -x
    kubectl auth can-i create pods -n $ns
    kubectl auth can-i create services -n $ns
    kubectl auth can-i create configmaps -n $ns
    kubectl auth can-i create secrets -n $ns
    kubectl auth can-i create deployments -n $ns
    set +x
    echo "==== list of permissions for Agent ns $ns"
    kubectl auth can-i --list -n $ns

    echo "==============================================="
    echo "test kubectl privileges in workload ns '$CI_PROJECT_NAME'"
    set -x
    kubectl auth can-i create pods -n $CI_PROJECT_NAME
    kubectl auth can-i create services -n $CI_PROJECT_NAME
    kubectl auth can-i create configmaps -n $CI_PROJECT_NAME
    kubectl auth can-i create secrets -n $CI_PROJECT_NAME
    kubectl auth can-i create deployments -n $CI_PROJECT_NAME
    kubectl auth can-i list daemonsets -n $CI_PROJECT_NAME
    kubectl auth can-i create daemonsets -n $CI_PROJECT_NAME
    kubectl auth can-i list statefulset -n $CI_PROJECT_NAME
    set +x
    echo "==== list of permissions for workload ns $CI_PROJECT_NAME"
    kubectl auth can-i --list -n $CI_PROJECT_NAME
The create checks against the Agent namespace should return “no”, since its Role is read-only, while those against the workload namespace ($CI_PROJECT_NAME) return “yes”. Notice in the workload namespace that daemonsets can be listed (‘yes’) but not created (‘no’), and statefulsets cannot even be listed because they were never mentioned in the Role.
...
test kubectl privileges in workload ns 'gitlab-agent-for-k8s-manifest'
...
++ kubectl auth can-i list daemonset -n gitlab-agent-for-k8s-manifest
yes
++ kubectl auth can-i create daemonset -n gitlab-agent-for-k8s-manifest
no
++ kubectl auth can-i list statefulset -n gitlab-agent-for-k8s-manifest
no
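If you want the pipeline to fail fast when the RBAC setup drifts from this contract, a hypothetical variant of the test job could assert the expected answers instead of merely printing them (a sketch, not part of the original pipeline; it assumes the same image and context setup as the job above):

```yaml
# Hypothetical job that fails when permissions differ from the
# expected contract for the limited service account.
k8s-rbac-contract:
  stage: test
  rules:
    - if: $KUBECONFIG
  image:
    name: bitnami/kubectl:1.27.7-debian-11-r0
    entrypoint: ['']
  script: |
    kubectl config use-context $CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME:agent-$CI_PROJECT_NAME
    # must be able to create deployments in the workload namespace
    [ "$(kubectl auth can-i create deployments -n $CI_PROJECT_NAME)" = "yes" ]
    # must NOT be able to create daemonsets there
    [ "$(kubectl auth can-i create daemonsets -n $CI_PROJECT_NAME)" = "no" ]
```

Each bracketed test exits non-zero on mismatch, which fails the job.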
REFERENCES
gitlab fabianlee, source for this article
GitLab, customizing the Helm installation of Agent
GitLab, deploying Agent to cluster using Helm
GitLab blog, agent for Kubernetes with limited permission
gitlab-agent source project, values.yaml for Helm chart
Kubernetes.io, RBAC with Role/ClusterRole and bindings
stackoverflow, restrict with role to single namespace
stackoverflow, kubectl with jsonpath to find entry in subject[0]
NOTES
Show cluster api resources and group names
kubectl api-resources --sort-by name -o wide
Show Role/ClusterRole membership of service account
svcacct_name=agent-role-limited
kubectl get rolebinding,clusterrolebinding --all-namespaces -o jsonpath="{range .items[?(@.subjects[0].name==\"$svcacct_name\")]}[{.metadata.namespace},{.roleRef.kind},{.roleRef.name}]{\"\n\"}{end}"
Uninstall Agent
helm uninstall $agent_name --namespace $agent_name