In this article, I will detail how to use Vault JWT auth mode to isolate the secrets of two different deployments in the same Kubernetes cluster. This is done by using two different Kubernetes Service Accounts, each of which generates a unique JWT tied to a different Vault role.
JWT auth mode is an excellent match for Kubernetes-deployed applications because, by default, pods already run as a Service Account and have a JWT auto-mounted into the pod. Using this JWT to then authenticate into a Vault role is both convenient and eliminates the “secret zero” issue.
Furthermore, JWT auth mode is a better fit for many enterprises because the network call for JWT validation is initiated from the Cluster back to the shared enterprise Vault server, similar to other shared infrastructure services such as DNS, storage, etc.
Contrast this to Kubernetes auth mode, where the shared enterprise Vault server must initiate a network connection to the Cluster for a kubeAPI TokenReview. This assumes a reachable kubeAPI endpoint (secure clusters typically use private endpoints) and requires maintaining firewall rules that allow the shared Vault server to initiate connections to every distinct Kubernetes cluster/network it serves.
Solution Overview
We start with a local development mode Vault server that binds to our host at port 8200. This will represent the shared external Vault server that has the following objects:
- Vault role = sales-role, policy = sales-policy, able to read secret at path below:
  - Vault secret at /secret/webapp/sales
    - num_competitors=6
    - q1_profitability=true
- Vault role = eng-role, policy = eng-policy, able to read secret at path below:
  - Vault secret at /secret/webapp/eng
    - aws_storage_key=xyz123
Then we will configure a Kubernetes cluster with two namespaces, each with its own service account.
- Kubernetes namespace = sales, service account = sales-auth
- app deployment = tiny-tools-sales-auth
- Kubernetes namespace = engineering, service account = eng-auth
- app deployment = tiny-tools-eng-auth
We then configure JWT auth mode on the Vault server by providing the cluster OIDC public key, so Vault can verify the validity and integrity of any presented JWT.
Now that Vault can determine the validity of JWTs from this cluster, it can return a Vault token that can subsequently be used to retrieve secrets according to the Vault role.
Prerequisites
minikube
We could use any Kubernetes implementation (k3s, kubeadm, GKE, EKS, etc), but for the simplicity of this article let’s use minikube to host a local Kubernetes cluster, which is also used extensively in the Vault tutorials.
If you do not already have minikube installed, see the official starting documentation for installation instructions. There are other installation references available if you need further detail (1, 2, 3).
Local utilities
In order to parse JSON results and analyze the JWT used in this article, install the utilities below.
# json parser
sudo apt install -y jq

# install step utility for analyzing JWT
wget -q https://github.com/smallstep/cli/releases/download/v0.25.0/step_linux_0.25.0_amd64.tar.gz
tar xvfz step_linux_0.25.0_amd64.tar.gz step_0.25.0/bin/step --strip-components 2

# install jwker for JWK/PEM conversions
wget -q https://github.com/jphastings/jwker/releases/download/v0.2.1/jwker_Linux_x86_64.tar.gz
tar xvfz jwker_Linux_x86_64.tar.gz jwker
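As a quick sanity check, confirm the utilities are available from the current directory (a minimal verification; the version output will vary with the releases downloaded above).

# verify the utilities are ready to use
jq --version
./step version
ls -l ./jwker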
Deploy apps into cluster using service accounts
Start a fresh minikube Kubernetes cluster using the command below.
minikube start
Then create the sales and engineering Kubernetes namespaces, along with their distinct service accounts.
# create namespaces
kubectl create ns sales
kubectl create ns engineering

# create service accounts
kubectl create sa sales-auth -n sales
kubectl create sa eng-auth -n engineering
We are not creating explicit secrets for the service accounts because we will be using short-lived JWTs, not long-lived tokens.
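If you want to see what such a short-lived, audience-scoped token looks like without deploying anything, kubectl (1.24+) can mint one directly against the TokenRequest API; the audience and duration below are illustrative values matching what we use later.

# mint a short-lived JWT for the sales service account
kubectl create token sales-auth -n sales --audience=sales-sales-auth --duration=10m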
Apply the deployments into the cluster using my tiny-tools-template.yaml.
# use template from my project to deploy apps running as respective service accounts
wget https://raw.githubusercontent.com/fabianlee/blogcode/master/vault/jwt/tiny-tools-template.yaml
cat tiny-tools-template.yaml | ns=sales name=sales-auth envsubst | kubectl apply -f -
cat tiny-tools-template.yaml | ns=engineering name=eng-auth envsubst | kubectl apply -f -

# validate readiness of each deployment
kubectl get deployment tiny-tools-sales-auth -n sales
kubectl get deployment tiny-tools-eng-auth -n engineering

# note that each deployment is running under its own service account
kubectl get deployment tiny-tools-sales-auth -n sales -o=jsonpath='{.spec.template.spec.serviceAccount}'
kubectl get deployment tiny-tools-eng-auth -n engineering -o=jsonpath='{.spec.template.spec.serviceAccount}'
Notice in tiny-tools-template.yaml that we are using service account token volume projection, which gives us finer-grained control of the mounted token; specifically, the ability to set the audience and TTL.
...
# spec.template.spec.volumeMounts
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
  name: jwt-token
  readOnly: true
...
# spec.template.spec.volumes
- name: jwt-token
  projected:
    defaultMode: 420
    sources:
    - serviceAccountToken:
        audience: $ns-$name
        expirationSeconds: 600
        path: token
    - configMap:
        items:
        - key: ca.crt
          path: ca.crt
        name: kube-root-ca.crt
    - downwardAPI:
        items:
        - fieldRef:
            apiVersion: v1
            fieldPath: metadata.namespace
          path: namespace
If we relied on the default token auto-mount at “/var/run/secrets/kubernetes.io/serviceaccount/token”, a JWT would still be mounted, but with a year-long expiration and the default audience, “https://kubernetes.default.svc.cluster.local”.
Test JWT validity, manually
For third parties to verify that a JWT was created by the expected issuer and not tampered with, the standard message-signature verification must be performed on the JWT.
# JWT = <header>.<payload>.<signature>
Send:   { JWT.header + JWT.payload + encrypt(hash(JWT.header + JWT.payload), private_key) }
Verify: hash(JWT.header + JWT.payload) == decrypt(JWT.signature, public_key)
The JWT is verified as coming from the expected source, untampered, if the computed hash of the JWT header+payload matches the signature decrypted with the public key provided at the cluster’s OIDC endpoint.
This is the logic that will be applied by Vault when testing whether a JWT is valid and can be given access to a Vault role.
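For a more hands-on look at those pieces, the sketch below splits a token on its ‘.’ separators and decodes the header and payload locally. This is an informal helper: the b64url_decode function is my own illustration, and it assumes a JWT saved to tiny-tools-sales-auth.jwt, which we create in a later step.

# informal decode of JWT header and payload (the signature portion is binary)
b64url_decode() {
  local s="${1//-/+}"; s="${s//_//}"
  # restore the padding that base64url encoding strips
  while [ $(( ${#s} % 4 )) -ne 0 ]; do s="${s}="; done
  echo "$s" | base64 -d
}
JWT=$(cat tiny-tools-sales-auth.jwt)
b64url_decode "$(echo "$JWT" | cut -d. -f1)" | jq .   # header: alg, kid
b64url_decode "$(echo "$JWT" | cut -d. -f2)" | jq .   # payload: iss, sub, aud, exp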
Fetch cluster OIDC public key
As described above, we need the issuer’s public key in order to verify the JWT source and integrity. This can be retrieved from the ‘/openid/v1/jwks’ endpoint in the cluster.
# get OIDC discovery endpoint, including jwks_uri
kubectl get --raw /.well-known/openid-configuration | jq

# query the jwks_uri to get public key and save it
kubectl get --raw /openid/v1/jwks | jq .keys[0] | tee minikube.jwk
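It is also worth noting down the ‘issuer’ value from the same discovery document, since it must match the --iss flag we pass to step below (and any bound_issuer you might later set in Vault).

# record the issuer expected in tokens from this cluster
kubectl get --raw /.well-known/openid-configuration | jq -r .issuer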
Grabbing the cluster OIDC public key this way (versus OIDC self-discovery) avoids multiple issues:
- Is there network-level connectivity that allows Vault to reach the .well-known and jwks_uri endpoints?
- Does the cluster expose the jwks_uri on a public Ingress that allows it to be fetched externally?
- Can the .well-known location be pulled securely, or do we need the ca.crt to validate?
- Can the jwks_uri be reached by anonymous users?
For these reasons, we simply fetch the public key from the jwks_uri using kubectl as shown above.
Validate JWT from running container
Now copy out the JWT from a running container, then inspect and validate it using the ‘step‘ utility.
Analyze JWT from sales service account
# fetch sales JWT locally
kubectl exec -it deployment/tiny-tools-sales-auth -n sales -- cat /var/run/secrets/kubernetes.io/serviceaccount/token > tiny-tools-sales-auth.jwt

# informally inspect JWT, notice 'aud' and 'sub' are tailored to service account
cat tiny-tools-sales-auth.jwt | ./step crypto jwt inspect --insecure

# test formal validity using cluster OIDC pub key
cat tiny-tools-sales-auth.jwt | ./step crypto jwt verify --key minikube.jwk --iss https://kubernetes.default.svc.cluster.local --aud sales-sales-auth
Analyze JWT from engineering service account
# fetch engineering JWT locally
kubectl exec -it deployment/tiny-tools-eng-auth -n engineering -- cat /var/run/secrets/kubernetes.io/serviceaccount/token > tiny-tools-eng-auth.jwt

# informally inspect JWT, notice 'aud' and 'sub' are tailored to service account
cat tiny-tools-eng-auth.jwt | ./step crypto jwt inspect --insecure

# test formal validity using cluster OIDC pub key
cat tiny-tools-eng-auth.jwt | ./step crypto jwt verify --key minikube.jwk --iss https://kubernetes.default.svc.cluster.local --aud engineering-eng-auth
We went through this exercise because it illustrates the same logic the Vault server uses when authenticating into a Vault role. The short-lived JWT is presented and, if valid, can be traded for a Vault token used to retrieve secrets.
Start local Vault server
In order to emulate a Vault server external to the Kubernetes cluster, we will install Vault on our host server and start it in development mode. The official Vault installation document is here; below are instructions for an Ubuntu host, as detailed here.
# download trusted key
sudo apt update && sudo apt install gpg
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
sudo chmod go+r /usr/share/keyrings/hashicorp-archive-keyring.gpg

# add to repo list
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

# install Vault
sudo apt update && sudo apt install -y vault

# start Vault, use IP address of your host server
vaultIP=<yourLocalIPAddress>
echo "Starting vault on ${vaultIP}:8200, this IP needs to be used from cluster later!"
vault server -dev -dev-root-token-id root -dev-listen-address $vaultIP:8200 -log-level debug
Make a note of the $vaultIP value: it is needed immediately below when invoking the Vault client, and again later when our pods need to communicate with the Vault server and when installing the Vault sidecar injector.
From another console, confirm login into the Vault server as ‘root’.
export VAULT_ADDR=http://$vaultIP:8200
vault login root
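As a quick sanity check that the client is really talking to the dev server (dev mode runs unsealed, so Sealed should report false):

# confirm connectivity and login
vault status
vault token lookup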
Configure Vault secrets and policy
As described in the Solution Overview section, we are going to create distinct secrets and policies for the sales and engineering roles.
# secret for sales
vault kv put secret/webapp/sales num_competitors=6 q1_profitability=true
# 'data' path needs to be inserted when dealing with API level
echo 'path "secret/data/webapp/sales" { capabilities=["read"] }' | vault policy write sales-policy -

# secret for engineering
vault kv put secret/webapp/eng aws_storage_key=xyz123
# 'data' path needs to be inserted when dealing with API level
echo 'path "secret/data/webapp/eng" { capabilities=["read"] }' | vault policy write eng-policy -
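You can read back what was just written to confirm it (we are still logged in as root):

# confirm policy contents and secret values
vault policy read sales-policy
vault kv get secret/webapp/sales
vault policy read eng-policy
vault kv get secret/webapp/eng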
Configure Vault JWT and role
Enable JWT auth mode for this cluster. We supply the cluster OIDC public key retrieved earlier, which is required in order to test JWT validity/integrity.
# convert JWK OIDC public key to PEM format for use by vault
./jwker minikube.jwk > minikube.pem

# enable JWT authentication
vault auth enable -path=minikube jwt
vault auth list

# create authentication endpoint for this cluster using public key of cluster
# not using self-discovery, because too many connectivity, ingress, and auth issues can exist
vault write auth/minikube/config jwt_validation_pubkeys=@minikube.pem
Then create the Vault roles for the sales and engineering teams that tie back to their Vault policies.
# create sales role associated with sales-policy and bound to sales audience+subject
vault write auth/minikube/role/sales \
  role_type=jwt \
  token_policies=sales-policy \
  ttl=10m \
  bound_audiences=sales-sales-auth \
  bound_subject=system:serviceaccount:sales:sales-auth \
  user_claim=sub \
  verbose_oidc_logging=true

# create engineering role associated with eng-policy and bound to eng audience+subject
vault write auth/minikube/role/eng \
  role_type=jwt \
  token_policies=eng-policy \
  ttl=10m \
  bound_audiences=engineering-eng-auth \
  bound_subject=system:serviceaccount:engineering:eng-auth \
  user_claim=sub \
  verbose_oidc_logging=true
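Before testing from a pod, it is worth reading the roles back to double-check that bound_audiences and bound_subject match the ‘aud’ and ‘sub’ claims we inspected earlier:

# confirm role bindings match the JWT 'aud' and 'sub' claims
vault read auth/minikube/role/sales
vault read auth/minikube/role/eng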
Test secret access from running container
It is now time to test our ability to pull a Vault secret using the JWT inside the container. The JWT is sent to the Vault server for authentication; if valid, Vault exchanges it for a Vault token.
It is this Vault token that can then be used to fetch the secret.
Fetch sales secret
# enter running container
kubectl exec -it deployment/tiny-tools-sales-auth -n sales -- sh

# set to external vault address from previous section
vaultIP=<yourLocalVaultIP>

# inspection of JWT
cat /var/run/secrets/kubernetes.io/serviceaccount/token | step crypto jwt inspect --insecure

# exchange JWT for Vault token
vault_token=$(curl -Ss http://$vaultIP:8200/v1/auth/minikube/login --data "{\"jwt\": \"$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\", \"role\": \"sales\"}" | jq -r ".auth.client_token")
echo "traded short-lived JWT for vault token: $vault_token"

# fetch secret using Vault token
$ curl -s -H "X-Vault-Token: $vault_token" http://$vaultIP:8200/v1/secret/data/webapp/sales | jq '.data.data'
{
  "num_competitors": "6",
  "q1_profitability": "true"
}

# attempt to fetch engineering secret, should throw 403 auth error (which is expected)
curl -v -H "X-Vault-Token: $vault_token" http://$vaultIP:8200/v1/secret/data/webapp/eng

# leave running container
exit
Fetch engineering secret
# enter running container
kubectl exec -it deployment/tiny-tools-eng-auth -n engineering -- sh

# set to external vault address from previous section
vaultIP=<yourLocalVaultIP>

# inspection of JWT
cat /var/run/secrets/kubernetes.io/serviceaccount/token | step crypto jwt inspect --insecure

# exchange JWT for Vault token
vault_token=$(curl -Ss http://$vaultIP:8200/v1/auth/minikube/login --data "{\"jwt\": \"$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\", \"role\": \"eng\"}" | jq -r ".auth.client_token")
echo "traded short-lived JWT for vault token: $vault_token"

# fetch engineering secret using Vault token
$ curl -s -H "X-Vault-Token: $vault_token" http://$vaultIP:8200/v1/secret/data/webapp/eng | jq '.data.data'
{
  "aws_storage_key": "xyz123"
}

# attempt to fetch sales secret, should throw 403 auth error (which is expected)
curl -v -H "X-Vault-Token: $vault_token" http://$vaultIP:8200/v1/secret/data/webapp/sales

# leave running container
exit
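If you want to confirm the 403 is a policy decision rather than an invalid token, Vault’s sys/capabilities-self endpoint reports the capabilities a token has on given paths. A small sketch, run inside the container before the exit:

# engineering token should show 'read' on the eng path and 'deny' on the sales path
curl -s -H "X-Vault-Token: $vault_token" http://$vaultIP:8200/v1/sys/capabilities-self --data '{"paths": ["secret/data/webapp/eng", "secret/data/webapp/sales"]}' | jq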
Test secret access using Vault sidecar
If you need a more transparent method of fetching Vault secrets using JWT auth mode, the Vault sidecar injector can be installed into the cluster. It uses a MutatingWebhook to silently insert a Vault sidecar into your workload, so the application itself does not need to know how to talk to Vault.
Install Vault Sidecar injector using Helm
Install the Vault sidecar injector using Helm (values.yaml).
# add hashicorp helm repo
helm repo add hashicorp https://helm.releases.hashicorp.com
helm search repo hashicorp/vault

# custom vault namespace
vault_ns=vault

# install only Vault injector into custom namespace (not Server and not UI)
helm upgrade --install vault hashicorp/vault \
  --namespace $vault_ns --create-namespace \
  --set "injector.logLevel=trace" \
  --set "global.externalVaultAddr=http://$vaultIP:8200"
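Before deploying a workload, verify the injector pod is running and that its mutating webhook was registered (object names may vary slightly by chart version):

# injector pod should be Running
kubectl get pods -n $vault_ns
# the chart registers a mutating webhook that performs the injection
kubectl get mutatingwebhookconfigurations | grep -i vault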
Use annotations to control Vault sidecar
The tiny-tools-template-sidecar.yaml we will use to test the Vault sidecar has annotations that control the sidecar. In this way, the Vault sidecar takes on all responsibility for understanding how to connect to the remote Vault server.
...
# spec.template.metadata.annotations
annotations:
  #sidecar.istio.io/inject: "true"
  #traffic.sidecar.istio.io/excludeOutboundPorts: "8200"
  vault.hashicorp.com/agent-inject: 'true'
  vault.hashicorp.com/agent-init-first: 'false'
  vault.hashicorp.com/agent-cache-enable: 'true'
  vault.hashicorp.com/auth-type: 'jwt'
  vault.hashicorp.com/auth-config-path: '/var/run/secrets/kubernetes.io/serviceaccount/token'
  vault.hashicorp.com/remove-jwt-after-reading: 'false'
  vault.hashicorp.com/auth-path: $auth_path
  vault.hashicorp.com/role: $vault_role
  #vault.hashicorp.com/namespace: $namespace_vault # only available to Vault Enterprise

  # write into filesystem of container, formatted as we choose
  vault.hashicorp.com/agent-inject-secret-mysecret.txt: $vault_secret_path
  vault.hashicorp.com/agent-inject-template-mysecret.txt: |
    {{- with secret "$vault_secret_path" -}}
    {{- range $k, $v := .Data.data -}}
    {{ $k }} = {{ $v }}
    {{ end }}
    {{- end -}}
...
Create new sales deployment with Vault sidecar
Below, we deploy a new workload to the Cluster that has an auto-injected Vault sidecar. The Vault sidecar takes care of communication with the external Vault server, and based on the annotation directives, writes a text file “/vault/secrets/mysecret.txt” that contains the sales secret.
We can also curl directly against localhost:8200 to fetch the secret, which means even the most rudimentary tools can fetch their allowed secrets.
# pull template that uses Vault sidecar
wget https://raw.githubusercontent.com/fabianlee/blogcode/master/vault/jwt/tiny-tools-template-sidecar.yaml

# apply deployment that uses annotations to control Vault sidecar
cat tiny-tools-template-sidecar.yaml | DOLLAR_SIGN='$' auth_path=/auth/minikube vault_secret_path=/secret/webapp/sales vault_role=sales ns=sales name=sales-auth envsubst | kubectl apply -f -

# secret placed as file into container by sidecar annotations
$ kubectl exec -it deployment/tiny-tools-sidecar-sales-auth -n sales -c tiny-tools -- cat /vault/secrets/mysecret.txt
num_competitors = 6
q1_profitability = true

# instead of having to curl remote Vault server, curl localhost:8200
kubectl exec -it deployment/tiny-tools-sidecar-sales-auth -n sales -c tiny-tools -- curl -Ss http://localhost:8200/v1/secret/data/webapp/sales | jq -Mc .data.data
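If the secret file does not appear, a reasonable place to start debugging is confirming the pod was actually mutated (it should carry an extra ‘vault-agent’ container) and reading the agent’s logs:

# pod should now list the injected 'vault-agent' container alongside tiny-tools
kubectl get pods -n sales -o jsonpath='{.items[*].spec.containers[*].name}'
# inspect the Vault agent sidecar logs for auth or template rendering errors
kubectl logs deployment/tiny-tools-sidecar-sales-auth -n sales -c vault-agent --tail=20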
REFERENCES
stackoverflow.com, k3s allowing public access to OIDC using api args and clusterrole
Banzaicloud blog, step by step through OIDC discovery for Kubernetes
Github rancher issue, public access to OIDC for rancher
Google blog, new Kubernetes bound service account tokens
Boris Djurdjevic, OIDC for kubectl access
smallstep, binary utility for JWT and certificates
mmerill3 on github, testing service account token volume projection
Stackoverflow, enabling OIDC endpoints by unauthenticated users for k3s
HashiCorp Vault docs, JWT overview
HashiCorp Vault docs, JWT auth parameters
Github jwker, utility for JWT to PEM conversions
Datadog, enabling JWT auth for gathering Vault metrics
JWK RFC 7517, detailed specification
JWT debugger that shows public/private keys
HashiCorp docs, check validity of JWT token in kubernetes
rtfm.co.ua, Kubernetes service accounts and JWT token authentication
HashiCorp Vault, helm variables explained
HashiCorp Vault docs, Vault dev server on local host
HashiCorp Vault docs, sidecar annotations
HashiCorp, Better together JWT and Vault in modern apps
Kubernetes docs, token Automounting in pod
stackoverflow.com, explaining message signing and the reversed roles of public/private key
NOTES
If you need to upgrade minikube and its default Kubernetes version
wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube-linux-amd64
sudo mv minikube-linux-amd64 /usr/local/bin/minikube
minikube delete
minikube start
Copy a file from local host to pod
pod_name=$(kubectl get pod -l app=tiny-tools-sales-auth -n sales -o jsonpath='{.items[].metadata.name}')
kubectl cp /tmp/minikube.jwk $pod_name:/tmp/minikube.jwk -n sales -c tiny-tools
Check expiration date of JWT
date -d @$(cat tiny-tools-sales-auth.jwt | step crypto jwt inspect --insecure | jq .payload.exp)
Show projected service account token from deployment
# show volume mount
kubectl get deployment tiny-tools-sales-auth -n sales -o=jsonpath='{.spec.template.spec.containers[].volumeMounts}' | jq

# show entire projected volume
kubectl get deployment tiny-tools-sales-auth -n sales -o=jsonpath='{.spec.template.spec.volumes[].projected}' | jq

# show just service account token section
kubectl get deployment tiny-tools-sales-auth -n sales -o=jsonpath='{.spec.template.spec.volumes[].projected.sources[?(@.serviceAccountToken)]}' | jq
Inspecting the JWT from a non-expiring service account secret
kubectl get secret sales-auth-with-token -n sales -o=jsonpath='{.data.token}' | base64 -d | step crypto jwt verify --key minikube.jwk --iss kubernetes/serviceaccount --subtle
If you need to match sidecar version to server version
vault --version
helm upgrade --install vault hashicorp/vault \
  --namespace vault --create-namespace \
  --set "injector.logLevel=trace" \
  --set "global.externalVaultAddr=http://$vaultIP:8200" \
  --set agentImage.tag=1.15.2
Install Vault server on minikube in non-development mode (but with only one key share)
minikube start
minikube status

helm repo add hashicorp https://helm.releases.hashicorp.com
helm search repo hashicorp/vault

# install Vault
vault_ns=vault
helm upgrade --install vault hashicorp/vault --namespace $vault_ns --create-namespace --set "server.logLevel=trace"
kubectl get all -n $vault_ns

# initialize Vault
kubectl exec -it vault-0 -n $vault_ns -- vault operator init -key-shares=1 -key-threshold=1 -format=json | tee cluster-keys.json

# unseal Vault
VAULT_UNSEAL_KEY=$(jq -r ".unseal_keys_b64[]" cluster-keys.json)
kubectl exec -it vault-0 -n $vault_ns -- vault operator unseal $VAULT_UNSEAL_KEY

# first time login to Vault
jq -r ".root_token" cluster-keys.json
kubectl exec -it vault-0 -n $vault_ns -- sh
$ vault login
$ exit
Use vault token for introspection, then pulling secret
curl -s -H "X-Vault-Token: $vault_token" http://vault.vault.svc.cluster.local:8200/v1/auth/token/lookup-self | jq curl -s -H "X-Vault-Token: $vault_token" http://vault.vault.svc.cluster.local:8200/v1/secret/data/demo/app | jq '.data.data'
If .well-known/openid-configuration needs to be open to unauthenticated access
curl -v --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://kubernetes.default.svc.cluster.local:443/.well-known/openid-configuration

kubectl create clusterrolebinding service-account-issuer-discovery-unauthenticated --clusterrole=system:service-account-issuer-discovery --group=system:unauthenticated
If you have problems with issuer or audience, see this link
# check the 'aud' of a default token for system
# this must match your 'aud' and 'bound_issuers'
# may be different than expected if kube-API '--api-audiences' flag set
echo '{"apiVersion": "authentication.k8s.io/v1", "kind": "TokenRequest"}' | kubectl create -f- --raw /api/v1/namespaces/default/serviceaccounts/default/token | jq -r '.status.token' | cut -d . -f2 | base64 -d | jq .aud

kubectl get --raw /.well-known/openid-configuration | jq .issuer
JWT setup through OIDC self-discovery can also be done, but beware that many network connectivity and authentication problems can arise. It is typically easier to just use jwt_validation_pubkeys, which is all self-discovery is doing anyway.
# shows 'issuer' and 'jwks_uri'
kubectl get --raw /.well-known/openid-configuration | jq

# do not add trailing slash or .well-known path
public_issuer=https://URL-TO-CLUSTER:port
private_issuer=https://kubernetes.default.svc.cluster.local

vault auth list
vault auth enable --path=minikube2 jwt

# get ca.crt from any of the pods in cluster
kubectl exec -it deployment/tiny-tools-sales-auth -n sales -c tiny-tools -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt > ca.crt
ca=$(cat ca.crt)

# will get 400 unless clusterrolebinding is in place
kubectl create clusterrolebinding service-account-issuer-discovery-unauthenticated --clusterrole=system:service-account-issuer-discovery --group=system:unauthenticated

# configure
vault write auth/minikube2/config oidc_discovery_url=$public_issuer oidc_discovery_ca_pem="$ca" bound_issuer=$private_issuer