Kubernetes: targeting workloads to a node pool/group using taints and tolerations

If you have specific intentions for a Kubernetes node pool/group (workload isolation, CPU type, etc.), then you can assign labels to attract workloads, in conjunction with taints to repel any workloads that do not have an explicit toleration applied.

Although the general-purpose kubectl utility can assign labels and taints to individual nodes, assigning them at the node pool/group level is preferable: regardless of node scaling events and rebuilds, every node member in that grouping will always inherit the labels and taints.

Node pool/group labels and taints

I won’t detail the exact commands for assigning a label or taint to a node pool/group, because they differ for each hyperscaler CLI.  And if you are provisioning with a Terraform resource, the required parameters can be found in its documentation.
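As a single hedged illustration (GKE only; the cluster, pool, and zone names below are placeholders), a managed node pool can be created with both a label and a taint in one command:

# illustrative GKE example: create node pool with a label and a taint
gcloud container node-pools create batch-pool \
  --cluster=my-cluster --zone=us-central1-a \
  --node-labels=purpose=batch \
  --node-taints=processingtype=batch:NoSchedule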

Labels

Let’s assume that you have applied the following as a label to the node pool/group:

  • purpose=batch

This label allows workloads to use a ‘spec.nodeSelector’ to target the nodes (affinity attraction).
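Once the label has propagated from the node pool/group down to its member nodes, a quick way to confirm it is:

# show the 'purpose' label value as a column for each node
kubectl get nodes -L purpose
# or dump all labels on every node
kubectl get nodes --show-labels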

Taints

And let’s assume you have applied the following taint to the node pool/group:

  • { key=processingtype, value=batch, effect=NoSchedule }

This taint repels any other workloads from running on the node pool/group, unless they explicitly define a toleration.
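To see the repelling side in action, you can launch a throwaway pod with no toleration; assuming every eligible node carries the taint, the pod should remain Pending with a FailedScheduling event (the pod name and image below are just placeholders):

# throwaway pod with no toleration defined
kubectl run taint-test --image=nginx
# stays Pending if all candidate nodes carry the taint
kubectl get pod taint-test
# scheduling events show which taint blocked placement
kubectl describe pod taint-test
# cleanup
kubectl delete pod taint-test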

Assigning workloads to node pool/group

From the Kubernetes workload perspective (e.g. Deployment, StatefulSet), you need to add a ‘spec.nodeSelector’ and ‘spec.tolerations’ to have your workload assigned to the new node pool/group.

Given our label and taint above, we would add the following to a Deployment manifest to target the node pool/group.

spec:
  template:
    spec:

      # place on node with this label
      nodeSelector:
        purpose: batch

      # allow on node with this taint
      tolerations:
        - key: processingtype
          operator: Equal
          value: batch
          effect: NoSchedule

A full example can be found on my github repo as tiny-tools-nodeselector-tolerations.yaml.
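For quick reference, here is a minimal, self-contained Deployment using the same nodeSelector and tolerations; the name and busybox image are illustrative placeholders, not the contents of the repo file above.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      # place on node with the pool/group label
      nodeSelector:
        purpose: batch
      # allow on node with the pool/group taint
      tolerations:
        - key: processingtype
          operator: Equal
          value: batch
          effect: NoSchedule
      containers:
        - name: worker
          image: busybox:1.36
          command: ["sh", "-c", "sleep 3600"]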

 

REFERENCES

kubectl label

kubectl taints

kubectl taints and tolerations

linuxhandbook, how to label nodes

GKE node pool concept

EKS node group concept

Azure node pool concept

github fabianlee, example of K8S Deployment manifest using nodeSelector and tolerations

NOTES

Applying a label and taint to a single-node minikube cluster using kubectl

# apply label
kubectl label nodes minikube purpose=batch
# list labels
kubectl label --list nodes minikube

# apply taint
kubectl taint nodes minikube processingtype=batch:NoSchedule
# list taints
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints --no-headers
# list taints using jq utility
kubectl get nodes -o json | jq '.items[].spec.taints'
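
And to undo those changes on the same minikube node, the label and taint can be removed with kubectl's trailing-dash syntax:

# remove label
kubectl label nodes minikube purpose-
# remove taint
kubectl taint nodes minikube processingtype=batch:NoSchedule-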