The default setting for pod DNS resolution has CoreDNS use the settings from the underlying OS of the worker node. If your Kubernetes VMs are joined to multiple networks or search domains, this can cause unexpected results as well as performance issues.
If you are using kubeadm, you can provide an independent resolv.conf file for the kubelet that will be used by CoreDNS and will not conflict with the OS-level settings.
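To see what the kubelet is currently using, you can inspect the ‘resolvConf’ value in the kubeadm-managed ConfigMap (named ‘kubelet-config’ on recent releases, ‘kubelet-config-<version>’ on older ones):

kubectl get cm -n kube-system kubelet-config -o=jsonpath='{.data.kubelet}' | grep resolvConf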
Solution Overview
A pod created without any explicit DNS policy or options uses the ‘ClusterFirst’ policy, which forwards queries for non-cluster names to the upstream resolvers of the worker node and also has the pod inherit the DNS search suffixes of the worker node.
This may not be ideal for Kubernetes intra-cluster resolution, and we may choose to create a custom resolv.conf used only for cluster DNS resolution.
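You can see the inherited suffixes by printing a throwaway pod’s resolv.conf; the last search entry below illustrates a hypothetical node-level domain leaking into the pod:

kubectl run -ti --rm dns-check --image=busybox:1.35.0-glibc --restart=Never -- cat /etc/resolv.conf

# typical output (values illustrative)
# search default.svc.cluster.local svc.cluster.local cluster.local corp.example.com
# nameserver 10.96.0.10
# options ndots:5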
To implement this, we need to create a custom ‘/etc/kubeadm-resolv.conf’, update the kubelet ConfigMap with this custom file path, restart the kubelet on each node, and then finally restart the CoreDNS pods for the changes to take effect.
Create custom resolv.conf
We will create a simple custom resolv.conf file named “/etc/kubeadm-resolv.conf” that contains the upstream DNS server for any external domains.
echo "nameserver 192.168.1.1" | sudo tee /etc/kubeadm-resolv.conf
This file should be created on all master and worker VMs, since the CoreDNS pods can be scheduled on any node.
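If you manage the nodes over SSH, a small loop can distribute the file to every VM; the hostnames here are hypothetical:

for node in k8s-master-1 k8s-worker-1 k8s-worker-2; do
  ssh $node "echo 'nameserver 192.168.1.1' | sudo tee /etc/kubeadm-resolv.conf"
done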
Update kubelet resolvConf
By default, the kubelet ConfigMap has the value of ‘resolvConf’ embedded as ‘/etc/resolv.conf’. Update this value to point at the custom file; this ConfigMap is where newly joining kubeadm nodes get their initial kubelet configuration, while existing nodes must also have their local kubelet config updated.
# make sure we have the jq utility
sudo apt install jq -y

# update resolvConf key in ConfigMap
kubectl get cm -n kube-system kubelet-config -o=json | \
  jq 'del(.metadata.resourceVersion,.metadata.uid,.metadata.selfLink,.metadata.creationTimestamp,.metadata.annotations,.metadata.generation,.metadata.ownerReferences,.status)' | \
  sed -E 's#resolvConf: [^\n ]*\\n#resolvConf: /etc/kubeadm-resolv.conf\\n#' | \
  kubectl replace -f -

# the kubelet runs as a systemd service, not a DaemonSet, so the change
# must be applied locally and the service restarted on each node
sudo sed -i 's#^resolvConf:.*#resolvConf: /etc/kubeadm-resolv.conf#' /var/lib/kubelet/config.yaml
sudo systemctl restart kubelet
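On each node you can then confirm the kubelet is pointed at the custom file:

sudo grep resolvConf /var/lib/kubelet/config.yaml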
Restart CoreDNS pods
The CoreDNS pods need to be restarted; they run with ‘dnsPolicy: Default’, so on restart the kubelet regenerates their /etc/resolv.conf from the custom file, which the Corefile ‘forward . /etc/resolv.conf’ plugin then uses for external names.
# enable CoreDNS logging if not found
kubectl get cm -n kube-system coredns -o=json | grep -q log || \
  kubectl get cm -n kube-system coredns -o=json | \
  jq 'del(.metadata.resourceVersion,.metadata.uid,.metadata.selfLink,.metadata.creationTimestamp,.metadata.annotations,.metadata.generation,.metadata.ownerReferences,.status)' | \
  sed 's#\.:53 {#\.:53 {\\n log#' | \
  kubectl replace -f -

# restart all CoreDNS pods
kubectl get pod -n kube-system -l k8s-app=kube-dns --no-headers | \
  awk '{print $1}' | \
  xargs -I{} kubectl delete pod -n kube-system {}

# wait to be available again
kubectl wait deployment -n kube-system coredns --for condition=Available=True --timeout=90s

# tail CoreDNS logs
kubectl logs deployment/coredns -n kube-system -f
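Once logging is enabled, the Corefile should contain the ‘log’ plugin near the top of the server block. For reference, the stock kubeadm Corefile looks roughly like the abbreviated sketch below; the ‘forward . /etc/resolv.conf’ line is what ties CoreDNS to the file the kubelet now provides:

.:53 {
    log
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}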
Validate DNS upstream
First test the internal ‘kubernetes’ service; the pod’s resolver expands the short name with its search suffixes, so the query that actually reaches CoreDNS is ‘kubernetes.default.svc.cluster.local’.
# try alpine-based image with the musl library, which can act differently than glibc
kubectl run -ti --rm alpine-musl --image=giantswarm/tiny-tools:3.12 --restart=Never --timeout=5s -- nslookup kubernetes

# try glibc-based image
kubectl run -ti --rm busybox-libc --image=busybox:1.35.0-glibc --restart=Never --timeout=5s -- nslookup kubernetes
The DNS queries will be output in the CoreDNS logs tailed earlier.
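With the ‘log’ plugin enabled, each query appears as a line similar to the following (client address, ports, and timing are illustrative):

[INFO] 10.244.0.9:54887 - 31042 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000241s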
Then test an external name such as ‘google.com’, which must use the upstream from /etc/kubeadm-resolv.conf.
# alpine musl image
kubectl run -ti --rm alpine-musl --image=giantswarm/tiny-tools:3.12 --restart=Never --timeout=5s -- nslookup google.com

# libc image
kubectl run -ti --rm busybox-libc --image=busybox:1.35.0-glibc --restart=Never --timeout=5s -- nslookup google.com
Again, these queries will be output in the CoreDNS logs tailed earlier.
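If the external lookup fails, you can check that the upstream itself answers directly from a node (assuming the ‘dig’ utility from dnsutils/bind-utils is installed):

dig @192.168.1.1 google.com +short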
REFERENCES
kubernetes.io, CoreDNS for Service Discovery
digitalocean.com, how to customize CoreDNS
coredns.io, corefile configuration explained
kubernetes.io, customizing DNS
kubernetes.io, troubleshooting DNS resolution
pracucci.com, why 5 ndots for kubernetes dns lookup can negatively affect performance
infracloud.io, using CoreDNS effectively
NOTES
get Corefile from the CoreDNS ConfigMap
$ kubectl get configmap -n kube-system coredns -o=jsonpath='{.data.Corefile}'
The DNS policy can be specified at pod creation using ‘--overrides’:
kubectl run -ti --rm busybox-glib --image=giantswarm/tiny-tools:3.12 --restart=Never --timeout=5s --overrides='{"kind":"Pod", "apiVersion":"v1", "spec": {"dnsPolicy":"Default"}}' -- nslookup prometheus.kubeadm.local
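For full control, Kubernetes also supports ‘dnsPolicy: None’ with an explicit ‘dnsConfig’ block (see the kubernetes.io customizing DNS reference). A minimal sketch in the same overrides style, with the pod name, nameserver, search suffix, and ndots values purely illustrative:

# run a pod that ignores both node and cluster DNS, using only the given dnsConfig
kubectl run -ti --rm dns-none --image=giantswarm/tiny-tools:3.12 --restart=Never \
  --overrides='{"kind":"Pod","apiVersion":"v1","spec":{"dnsPolicy":"None","dnsConfig":{"nameservers":["192.168.1.1"],"searches":["kubeadm.local"],"options":[{"name":"ndots","value":"2"}]}}}' \
  -- cat /etc/resolv.conf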