The default setting for pod DNS resolution has CoreDNS use the settings from the underlying OS of the worker node. If your Kubernetes VMs are joined to multiple networks or search domains, this can cause unexpected results as well as performance issues.
If you are using K3s, you can provide an independent resolv.conf file for the Kubelet that will be used by CoreDNS and will not conflict with the OS level settings.
Solution Overview
A pod created without any explicit DNS policy or options uses the ‘ClusterFirst’ policy, which forwards queries for non-cluster names to the upstream DNS of the worker node and also has the pod inherit the DNS search suffixes of the worker node.
This may not be ideal for Kubernetes intra-cluster resolution, and we may choose to create a custom resolv.conf used only for cluster DNS resolution.
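As a quick sanity check, you can create a pod with no explicit DNS settings and confirm that Kubernetes defaulted it to ‘ClusterFirst’. The pod name ‘dnspolicy-test’ below is just an example.

# create a throwaway pod with no explicit DNS settings
kubectl run dnspolicy-test --image=busybox:1.35.0-glibc --restart=Never -- sleep 300

# prints 'ClusterFirst', the defaulted policy
kubectl get pod dnspolicy-test -o jsonpath='{.spec.dnsPolicy}{"\n"}'

# cleanup
kubectl delete pod dnspolicy-test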
To implement this, we need to create a custom ‘/etc/k3s-resolv.conf’, update the kubelet argument for resolv-conf, restart the K3s service, and finally restart the CoreDNS pods for the changes to take effect.
Create custom resolv.conf
We will create a simple custom resolv.conf file named “/etc/k3s-resolv.conf” that contains the upstream DNS server for any external domains.
echo "nameserver 192.168.1.1" | sudo tee /etc/k3s-resolv.conf
This file should be created on all the master and worker VMs, since the CoreDNS pods can be scheduled on any node.
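If you manage more than a couple of nodes, a small loop over ssh can save time. The host names below are only placeholders, substitute your own master and worker nodes.

# example only, replace with your own node host names
for host in k3s-master-1 k3s-worker-1 k3s-worker-2; do
  echo "nameserver 192.168.1.1" | ssh $host "sudo tee /etc/k3s-resolv.conf"
done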
Update kubelet arguments
Instead of using command-line flags, we will use the standard ‘/etc/rancher/k3s/config.yaml’, which K3s reads automatically at startup.
# append kubelet arg
echo 'kubelet-arg:' | sudo tee -a /etc/rancher/k3s/config.yaml
echo '- "resolv-conf=/etc/k3s-resolv.conf"' | sudo tee -a /etc/rancher/k3s/config.yaml

# check values
sudo cat /etc/rancher/k3s/config.yaml

# restart k3s service
sudo systemctl stop k3s
sleep 10
sudo systemctl start k3s
systemctl status k3s
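After the two appends, ‘/etc/rancher/k3s/config.yaml’ should contain at least the block below. Once K3s is back up, one way to confirm the flag reached the embedded kubelet is to search the service journal, assuming K3s logs the kubelet startup arguments there as it normally does.

# expected contents of /etc/rancher/k3s/config.yaml
kubelet-arg:
- "resolv-conf=/etc/k3s-resolv.conf"

# confirm the kubelet argument was applied (one way to check)
sudo journalctl -u k3s --no-pager | grep -o 'resolv-conf=[^ "]*' | tail -n1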
Restart CoreDNS pods
The CoreDNS pod(s) need to be restarted so they pick up the resolv.conf now provided by the kubelet. We also enable CoreDNS query logging so the validation lookups in the next section can be observed.
# enable CoreDNS logging if not found
kubectl get cm -n kube-system coredns -o=json | grep -q log || kubectl get cm -n kube-system coredns -o=json | jq 'del(.metadata.resourceVersion,.metadata.uid,.metadata.selfLink,.metadata.creationTimestamp,.metadata.annotations,.metadata.generation,.metadata.ownerReferences,.status)' | sed 's#\.:53 {#\.:53 {\\n log#' | kubectl replace -f -

# restart all CoreDNS pods
kubectl get pod -n kube-system -l k8s-app=kube-dns --no-headers | awk '{print $1}' | xargs -I{} kubectl delete pod -n kube-system {}

# wait to be available again
kubectl wait deployment -n kube-system coredns --for condition=Available=True --timeout=90s

# tail CoreDNS logs
kubectl logs deployment/coredns -n kube-system -f
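The reason the kubelet flag matters to CoreDNS is that the stock K3s Corefile typically forwards non-cluster names to the pod's own ‘/etc/resolv.conf’, which the kubelet now populates from /etc/k3s-resolv.conf. You can confirm the forward target with:

# show the upstream forward target in the Corefile
kubectl get configmap -n kube-system coredns -o jsonpath='{.data.Corefile}' | grep forward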
Validate DNS upstream
First test the internal ‘kubernetes’ service name, which the pod's resolver expands using the search suffix ‘default.svc.cluster.local’.
# try alpine based image with musl libc, which can behave differently than glibc
kubectl run -ti --rm alpine-musl --image=giantswarm/tiny-tools:3.12 --restart=Never --timeout=5s -- nslookup kubernetes

# try glibc based image
kubectl run -ti --rm busybox-libc --image=busybox:1.35.0-glibc --restart=Never --timeout=5s -- nslookup kubernetes
The DNS queries will be output in the CoreDNS logs tailed earlier.
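To see why the short name ‘kubernetes’ resolves, you can also dump the resolver configuration injected into a ‘ClusterFirst’ pod; it contains the cluster DNS nameserver plus search suffixes such as ‘default.svc.cluster.local’ and an ndots option.

# show the resolv.conf that the kubelet injects into a ClusterFirst pod
kubectl run -ti --rm resolv-check --image=busybox:1.35.0-glibc --restart=Never -- cat /etc/resolv.conf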
Then test an external name like ‘google.com’, which must be resolved via the upstream defined in /etc/k3s-resolv.conf.
# alpine musl image
kubectl run -ti --rm alpine-musl --image=giantswarm/tiny-tools:3.12 --restart=Never --timeout=5s -- nslookup google.com

# glibc image
kubectl run -ti --rm busybox-libc --image=busybox:1.35.0-glibc --restart=Never --timeout=5s -- nslookup google.com
The DNS queries will be output in the CoreDNS logs tailed earlier.
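If you want further proof that the external query left the cluster toward the upstream in /etc/k3s-resolv.conf, a packet capture on the node hosting the CoreDNS pod is one option (tcpdump must be installed; 192.168.1.1 matches the nameserver configured earlier).

# watch DNS traffic toward the upstream resolver while re-running the external lookup
sudo tcpdump -ni any port 53 and host 192.168.1.1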
REFERENCES
kubernetes.io, CoreDNS for Service Discovery
Digital Ocean, how to customize CoreDNS
coredns.io, corefile configuration explained
kubernetes.io, customizing DNS
kubernetes.io, troubleshooting DNS resolution
pracucci.com, why 5 ndots for kubernetes dns lookup can negatively affect performance
k3s docs, setting extra kubelet args
k3s docs, setting resolv-conf flag
infracloud.io, using CoreDNS effectively
NOTES
get Corefile from CoreDNS configmap
$ kubectl get configmap -n kube-system coredns -o=jsonpath='{.data.Corefile}'
The DNS policy can be specified at pod creation using ‘overrides’
kubectl run -ti --rm busybox-glib --image=giantswarm/tiny-tools:3.12 --restart=Never --timeout=5s --overrides='{"kind":"Pod", "apiVersion":"v1", "spec": {"dnsPolicy":"Default"}}' -- nslookup prometheus.k3s.local
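For completeness, a pod can also bypass inheritance entirely with ‘dnsPolicy: None’ and an explicit ‘dnsConfig’. The values below are only illustrative; 10.43.0.10 is the default K3s cluster DNS service IP, and the pod name is arbitrary.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dns-none-example
spec:
  dnsPolicy: None
  dnsConfig:
    nameservers:
    - 10.43.0.10
    searches:
    - default.svc.cluster.local
    - svc.cluster.local
    - cluster.local
    options:
    - name: ndots
      value: "2"
  containers:
  - name: shell
    image: busybox:1.35.0-glibc
    command: ["sleep", "300"]
EOF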