Kube-Oddities - The Quirks That Keep Kubernetes Interesting - Marcus Noble & Márk Sági-Kazár

A presentation at KubeCon + CloudNativeCon Europe in March 2026 in Amsterdam, Netherlands by Marcus Noble

Slide 1

Kube-Oddities
The Quirks That Keep Kubernetes Interesting
KubeCon + CloudNativeCon Europe, March 25th 2026

Slide 2

Hi 👋 I’m Marcus Noble!
● I’m a platform engineer
● I’m a CNCF Ambassador at CloudNative.Now and I run a monthly newsletter
● 7+ years experience with cloud native and Kubernetes

Hi 👋 I’m Márk Sági-Kazár! Don’t try to pronounce it
● I’m (also) a CNCF Ambassador and I organize the Hungarian Cloud Native community (meetup and KCD). Look out for KCD Budapest later this year
● I like building and breaking stuff, hoping that it helps others. My latest project is Kubernetes the Very Hard Way.

Slide 3

Kube-Oddities

Over our many combined years of working with Kubernetes we’ve seen some weird things that resulted in confusion and frustration. We’re here to share some of those with y’all to help you avoid the same. Whether you’re a Kubernetes veteran from the pre-CRD days or you’re just getting started with kubectl, we’ve got something for you!

Slide 4

Kube-Oddities

Over our many combined years of working with Kubernetes we’ve seen some weird things that resulted in confusion and frustration. We’re here to share some of those with y’all to help you avoid the same. Whether you’re a Kubernetes veteran from the pre-CRD days or you’re just getting started with kubectl, we’ve got something for you!

Kube-Control

Slide 5

Kube-Oddities

Over our many combined years of working with Kubernetes we’ve seen some weird things that resulted in confusion and frustration. We’re here to share some of those with y’all to help you avoid the same. Whether you’re a Kubernetes veteran from the pre-CRD days or you’re just getting started with kubectl, we’ve got something for you!

Kube-Control
Kube-C-T-L

Slide 6

Kube-Oddities

Over our many combined years of working with Kubernetes we’ve seen some weird things that resulted in confusion and frustration. We’re here to share some of those with y’all to help you avoid the same. Whether you’re a Kubernetes veteran from the pre-CRD days or you’re just getting started with kubectl, we’ve got something for you!

Kube-Control
Kube-C-T-L
Kube-Cuddle!!!

Slide 7

Pods

Slide 8

Sidecar Containers
● Secondary containers that run within a Pod alongside your app container

Slide 9

Sidecar Containers
● Secondary containers that run within a Pod alongside your app container
● Concept popularised by Istio service mesh

Slide 10

Sidecar Containers
● Secondary containers that run within a Pod alongside your app container
● Concept popularised by Istio service mesh
● Officially supported, and enabled by default, from Kubernetes v1.29

Slide 11

Sidecar Containers
● Secondary containers that run within a Pod alongside your app container
● Concept popularised by Istio service mesh
● Officially supported, and enabled by default, from Kubernetes v1.29

apiVersion: v1
kind: Pod
metadata:
  name: "example-pod"
spec:
  initContainers:
  - name: iptables-setup
    image: alpine

Slide 12

Sidecar Containers
● Secondary containers that run within a Pod alongside your app container
● Concept popularised by Istio service mesh
● Officially supported, and enabled by default, from Kubernetes v1.29

apiVersion: v1
kind: Pod
metadata:
  name: "example-pod"
spec:
  initContainers:
  - name: iptables-setup
    image: alpine
  containers:
  - name: application
    image: my-super-app:latest

Slide 13

Sidecar Containers
● Secondary containers that run within a Pod alongside your app container
● Concept popularised by Istio service mesh
● Officially supported, and enabled by default, from Kubernetes v1.29

apiVersion: v1
kind: Pod
metadata:
  name: "example-pod"
spec:
  initContainers:
  - name: iptables-setup
    image: alpine
  containers:
  - name: application
    image: my-super-app:latest
  sidecarContainers:
  - name: service-mesh
    image: istio:latest

Slide 14

Sidecar Containers
● Secondary containers that run within a Pod alongside your app container
● Concept popularised by Istio service mesh
● Officially supported, and enabled by default, from Kubernetes v1.29

apiVersion: v1
kind: Pod
metadata:
  name: "example-pod"
spec:
  initContainers:
  - name: iptables-setup
    image: alpine
  containers:
  - name: application
    image: my-super-app:latest
  sidecarContainers:
  - name: service-mesh
    image: istio:latest

Slide 15

Sidecar Containers
● Secondary containers that run within a Pod alongside your app container
● Concept popularised by Istio service mesh
● Officially supported, and enabled by default, from Kubernetes v1.29

apiVersion: v1
kind: Pod
metadata:
  name: "example-pod"
spec:
  initContainers:
  - name: iptables-setup
    image: alpine
  - name: service-mesh
    image: istio
    restartPolicy: Always   # ← This is the "magic"
  containers:
  - name: application
    image: my-super-app:latest

Slide 16

Image tag & SHA
● Recommended best practice - use image SHA!

apiVersion: v1
kind: Pod
metadata:
  name: "tiny-pod"
spec:
  containers:
  - name: "nginx"
    image: "nginx:1.25.1@sha256:9d6b58feebd2db…2072c9496"
    imagePullPolicy: "Always"

Slide 17

Image tag & SHA
● Recommended best practice - use image SHA!
● SHA-based image tags ensure exactly the same image is used each time, even if the tag is overwritten

apiVersion: v1
kind: Pod
metadata:
  name: "tiny-pod"
spec:
  containers:
  - name: "nginx"
    image: "nginx:1.25.1@sha256:9d6b58feebd2db…2072c9496"
    imagePullPolicy: "Always"

Slide 18

Image tag & SHA
● Recommended best practice - use image SHA!
● SHA-based image tags ensure exactly the same image is used each time, even if the tag is overwritten
● If SHA is used, the tag is completely ignored and may no longer match the SHA! ⚠
  ○ Be careful with automated dependency updaters - make sure the SHA is also updated!

apiVersion: v1
kind: Pod
metadata:
  name: "tiny-pod"
spec:
  containers:
  - name: "nginx"
    image: "nginx:1.25.1@sha256:9d6b58feebd2db…2072c9496"
    imagePullPolicy: "Always"

Slide 19

Image tag & SHA
● Recommended best practice - use image SHA!
● SHA-based image tags ensure exactly the same image is used each time, even if the tag is overwritten
● If SHA is used, the tag is completely ignored and may no longer match the SHA! ⚠
  ○ Be careful with automated dependency updaters - make sure the SHA is also updated!

apiVersion: v1
kind: Pod
metadata:
  name: "tiny-pod"
spec:
  containers:
  - name: "nginx"
    image: "nginx:1.25.1@sha256:9d6b58feebd2db…2072c9496"   # The tag is meaningless!!! This digest is actually 1.25.2 😱
    imagePullPolicy: "Always"
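The tag-vs-digest precedence can be illustrated with a toy parser. This is only a sketch: real image references follow the stricter OCI/Docker reference grammar (registry hosts, ports, etc.), which this deliberately ignores, and the function name is made up for illustration.

```python
def parse_image_ref(ref):
    """Split a container image reference into (name, tag, digest).

    Toy illustration only -- real references can also carry a registry
    host and port (e.g. "localhost:5000/nginx"), which this ignores.
    """
    name, _, digest = ref.partition("@")
    digest = digest or None
    if ":" in name:
        name, _, tag = name.rpartition(":")
    else:
        tag = None
    return name, tag, digest


name, tag, digest = parse_image_ref("nginx:1.25.1@sha256:9d6b58feebd2db")
# When a digest is present the runtime pulls by digest; the tag is kept
# only for display and is never validated against the digest.
pulled_by = digest if digest is not None else (tag or "latest")
```

So `nginx:1.25.1@sha256:…` and `nginx:9.9.9@sha256:…` with the same digest pull the exact same image - which is the oddity the slide is pointing at.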

Slide 20

🎲 Let’s play a game! 🎲 Spot the mistake: Pod Volume Edition

Slide 21

Volumes

apiVersion: v1
kind: Pod
metadata:
  name: "demo-pod"
spec:
  containers:
  - name: demo
    image: nginx
  volumes:
  - name: config-vol
    configMap:
      name: demo-html

apiVersion: v1
kind: Pod
metadata:
  name: "demo-pod"
spec:
  containers:
  - name: demo
    image: nginx
  volumes:
  - name: secret-vol
    secret:
      name: demo-html

Slide 22

Volumes

apiVersion: v1
kind: Pod
metadata:
  name: "demo-pod"
spec:
  containers:
  - name: demo
    image: nginx
  volumes:
  - name: config-vol
    configMap:
      name: demo-html

apiVersion: v1
kind: Pod
metadata:
  name: "demo-pod"
spec:
  containers:
  - name: demo
    image: nginx
  volumes:
  - name: secret-vol
    secret:
      secretName: demo-html

Why not configMapName?! WHY?!?!
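Seen side by side in one (hypothetical) manifest, the asymmetry is hard to miss - configMap volumes reference the object with `name`, secret volumes with `secretName`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod          # illustrative example, not from the slides
spec:
  containers:
  - name: demo
    image: nginx
    volumeMounts:
    - name: config-vol
      mountPath: /etc/demo/config
    - name: secret-vol
      mountPath: /etc/demo/secret
  volumes:
  - name: config-vol
    configMap:
      name: demo-html        # configMap volumes use "name"…
  - name: secret-vol
    secret:
      secretName: demo-html  # …but secret volumes use "secretName"
```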

Slide 23

Networking

Slide 24

spec.hostname

You can control the hostname that is set in a Pod with the hostname and subdomain (among other) properties.

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: default
spec:
  hostname: demo
  subdomain: example
  containers:
  - name: demo
    image: nginx

The Pod will then see this as its hostname:

/# hostname -A
demo.example.default.svc.cluster.local

(demo = hostname, example = subdomain, default = namespace - and note the "svc", even though this is a Pod.)

Great! We can configure how our Pods can be reached, right?

Slide 25

spec.hostname

You can control the hostname that is set in a Pod with the hostname and subdomain (among other) properties.

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: default
spec:
  hostname: demo
  subdomain: example
  containers:
  - name: demo
    image: nginx

The Pod will then see this as its hostname:

/# hostname -A
demo.example.default.svc.cluster.local

(demo = hostname, example = subdomain, default = namespace - and note the "svc", even though this is a Pod.)

Great! We can configure how our Pods can be reached, right?

Slide 26

spec.hostname

You can control the hostname that is set in a Pod with the hostname and subdomain (among other) properties.

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: default
spec:
  hostname: demo
  subdomain: example
  containers:
  - name: demo
    image: nginx

The Pod will then see this as its hostname:

/# hostname -A
demo.example.default.svc.cluster.local

(demo = hostname, example = subdomain, default = namespace - and note the "svc", even though this is a Pod.)

Great! We can configure how our Pods can be reached, right? 😅

Slide 27

spec.hostname

bash-5.0# nslookup demo.example.default.svc.cluster.local
Server:    10.96.0.10
Address:   10.96.0.10#53

** server can't find demo.example.default.svc.cluster.local: NXDOMAIN

Slide 28

spec.hostname

bash-5.0# nslookup demo.example.default.svc.cluster.local
Server:    10.96.0.10
Address:   10.96.0.10#53

** server can't find demo.example.default.svc.cluster.local: NXDOMAIN

bash-5.0# nslookup demo-pod.default.svc.cluster.local

Slide 29

spec.hostname

bash-5.0# nslookup demo.example.default.svc.cluster.local
Server:    10.96.0.10
Address:   10.96.0.10#53

** server can't find demo.example.default.svc.cluster.local: NXDOMAIN

bash-5.0# nslookup demo-pod.default.svc.cluster.local
Server:    10.96.0.10
Address:   10.96.0.10#53

** server can't find demo-pod.default.svc.cluster.local: NXDOMAIN

Yeah, sorry, Pods can't actually have DNS assigned to them

Slide 30

Headless Services

Ok, Pods can kinda get DNS thanks to "Headless Services". These services sit in front of pods and expose each as a DNS entry.

apiVersion: v1
kind: Service
metadata:
  name: headless
spec:
  clusterIP: None
  selector:
    app: web-server
  ports:
  - port: 80

bash-5.0# nslookup headless.default.svc.cluster.local
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:    headless.default.svc.cluster.local
Address: 10.244.7.90
Name:    headless.default.svc.cluster.local
Address: 10.244.4.67
Name:    headless.default.svc.cluster.local
Address: 10.244.5.247

Slide 31

Headless Services

Ok, Pods can kinda get DNS thanks to "Headless Services". These services sit in front of pods and expose each as a DNS entry.

apiVersion: v1
kind: Service
metadata:
  name: headless
spec:
  clusterIP: None
  selector:
    app: web-server
  ports:
  - port: 80

bash-5.0# nslookup pod-0.headless.default.svc.cluster.local
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:    pod-0.headless.default.svc.cluster.local
Address: 10.244.7.90

Each individual Pod also gets their own DNS (pod-0 is the Pod name here)
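This is also the missing piece from the spec.hostname oddity earlier: per the Kubernetes DNS specification, a Pod's hostname/subdomain FQDN only resolves when a headless Service whose name matches the subdomain exists in the same namespace. A sketch (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example            # must match the Pod's spec.subdomain
spec:
  clusterIP: None          # headless
  selector:
    app: demo
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo              # matched by the Service selector
spec:
  hostname: demo
  subdomain: example
  # With the Service above in place,
  # demo.example.default.svc.cluster.local now resolves.
  containers:
  - name: demo
    image: nginx
```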

Slide 32

🎲 Let’s play another game! 🎲 Guess the default! DNS Policy Edition

Slide 33

DNS Policy

apiVersion: v1
kind: Pod
metadata:
  name: "demo-pod"
spec:
  dnsPolicy: ????
  containers:
  - name: demo
    image: nginx

a) Default
b) ClusterFirst
c) ClusterFirstWithHostNet
d) None

All are valid values

Slide 34

DNS Policy

apiVersion: v1
kind: Pod
metadata:
  name: "demo-pod"
spec:
  dnsPolicy: ClusterFirst
  containers:
  - name: demo
    image: nginx

a) Default - Obviously not the default! 🙄
b) ClusterFirst
c) ClusterFirstWithHostNet
d) None

"Default" inherits the host node's DNS resolution config with no knowledge of Kubernetes services and endpoints.
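One place the non-default values actually matter (a hypothetical example, not from the slides): Pods with hostNetwork: true also fall back to the node's resolv.conf unless you explicitly ask for cluster DNS:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-net-pod       # illustrative name
spec:
  hostNetwork: true
  # Without this, a hostNetwork Pod uses the node's DNS config and
  # cluster Service names won't resolve from inside it.
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - name: demo
    image: nginx
```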

Slide 35

Security

Slide 36

Irrevocable Credentials

What would you say if I told you it was possible to create credentials in Kubernetes that were impossible to rotate or revoke?

Slide 37

Irrevocable Credentials

What would you say if I told you it was possible to create credentials in Kubernetes that were impossible to rotate or revoke? 😱

Slide 38

Irrevocable Credentials

Let me introduce the token request API!

kubectl create token node-controller -n kube-system

(node-controller is the ServiceAccount name)

Slide 39

Irrevocable Credentials

Let me introduce the token request API!

kubectl create token node-controller -n kube-system

Congratulations! You now have a JWT auth token associated with the node-controller Service Account that cannot be revoked!

Slide 40

Irrevocable Credentials

kubectl auth whoami
ATTRIBUTE   VALUE
Username    system:serviceaccount:kube-system:node-controller
UID         bf9b4829-fdb8-41ca-a9c9-dfd2eb7a417a
Groups      [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated]

kubectl auth can-i get pods
yes
kubectl auth can-i delete pods
yes
kubectl auth can-i create pods
no
kubectl auth can-i delete nodes
yes

Slide 41

Irrevocable Credentials

The only way to remove the credentials…

kubectl delete serviceaccount -n kube-system node-controller

The only alternative is to wait for the credentials to expire. Hopefully you've set sensible defaults.

--service-account-max-token-expiration="24h"

Slide 42

RBAC Escalate

Any Kubernetes user that has the permission to create Roles / ClusterRoles cannot (thankfully!) use this ability to escalate their permissions by creating a new Role with more permissions than they currently have.

Kubernetes says NO! 🙅

Slide 43

RBAC Escalate

Any Kubernetes user that has the permission to create Roles / ClusterRoles cannot (thankfully!) use this ability to escalate their permissions by creating a new Role with more permissions than they currently have.

Kubernetes says NO! 🙅

Unless… 😅

Slide 44

RBAC Escalate

Let me introduce you to the ESCALATE verb.

- apiGroups:
  - 'rbac.authorization.k8s.io'
  resources:
  - '*'
  verbs:
  - '*'   # Includes the ESCALATE verb, oh no! 😱

If you happen to have this you've got superpowers! You can now edit your own (Cluster)Roles with whatever you want!
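For contrast, escalate can also be granted deliberately and narrowly. A sketch (the role names are made up) that lets a controller manage a single named ClusterRole with permissions beyond its own:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: scoped-escalator     # hypothetical name
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles"]
  # "escalate" permits creating/updating roles with permissions the
  # caller doesn't itself hold; resourceNames limits the blast radius
  # to one specific ClusterRole.
  resourceNames: ["managed-app-role"]
  verbs: ["update", "escalate"]
```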

Slide 45

Admission Policy Blind Spot
● RBAC = additive-only permissions
● Admission controllers = more complex permissions

While admission controllers (webhooks and policies) are often used to remove specific permissions granted via RBAC - e.g. a DELETE permission could be further restricted to resources with a specific naming pattern - there is a huge blind spot.

Slide 46

Admission Policy Blind Spot
● RBAC = additive-only permissions
● Admission controllers = more complex permissions

While admission controllers (webhooks and policies) are often used to remove specific permissions granted via RBAC - e.g. a DELETE permission could be further restricted to resources with a specific naming pattern - there is a huge blind spot.

Admission controllers cannot restrict themselves! Any API calls related to admission webhooks or admission policies skip over the admission controller phase!

Slide 47

Admission Policy Blind Spot

// IsExemptAdmissionConfigurationResource determines if an admission.Attributes object is describing
// the admission of a ValidatingWebhookConfiguration or a MutatingWebhookConfiguration
// or a ValidatingAdmissionPolicy or a ValidatingAdmissionPolicyBinding
func IsExemptAdmissionConfigurationResource(attr admission.Attributes) bool {
	gvk := attr.GetKind()
	if gvk.Group == "admissionregistration.k8s.io" {
		if gvk.Kind == "ValidatingWebhookConfiguration" || gvk.Kind == "MutatingWebhookConfiguration" ||
			gvk.Kind == "ValidatingAdmissionPolicy" || gvk.Kind == "ValidatingAdmissionPolicyBinding" ||
			gvk.Kind == "MutatingAdmissionPolicy" || gvk.Kind == "MutatingAdmissionPolicyBinding" {
			return true
		}
	}
	return false
}

Taken from: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/admission/plugin/webhook/predicates/rules/rules.go

Slide 48

Operations

Slide 49

crictl pods != kubectl get pods

Slide 50

CRI-Cuddle!!! crictl pods != kubectl get pods

Slide 51

crictl pods != kubectl get pods
● containerd has "Pods", these are not the same as Kubernetes "Pods"

/ # crictl pods --latest --output yaml
items:
- annotations:
    kubernetes.io/config.seen: "2026-03-03T13:32:20.327405939Z"
    kubernetes.io/config.source: api
  createdAt: "1772544742252447431"
  id: 40188e7730d6bba51f2674662d66a118396ca18931d633219bf5f93edc056f41
  labels:
    app.kubernetes.io/managed-by: kubectl-debug
    io.kubernetes.pod.name: node-debugger-talos-192-168-1-168-m8zgz
    io.kubernetes.pod.namespace: default
    io.kubernetes.pod.uid: 43de432d-29e3-43c6-a726-b06859b93a48
  metadata:
    attempt: 0
    name: node-debugger-talos-192-168-1-168-m8zgz
    namespace: default
    uid: 43de432d-29e3-43c6-a726-b06859b93a48
  runtimeHandler: ""
  state: SANDBOX_READY

Slide 52

Static Manifests
● Every node can define Pods that will run on itself
● Managed by the kubelet, not the Kubernetes API server
● The kubelet attempts to create a "mirror pod" on the API server as a read-only view
● Cannot be edited or deleted via the Kubernetes API
● Can create hidden pods

Slide 53

Static Manifests
● Every node can define Pods that will run on itself
● Managed by the kubelet, not the Kubernetes API server
● The kubelet attempts to create a "mirror pod" on the API server as a read-only view
● Cannot be edited or deleted via the Kubernetes API
● Can create hidden pods

Wait! What?!

Slide 54

Static Manifests
● Every node can define Pods that will run on itself
● Managed by the kubelet, not the Kubernetes API server
● The kubelet attempts to create a "mirror pod" on the API server as a read-only view
● Cannot be edited or deleted via the Kubernetes API
● Can create hidden pods

apiVersion: v1
kind: Pod
metadata:
  name: "demo-pod"
  namespace: "not-a-namespace"
spec:
  dnsPolicy: ClusterFirst
  containers:
  - name: demo
    image: nginx

$> kubectl get pods -n not-a-namespace
No resources found in not-a-namespace namespace.
$> kubectl get namespace not-a-namespace
Error from server (NotFound): namespaces "not-a-namespace" not found
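Where the kubelet looks for these manifests is part of its own configuration, not anything the cluster serves. A sketch of the relevant KubeletConfiguration field (paths vary by distro; /etc/kubernetes/manifests is the kubeadm default):

```yaml
# KubeletConfiguration: the kubelet's local config file,
# not an object stored in the API server.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Any Pod manifest dropped into this directory becomes a static Pod,
# managed directly by the kubelet on that node.
staticPodPath: /etc/kubernetes/manifests
```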

Slide 55

Kubelet Standalone Mode
● Useful for running Pods on single compute instances
● Leverages the kubelet to run containers with a subset of Kubernetes features including:
  ○ Pods
  ○ initContainers
  ○ CNI
  ○ Health checks (probes)
● Limited to what is possible with static manifests (so no ConfigMaps or Volumes)
● Access the kubelet via its localhost REST endpoint or cyberark/kubeletctl for a more kubectl-like feel

Slide 56

Kubernetes Lite!

Kubelet Standalone Mode
● Useful for running Pods on single compute instances
● Leverages the kubelet to run containers with a subset of Kubernetes features including:
  ○ Pods
  ○ initContainers
  ○ CNI
  ○ Health checks (probes)
● Limited to what is possible with static manifests (so no ConfigMaps or Volumes)
● Access the kubelet via its localhost REST endpoint or cyberark/kubeletctl for a more kubectl-like feel

Slide 57

If you’ve enjoyed this talk…

Check out Marcus’ Pod Deep Dive: The Interesting Bits talk
https://youtu.be/E_r56x92KZw

Check out Márk’s Kubernetes the Very Hard Way on iximiuz Labs
https://labs.iximiuz.com/courses/kubernetes-the-very-hard-way-0cbfd997

Slide 58

Slides and resources available at:
https://go-get.link/kubecon-eu-26

Contact Marcus at:
🌐 MarcusNoble.com
🦋 @averagemarcus.bsky.social
🐘 @Marcus@k8s.social

Contact Márk at:
🌐 sagikazarmark.com
🦋 @sagikazarmark.com

Feedback and suggestions welcome:
https://talkpulse.app/feedback/28XKGMMT

Thank you