The Kubernetes ingress controller is the most underappreciated piece of your perimeter. It is internet-facing. It parses untrusted input. It has service-account access to every Ingress object in every namespace. And it typically runs as a single shared deployment for the entire cluster.
That's a DMZ. Treat it like one.
What the controller does that you forgot
When you install nginx-ingress, Traefik, Istio's ingress gateway, or any of their siblings, you give it:
- A ClusterRole that can list Ingress resources cluster-wide — so it can configure itself.
- A ClusterRole that can read Secrets referenced by Ingress (TLS certs).
- A network path to every pod it needs to proxy to.
- The ability to reload configuration based on annotations in Ingress objects — which any developer with namespace access can write.
That last one is where it gets interesting.
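Condensed into a single manifest, those grants look roughly like this (an illustrative sketch in the spirit of common controller Helm charts, not any one chart's exact RBAC; the role name is a placeholder):

```yaml
# Illustrative only: the shape of the ClusterRole a typical ingress
# controller chart installs, not copied from any specific project.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-controller            # placeholder name
rules:
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses", "ingressclasses"]
    verbs: ["get", "list", "watch"]   # every Ingress, every namespace
  - apiGroups: [""]
    resources: ["secrets", "configmaps"]
    verbs: ["get", "list", "watch"]   # includes every TLS cert
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "list", "watch"]   # a map of every backend
```

Every rule here is defensible in isolation. The finding is the combination, sitting behind one internet-facing pod.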
The controller-as-attack-surface pattern
The 2024 nginx-ingress CVE cluster (CVE-2024-7646 and friends) taught everyone that annotations on an Ingress object are, effectively, input to the controller's templating engine. A user with permission to create Ingress resources in any namespace could, through a crafted annotation, inject nginx configuration — including directives that leak other namespaces' traffic or execute arbitrary Lua.
The CVEs were patched. The pattern persists across controllers because it's structural: anyone who can create Ingress resources can configure the controller, and the controller is cluster-wide.
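A concrete, deliberately tame illustration of the shape, using ingress-nginx's snippet annotation. Recent ingress-nginx versions ship with snippet annotations disabled by default for exactly this reason; this sketch assumes an older or permissively configured controller, and the host and service names are made up:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: innocuous-app
  namespace: dev-sandbox      # any namespace the attacker can write to
  annotations:
    # Raw nginx directives, templated straight into the controller's
    # shared nginx.conf. Harmless here; the same channel has carried
    # directives that redirect traffic or run Lua.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Injected: yes";
spec:
  ingressClassName: nginx
  rules:
    - host: sandbox.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: some-service
                port:
                  number: 80
```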
How we test it
On a Kubernetes red team:
- Enumerate the controller — version, image digest, namespace.
- Read the RBAC — what does the controller's ServiceAccount have? Where does compromising it take us?
- Namespace posture — who can create Ingress resources? What namespaces do their Ingresses reach?
- Annotation fuzzing against a dev namespace we control, iterating through known annotation abuses and close variants of them.
- TLS secret access — if we reach the controller's SA, we read every TLS cert in the cluster. Game over for mTLS trust assumptions.
- Config reload abuse — some controllers expose metrics or health endpoints with sensitive config embedded. Reachable from pods, sometimes from outside.
The findings we ship most often
- Controller version six months old or more, with at least one public CVE applicable. The upgrade story is usually a rollout concern, not a technical one.
- Ingress class not enforced, meaning any Ingress in any namespace is picked up. Attacker-controlled Ingress = attacker-controlled controller config.
- Broad namespace-creation rights via developer tooling. The identity that can create a namespace can create Ingress resources within it.
- No admission policy (CEL, Kyverno, OPA Gatekeeper) on Ingress annotations — no controls on what annotations a developer can set.
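The class-enforcement finding has a small, structural fix. A sketch assuming ingress-nginx (the class name and controller string are illustrative): give each controller its own IngressClass, mark none of them as default, and make every Ingress opt in explicitly:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: public-nginx          # one class per controller tier
  # Deliberately no ingressclass.kubernetes.io/is-default-class
  # annotation: an Ingress that names no class should match nothing.
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: app-prod
spec:
  ingressClassName: public-nginx    # explicit opt-in, no fallback
  defaultBackend:
    service:
      name: app
      port:
        number: 80
```

Pair this with the controller's own class-scoping flag so it ignores everything else; the flag name varies by controller and version.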
Hardening that actually moves the needle
- Ingress class enforcement. Every controller watches only its own class. The default "no class" match is a footgun.
- Admission policy that denies the risky annotation set. Start with a denylist of known-bad; evolve to allowlist.
- Controller isolation: one controller per tier (public, internal, partner) with distinct RBAC and network policies.
- Secret access scoped to referenced namespaces. Projected volumes plus explicit secret-RBAC annotations beat the default "read all TLS secrets" pattern.
- Upgrade cadence. Controllers patch often. Staying within 30 days of upstream is the threshold we recommend.
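The admission-policy bullet can be expressed natively on Kubernetes 1.30+ with a ValidatingAdmissionPolicy. A minimal denylist sketch that blocks ingress-nginx's snippet annotations (the risky set differs per controller, so treat the expression as a starting point, not a complete policy):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: deny-ingress-snippets
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["networking.k8s.io"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["ingresses"]
  validations:
    # ingress-nginx's snippet annotations all end in "-snippet"
    # (configuration-snippet, server-snippet, ...).
    - expression: >-
        !has(object.metadata.annotations) ||
        !object.metadata.annotations.exists(k, k.endsWith('-snippet'))
      message: "snippet annotations on Ingress objects are not allowed"
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: deny-ingress-snippets
spec:
  policyName: deny-ingress-snippets
  validationActions: ["Deny"]
```

Evolving from denylist to allowlist means inverting the expression: enumerate the annotations you accept and reject everything else.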
One quick test
Before your next red team, run this in a dev namespace you control:
kubectl get ingressclasses
kubectl auth can-i create ingress -n <target-namespace>
kubectl describe clusterrole | grep -A5 ingress
If can-i reports yes for any namespace you shouldn't reach, or the controller's ClusterRole rules grant cluster-wide access to Secrets or Pods, you already know the first three findings your red team will deliver.
The bigger pattern
The Kubernetes pattern is "everything is a resource, every resource drives a controller, every controller has privileges." Ingress controllers are the public-facing edge of that pattern — but they aren't the only one. Cert-manager, external-dns, Argo CD: same shape. Test the controllers, not just the workloads. That's where modern clusters get owned.
No marketing fluff. Unsubscribe anytime.
