
Your Kubernetes ingress is the new DMZ

PentStark Cloud · April 2, 2026 · 7 min read

The Kubernetes ingress controller is the most underappreciated piece of your perimeter. It is internet-facing. It parses untrusted input. It has service-account access to every Ingress object in every namespace. And it typically runs as a single shared deployment, one instance fronting the entire cluster.

That's a DMZ. Treat it like one.

What the controller does that you forgot

When you install ingress-nginx, Traefik, Istio's ingress gateway, or any of their siblings, you give it:

  • A ClusterRole that can list Ingress resources cluster-wide — so it can configure itself.
  • A ClusterRole that can read Secrets referenced by Ingress (TLS certs).
  • A network path to every pod it needs to proxy to.
  • The ability to reload configuration based on annotations in Ingress objects — which any developer with namespace access can write.
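The first two grants look roughly like this in a typical controller's RBAC. This is an illustrative sketch, not any specific chart's manifest; real charts add more (endpoints, configmaps, leases, events), and the role name here is hypothetical:

```yaml
# Illustrative ClusterRole bound to an ingress controller's ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-controller        # hypothetical name; varies per chart
rules:
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses", "ingressclasses"]
  verbs: ["get", "list", "watch"]   # watches every namespace
- apiGroups: [""]
  resources: ["secrets"]            # TLS certs referenced by any Ingress
  verbs: ["get", "list", "watch"]
```

Note the scope: Secrets cluster-wide, not per-namespace. That is the default most charts ship with.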

That last one is where it gets interesting.

The controller-as-attack-surface pattern

The 2024 ingress-nginx CVE cluster (CVE-2024-7646 and friends) taught everyone that annotations on an Ingress object are, effectively, input to the controller's templating engine. A user with permission to create Ingress resources in any namespace could, through a crafted annotation, inject nginx configuration — including directives that leak other namespaces' traffic or execute arbitrary Lua.

The CVEs were patched. The pattern persists across controllers because it's structural: anyone who can create Ingress resources can configure the controller, and the controller is cluster-wide.
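The shape of the abuse, shown with ingress-nginx's snippet annotation (the header here is benign, but the annotation value is pasted into the generated nginx config, so it could just as well carry a rogue location block; host and service names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: innocuous-app              # any developer-controlled namespace will do
  annotations:
    # This value lands verbatim inside the generated nginx server block.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Injected: by-tenant";
spec:
  ingressClassName: nginx
  rules:
  - host: dev.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 80
```

Current ingress-nginx releases ship with snippet annotations disabled by default (`allow-snippet-annotations: "false"`) precisely because of this class of bug — check what your deployment actually has enabled.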

How we test it

On a Kubernetes red team:

  1. Enumerate the controller — version, image digest, namespace.
  2. Read the RBAC — what does the controller's ServiceAccount have? Where does compromising it take us?
  3. Namespace posture — who can create Ingress resources? What namespaces do their Ingresses reach?
  4. Annotation fuzzing against a dev namespace we control, iterating known and near-known annotation abuses.
  5. TLS secret access — if we reach the controller's SA, we read every TLS cert in the cluster. Game over for mTLS trust assumptions.
  6. Config reload abuse — some controllers expose metrics or health endpoints with sensitive config embedded. Reachable from pods, sometimes from outside.
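Steps 1 through 3 reduce to a few kubectl one-liners against a live cluster. The label selector, role name, and namespaces below are illustrative for an ingress-nginx install; adjust to your controller:

```shell
# 1. Enumerate the controller: namespace, image (version and digest).
kubectl get pods -A -l app.kubernetes.io/name=ingress-nginx \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.spec.containers[0].image}{"\n"}{end}'

# 2. Read the RBAC behind its ServiceAccount.
kubectl get clusterrolebindings -o wide | grep -i ingress
kubectl describe clusterrole ingress-nginx    # name varies per chart

# 3. Who can create Ingress resources, and where does that reach?
kubectl auth can-i create ingresses -n prod \
  --as=system:serviceaccount:dev:default
```

These are read-only queries, so they are safe to run before scoping — they require nothing beyond your existing kubeconfig.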

The findings we ship most often

  • Controller version six months old or more, with at least one public CVE applicable. The upgrade story is usually a rollout concern, not a technical one.
  • Ingress class not enforced, meaning any Ingress in any namespace is picked up. Attacker-controlled Ingress = attacker-controlled controller config.
  • Broad namespace-creation rights via developer tooling. The identity that can create a namespace can create Ingress resources within it.
  • No admission policy (CEL, Kyverno, OPA Gatekeeper) on Ingress annotations — no controls on what annotations a developer can set.
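The third finding is quick to self-audit: iterate namespaces and ask the API server who can create Ingresses where. The group name `developers` is an assumption — substitute an identity from your own RBAC; impersonation rights are required:

```shell
# For each namespace, check whether a (hypothetical) 'developers' group
# can create Ingress objects there.
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  printf '%s: ' "$ns"
  kubectl auth can-i create ingresses -n "$ns" \
    --as=probe-user --as-group=developers
done
```

Any unexpected `yes` is an attacker-reachable path to the controller's configuration.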

Hardening that actually moves the needle

  1. Ingress class enforcement. Every controller watches only its own class. The default "no class" match is a footgun.
  2. Admission policy that denies the risky annotation set. Start with a denylist of known-bad; evolve to allowlist.
  3. Controller isolation: one controller per tier (public, internal, partner) with distinct RBAC and network policies.
  4. Secret access scoped to referenced namespaces. Projected volumes plus explicit secret-RBAC annotations beat the default "read all TLS secrets" pattern.
  5. Upgrade cadence. Controllers patch often. Staying within 30 days of upstream is the threshold we recommend.
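Item 2 can start with a built-in ValidatingAdmissionPolicy (CEL, GA in Kubernetes 1.30) rather than a separate admission controller. A minimal denylist sketch — the policy name and annotation suffixes are illustrative, and it still needs a ValidatingAdmissionPolicyBinding to take effect:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: deny-risky-ingress-annotations
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["networking.k8s.io"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["ingresses"]
  validations:
  - expression: >-
      !has(object.metadata.annotations) ||
      !object.metadata.annotations.exists(a,
        a.endsWith('configuration-snippet') || a.endsWith('server-snippet'))
    message: "snippet annotations on Ingress objects are denied by policy"
```

Start with suffixes you know are abusable, then invert to an allowlist once you have inventoried what your developers legitimately use.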

One quick test

Before your next red team, run this in a dev namespace you control:

kubectl get ingressclasses
kubectl auth can-i create ingresses -n <target-namespace>
kubectl describe clusterrole | grep -A5 -i ingress

If can-i create ingresses returns yes for a namespace you shouldn't reach, or the ClusterRole output shows Secrets or Pods granted cluster-wide, you already know the first three findings your red team will deliver.

The bigger pattern

The Kubernetes pattern is "everything is a resource, every resource drives a controller, every controller has privileges." Ingress controllers are the public-facing edge of that pattern — but they aren't the only one. cert-manager, ExternalDNS, Argo CD: same shape. Test the controllers, not just the workloads. That's where modern clusters get owned.
