
Building a Pod Security Scanner You Actually Use

Most teams know they should scan their pods for security misconfigurations. Most teams don’t. The tooling exists — OPA Gatekeeper, Kyverno, kubesec — but it lives in CI pipelines or admission controllers, far from the moment you’re actually staring at a misbehaving workload. Clusterfudge embeds a security scanner directly in the desktop client, so checking pod security posture is one click away from the resource list.

What We Actually Check

The scanner aligns with Kubernetes Pod Security Standards (PSS) — the three-tier model that replaced the deprecated PodSecurityPolicy, removed in Kubernetes 1.25. Every pod gets classified as privileged, baseline, or restricted based on what its containers are allowed to do.
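As a rough sketch of how that classification works — the type and function names here are illustrative, not Clusterfudge's actual API, and a real implementation inspects far more fields (init containers, volumes, seccomp profiles, and so on):

```go
package main

import "fmt"

// Level mirrors the three Pod Security Standards tiers.
type Level int

const (
	Restricted Level = iota // meets the strictest profile
	Baseline                // minimally restrictive; blocks known escalations
	Privileged              // unrestricted; host access possible
)

// classify is a deliberately simplified sketch over a decoded pod spec.
func classify(spec map[string]any) Level {
	// Any host namespace immediately lands in the privileged tier.
	if spec["hostNetwork"] == true || spec["hostPID"] == true || spec["hostIPC"] == true {
		return Privileged
	}
	containers, _ := spec["containers"].([]any)
	restricted := true
	for _, c := range containers {
		cm, ok := c.(map[string]any)
		if !ok {
			continue
		}
		sc, _ := cm["securityContext"].(map[string]any)
		if sc["privileged"] == true {
			return Privileged
		}
		// Restricted demands the hardening knobs be set explicitly.
		if sc["runAsNonRoot"] != true || sc["readOnlyRootFilesystem"] != true {
			restricted = false
		}
	}
	if restricted {
		return Restricted
	}
	return Baseline
}

func main() {
	spec := map[string]any{
		"containers": []any{
			map[string]any{"securityContext": map[string]any{"privileged": true}},
		},
	}
	fmt.Println(classify(spec) == Privileged) // true
}
```

The important property is that classification is monotone: a single privileged-tier finding outweighs any number of satisfied restricted-tier checks.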

The checks are deliberately concrete:

  • Privileged containers — A container with privileged: true has full access to the host kernel. This is a critical finding, always.
  • Running as root — If runAsNonRoot isn’t explicitly set, the container may run as UID 0. Flagged as a warning because some workloads legitimately need root during init.
  • Writable root filesystem — Without readOnlyRootFilesystem: true, a compromised container can write to its own filesystem — useful for attackers dropping binaries.
  • Dangerous capabilities — Adding SYS_ADMIN or NET_ADMIN effectively grants host-level privileges through the back door.
  • Missing resource limits — No CPU or memory limits means a single runaway container can starve the node. This is a security concern, not just a reliability one.
  • Host namespace access — hostNetwork, hostPID, and hostIPC break the isolation boundary between pod and node. Critical when present.
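To make one of these concrete, here is a sketch of the dangerous-capabilities check — the deny-list and function names are illustrative, not Clusterfudge's actual implementation, and a real scanner covers more capabilities and init containers too:

```go
package main

import "fmt"

// dangerousCaps is an illustrative deny-list; a production check
// would track the full set of host-equivalent capabilities.
var dangerousCaps = map[string]bool{
	"SYS_ADMIN":  true,
	"NET_ADMIN":  true,
	"SYS_PTRACE": true,
}

// checkCapabilities walks each container's
// securityContext.capabilities.add and reports deny-listed entries.
func checkCapabilities(spec map[string]any) []string {
	var findings []string
	containers, _ := spec["containers"].([]any)
	for _, c := range containers {
		cm, _ := c.(map[string]any)
		sc, _ := cm["securityContext"].(map[string]any)
		caps, _ := sc["capabilities"].(map[string]any)
		added, _ := caps["add"].([]any)
		for _, a := range added {
			if name, ok := a.(string); ok && dangerousCaps[name] {
				findings = append(findings,
					fmt.Sprintf("%s: dangerous capability %s", cm["name"], name))
			}
		}
	}
	return findings
}

func main() {
	spec := map[string]any{
		"containers": []any{
			map[string]any{
				"name": "app",
				"securityContext": map[string]any{
					"capabilities": map[string]any{"add": []any{"SYS_ADMIN"}},
				},
			},
		},
	}
	fmt.Println(checkCapabilities(spec)) // [app: dangerous capability SYS_ADMIN]
}
```

Because every type assertion degrades to a nil map or slice on failure, a malformed spec simply yields no findings rather than a panic — the same tolerance the cluster-wide scan below needs.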

Why Not Just Use an Admission Controller?

Admission controllers are the right place to enforce policy — they prevent bad specs from ever being applied. But they have a blind spot: they only see resources at admission time. A pod that was deployed before the policy existed, or one that was exempted during an incident, sits in the cluster unchecked.

A client-side scanner fills a different role. It answers “what’s the security posture of what’s actually running right now?” — not “what would be blocked if someone tried to deploy it.” This is the difference between a fire alarm and a fire inspection.

In practice, the workflow looks like this: you’re investigating a production issue, you pull up the namespace, and the scanner shows you that three pods are running privileged with host networking. That context changes how you think about the problem — and it’s information you wouldn’t have thought to ask for.

Scanning Across the Cluster

The single-pod check is useful, but the real power comes from scanning every pod in a namespace — or the entire cluster — at once. The scanner uses the Kubernetes dynamic client to list all pods, runs each spec through the same checks, and aggregates the results:

// Walk every pod returned by the single List call, skipping any
// item whose spec or metadata isn't shaped the way we expect.
for _, item := range result.Items {
    spec, ok := item.Object["spec"].(map[string]any)
    if !ok {
        continue
    }
    metadata, ok := item.Object["metadata"].(map[string]any)
    if !ok {
        continue
    }

    // Run the same per-pod checks, then prefix each violation's
    // field path with namespace/name so results stay attributable
    // after aggregation. v is a copy, so mutating it here is safe.
    check := security.CheckPodSecurity(spec)
    for _, v := range check.Violations {
        v.Field = fmt.Sprintf("%s/%s: %s",
            metadata["namespace"], metadata["name"], v.Field)
        scanResult.Violations = append(scanResult.Violations, v)
    }
}

Every violation carries a severity, category, the exact field path, and a remediation step. The frontend renders this as a sortable, filterable table — you can group by severity to tackle criticals first, or by category to fix all the “running as root” issues in one pass.
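A minimal sketch of that violation shape and the severity-first ordering — the field names and rank values here are assumptions for illustration, not Clusterfudge's actual types:

```go
package main

import (
	"fmt"
	"sort"
)

// Violation sketches the shape described above: severity, category,
// exact field path, and a remediation step.
type Violation struct {
	Severity    string // "critical", "warning", "info"
	Category    string // e.g. "privileged", "run-as-root"
	Field       string // e.g. "spec.containers[0].securityContext.privileged"
	Remediation string
}

// severityRank puts criticals first, the default grouping in the table.
var severityRank = map[string]int{"critical": 0, "warning": 1, "info": 2}

// sortBySeverity is stable, so violations within a severity keep the
// namespace/name order they were scanned in.
func sortBySeverity(vs []Violation) {
	sort.SliceStable(vs, func(i, j int) bool {
		return severityRank[vs[i].Severity] < severityRank[vs[j].Severity]
	})
}

func main() {
	vs := []Violation{
		{Severity: "warning", Category: "run-as-root",
			Field:       "spec.containers[0].securityContext.runAsNonRoot",
			Remediation: "set runAsNonRoot: true"},
		{Severity: "critical", Category: "privileged",
			Field:       "spec.containers[1].securityContext.privileged",
			Remediation: "remove privileged: true"},
	}
	sortBySeverity(vs)
	fmt.Println(vs[0].Severity) // critical
}
```

Grouping by category instead is the same sort with a different key function, which is why the frontend can offer both views over one result set.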

Keeping It Fast and Local

The scanner runs entirely in the Go backend process. There’s no external service to configure, no agent to deploy into the cluster, and no data leaves the machine. A full scan of a namespace with a few hundred pods completes in under a second because the only API call is a single List — all the analysis happens locally against the returned specs.

This is a deliberate design choice. Security tools that require infrastructure changes tend to get “planned for next quarter” indefinitely. A scanner that works the moment you connect to a cluster actually gets used.

The Takeaway

The best security tool is the one that’s in your path when you’re already working. By embedding pod security checks directly in the Kubernetes client, we make security posture visible without requiring anyone to change their workflow. No new pipeline stages, no new cluster components — just connect and scan.