MCP bypasses cluster readOnly restriction #1751

@PandaScience

Description

Issue submitter TODO list

  • I've looked up my issue in FAQ
  • I've searched for already existing issues here
  • I've tried running main-labeled docker image and the issue still persists there
  • I'm running a supported version of the application which is listed here

Describe the bug (actual behavior)

The readOnly cluster setting (KAFKA_CLUSTERS_0_READONLY=true) can be fully bypassed via MCP when the MCP server is enabled, while the restriction is correctly enforced in the frontend. This has been confirmed for, e.g., producing messages and deleting messages and topics.

Given the nature of the bypass (the WebFilter is never invoked for any MCP tool call), it is reasonable to assume that every write operation exposed by kafka-ui - modifying broker configs, managing ACLs, updating schemas, managing connectors - is equally unprotected.

Expected behavior

The readOnly cluster setting should be respected not only in the frontend / UI (API calls to /api/clusters/... endpoints), but also for any MCP interaction (e.g. /mcp/message or /mcp/sse). MCP clients should receive a meaningful error when a write tool is rejected.

Your installation details

  1. App version: dfa5a7e
  2. Helm chart version: 1.6.0
  3. Relevant config properties:
    KAFKA_CLUSTERS_0_READONLY: "true"
    MCP_ENABLED: "true"
    

Steps to reproduce

  1. Spin up a kafka-ui deployment in read-only mode and with MCP server enabled (see config above).
  2. Connect an MCP client and ask it to run a write action such as producing or deleting messages.
  3. Check if the action has been successfully performed, bypassing the read-only configuration.

Screenshots

No response

Logs

No response

Additional context

Root Cause

The readonly enforcement is implemented as a Spring WebFilter:

public class ReadOnlyModeFilter implements WebFilter {
  private static final Pattern CLUSTER_NAME_REGEX =
      Pattern.compile("/api/clusters/(?<clusterName>[^/]++)");

  private static final Set<Pattern> SAFE_ENDPOINTS = Set.of(
      Pattern.compile("/api/clusters/[^/]+/topics/[^/]+/(smartfilters|analysis)$")
  );

  private final ClustersStorage clustersStorage;

  @NotNull
  @Override
  public Mono<Void> filter(ServerWebExchange exchange, @NotNull WebFilterChain chain) {
    var isSafeMethod =
        exchange.getRequest().getMethod() == HttpMethod.GET
            || exchange.getRequest().getMethod() == HttpMethod.OPTIONS;
    if (isSafeMethod) {
      return chain.filter(exchange);
    }
    var path = exchange.getRequest().getPath().pathWithinApplication().value();
    var decodedPath = URLDecoder.decode(path, StandardCharsets.UTF_8);
    var matcher = CLUSTER_NAME_REGEX.matcher(decodedPath);
    if (!matcher.find()) {
      return chain.filter(exchange);
    }
    var clusterName = matcher.group("clusterName");
    var kafkaCluster = clustersStorage.getClusterByName(clusterName)
        .orElseThrow(() -> new ClusterNotFoundException(
            String.format("No cluster for name '%s'", clusterName)));
    if (!kafkaCluster.isReadOnly()) {
      return chain.filter(exchange);
    }
    var isSafeEndpoint = SAFE_ENDPOINTS.stream()
        .parallel()
        .anyMatch(endpoint -> endpoint.matcher(decodedPath).matches());
    if (isSafeEndpoint) {
      return chain.filter(exchange);
    }
    return Mono.error(ReadOnlyModeException::new);
  }
}

The filter intercepts all HTTP requests other than GET and OPTIONS, extracts the cluster name from the URL path using the regex /api/clusters/(?<clusterName>[^/]++), and throws a ReadOnlyModeException if the cluster is read-only. This works correctly for direct REST API calls to /api/clusters/....

The MCP server does NOT route tool calls through the HTTP request pipeline. Tool dispatch is handled by McpSpecificationGenerator, which calls controller methods directly:

@SuppressWarnings("unchecked")
private BiFunction<McpAsyncServerExchange, Map<String, Object>, Mono<CallToolResult>>
    methodCall(Method method, Object instance) {
  return (ex, args) -> Mono.deferContextual(ctx -> {
    try {
      ServerWebExchange serverWebExchange = ctx.get(ServerWebExchange.class);
      Mono<Object> result = (Mono<Object>) method.invoke(
          instance,
          toParams(args, method.getParameters(), ex, serverWebExchange)
      );
      return result.flatMap(this::toCallResult)
          .onErrorResume((e) -> Mono.just(this.toErrorResult(e)));
    } catch (IllegalAccessException | InvocationTargetException e) {
      log.warn("Error invoking method {}: {}", method.getName(), e.getMessage(), e);
      return Mono.just(this.toErrorResult(e));
    }
  });
}

The serverWebExchange in context is the SSE/MCP connection exchange (registered at /mcp/message and /mcp/sse), not a new exchange for an /api/clusters/... endpoint:

// SSE transport
@Bean
public WebFluxSseServerTransportProvider sseServerTransport(ObjectMapper mapper) {
  return new WebFluxSseServerTransportProvider(mapper, "/mcp/message", "/mcp/sse");
}

Since ReadOnlyModeFilter only intercepts requests whose path matches /api/clusters/, the MCP dispatch path never triggers it for any tool call.
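This can be checked in isolation: the filter's cluster-name regex simply never matches the MCP transport paths, so the read-only branch is unreachable for them. A minimal standalone demo (the regex is copied verbatim from ReadOnlyModeFilter; the paths are the MCP transport paths registered above):

```java
import java.util.regex.Pattern;

// Demonstrates why ReadOnlyModeFilter never fires for MCP traffic:
// its regex only matches REST paths under /api/clusters/.
public class PathMatchDemo {
  public static void main(String[] args) {
    Pattern clusterPath = Pattern.compile("/api/clusters/(?<clusterName>[^/]++)");

    // A REST write request: the filter matches and can enforce read-only.
    System.out.println(clusterPath.matcher("/api/clusters/local/topics").find()); // true

    // MCP transport paths: no match, so the filter passes them straight through.
    System.out.println(clusterPath.matcher("/mcp/message").find()); // false
    System.out.println(clusterPath.matcher("/mcp/sse").find());     // false
  }
}
```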

None of the individual controllers or services check cluster.isReadOnly() independently; they rely entirely on the filter. This means the bypass is not limited to a specific endpoint: every write operation exposed as an MCP tool inherits the same vulnerability.

Impact

Any client with access to the MCP endpoint (/mcp/sse) can bypass the readOnly restriction entirely and perform any write operation available in kafka-ui.

The bypass is silent: all calls return success, with no error or other sign that a security control was circumvented. An administrator relying on KAFKA_CLUSTERS_0_READONLY=true to protect a cluster has no way to tell that the MCP server renders this control ineffective.

Suggested Fix

Either of the following would address the issue:

  1. Enforce in MCP dispatch layer: Add a readonly check in McpSpecificationGenerator.methodCall() before invoking the controller method, resolving the cluster name from the tool arguments and checking clustersStorage.getClusterByName(...).isReadOnly().

  2. Enforce in controllers: Have each write controller explicitly check getCluster(clusterName).isReadOnly() and return an error, rather than relying solely on the WebFilter. This makes the guard robust regardless of invocation path.
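To make option 1 concrete, here is a minimal, framework-free sketch of such a dispatch-layer guard. Everything in it is illustrative: ReadOnlyGuard, ClusterLookup, and the "clusterName" argument key are assumptions, not the project's API; a real fix would resolve the cluster via ClustersStorage inside McpSpecificationGenerator.methodCall() and return toErrorResult(...) instead of invoking the controller.

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of a read-only guard for the MCP dispatch layer.
// Assumes write tools carry the target cluster under a "clusterName" argument.
public class ReadOnlyGuard {

  // Stand-in for ClustersStorage#getClusterByName(...).isReadOnly():
  // empty Optional means "no such cluster".
  public interface ClusterLookup {
    Optional<Boolean> isReadOnly(String clusterName);
  }

  private final ClusterLookup lookup;

  public ReadOnlyGuard(ClusterLookup lookup) {
    this.lookup = lookup;
  }

  // Returns an error message if the tool call must be rejected, empty otherwise.
  public Optional<String> check(Map<String, Object> toolArgs, boolean isWriteTool) {
    if (!isWriteTool) {
      return Optional.empty(); // read-only tools are always allowed
    }
    Object name = toolArgs.get("clusterName");
    if (name == null) {
      return Optional.empty(); // tool call is not scoped to a cluster
    }
    boolean readOnly = lookup.isReadOnly(name.toString())
        .orElseThrow(() -> new IllegalArgumentException(
            "No cluster for name '" + name + "'"));
    return readOnly
        ? Optional.of("Cluster '" + name + "' is in read-only mode")
        : Optional.empty();
  }

  public static void main(String[] args) {
    ReadOnlyGuard guard = new ReadOnlyGuard(name ->
        "ro-cluster".equals(name) ? Optional.of(true)
            : "rw-cluster".equals(name) ? Optional.of(false)
            : Optional.empty());
    System.out.println(guard.check(Map.of("clusterName", "ro-cluster"), true));
    // -> Optional[Cluster 'ro-cluster' is in read-only mode]
    System.out.println(guard.check(Map.of("clusterName", "rw-cluster"), true));
    // -> Optional.empty
  }
}
```

In the MCP server, a non-empty result would be surfaced to the client as a tool error (e.g. via toErrorResult), which also satisfies the "meaningful error" expectation above.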

Disclaimer

This issue was created only after first notifying the maintainers via email and clarifying that this design flaw does not qualify as a "security vulnerability" in the sense of the project's security policy and may therefore be reported publicly.


Hope this write-up is useful, and thanks for maintaining such a great tool!

I'm not much of a java developer (or developer at all 😅), but I'm happy to test any proposed fix if that would be helpful. ✌️
