This will ensure that even if the central cluster experiences an outage, external clients can still connect to the stretch cluster and continue their operations without interruption.

#### Cross-Cluster Communication Using Submariner

For Kafka brokers and controllers distributed across multiple Kubernetes clusters, cross-cluster communication is essential.
Submariner is a tool that facilitates this type of communication by connecting different Kubernetes clusters through secure networking, allowing data transfer without relying solely on traditional methods such as LoadBalancers or Ingresses.

#### Current Limitations

The Strimzi Kafka operator currently sets up Kafka listeners for internal communication between brokers and controllers within a single Kubernetes cluster.
These services are typically defined as headless services, accessible only within the cluster’s network (e.g., `my-cluster-broker-0.my-cluster-kafka-brokers.<namespace>.svc.cluster.local`).
However, these internal addresses are not reachable from outside the cluster.
While Kubernetes provides native solutions such as Ingress to expose services externally, these can be slower and may introduce latency due to the additional routing overhead.
Submariner offers a more efficient alternative by enabling direct communication between clusters through secure IP routing.

#### How Submariner Facilitates Cross-Cluster Communication

When multiple Kubernetes clusters are connected using Submariner, services of type ClusterIP can be exported, making them accessible across participating clusters in the network.

To export a service using Submariner, the following command can be used:

```shell
subctl export service --kubeconfig <CONFIG> --namespace <NAMESPACE> my-cluster-kafka-brokers
```

This command creates a ServiceExport resource in the specified namespace, signaling Submariner to register the service with the Submariner Broker.
The Broker acts as the coordinator for cross-cluster service discovery, utilizing the Lighthouse component to allow services in different clusters to discover and communicate with each other.
Submariner sets up tunnels and routing tables, ensuring direct and secure traffic flow between clusters.
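
Under the hood, `subctl` creates a ServiceExport resource from the Kubernetes Multi-Cluster Services API, which Submariner's Lighthouse component watches. The equivalent manifest looks roughly like this (using the `<NAMESPACE>` placeholder from above):

```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  # Must match the name of the Service being exported
  name: my-cluster-kafka-brokers
  namespace: <NAMESPACE>
```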

Once a service is exported, it becomes accessible through a global DNS name of the form `<service-name>.<namespace>.svc.clusterset.local`, resolvable from any cluster in the Submariner deployment.
For headless services, individual pods are addressable as `<pod-name>.<cluster-id>.<service-name>.<namespace>.svc.clusterset.local`.
For instance, the `advertised.listeners` configuration in a Kafka setup would be updated from `my-cluster-broker-0.my-cluster-kafka-brokers.<namespace>.svc.cluster.local` to `my-cluster-broker-0.cluster1.my-cluster-kafka-brokers.<namespace>.svc.clusterset.local`, where `cluster1` is the Submariner cluster ID.
Similarly, the `controller.quorum.voters` property also needs to be updated to use the Submariner-exported service names.
This ensures that the `advertised.listeners` and `controller.quorum.voters` addresses are reachable from any connected cluster.
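
As a quick sanity check (a sketch; it assumes Submariner's Lighthouse DNS is serving the `clusterset.local` domain in the connected cluster), one can verify from another cluster that an exported broker address resolves:

```shell
# Run a throwaway pod and resolve the Submariner-exported broker address
kubectl run dns-check --rm -it --restart=Never --image=busybox -- \
  nslookup my-cluster-broker-0.cluster1.my-cluster-kafka-brokers.<NAMESPACE>.svc.clusterset.local
```
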
#### Considerations for SSL/TLS Verification

For SSL hostname verification between pods, the Subject Alternative Name (SAN) entries in the certificates must include the FQDNs of the Submariner-exported services.
This can be achieved in the following ways:

1. **Defining SANs in the Kafka CR**: Users can specify the Submariner-exported FQDNs in the `alternativeNames` field within the Kafka CR's listener configuration (see the sketch after this list).
   This ensures that the SANs are included in the certificates provided to each broker.
   However, this approach may inject all broker FQDNs into every broker's certificate, which is not ideal.

2. **Controller Pods**: Unlike brokers, controller pods do not inherit listener configurations from the Kafka CR and use a single control plane listener (TCP 9090).
   This requires adding SANs for the controller’s communication manually, which is not currently supported by Strimzi and is not considered an optimal solution.
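
For option 1, a minimal sketch of what this could look like in the Kafka CR (the listener definition is illustrative; `alternativeNames` is Strimzi's existing field for adding extra SANs to the listener certificates):

```yaml
spec:
  kafka:
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        configuration:
          bootstrap:
            # Submariner-exported FQDNs to include as SANs (illustrative values)
            alternativeNames:
              - my-cluster-broker-0.cluster1.my-cluster-kafka-brokers.<namespace>.svc.clusterset.local
              - my-cluster-broker-1.cluster1.my-cluster-kafka-brokers.<namespace>.svc.clusterset.local
```
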
#### Extending KafkaNodePool Custom Resource for Cross-Cluster Communication

To integrate Submariner as a cross-cluster communication solution in Strimzi, the KafkaNodePool Custom Resource (CR) should be extended to include configuration fields that support various cross-cluster technologies.
This section outlines the proposed changes to the KafkaNodePool CR, along with an explanation of their roles in enabling communication across Kubernetes clusters.

#### Proposed Changes to the KafkaNodePool Custom Resource

To make the KafkaNodePool CR more flexible and future-proof, we propose the following additions:

1. **New Cross-Cluster Technology Field**: Introduce a new field called `crossClusterTechnology` in the KafkaNodePool CR.
   This field will allow users to specify the technology they wish to use for cross-cluster communication.
   Initially, Submariner will be supported, but this design accommodates future integration with other technologies.

2. **Submariner-Specific Configuration**: If Submariner is chosen as the cross-cluster technology, users must provide a `submarinerClusterId`.
   This identifier is crucial for Submariner, as it uniquely represents each cluster for tunnel creation and communication.

#### Updated KafkaNodePool CR Example

Below is an updated example of the KafkaNodePool CR with the proposed changes:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: controller
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - controller
  storage:
    # ...
  crossClusterTechnology:
    technology: submariner
    configuration:
      submarinerClusterId: cluster1
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: broker
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    # ...
  crossClusterTechnology:
    technology: submariner
    configuration:
      submarinerClusterId: cluster1
```

#### Explanation of Changes

1. `crossClusterTechnology.technology`

   Purpose: Specifies the cross-cluster communication technology to be used (e.g., submariner, skupper, istio...).

   Example: `technology: submariner` indicates that Submariner is the chosen solution for enabling communication across clusters.

2. `crossClusterTechnology.configuration`

   Purpose: Contains a nested section for each supported technology.
   Only the relevant section is populated, based on the technology specified.

3. `configuration.submarinerClusterId`

   Purpose: Defines the unique cluster ID used by Submariner to establish tunnels and identify clusters.

   Example: `submarinerClusterId: cluster1` assigns a unique identifier to the cluster involved in cross-cluster communication.

#### Operator's Role in Utilizing the CR Fields

The Strimzi Operator will parse the `crossClusterTechnology` section to identify the chosen technology and apply the relevant configuration:

- **Generating advertised.listeners and controller.quorum.voters Addresses**: The Operator will use the `submarinerClusterId` to construct addresses in the format `<broker-name>.<submarinerClusterId>.<service-name>.<namespace>.svc.clusterset.local`.
  This ensures that `advertised.listeners` and `controller.quorum.voters` are accessible across connected clusters, facilitating data replication and leader election.

- **Certificate SANs Configuration**: During the creation of SSL/TLS certificates for brokers and controllers, the Operator must include the Submariner-exposed service addresses in the Subject Alternative Name (SAN) entries.
  This guarantees that hostname verification succeeds when traffic flows between clusters (see the verification sketch below).
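
To check the outcome, one could inspect the SANs of a broker certificate generated by the Operator (a sketch; it assumes Strimzi's `<cluster>-kafka-brokers` secret layout and OpenSSL 1.1.1+ for the `-ext` option):

```shell
# Extract the broker certificate from the Strimzi-managed secret and print its SANs
kubectl get secret my-cluster-kafka-brokers -n <NAMESPACE> \
  -o jsonpath='{.data.my-cluster-broker-0\.crt}' | base64 -d \
  | openssl x509 -noout -ext subjectAltName
```
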
#### Example of Updated advertised.listeners and controller.quorum.voters

The Operator will update the `advertised.listeners` and `controller.quorum.voters` configurations as follows.
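
A minimal sketch of the resulting values (assumptions: Submariner cluster ID `cluster1`, the `<namespace>` placeholder used above, and Strimzi's internal listener names; node IDs are illustrative and would be unique across node pools in practice):

```properties
# Replication listener advertised via the Submariner-exported headless service
advertised.listeners=REPLICATION-9091://my-cluster-broker-0.cluster1.my-cluster-kafka-brokers.<namespace>.svc.clusterset.local:9091

# Controller quorum voters, using Submariner-exported controller addresses (control plane port 9090)
controller.quorum.voters=0@my-cluster-controller-0.cluster1.my-cluster-kafka-brokers.<namespace>.svc.clusterset.local:9090,1@my-cluster-controller-1.cluster1.my-cluster-kafka-brokers.<namespace>.svc.clusterset.local:9090,2@my-cluster-controller-2.cluster1.my-cluster-kafka-brokers.<namespace>.svc.clusterset.local:9090
```
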
These updates ensure brokers and controllers can be discovered and communicate across clusters without relying on traditional external access methods.
#### Resource cleanup on remote Kubernetes clusters
As some of the Kubernetes resources will be created on a remote cluster, we will not be able to use standard Kubernetes approaches for deleting resources based on owner references.