I am trying to install a multi-node OpenSearch cluster, and all of the pods were spawning on a single node. After applying anti-affinity to them, 2 of the 3 pods are now failing to start up because the PVs are getting some sort of affinity to a single node: kubectl describe pv -n example pvc-deadbeef-1234 My storage class looks like this: I installed OpenEBS using the official chart with the default values file. I have verified that all nodes have the AirVG with free space on it; this node even has some LVs on it left over from a previous run: so I know it's possible for OpenEBS lvmpv to provision to all the nodes, it's just that every time I spin something up the PVs all get stuck on one node.
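For context, the "affinity to a single node" described above is visible in the PV spec itself: LocalPV-LVM provisions a volume on one node's VG and records that node in a required node-affinity stanza. A sketch of what that fragment of `kubectl get pv <name> -o yaml` typically looks like (the node name `node-1` is hypothetical):

```yaml
# Illustrative PV fragment; "node-1" is a made-up node name.
# A PV pinned this way can only ever be mounted on that one node,
# which is why pods forced elsewhere by anti-affinity fail to start.
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: openebs.io/nodename
              operator: In
              values:
                - node-1
```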
Hi @joshuacox, you don't need anti-affinity: pods will be scheduled on the node where the lvmvolume is provisioned. Since they have anti-affinity, they are forced onto a different node where the volume is not provisioned. Can you please run:
If you want a workaround for now, you could try to sidestep our scheduling with:
@joshuacox, if you need anti-affinity among pods then you should use WaitForFirstConsumer in the SC. It should be on the StorageClass. However, your lvmnodes output does not seem correct. Which version of localpv-lvm have you installed? Can you share: kubectl get pod -n
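A sketch of the StorageClass the reply above is suggesting, with binding deferred until the pod is scheduled. The VG name `AirVG` comes from the question; the SC name is hypothetical, and the other parameter values mirror a typical LocalPV-LVM setup rather than anything confirmed in this thread:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv-wffc        # hypothetical name
provisioner: local.csi.openebs.io # LocalPV-LVM CSI provisioner
parameters:
  storage: lvm
  volgroup: "AirVG"               # VG name taken from the question
# Defer volume binding until a pod using the PVC is scheduled, so the
# scheduler can honor pod anti-affinity before the volume is pinned.
volumeBindingMode: WaitForFirstConsumer
```

With `Immediate` (the default) the volume is provisioned, and pinned to a node, before the scheduler has seen the pod's anti-affinity rules; `WaitForFirstConsumer` reverses that ordering.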
For the volumes to be consumed by a user application, it needs to be set on the StorageClass. But the values.yaml you pasted above installs the default storage class. It will be fine if you apply it and use it while creating PVs.
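In other words, once a custom StorageClass with WaitForFirstConsumer is applied, the PVC (or the StatefulSet's volumeClaimTemplate) just needs to reference it by name. A minimal sketch, with all names and the size being assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-opensearch-0              # hypothetical PVC name
spec:
  storageClassName: openebs-lvmpv-wffc # custom SC with WaitForFirstConsumer
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                    # arbitrary example size
```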
Ahh, it's the storage class that needs it added, not the Helm chart values.
Workaround achieved. Is this a workaround, though? i.e. is there something that needs to be fixed, after which I should revert this change?