Commit 553221a: added kube-vip-doc (#52)
1 parent f6fc99d

1 file changed: docs/kube-vip-config-guide.md (+219 −0)
# Kube-VIP Configuration Guide

## Overview

Kube-VIP provides a highly available Virtual IP (VIP) for the Kubernetes control plane API server. It uses ARP (Address Resolution Protocol) to advertise the VIP across control plane nodes, ensuring that the Kubernetes API remains accessible even if individual control plane nodes fail.

## When to Use Kube-VIP

Use kube-vip when:

- You need a highly available Kubernetes API endpoint without external load balancers
- You're deploying on bare metal or in environments where Octavia or other cloud load balancers are not available
- You want a simple, lightweight HA solution for the control plane
- You run an odd number of control plane nodes (1 or 3+; avoid 2, which cannot maintain quorum)
## Architecture

Kube-VIP runs as a static pod on each control plane node and:

1. Manages a shared VIP that floats between control plane nodes
2. Uses ARP to announce the VIP on the network
3. Provides automatic failover if the active node becomes unavailable
4. Binds to a specific network interface for VIP management
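The generated static pod manifest looks roughly like the following. This is a simplified sketch, not the exact manifest kubespray renders: the image tag is an assumption, and the full env var set varies by kube-vip version.

```yaml
# /etc/kubernetes/manifests/kube-vip.yml (simplified sketch; assumed values)
apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-vip
      image: ghcr.io/kube-vip/kube-vip:v0.8.0   # version tag is an assumption
      args: ["manager"]
      env:
        - name: vip_arp        # ARP advertisement (kube_vip_arp_enabled)
          value: "true"
        - name: cp_enable      # control plane VIP (kube_vip_controlplane_enabled)
          value: "true"
        - name: address        # the VIP itself (kube_vip_address / vrrp_ip)
          value: "10.2.184.10"
        - name: vip_interface  # binding interface (kube_vip_interface / cni_iface)
          value: "enp3s0"
```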
## Configuration Variables

### Required Variables

```hcl
# Enable kube-vip
kube_vip_enabled = true

# The Virtual IP address for the Kubernetes API.
# Must be in the same subnet as your nodes (subnet_nodes)
# and should NOT fall inside the DHCP/allocation pool range.
vrrp_ip = "10.2.184.10"

# Network interface where kube-vip will bind.
# This should match your primary node network interface.
cni_iface = "enp3s0"

# Enable kube-vip and the creation of its required infrastructure
# resources (e.g., in OpenStack, a dummy port with an associated floating IP).
vrrp_enabled = true

# Disable Octavia (cloud load balancer);
# kube-vip and Octavia are mutually exclusive.
use_octavia = false
```
### Network Planning

When planning your network, reserve IP addresses appropriately:

```hcl
# Example network layout
subnet_nodes = "10.2.184.0/22"

# Reserve IPs outside the allocation pool
allocation_pool_start = "10.2.184.50"  # Start DHCP range here
allocation_pool_end   = "10.2.184.254" # End DHCP range here

# VIP should be outside the allocation pool
vrrp_ip = "10.2.184.10" # Reserved for kube-vip

# Optional: reserve a range for MetalLB or other services,
# e.g., 10.2.184.11-10.2.184.49
```
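A layout like this can be sanity-checked with a short script. The sketch below (values copied from the example above) converts the addresses to integers and confirms the VIP falls outside the allocation pool:

```bash
# Sanity check: is the VIP outside the DHCP/allocation pool?
VRRP_IP="10.2.184.10"
POOL_START="10.2.184.50"
POOL_END="10.2.184.254"

# Convert a dotted quad to an integer so the range comparison is simple
ip_to_int() { IFS=. read -r a b c d <<< "$1"; echo $(( (a << 24) | (b << 16) | (c << 8) | d )); }

vip=$(ip_to_int "$VRRP_IP")
start=$(ip_to_int "$POOL_START")
end=$(ip_to_int "$POOL_END")

if [ "$vip" -lt "$start" ] || [ "$vip" -gt "$end" ]; then
  echo "OK: VIP is outside the allocation pool"
else
  echo "WARNING: VIP overlaps the allocation pool"
fi
```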
## Implementation Example

### Stage Cluster Configuration

From `000000-opencenter-example/infrastructure/clusters/stage-cluster/main.tf`:

```hcl
locals {
  # Network configuration
  subnet_nodes     = "10.2.184.0/22"
  subnet_nodes_oct = join(".", slice(split(".", split("/", local.subnet_nodes)[0]), 0, 3))

  # Reserve VIP outside allocation pool
  allocation_pool_start = "${local.subnet_nodes_oct}.50"
  allocation_pool_end   = "${local.subnet_nodes_oct}.254"
  vrrp_ip               = "${local.subnet_nodes_oct}.10"

  # Kube-VIP settings
  kube_vip_enabled = true
  vrrp_enabled     = true
  use_octavia      = false

  # Network interface
  cni_iface = "enp3s0"

  # API configuration
  k8s_api_port = 443
}

module "kubespray-cluster" {
  source = "github.com/rackerlabs/openCenter.git//install/iac/kubespray?ref=main"

  kube_vip_enabled = local.kube_vip_enabled
  vrrp_ip          = local.vrrp_ip
  vrrp_enabled     = local.vrrp_enabled
  use_octavia      = local.use_octavia
  cni_iface        = local.cni_iface
  k8s_api_ip       = module.openstack-nova.k8s_api_ip
  k8s_api_port     = local.k8s_api_port

  # ... other configuration
}
```
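The `subnet_nodes_oct` expression above extracts the first three octets of the node subnet so that the VIP and pool bounds can all be derived from a single value. The same transformation in shell, for illustration:

```bash
# Mirrors the HCL: join(".", slice(split(".", split("/", subnet)[0]), 0, 3))
subnet_nodes="10.2.184.0/22"

# Strip the prefix length, then keep the first three octets
subnet_nodes_oct=$(echo "${subnet_nodes%%/*}" | cut -d. -f1-3)

echo "vrrp_ip = ${subnet_nodes_oct}.10"
```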
## Kubespray Integration

The kubespray opentofu module automatically configures kube-vip through the addons template:

```yaml
# Generated in inventory/group_vars/k8s_cluster/addons.yml
kube_vip_enabled: true
kube_vip_arp_enabled: true
kube_vip_controlplane_enabled: true
kube_vip_address: 10.2.184.10
kube_vip_interface: "enp3s0"
kube_vip_services_enabled: false
```

### Key Settings Explained

- `kube_vip_arp_enabled`: Enables ARP for VIP advertisement
- `kube_vip_controlplane_enabled`: Enables control plane VIP management
- `kube_vip_address`: The VIP that clients will use to reach the API
- `kube_vip_interface`: Network interface for VIP binding
- `kube_vip_services_enabled`: Set to `false` (use MetalLB for service load balancing instead)
## Hardening Considerations

When using kube-vip with hardened clusters, ensure the VIP is included in the kubelet secure addresses:

```yaml
# From hardening.tpl
kubelet_secure_addresses: "localhost link-local ${subnet_pods} ${subnet_nodes} ${vrrp_ip}"
```

This allows the kubelet to accept connections from the VIP address.
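With the example values from this guide, the template renders to something like the following. The pod subnet shown here is an assumption for illustration; substitute your actual `subnet_pods` value.

```yaml
# Rendered example (subnet_pods value is assumed)
kubelet_secure_addresses: "localhost link-local 10.233.64.0/18 10.2.184.0/22 10.2.184.10"
```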
## Verification

After deployment, verify kube-vip is working:

```bash
# Check kube-vip pods on control plane nodes
kubectl get pods -n kube-system | grep kube-vip

# Verify the VIP is responding
curl -k https://<vrrp_ip>:<k8s_api_port>/healthz

# Check which node currently holds the VIP (run on each control plane node)
ip addr show | grep <vrrp_ip>
```
## Troubleshooting

### VIP Not Accessible

1. Verify the VIP is not in the DHCP allocation pool
2. Check that `vrrp_enabled = true` and `use_octavia = false`
3. Ensure the network interface name (`cni_iface`) is correct
4. Verify no firewall rules are blocking ARP traffic
5. Check whether quorum is broken (e.g., 2 of 3 control plane nodes down)

### Kube-VIP Pods Not Starting

1. Check the kubespray deployment logs
2. Verify the interface exists on the control plane nodes: `ip link show`
3. Check the kube-vip container logs directly on the control plane nodes
### API Endpoint Not Updating

After deployment, the kubeconfig is automatically updated to use the VIP:

```bash
# The kubespray module runs this automatically
ansible localhost -c local -m replace \
  -a "path=./kubeconfig.yaml \
      regexp='server: https://.*:[0-9]*' \
      replace='server: https://${vrrp_ip}:${k8s_api_port}'"
```
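For reference, the same substitution can be reproduced with `sed`. The sketch below uses a throwaway file and assumed address values, not your real kubeconfig:

```bash
# Create a throwaway kubeconfig fragment (pre-VIP server address is assumed)
cat > /tmp/kubeconfig-example.yaml <<'EOF'
clusters:
- cluster:
    server: https://10.2.184.51:6443
EOF

# Rewrite the server line to point at the VIP, like the ansible replace module does
sed -i 's|server: https://.*:[0-9]*|server: https://10.2.184.10:443|' /tmp/kubeconfig-example.yaml

grep 'server:' /tmp/kubeconfig-example.yaml
```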
## Kube-VIP vs Octavia

| Feature | Kube-VIP | Octavia |
|---------|----------|---------|
| Deployment | Static pods on control plane | External load balancer service |
| Cost | No additional cost | May incur cloud provider costs |
| Complexity | Simple, self-contained | Requires cloud provider integration |
| Use Case | Bare metal, on-prem, cost-sensitive | Cloud environments with LBaaS |
| Mutual Exclusivity | Cannot use with Octavia | Cannot use with kube-vip |

**Important**: `vrrp_enabled` cannot be set to `true` if `use_octavia` is `true`. Choose one approach.
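If you want this constraint to fail fast at plan time rather than at deploy time, a guard along these lines can catch the misconfiguration. This is a sketch using OpenTofu lifecycle preconditions; the resource name is hypothetical and the guard is not part of the module itself:

```hcl
# Hypothetical plan-time guard: reject vrrp_enabled and use_octavia together
resource "null_resource" "kube_vip_octavia_exclusion" {
  lifecycle {
    precondition {
      condition     = !(local.vrrp_enabled && local.use_octavia)
      error_message = "vrrp_enabled and use_octavia are mutually exclusive; enable only one."
    }
  }
}
```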
## Best Practices

1. **IP Planning**: Always reserve the VIP outside your DHCP/allocation pool
2. **Interface Verification**: Confirm the network interface name before deployment
3. **Control Plane Count**: Deploy 1 or 3 control plane nodes (avoid 2, which cannot maintain quorum)
4. **Service Load Balancing**: Use MetalLB for `LoadBalancer`-type services, not kube-vip
5. **Monitoring**: Monitor kube-vip pod health and VIP accessibility
6. **Documentation**: Document your VIP and reserved IP ranges
## Related Configuration

- **MetalLB**: For service load balancing (separate from control plane HA)
- **Calico/CNI**: Ensure the CNI interface matches the kube-vip interface
- **Firewall Rules**: Allow ARP and API traffic to the VIP
- **DNS**: Optionally create DNS records pointing to the VIP

## References

- [Kube-VIP Documentation](https://kube-vip.io/)
- [Kubespray Kube-VIP Integration](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/kube-vip.md)
- OpenCenter GitOps Base: `iac/provider/kubespray/`
