Conversation
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: omersch381
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Details: Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
45bf2e7 to 7b8961b
7b8961b to 9b7ba69
/test designate-operator-build-deploy-kuttl
9b7ba69 to e5d7a71
/test designate-operator-build-deploy-kuttl
5869bea to 0a02194
Implement multipool configuration support to allow multiple pools with different bind replica counts and NS records per pool.
Changes:
- Update pools.yaml template to support an array of pools
- Add multipool ConfigMap parsing (designate-multipool-config)
- Extend GeneratePoolsYamlDataAndHash to handle multipool config
- Add validation for multipool configuration:
  * Default pool must always exist
  * Pool names must be unique
  * BindReplicas must be > 0
- Maintain backward compatibility
- Add unit tests for multipool functionality
- Integrate multipool config into the Designate controller
- Add an example multipool ConfigMap in config/samples
The multipool configuration is read from a ConfigMap and allows defining multiple pools with per-pool NS records, descriptions, attributes, and bind replica counts. RNDC keys are indexed sequentially across all pools.
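A minimal sketch of the validation rules listed above, assuming hypothetical Pool and Validate names (the operator's actual types, field names, and the "default" pool name convention may differ):

```go
package multipool

import "fmt"

// Pool is a hypothetical representation of one entry in the
// designate-multipool-config ConfigMap.
type Pool struct {
	Name         string
	NSRecords    []string
	BindReplicas int
}

// Validate enforces the rules from the commit message: the default pool
// must always exist, pool names must be unique, and BindReplicas must be > 0.
func Validate(pools []Pool) error {
	seen := map[string]bool{}
	hasDefault := false
	for _, p := range pools {
		if seen[p.Name] {
			return fmt.Errorf("duplicate pool name %q", p.Name)
		}
		seen[p.Name] = true
		if p.Name == "default" { // assumed name of the default pool
			hasDefault = true
		}
		if p.BindReplicas <= 0 {
			return fmt.Errorf("pool %q: BindReplicas must be > 0", p.Name)
		}
	}
	if !hasDefault {
		return fmt.Errorf("the default pool must always exist")
	}
	return nil
}
```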
Extend multipool support to manage separate StatefulSets per pool and add validation for multipool ConfigMap changes.
Changes:
- Create pool-specific StatefulSets (designate-backendbind9-poolN) instead of a single StatefulSet when a multipool ConfigMap exists
- Add a validating webhook to prevent invalid multipool ConfigMap changes (validates YAML, prevents pool removal, checks replica counts)
- Support graceful migration between single-pool and multipool modes
- Track pools with labels (pool, pool-index) on StatefulSets
- Update controllers to calculate readyCount across all pool StatefulSets
- Add comprehensive kuttl tests covering pool scaling, pool addition/removal, and single-to-multipool migration and vice versa
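A minimal sketch of how readyCount could be aggregated across the per-pool StatefulSets using the pool tracking label (the label key, function name, and client call pattern are assumptions, not the operator's exact code):

```go
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// countReadyBindReplicas sums ReadyReplicas across all bind9 StatefulSets
// that carry the pool tracking label, whether one pool or many exist.
func countReadyBindReplicas(ctx context.Context, c client.Client, namespace string) (int32, error) {
	stsList := &appsv1.StatefulSetList{}
	if err := c.List(ctx, stsList,
		client.InNamespace(namespace),
		client.HasLabels{"pool"}, // assumed label key, per the commit message
	); err != nil {
		return 0, err
	}
	var ready int32
	for _, sts := range stsList.Items {
		ready += sts.Status.ReadyReplicas
	}
	return ready, nil
}
```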
Configure the central scheduler to support zone assignment to specific pools via the pool_id attribute while maintaining backward compatibility with both single-pool and multipool deployments.
Backward compatibility:
- Single-pool deployments: all zones automatically use the default pool (no pool_id attribute required)
- Multipool deployments: zones can optionally specify the pool_id attribute to target a specific pool, and otherwise use the default pool
This enables the following zone creation patterns:
- openstack zone create example.com. --email admin@test.com (creates the zone in the default pool)
- openstack zone create example.com. --email admin@test.com --attribute pool_id:<pool-uuid> (creates the zone in the specified pool)
This change adds:
1. Per-pool bind IP ConfigMaps with pool-local indexing:
   - Pool 0 (default) uses designate-bind-ip-map
   - Pools 1+ use designate-bind-ip-map-pool1, -pool2, etc.
   - ConfigMaps include both bind IPs and RNDC key mappings
2. Pool-specific DNS services for bind9:
   - Service names follow the pattern designate-backendbind9-pool<N>-<i>
   - Proper cleanup when migrating between single and multipool modes
3. TSIG authentication for non-default pools:
   - A single TSIG secret contains keys for all non-default pools
   - Mounted via an init container to configure bind9 authentication
4. Automatic migration handling:
   - Cleans up single-pool resources when switching to multipool
   - Removes multipool resources when switching to single-pool
   - Proper orphaned-resource cleanup for pool additions/removals
The volume mount logic now dynamically includes TSIG configuration for non-default pools, and the init script processes TSIG keys when present.
Apply the same backwards-compatible naming pattern to pool0 resources that we use for ConfigMaps, extending it to StatefulSets and Services.
Changes:
- Pool0 StatefulSet uses the base name (designate-backendbind9) instead of designate-backendbind9-pool0 for seamless single/multipool migration
- Pool0 Services use the base name pattern (designate-backendbind9-N) to match the pod names from the StatefulSet
- Remove deletion of the single-pool StatefulSet during multipool migration, since pool0 now reuses the same resource
- Update cleanup logic to preserve pool0 resources when migrating back to single-pool mode (only delete numbered pools: pool1, pool2, etc.)
- Update all multipool integration tests to reflect the new naming scheme
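A minimal sketch of the resulting naming convention; the helper names are illustrative, not the operator's actual functions:

```go
package multipool

import "fmt"

// poolStatefulSetName returns the bind9 StatefulSet name for a pool.
// Pool 0 keeps the base name so single-pool and multipool deployments
// can migrate without recreating the StatefulSet.
func poolStatefulSetName(poolIndex int) string {
	if poolIndex == 0 {
		return "designate-backendbind9"
	}
	return fmt.Sprintf("designate-backendbind9-pool%d", poolIndex)
}

// poolBindIPConfigMapName follows the same pattern for the per-pool
// bind IP / RNDC key ConfigMaps.
func poolBindIPConfigMapName(poolIndex int) string {
	if poolIndex == 0 {
		return "designate-bind-ip-map"
	}
	return fmt.Sprintf("designate-bind-ip-map-pool%d", poolIndex)
}
```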
Split multipool-specific resource management into a dedicated file to
improve code organization and maintainability.
Changes:
- Created controllers/designatebackendbind9_multipool.go (909 lines)
* Multipool ConfigMap, Service, and StatefulSet reconciliation
* TSIG secret management for non-default pools
* Pool validation and orphan cleanup helpers
- Deleted controllers/designate_bind_configmaps.go (212 lines)
* Consolidated bind ConfigMap logic into multipool file
- Refactored controllers/designatebackendbind9_controller.go
* Reduced from 1725 to 1064 lines (38% reduction)
* Removed multipool functions moved to dedicated file
* Removed unused yaml import
- Updated tests/functional/designate_controller_test.go
* Fixed ConfigMap test to expect both bind_address_N and rndc_key_N
entries (broken by earlier commit af50a0d)
* Added regex validation for RNDC key format
This commit automates TSIG secret and zone management via gophercloud and enforces NS records validation for multipool DNS functionality. Note: we currently use galera commands to fetch pool data from the database, which is bad practice; it will be changed in a future commit.
TSIG Secret Management:
- Automatically creates/updates TSIG secrets for non-default pools
- Retrieves TSIG keys from the Designate database via the OpenStack API
- Generates TSIG configuration with mdns server blocks
- Handles cleanup during multipool-to-single migration
- Triggers pod restarts when TSIG secrets change
NS Records Validation:
- Webhook validation rejects multipool ConfigMaps missing NS records
- Runtime validation ensures each pool defines NS records
- Kuttl tests verify correct NS records usage (CR vs ConfigMap)
OpenStack API Integration:
- New client factory for OpenStack API access
- Pool ID/name queries, TSIG key operations, zone queries
- Note: currently uses direct database queries (will migrate to designate-manage in the future for better isolation)
Signed-off-by: Omer <omersch381@gmail.com>
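A rough sketch of an OpenStack client factory along the lines described above, using gophercloud's DNS v2 support (v1 import paths) to authenticate and list zones; the TSIG key and pool-ID queries are omitted, and this is an illustration rather than the operator's actual client code:

```go
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
	"github.com/gophercloud/gophercloud/openstack/dns/v2/zones"
)

func main() {
	// Authenticate from the usual OS_* environment variables.
	opts, err := openstack.AuthOptionsFromEnv()
	if err != nil {
		panic(err)
	}
	provider, err := openstack.AuthenticatedClient(opts)
	if err != nil {
		panic(err)
	}

	// Build a Designate (DNS v2) service client.
	dnsClient, err := openstack.NewDNSV2(provider, gophercloud.EndpointOpts{})
	if err != nil {
		panic(err)
	}

	// List zones; a real controller would also query pool IDs and TSIG keys.
	allPages, err := zones.List(dnsClient, zones.ListOpts{}).AllPages()
	if err != nil {
		panic(err)
	}
	allZones, err := zones.ExtractZones(allPages)
	if err != nil {
		panic(err)
	}
	for _, z := range allZones {
		fmt.Println(z.Name, z.PoolID)
	}
}
```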
eb2a2bd to 82d76a8
Until now we were using direct database queries because they were easy to implement, but that is not good practice. This commit replaces that implementation with the designate-manage pool update --dry-run command. I use this hack because we do not yet have a version that supports `designate-manage pool show_config` with the `--all` flag; it should be replaced in a future commit. The pool list job is idempotent and uses a fixed name for efficient reuse across reconciliation loops.
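A minimal sketch of a fixed-name, idempotent pool-listing Job along these lines (the job name, image handling, and container spec are assumptions for illustration):

```go
package multipool

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// poolListJob builds a Job with a fixed name so repeated reconciliation
// loops reuse the same object instead of spawning a new job each time.
func poolListJob(namespace, image string) *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "designate-pool-list", // assumed fixed name
			Namespace: namespace,
		},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "pool-list",
						Image:   image,
						Command: []string{"designate-manage", "pool", "update", "--dry-run"},
					}},
				},
			},
		},
	}
}
```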
82d76a8 to c1aa893
/test designate-operator-build-deploy-kuttl
	return buf.String(), poolHash, nil
}

// sortNSRecords sorts NS records by hostname and then by priority
Do we need to sort them if they are a list of NS records? Can we not just deal with them by index throughout?
We need deterministic ordering for stable hashing to avoid triggering unnecessary pool updates (lines 233-239)
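A sketch of the deterministic ordering being discussed here, assuming a hypothetical NSRecord struct with Hostname and Priority fields (the operator's actual type may differ):

```go
package multipool

import "sort"

// NSRecord is a hypothetical stand-in for the operator's NS record type.
type NSRecord struct {
	Hostname string
	Priority int
}

// sortNSRecords orders records deterministically so the generated
// pools.yaml hashes the same on every reconciliation, avoiding
// spurious pool updates.
func sortNSRecords(records []NSRecord) {
	sort.Slice(records, func(i, j int) bool {
		if records[i].Hostname != records[j].Hostname {
			return records[i].Hostname < records[j].Hostname
		}
		return records[i].Priority < records[j].Priority
	})
}
```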
// ListPoolsFromJob retrieves pool information by running designate-manage pool update --dry-run as a Kubernetes Job
// This replaces the old galera-based approach with a proper job-based solution
func ListPoolsFromJob(
Something we could consider, if it makes life easier (and easier to test), is to write a Python script that does the heavy lifting of running the designate-manage tool and parsing the output, and simply spits out the list of pools. It would make the Go part a little simpler, and it's something that could be easily tested outside the operator. Maybe consider it for a follow-up though.
	MountPath: "/var/lib/config-data/keys",
	ReadOnly:  true,
},
{
why are predictableips mounted here?
They are needed by the init container for RNDC key mapping in multipool mode:
/templates/designatebackendbind9/bin/init.sh lines 49+:
# Try to read RNDC key name from per-pool ConfigMap (multipool mode)
rndc_key_map_file="/var/lib/predictableips/rndc_key_${pod_index}"
if [[ -f "${rndc_key_map_file}" ]]; then
    # Multipool mode: read the RNDC key name from ConfigMap
    rndc_key_name=$(cat "${rndc_key_map_file}")
    rndc_key_filename="/var/lib/config-data/keys/${rndc_key_name}"
    echo "Using RNDC key from ConfigMap: ${rndc_key_name}"
    ...
    if [[ -f "${rndc_key_filename}" ]]; then
        cp ${rndc_key_filename} /var/lib/config-data/merged/named/rndc.key
    else
The init container uses them to find the correct RNDC key. It might not be the best solution I guess.
Integrate pod labeling feature (PR openstack-k8s-operators#394) into the multipool implementation:
- Add Owns(&corev1.Pod{}) to controller setup
- Add pod labeling for single-pool mode
- Add pod labeling for multipool mode with per-pool ConfigMap support
- Each pool uses its own ConfigMap for pod IP labeling
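A sketch of the controller-setup change implied by the first bullet, using the standard controller-runtime builder on the controller's existing reconciler (the reconciler name, CR type, and import path are assumptions based on the repository layout):

```go
package controllers

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"

	// assumed import path for the DesignateBackendbind9 CRD types
	designatev1 "github.com/openstack-k8s-operators/designate-operator/api/v1beta1"
)

// SetupWithManager wires the controller so that changes to owned Pods
// also trigger reconciliation, which is what drives pod IP labeling.
func (r *DesignateBackendbind9Reconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&designatev1.DesignateBackendbind9{}).
		Owns(&appsv1.StatefulSet{}).
		Owns(&corev1.Pod{}). // new: watch owned pods for labeling
		Complete(r)
}
```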
0e2e6e0 to 2ffb962
Signed-off-by: Omer <omersch381@gmail.com>
2ffb962 to 569a753