
@halohsu halohsu commented Jul 18, 2025

Over the past year, I've seen many users who need specific DNS configurations rather than blindly falling back to local DNS settings:

Blackbox Exporter supports Helm chart deployment and provides container images, which enables flexible deployment in large-scale Kubernetes clusters. Real-world network probing requirements are often complex: we want certain pods to keep resolving targets through CoreDNS, while other pods should resolve via public DNS servers or via DNS servers on jump hosts. Based on these enterprise-level realities, we've made some modifications to the source code.

Assuming we want the following configuration (either manually written or Helm-generated):

- file: /etc/blackbox.yaml
- file: metrics/data/{cluster-name}/blackbox.jumpserver.yaml
- file: metrics/data/{cluster-name}/blackbox.worker.yaml
- file: metrics/data/{cluster-name}/blackbox.master.yaml

We have implemented the following configuration (sanitized and simplified for demonstration):

```yaml
modules:
  http_get_2xx:
    prober: http
    http:
      dns_server: 10.96.0.10:53
      dns_timeout: 10s
  tcp_connect:
    prober: tcp
    tcp:
      dns_server: 10.96.0.10:53
      dns_timeout: 10s
  grpc:
    prober: grpc
    grpc:
      dns_server: 10.96.0.10:53
      dns_timeout: 10s
```

This change has been running stably for over a year in Kubernetes clusters spanning hundreds to thousands of physical machines; both the code and the operational experience have been validated at production scale.

I hope this will be helpful to others. @electron0zero , @anionDev , @RorFis , @darioef , @snaar
