docker-compose.yml — 3 changes: 2 additions & 1 deletion

```diff
@@ -168,11 +168,12 @@ services:
 #    volumes:
 #      - ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
 #    environment:
+#      pender_metrics_endpoint: "pender:3200"
 #      x_honeycomb_team: ${X_HONEYCOMB_TEAM} # Set this env var in your shell before running docker-compose
 #      RAILS_ENV: development
 #      SERVER_PORT: 3200
 #    networks:
-#      - dev
+#      - dev
   web:
     build: check-web
     platform: linux/x86_64
```
otel-collector-config.yaml — 31 changes: 26 additions & 5 deletions

```diff
@@ -2,10 +2,11 @@ receivers:
   prometheus:
     config:
       scrape_configs:
-        - job_name: "prometheus"
+        - job_name: "pender_metrics"
           scrape_interval: 15s
           static_configs:
-            - targets: ["pender:3200"]
+            - targets: ["${env:pender_metrics_endpoint}"]
```
**Contributor:**

what if the environment variable was just `prometheus_targets` and included the `["..."]` text? 🙂
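(For illustration, a sketch of that suggestion, assuming the collector's `${env:VAR}` expansion can supply the whole YAML list; the variable name `prometheus_targets` and the second target are hypothetical:)

```yaml
# Sketch: one env var carries the full target list, brackets included.
# Set in docker-compose.yml (hypothetical):
#   environment:
#     prometheus_targets: '["pender:3200", "check-api:3100"]'
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: "pender_metrics"
          scrape_interval: 15s
          static_configs:
            # expanded at config-load time and parsed as a YAML list
            - targets: ${env:prometheus_targets}
          metrics_path: /metrics
```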

**Contributor Author:**

A question: let's say the targets are the pender endpoint and the check-api endpoint. How do we make sure we send each one to the correct exporter and dataset?

I'm wondering if we would have separate prometheus configs for each endpoint, or if we could use one for all of them.

Or does that not matter?

**Contributor:**

ah right, the way we're using datasets here has them as one per service, and Honeycomb requires each metric to be attached to a dataset. so we will need a separate exporter for each service

we could have them all use the same receiver and then just filter per service based on a metric attribute like `app`, using a processor (https://opentelemetry.io/docs/collector/configuration/#processors), but it might be simpler to just have completely separate pipelines 😒
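(A sketch of that single-receiver approach, assuming the scraped metrics carry a distinguishing resource attribute such as `service.name`; the filter processor conditions and the check-api exporter are hypothetical:)

```yaml
# Sketch: one shared receiver; each pipeline drops everything that is
# not its own service, then exports to that service's dataset.
processors:
  filter/pender:
    metrics:
      metric:
        - 'resource.attributes["service.name"] != "pender_metrics"'
  filter/check_api:
    metrics:
      metric:
        - 'resource.attributes["service.name"] != "check_api_metrics"'

service:
  pipelines:
    metrics/pender:
      receivers: [prometheus]
      processors: [memory_limiter, filter/pender, batch]
      exporters: [otlp/pender_metrics]
    metrics/check_api:
      receivers: [prometheus]
      processors: [memory_limiter, filter/check_api, batch]
      exporters: [otlp/check_api_metrics]
```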

**Contributor Author:**

I found this blog post, which I think might be relevant: https://www.honeycomb.io/blog/simplify-opentelemetry-pipelines-headers-setter

For now, do we want to assume completely separate pipelines or not? If so, do you still want changes to the env var?

**Contributor:**

let's assume separate pipelines and keep the env var you have! I think you'll have to change the prometheus receiver name to be `prometheus/pender` or something similarly unique though (https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/README.md#configuring-receivers)
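(A sketch of that naming scheme, with one uniquely named receiver and one pipeline per service; the check-api entries are hypothetical:)

```yaml
# Sketch: per-service receivers distinguished by the name suffix,
# each feeding its own pipeline and Honeycomb dataset.
receivers:
  prometheus/pender:
    config:
      scrape_configs:
        - job_name: "pender_metrics"
          scrape_interval: 15s
          static_configs:
            - targets: ["${env:pender_metrics_endpoint}"]
  prometheus/check_api:
    config:
      scrape_configs:
        - job_name: "check_api_metrics"
          scrape_interval: 15s
          static_configs:
            - targets: ["${env:check_api_metrics_endpoint}"]

service:
  pipelines:
    metrics/pender:
      receivers: [prometheus/pender]
      processors: [memory_limiter, batch]
      exporters: [otlp/pender_metrics]
    metrics/check_api:
      receivers: [prometheus/check_api]
      processors: [memory_limiter, batch]
      exporters: [otlp/check_api_metrics]
```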

**Contributor Author:**

I think that does not work for the prometheus receiver: open-telemetry/opentelemetry-operator#3034

**Contributor:**

... well then. it looks like we will have to run an otel collector for each service 😒 or try to get the processor filtering above working

ew

**Contributor Author:**

We can set up different jobs from Prometheus, but the issue here would be how to send each to the correct Dataset, right?

Maybe we can take a step back and just have one main 'check' Dataset, instead of one per service? I just assumed one per service made sense, I guess. Then we could have a more generic approach. If we need different configuration, we could set up a second job; if we don't, we can pass the environment variables as you first suggested.

This would make the configuration easier, I think. What do you think? Are there any drawbacks?
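(A sketch of the single-dataset idea, assuming one shared `check` dataset and one scrape job per service; the dataset name and the check-api job are hypothetical:)

```yaml
# Sketch: all services report into one shared "check" dataset, so a
# single receiver, pipeline, and exporter suffice. Each scrape job
# tags its metrics, so services remain distinguishable in Honeycomb.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: "pender_metrics"
          scrape_interval: 15s
          static_configs:
            - targets: ["${env:pender_metrics_endpoint}"]
        - job_name: "check_api_metrics"
          scrape_interval: 15s
          static_configs:
            - targets: ["${env:check_api_metrics_endpoint}"]

exporters:
  otlp/metrics:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": ${env:x_honeycomb_team}
      "x-honeycomb-dataset": "check"

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [memory_limiter, batch]
      exporters: [otlp/metrics]
```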

```diff
+          metrics_path: /metrics

 processors:
   memory_limiter:
@@ -17,19 +18,39 @@ processors:
     send_batch_size: 8192

 exporters:
-  otlp/metrics:
+  otlp/pender_metrics:
     endpoint: "api.honeycomb.io:443" # US instance
     #endpoint: "api.eu1.honeycomb.io:443" # EU instance
     headers:
       "x-honeycomb-team": ${env:x_honeycomb_team} # Honeycomb API KEY
+      "x-honeycomb-dataset": "pender"
+  # for debugging purposes only
+  # debug:
+  #   verbosity: normal

+connectors:
+  routing:
+    default_pipelines: []
+    # for debugging purposes only
+    # default_pipelines: [metrics/debug]
+    table:
+      - context: resource
+        condition: attributes["service.name"] == "pender_metrics"
+        pipelines: [metrics/pender_honeycomb]

 service:
   # telemetry:
   #   logs:
   #     level: "debug"
   pipelines:
-    metrics:
+    metrics/prometheus:
       receivers: [prometheus]
+      exporters: [routing]
+    metrics/pender_honeycomb:
+      receivers: [routing]
       processors: [memory_limiter, batch]
-      exporters: [otlp/metrics]
+      exporters: [otlp/pender_metrics]
+    # for debugging purposes only
+    # metrics/debug:
+    #   receivers: [routing]
+    #   exporters: [debug]
```
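Given the merged routing-connector setup, a second service would presumably get one more routing-table entry, exporter, and pipeline. A hypothetical sketch for check-api:

```yaml
# Sketch: extending the merged config to a second service. The new
# table entry steers check-api metrics to their own pipeline/dataset.
connectors:
  routing:
    default_pipelines: []
    table:
      - context: resource
        condition: attributes["service.name"] == "pender_metrics"
        pipelines: [metrics/pender_honeycomb]
      - context: resource
        condition: attributes["service.name"] == "check_api_metrics"
        pipelines: [metrics/check_api_honeycomb]

exporters:
  otlp/check_api_metrics:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": ${env:x_honeycomb_team}
      "x-honeycomb-dataset": "check-api"

service:
  pipelines:
    metrics/check_api_honeycomb:
      receivers: [routing]
      processors: [memory_limiter, batch]
      exporters: [otlp/check_api_metrics]
```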