feat: add metrics to the api crate #257
Conversation
Left some minor comments, but not needed for merge.
How are we planning on extracting metrics from replicator instances? Will we add a /metrics endpoint to them?
```rust
.expect("Failed to execute request.");

// Assert
assert!(response.status().is_success());
```
Can you also validate that there are some metrics in here (if doable)?
Might need to parse the metrics for validation. The timing values returned in the result make it a bit non-deterministic, so those will need to be skipped. I'll see if it's easy enough and add it; otherwise I'll keep it as is.
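For illustration, a check along those lines could stay deterministic by only asserting that the expected metric names appear in the /metrics body, without pinning any values. A minimal sketch, assuming the test already has the response body as a String (the helper name is made up):

```rust
// Hypothetical helper: assert the expected metric names are present in the
// Prometheus exposition body without asserting any non-deterministic values.
fn assert_metrics_present(body: &str) {
    assert!(body.contains("http_requests_total"));
    assert!(body.contains("http_requests_duration_seconds"));
}
```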
Yeah, I guessed the non-determinism. If you can do it, great; otherwise no problem!
Just took a look at this, and the non-determinism is not limited to the times returned in the metrics. The metrics recorder is installed globally, so while the tests run in parallel it keeps recording every endpoint being hit, which makes it extremely hard to predict the expected metrics output. This could probably be solved by setting up a per-thread recorder just for tests, but that is extra complexity for very limited benefit, so I'm leaving it as is for now.
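For reference, the per-thread recorder mentioned above might look roughly like the sketch below. It assumes recent versions of the metrics and metrics-util crates (with_local_recorder and DebuggingRecorder) and is not part of this PR:

```rust
use metrics::counter;
use metrics_util::debugging::DebuggingRecorder;

#[test]
fn records_request_counter() {
    // A recorder that is only active for this closure on this thread, so
    // metrics recorded by other tests running in parallel don't leak into it.
    let recorder = DebuggingRecorder::new();
    let snapshotter = recorder.snapshotter();

    metrics::with_local_recorder(&recorder, || {
        // In a real test this would be the request handling under test.
        counter!("http_requests_total", "endpoint" => "/health_check").increment(1);
    });

    // Inspect what was recorded while the local recorder was active.
    let snapshot = snapshotter.snapshot().into_vec();
    assert!(snapshot
        .iter()
        .any(|(key, _, _, _)| key.key().name() == "http_requests_total"));
}
```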
```rust
.wrap(tracing_logger)
.service(health_check)
.service(metrics)
```
Are we sure that creating an endpoint manually doesn't conflict with the default endpoint created?
Just asking since I saw these docs: https://crates.io/crates/actix-web-metrics
The default endpoint is created only when we call the PrometheusBuilder::install method. We are calling PrometheusBuilder::install_recorder instead. See the docs: https://docs.rs/metrics-exporter-prometheus/latest/metrics_exporter_prometheus/.
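To make the distinction concrete, here is a rough sketch of the install_recorder approach with a manually defined /metrics route (names, port, and setup are illustrative, not the exact code in this PR). install_recorder only registers the global recorder and returns a PrometheusHandle; it does not start any exporter endpoint, so a hand-rolled route has nothing to conflict with:

```rust
use actix_web::{get, web, App, HttpServer, Responder};
use metrics_exporter_prometheus::{PrometheusBuilder, PrometheusHandle};

#[get("/metrics")]
async fn metrics(handle: web::Data<PrometheusHandle>) -> impl Responder {
    // Render all registered metrics in the Prometheus exposition format.
    handle.render()
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // install_recorder registers the recorder globally and hands back a
    // handle; unlike install, it does not spin up its own HTTP listener.
    let handle = PrometheusBuilder::new()
        .install_recorder()
        .expect("failed to install Prometheus recorder");

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(handle.clone()))
            .service(metrics)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
```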
We have two options here:
Yeah, I would go for option 2.
This PR returns metrics from the api crate on the /metrics endpoint. These metrics will be scraped from this endpoint by VictoriaMetrics. The following metrics are currently returned:
- http_requests_total (labels: endpoint, method, status): the total number of HTTP requests handled by the api.
- http_requests_duration_seconds (labels: endpoint, method, status): the request duration for all HTTP requests handled by the api.

More metrics will be added later.
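For illustration, recording these two metrics with the metrics crate macros might look roughly like the sketch below. This is an assumption about the shape of the code, not the PR's actual middleware, and the helper name and label plumbing are placeholders:

```rust
use std::time::Instant;

use metrics::{counter, histogram};

// Hypothetical helper called once a request has finished.
fn record_request(endpoint: &str, method: &str, status: u16, started_at: Instant) {
    // Total number of HTTP requests handled by the api.
    counter!(
        "http_requests_total",
        "endpoint" => endpoint.to_string(),
        "method" => method.to_string(),
        "status" => status.to_string()
    )
    .increment(1);

    // Request duration for all HTTP requests handled by the api.
    histogram!(
        "http_requests_duration_seconds",
        "endpoint" => endpoint.to_string(),
        "method" => method.to_string(),
        "status" => status.to_string()
    )
    .record(started_at.elapsed().as_secs_f64());
}
```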