Conversation
Pull request overview
Adds a Grafana dashboard JSON under script/server/grafana/ intended to visualize Seata transaction metrics exported to Prometheus.
Changes:
- Add `script/server/grafana/panel.json` containing a Grafana dashboard with panels for transaction counts, latency, and TPS.
- Add dashboard templating variables for `instance`, `group`, and `pod` to filter panels.
```json
"datasource": {
  "type": "prometheus",
  "uid": "hJ8_mPCnk"
},
```
The templating variables hardcode the Prometheus datasource UID ("hJ8_mPCnk"). If you switch to Grafana __inputs / a datasource placeholder for portability, make sure to apply it to the templating datasources too (not just panel datasources).
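As a sketch, a portable templating entry could reference the same placeholder as the panels (`DS_PROMETHEUS` here is an assumed input name, and the `label_values` query is illustrative):

```json
{
  "templating": {
    "list": [
      {
        "name": "instance",
        "type": "query",
        "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
        "query": "label_values(seata_transaction, applicationId)",
        "refresh": 2
      }
    ]
  }
}
```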
```json
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": 707,
```
The exported dashboard includes a fixed numeric "id" (707). Grafana dashboard JSON meant for import should typically omit "id" or set it to null so imports don’t try to update a non-existent dashboard or collide with an existing one.
```diff
- "id": 707,
+ "id": null,
```
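A small post-processing step can make any exported dashboard import-friendly in one pass. The sketch below (a hypothetical helper, not part of this PR) nulls the fixed `id` and rewrites every Prometheus datasource reference to an import-time placeholder:

```python
def normalize_dashboard(dash, ds_placeholder="${DS_PROMETHEUS}"):
    """Make an exported Grafana dashboard dict import-friendly:
    null the fixed "id" and point every Prometheus datasource
    reference at an import-time placeholder."""
    dash["id"] = None  # avoid updating/colliding with an existing dashboard

    def patch(node):
        # Walk the whole JSON tree: panels, targets, templating, etc.
        if isinstance(node, dict):
            ds = node.get("datasource")
            if isinstance(ds, dict) and ds.get("type") == "prometheus":
                ds["uid"] = ds_placeholder
            for value in node.values():
                patch(value)
        elif isinstance(node, list):
            for value in node:
                patch(value)

    patch(dash)
    return dash
```

Walking the whole tree (rather than just `panels`) is what catches the templating datasources flagged in the other comment.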
```json
"datasource": {
  "type": "prometheus",
  "uid": "hJ8_mPCnk"
},
```
The Prometheus datasource is hardcoded to a specific Grafana datasource UID ("hJ8_mPCnk"). This makes the dashboard non-portable (imports will fail unless the target Grafana has the same UID). Consider using Grafana’s datasource input mechanism (e.g., __inputs) or a datasource variable placeholder so the UID can be selected at import time.
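Grafana's "export for sharing externally" format illustrates the mechanism: the datasource is declared as an input and referenced through a placeholder, roughly like this (`DS_PROMETHEUS` is a conventional, freely chosen input name):

```json
{
  "__inputs": [
    {
      "name": "DS_PROMETHEUS",
      "label": "Prometheus",
      "type": "datasource",
      "pluginId": "prometheus",
      "pluginName": "Prometheus"
    }
  ],
  "panels": [
    {
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }
    }
  ]
}
```

At import time Grafana then prompts the user to map `DS_PROMETHEUS` to one of their configured datasources.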
```json
"type": "prometheus",
"uid": "hJ8_mPCnk"
},
"expr": "sum(seata_transaction{kubeone_ali_appinstance_name=\"$instance\",group=~\"$group\",meter=\"counter\",pod=\"$pod\"}) by (status)",
```
These PromQL queries filter on the label key "kubeone_ali_appinstance_name", which is not a label produced by Seata’s metrics exporter (the standard key is "applicationId" per metrics IdConstants). As written, the dashboard won’t work out-of-the-box unless the Prometheus scrape/relabeling injects this custom label; consider switching to Seata’s built-in labels (applicationId/group/role/status/...) or documenting the required relabeling.
```diff
- "expr": "sum(seata_transaction{kubeone_ali_appinstance_name=\"$instance\",group=~\"$group\",meter=\"counter\",pod=\"$pod\"}) by (status)",
+ "expr": "sum(seata_transaction{applicationId=\"$instance\",group=~\"$group\",meter=\"counter\",pod=\"$pod\"}) by (status)",
```
```json
"type": "prometheus",
"uid": "hJ8_mPCnk"
},
"expr": "sum(seata_transaction{kubeone_ali_appinstance_name=\"$instance\",group=~\"$group\",meter=\"counter\",pod=\"$pod\"}) by (status)",
```
The "$pod" template variable is configured as multi-select + includeAll, but the query uses an exact-match selector pod="$pod". For multi/all selections Grafana expands the variable to a regex or $__all, so this selector can match nothing. Use a regex matcher (pod=~"$pod") or make the variable single-select without includeAll.
```diff
- "expr": "sum(seata_transaction{kubeone_ali_appinstance_name=\"$instance\",group=~\"$group\",meter=\"counter\",pod=\"$pod\"}) by (status)",
+ "expr": "sum(seata_transaction{kubeone_ali_appinstance_name=\"$instance\",group=~\"$group\",meter=\"counter\",pod=~\"$pod\"}) by (status)",
```
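If multi-select + includeAll is kept together with the regex matcher, the variable definition can also pin the "all" expansion to an explicit regex so `pod=~"$pod"` behaves predictably (a sketch of the relevant fields only):

```json
{
  "name": "pod",
  "type": "query",
  "multi": true,
  "includeAll": true,
  "allValue": ".*"
}
```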
```json
"type": "prometheus",
"uid": "hJ8_mPCnk"
},
"expr": "sum(seata_transaction{kubeone_ali_appinstance_name=\"$instance\",group=~\"$group\",meter=\"summary\",statistic=\"count\",status=~\"unretry|timeout\",pod=\"$pod\"}) by (status)",
```
The "异常事务统计" ("abnormal transaction statistics") panel query matches status=~"unretry|timeout", but Seata's exported transaction statuses are values like "failed" and "2phaseTimeout" (plus retrying statuses), not "unretry"/"timeout". Update the status matcher to the label values Seata actually emits so the panel returns data.
```diff
- "expr": "sum(seata_transaction{kubeone_ali_appinstance_name=\"$instance\",group=~\"$group\",meter=\"summary\",statistic=\"count\",status=~\"unretry|timeout\",pod=\"$pod\"}) by (status)",
+ "expr": "sum(seata_transaction{kubeone_ali_appinstance_name=\"$instance\",group=~\"$group\",meter=\"summary\",statistic=\"count\",status=~\"failed|2phaseTimeout|retrying\",pod=\"$pod\"}) by (status)",
```
```json
},
"expr": "sum(rate(seata_transaction{kubeone_ali_appinstance_name=\"$instance\",group=~\"$group\",meter=\"counter\",pod=\"$pod\",status=~\"committed|rollbacked\"}[1m])) by (kubeone_ali_appinstance_name)",
"interval": "",
"legendFormat": "{{kubeone_ali_appinstance_name}}{{pod}}",
```
This query aggregates ... by (kubeone_ali_appinstance_name) but the legend format includes {{pod}}. Since pod is not in the grouping labels, it will be dropped from the output series and the legend will render empty/incorrect for pod. Either include pod in the aggregation labels or remove it from the legend template.
```diff
- "legendFormat": "{{kubeone_ali_appinstance_name}}{{pod}}",
+ "legendFormat": "{{kubeone_ali_appinstance_name}}",
```
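Alternatively, if a per-pod breakdown is actually wanted, `pod` can stay in the legend as long as it is also kept in the grouping labels. A sketch, using the label names already in this dashboard:

```promql
sum(rate(seata_transaction{kubeone_ali_appinstance_name="$instance",group=~"$group",meter="counter",pod=~"$pod",status=~"committed|rollbacked"}[1m])) by (kubeone_ali_appinstance_name, pod)
```

with `"legendFormat": "{{kubeone_ali_appinstance_name}} {{pod}}"`.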
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##                2.x    #7988    +/-   ##
============================================
+ Coverage     72.28%   72.29%   +0.01%
  Complexity      876      876
============================================
  Files          1310     1310
  Lines         49977    49977
  Branches       5945     5945
============================================
+ Hits          36124    36131      +7
+ Misses        10902    10890     -12
- Partials       2951     2956      +5
```
Ⅰ. Describe what this PR did
Add a Grafana dashboard JSON (`script/server/grafana/panel.json`) for Seata transaction metrics.
Ⅱ. Does this pull request fix one issue?
Ⅲ. Why don't you add test cases (unit test/integration test)?
Ⅳ. Describe how to verify it
Ⅴ. Special notes for reviews