feat(internal/samplernames): decision maker string conversion by lookup table #4424

Open
darccio wants to merge 3 commits into main from dario.castane/langplat-59/adjacent-sampling-dm-improvement

Conversation


@darccio darccio commented Feb 16, 2026

What does this PR do?

Replaces the samplerToDM function with a DecisionMaker method on SamplerName that returns the string representation, following the "-" + strconv.Itoa(int(sampler)) behavior.

Motivation

Adds a small improvement to the warm path when sampling spans. It was found while statically analyzing the codebase for potential span optimization opportunities.

strconv.Itoa already takes a fast path that returns a preallocated representation for most of the values modeled by SamplerName, but this lookup-table implementation is faster for all of them.
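As a rough illustration of the lookup-table approach, here is a minimal self-contained sketch. The constant names and the size of the table are illustrative, not the exact set defined in internal/samplernames:

```go
package main

import (
	"fmt"
	"strconv"
)

// SamplerName mirrors the small integer type in internal/samplernames;
// the constants below are stand-ins, not the package's real values.
type SamplerName int8

const (
	SamplerDefault SamplerName = iota
	SamplerAgentRate
	SamplerRemoteRate
	SamplerRuleRate
	SamplerManual
)

// decisionMakers holds the precomputed "-<n>" strings, so the warm
// path becomes a bounds check plus an index instead of an Itoa call
// and a string concatenation.
var decisionMakers = [...]string{"-0", "-1", "-2", "-3", "-4"}

// DecisionMaker returns the decision maker tag for the sampler,
// falling back to strconv for any value outside the table.
func (s SamplerName) DecisionMaker() string {
	if s >= 0 && int(s) < len(decisionMakers) {
		return decisionMakers[s]
	}
	return "-" + strconv.Itoa(int(s))
}

func main() {
	fmt.Println(SamplerAgentRate.DecisionMaker()) // prints "-1"
}
```

The fallback branch preserves the documented "-" + strconv.Itoa(int(sampler)) behavior for out-of-range values, so the method stays a drop-in replacement.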

Microbenchmarks

old models the previous getRate implementation with string concatenation.

goos: darwin
goarch: arm64
pkg: github.com/DataDog/dd-trace-go/v2/internal/samplernames
cpu: Apple M1 Max
BenchmarkSamplerDecisionMaker

BenchmarkSamplerDecisionMaker/current
BenchmarkSamplerDecisionMaker/current-10         	174151701	         6.809 ns/op	       0 B/op	       0 allocs/op
BenchmarkSamplerDecisionMaker/old
BenchmarkSamplerDecisionMaker/old-10             	88376904	        13.12 ns/op	       0 B/op	       0 allocs/op

PASS
ok  	github.com/DataDog/dd-trace-go/v2/internal/samplernames	2.634s
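A comparison of this shape can be reproduced with a small standalone benchmark; the two functions below are stand-ins for the PR's method and the previous concatenation path, not the repository's actual benchmark code:

```go
package main

import (
	"fmt"
	"strconv"
	"testing"
)

// dms is a lookup-table stand-in for the precomputed strings.
var dms = [...]string{"-0", "-1", "-2", "-3", "-4"}

// current models the lookup-table path.
func current(s int) string { return dms[s] }

// old models the previous path with string concatenation.
func old(s int) string { return "-" + strconv.Itoa(s) }

func main() {
	for _, bench := range []struct {
		name string
		fn   func(int) string
	}{{"current", current}, {"old", old}} {
		// testing.Benchmark runs the function with an adaptive
		// iteration count and reports ns/op.
		r := testing.Benchmark(func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				_ = bench.fn(i % len(dms))
			}
		})
		fmt.Println(bench.name, r)
	}
}
```

Both paths allocate nothing, so the difference shows up purely in ns/op, as in the numbers above.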

Reviewer's Checklist

  • Changed code has unit tests for its functionality at or near 100% coverage.
  • There is a benchmark for any new code, or changes to existing code.
  • If this interacts with the agent in a new way, a system test has been added.
  • New code is free of linting errors. You can check this by running make lint locally.
  • New code doesn't break existing tests. You can check this by running make test locally.
  • Add an appropriate team label so this PR gets put in the right place for the release notes.
  • All generated files are up to date. You can check this by running make generate locally.
  • Non-trivial go.mod changes, e.g. adding new modules, are reviewed by @DataDog/dd-trace-go-guild. Make sure all nested modules are up to date by running make fix-modules locally.

Unsure? Have a question? Request a review!

@darccio darccio requested review from a team as code owners February 16, 2026 17:25
@darccio darccio added team:apm-go AI Assisted AI/LLM assistance used in this PR (partially or fully) labels Feb 16, 2026

codecov bot commented Feb 16, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 55.53%. Comparing base (effd572) to head (b662010).

Additional details and impacted files
Files with missing lines Coverage Δ
ddtrace/tracer/spancontext.go 87.98% <100.00%> (ø)
internal/samplernames/samplernames.go 100.00% <100.00%> (ø)

... and 370 files with indirect coverage changes



pr-commenter bot commented Feb 16, 2026

Benchmarks

Benchmark execution time: 2026-02-17 13:58:36

Comparing candidate commit b662010 in PR branch dario.castane/langplat-59/adjacent-sampling-dm-improvement with baseline commit effd572 in branch main.

Found 4 performance improvements and 0 performance regressions! Performance is the same for 152 metrics, 8 unstable metrics.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because changes are usually small:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'
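The significance rule described above amounts to checking whether the whole interval lies beyond the threshold on one side. A minimal sketch, assuming a hypothetical threshold value and struct (not the benchmarking platform's actual code):

```go
package main

import "fmt"

// ci is a confidence interval over the relative difference of means
// (candidate vs. baseline), expressed as fractions: 0.01 == 1%.
type ci struct {
	lower, upper float64
}

// significant reports whether the CI is entirely outside the impact
// threshold: either the whole interval is above +threshold (worse)
// or the whole interval is below -threshold (better).
func significant(c ci, threshold float64) bool {
	return c.lower > threshold || c.upper < -threshold
}

func main() {
	threshold := 0.01 // hypothetical SIGNIFICANT_IMPACT_THRESHOLD of 1%

	// Matches the second diagram: CI of [1.3%, 3.1%] is entirely
	// above the 1% threshold, so it is significantly worse.
	fmt.Println(significant(ci{0.013, 0.031}, threshold)) // true

	// Matches the first diagram: CI of [-0.6%, +1.2%] straddles 0%,
	// so no significant change is reported.
	fmt.Println(significant(ci{-0.006, 0.012}, threshold)) // false
}
```

An interval that straddles zero, or merely touches the threshold, is never flagged; only intervals fully past the threshold count as regressions or improvements.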

scenario:BenchmarkOTelApiWithCustomTags/datadog_otel_api-25

  • 🟩 allocations [-1; -1] or [-3.448%; -3.448%]

scenario:BenchmarkOTelApiWithCustomTags/otel_api-25

  • 🟩 allocations [-1; -1] or [-2.273%; -2.273%]

scenario:BenchmarkStartSpanConfig/scenario_WithStartSpanConfig-25

  • 🟩 allocations [-1; -1] or [-5.000%; -5.000%]

scenario:BenchmarkStartSpanConfig/scenario_none-25

  • 🟩 allocations [-1; -1] or [-4.545%; -4.545%]


darccio commented Feb 17, 2026

/merge


gh-worker-devflow-routing-ef8351 bot commented Feb 17, 2026

View all feedback in the Devflow UI.

2026-02-17 08:43:27 UTC ℹ️ Start processing command /merge


2026-02-17 08:43:35 UTC ℹ️ MergeQueue: waiting for PR to be ready

This pull request is not mergeable according to GitHub. Common reasons include pending required checks, missing approvals, or merge conflicts — but it could also be blocked by other repository rules or settings.
It will be added to the queue as soon as checks pass and/or get approvals. View in MergeQueue UI.
Note: if you pushed new commits since the last approval, you may need additional approval.
You can remove it from the waiting list with /remove command.


2026-02-17 12:06:34 UTC ⚠️ MergeQueue: This merge request was unqueued

dario.castane@datadoghq.com unqueued this merge request


darccio commented Feb 17, 2026

/remove


gh-worker-devflow-routing-ef8351 bot commented Feb 17, 2026

View all feedback in the Devflow UI.

2026-02-17 12:06:24 UTC ℹ️ Start processing command /remove


2026-02-17 12:06:27 UTC ℹ️ Devflow: /remove


Labels

AI Assisted AI/LLM assistance used in this PR (partially or fully) mergequeue-status: removed team:apm-go


2 participants