
pkg/util/topsql/reporter: stabilize flaky TestTopRUPipelineInProcessIntegration #67579

Closed
flaky-claw wants to merge 1 commit into pingcap:master from flaky-claw:flakyfixer/case_ee66a3d888fd-a1

Conversation

@flaky-claw
Contributor

@flaky-claw flaky-claw commented Apr 7, 2026

What problem does this PR solve?

Issue Number: close #67578

Problem Summary:
The test TestTopRUPipelineInProcessIntegration in pkg/util/topsql/reporter fails intermittently; this PR stabilizes that path.

What changed and how does it work?

Root Cause

takeDataAndSendToReportChan snapshotted ruAggregator without first draining already-buffered collectRUIncrementsChan batches, so queued TopRU increments could be omitted from the current report or shifted to a later window depending on scheduling.

Fix

takeDataAndSendToReportChan now drains the currently queued RU batches before building TopRU records, and a precise regression subtest for that boundary, TestTopRUHandoverEdgeCases/report snapshot includes queued RU batch, was added.

Verification

  • Native repro was weak: ./tools/check/failpoint-go-test.sh pkg/util/topsql/reporter -run '^TestTopRUPipelineInProcessIntegration$' -count=20 passed pre-fix.
  • The new precise repro failed pre-fix and passed post-fix under scoped failpoint enablement for pkg/util/topsql/reporter.
  • The original flaky test passed with go test ./pkg/util/topsql/reporter -run '^TestTopRUPipelineInProcessIntegration$' -count=20 -tags=intest,deadlock.
  • Adjacent TopRU tests passed, and make lint passed.

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No need to test
    • I checked and no code files have been changed.

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Documentation

  • Affects user behaviors
  • Contains syntax changes
  • Contains variable changes
  • Contains experimental features
  • Changes MySQL compatibility

Release note

Please refer to Release Notes Language Style Guide to write a quality release note.

None

Fixes #67578

Summary by CodeRabbit

Release Notes

  • Bug Fixes

    • Enhanced dump operation consistency and error handling through improved initialization and control flow
    • Fixed resource unit buffering to ensure all tracked data is included in monitoring reports
  • Tests

    • Added test coverage for resource unit data handover edge cases

@ti-chi-bot ti-chi-bot Bot added do-not-merge/needs-triage-completed release-note-none Denotes a PR that doesn't merit a release note. labels Apr 7, 2026
@pantheon-ai

pantheon-ai Bot commented Apr 7, 2026

Review failed due to infrastructure/execution failure after retries. Please re-trigger review.

ℹ️ Learn more details on Pantheon AI.

@ti-chi-bot

ti-chi-bot Bot commented Apr 7, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign benjamin2037 for approval. For more information see the Code Review Process.
Please ensure that each of them provides their approval before proceeding.

The full list of commands accepted by this bot can be found here.

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot ti-chi-bot Bot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. component/dumpling This is related to Dumpling of TiDB. labels Apr 7, 2026
@pingcap-cla-assistant

pingcap-cla-assistant Bot commented Apr 7, 2026

CLA assistant check
All committers have signed the CLA.

@tiprow

tiprow Bot commented Apr 7, 2026

Hi @flaky-claw. Thanks for your PR.

PRs from untrusted users cannot be marked as trusted with /ok-to-test in this repo, meaning untrusted PR authors can never trigger tests themselves. Collaborators can still trigger tests on the PR using /test all.

I understand the commands that are listed here.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@coderabbitai

coderabbitai Bot commented Apr 7, 2026

📝 Walkthrough

Walkthrough

The PR introduces a major refactoring of the dump orchestration flow in dumpling by replacing failpoint injection patterns with structured eval-based checks and a new Dump() method for primary orchestration. Additionally, it improves RU batch handling in the TopSQL reporter to drain queued RU increments before report snapshots.

Changes

Cohort / File(s) Summary
Dump Orchestration Refactoring
dumpling/export/dump.go
Refactored NewDumper initialization with structured runSteps pipeline replacing failpoint.Inject with failpoint.Eval checks. Added primary orchestration method Dump() establishing connections, managing consistency, writing metadata, preparing tables, and executing parallel dumps. Introduced chunking/dumping helpers, TiDB region/table-sample logic, GC safe-point management, and Close() method.
TopSQL RU Batch Handling
pkg/util/topsql/reporter/reporter.go, pkg/util/topsql/reporter/reporter_test.go
Enhanced takeDataAndSendToReportChan to drain buffered RU increment batches before snapshotting, ensuring pending RU work is included in report boundaries. Added drainPendingRUBatchesForReport helper method and corresponding test case validating queued RU data inclusion.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested labels

size/M, ok-to-test

Suggested reviewers

  • XuHuaiyu
  • yibin87
  • qw4990

Poem

🐰 A dumper now orchestrates with grace,
Through failpoints eval'd at every place,
RU batches drained before they fly,
The dump pipeline reaches the sky! ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The title accurately describes the main change: stabilizing a flaky test in pkg/util/topsql/reporter by fixing TopRU pipeline buffering.
Description check ✅ Passed The description includes issue number, clear problem summary, root cause analysis, fix explanation, and verification steps; all required sections are adequately filled.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.11.4)

Command failed



Warning

⚠️ This pull request might be slop. It has been flagged by CodeRabbit slop detection and should be reviewed carefully.

@ti-chi-bot

ti-chi-bot Bot commented Apr 7, 2026

@flaky-claw: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
pull-unit-test-next-gen b989d63 link true /test pull-unit-test-next-gen
pull-build-next-gen b989d63 link true /test pull-build-next-gen
idc-jenkins-ci-tidb/unit-test b989d63 link true /test unit-test
idc-jenkins-ci-tidb/check_dev b989d63 link true /test check-dev
idc-jenkins-ci-tidb/build b989d63 link true /test build
idc-jenkins-ci-tidb/check_dev_2 b989d63 link true /test check-dev2
pull-integration-e2e-test b989d63 link true /test pull-integration-e2e-test
pull-integration-realcluster-test-next-gen b989d63 link true /test pull-integration-realcluster-test-next-gen

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@dumpling/export/dump.go`:
- Around line 346-385: startWriters currently starts each writer goroutine
inside the loop so if createConnWithConsistency fails later earlier writers are
left running; change startWriters to first construct all Writer instances and
keep their sql.Conn references (using createConnWithConsistency) without calling
wg.Go, and only after the loop completes successfully iterate over the
constructed writers to call wg.Go(func() error { return writer.run(taskChan) });
if an error occurs during construction clean up by closing any already-created
conns and calling any writer cleanup (e.g., writer.Close or similar) before
returning the teardown no-op; refer to startWriters, createConnWithConsistency,
Writer, writer.run, wg.Go and the teardown return to implement this safe
two-phase initialization and cleanup.
- Around line 1062-1101: The reset callback inside selectTiDBTableRegion
currently clears pkFields and doesn't restore rowID, which is wrong because
pkFields is computed before the query and rowID must be reset for retries;
change the reset callback to restore rowID = -1 and only clear pkVals (and any
per-run state), leaving pkFields untouched so retries keep the computed
handle-column list and the first-row skip behavior is preserved. Ensure
references to rowID, pkVals, and pkFields in the reset closure are updated
accordingly.
- Around line 104-145: The constructor currently only unregisters metrics on
error, leaving resources from later steps running; add a teardown that runs if
runSteps returns an error to cleanly reverse side effects started by
startHTTPService, openSQLDB and tidbStartGCSavepointUpdateService. Implement a
cleanup function (called from a defer placed before calling runSteps) that:
cancels/terminates d.tctx (or calls its Cancel/Close), closes any opened DB
handle (d.db.Close or similar), stops the HTTP service started by
startHTTPService (call its shutdown/Stop method), and unregisters metrics
(d.metrics.unregisterFrom) — call that cleanup when runSteps returns a non-nil
error so partially-initialized state is torn down. Ensure the cleanup references
the same symbols used in this file: runSteps, startHTTPService, openSQLDB,
tidbStartGCSavepointUpdateService, d.tctx, d.db and d.metrics.unregisterFrom.
- Around line 1906-1952: There is a duplicate "package export" declaration and a
repeated import block; remove the second package clause and the duplicate import
block (the redundant lines starting with "package export" and the following
import (...) section) so the file has only one package statement and one import
block; ensure any referenced symbols (types/functions like Dump, ExportTask, or
imports such as fmt, context, github.com/go-sql-driver/mysql) are still covered
by the remaining import block and run go build to confirm the file compiles.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 81a5da47-28c5-4f84-b236-dbdd965a34e4

📥 Commits

Reviewing files that changed from the base of the PR and between 6f4dd4f and b989d63.

📒 Files selected for processing (3)
  • dumpling/export/dump.go
  • pkg/util/topsql/reporter/reporter.go
  • pkg/util/topsql/reporter/reporter_test.go

Comment thread dumpling/export/dump.go
Comment on lines +104 to +145
	d.metrics = newMetrics(conf.PromFactory, conf.Labels)
	d.metrics.registerTo(conf.PromRegistry)
	defer func() {
		if err != nil {
			d.metrics.unregisterFrom(conf.PromRegistry)
		}
	}()

	err = adjustConfig(conf,
		buildTLSConfig,
		validateSpecifiedSQL,
		adjustFileFormat)
	if err != nil {
		return nil, err
	}
	if _, _err_ := failpoint.Eval(_curpkg_("SetIOTotalBytes")); _err_ == nil {
		d.conf.IOTotalBytes = gatomic.NewUint64(0)
		d.conf.Net = uuid.New().String()
		go func() {
			for {
				time.Sleep(10 * time.Millisecond)
				d.tctx.L().Logger.Info("IOTotalBytes", zap.Uint64("IOTotalBytes", d.conf.IOTotalBytes.Load()))
			}
		}()
	}

	err = runSteps(d,
		initLogger,
		createExternalStore,
		startHTTPService,
		openSQLDB,
		detectServerInfo,
		resolveAutoConsistency,

		validateResolveAutoConsistency,
		tidbResolveKeyspaceMetaForGC,
		tidbSetPDClientForGC,
		tidbGetSnapshot,
		tidbStartGCSavepointUpdateService,

		setSessionParam)
	return d, err

⚠️ Potential issue | 🟠 Major

Tear down partially initialized state when NewDumper fails.

After Line 130, later steps can fail after startHTTPService, openSQLDB, or tidbStartGCSavepointUpdateService have already started side effects. The current error defer only unregisters metrics, so a failed constructor can leave the context live, the DB handle open, and background services still running.

🧹 Minimal cleanup on constructor failure
 d.metrics.registerTo(conf.PromRegistry)
 defer func() {
 	if err != nil {
+		cancelFn()
+		if d.dbHandle != nil {
+			_ = d.dbHandle.Close()
+		}
 		d.metrics.unregisterFrom(conf.PromRegistry)
 	}
 }()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dumpling/export/dump.go` around lines 104 - 145, The constructor currently
only unregisters metrics on error, leaving resources from later steps running;
add a teardown that runs if runSteps returns an error to cleanly reverse side
effects started by startHTTPService, openSQLDB and
tidbStartGCSavepointUpdateService. Implement a cleanup function (called from a
defer placed before calling runSteps) that: cancels/terminates d.tctx (or calls
its Cancel/Close), closes any opened DB handle (d.db.Close or similar), stops
the HTTP service started by startHTTPService (call its shutdown/Stop method),
and unregisters metrics (d.metrics.unregisterFrom) — call that cleanup when
runSteps returns a non-nil error so partially-initialized state is torn down.
Ensure the cleanup references the same symbols used in this file: runSteps,
startHTTPService, openSQLDB, tidbStartGCSavepointUpdateService, d.tctx, d.db and
d.metrics.unregisterFrom.

Comment thread dumpling/export/dump.go
Comment on lines +346 to +385
func (d *Dumper) startWriters(tctx *tcontext.Context, wg *errgroup.Group, taskChan <-chan Task,
	rebuildConnFn func(*sql.Conn, bool) (*sql.Conn, error)) ([]*Writer, func(), error) {
	conf, pool := d.conf, d.dbHandle
	writers := make([]*Writer, conf.Threads)
	for i := range conf.Threads {
		conn, err := createConnWithConsistency(tctx, pool, needRepeatableRead(conf.ServerInfo.ServerType, conf.Consistency))
		if err != nil {
			return nil, func() {}, err
		}
		writer := NewWriter(tctx, int64(i), conf, conn, d.extStore, d.metrics)
		writer.rebuildConnFn = rebuildConnFn
		writer.setFinishTableCallBack(func(task Task) {
			if _, ok := task.(*TaskTableData); ok {
				IncCounter(d.metrics.finishedTablesCounter)
				// FIXME: actually finishing the last chunk doesn't means this table is 'finished'.
				// We can call this table is 'finished' if all its chunks are finished.
				// Comment this log now to avoid ambiguity.
				// tctx.L().Debug("finished dumping table data",
				//	zap.String("database", td.Meta.DatabaseName()),
				//	zap.String("table", td.Meta.TableName()))
				if _, _err_ := failpoint.Eval(_curpkg_("EnableLogProgress")); _err_ == nil {
					time.Sleep(1 * time.Second)
					tctx.L().Debug("EnableLogProgress, sleep 1s")
				}
			}
		})
		writer.setFinishTaskCallBack(func(task Task) {
			IncGauge(d.metrics.taskChannelCapacity)
			if td, ok := task.(*TaskTableData); ok {
				d.metrics.completedChunks.Add(1)
				tctx.L().Debug("finish dumping table data task",
					zap.String("database", td.Meta.DatabaseName()),
					zap.String("table", td.Meta.TableName()),
					zap.Int("chunkIdx", td.ChunkIndex))
			}
		})
		wg.Go(func() error {
			return writer.run(taskChan)
		})
		writers[i] = writer

⚠️ Potential issue | 🟠 Major

Don't launch writer goroutines before the full writer set is constructed.

If createConnWithConsistency fails on a later iteration, earlier writers have already been started with wg.Go, but this function returns a no-op teardown. Dump then exits before those goroutines or connections are cleaned up.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dumpling/export/dump.go` around lines 346 - 385, startWriters currently
starts each writer goroutine inside the loop so if createConnWithConsistency
fails later earlier writers are left running; change startWriters to first
construct all Writer instances and keep their sql.Conn references (using
createConnWithConsistency) without calling wg.Go, and only after the loop
completes successfully iterate over the constructed writers to call wg.Go(func()
error { return writer.run(taskChan) }); if an error occurs during construction
clean up by closing any already-created conns and calling any writer cleanup
(e.g., writer.Close or similar) before returning the teardown no-op; refer to
startWriters, createConnWithConsistency, Writer, writer.run, wg.Go and the
teardown return to implement this safe two-phase initialization and cleanup.

Comment thread dumpling/export/dump.go
Comment on lines +1062 to +1101
	var (
		startKey, decodedKey sql.NullString
		rowID                = -1
	)
	const (
		tableRegionSQL = "SELECT START_KEY,tidb_decode_key(START_KEY) from INFORMATION_SCHEMA.TIKV_REGION_STATUS s WHERE s.DB_NAME = ? AND s.TABLE_NAME = ? AND IS_INDEX = 0 ORDER BY START_KEY;"
		tidbRowID      = "_tidb_rowid="
	)
	dbName, tableName := meta.DatabaseName(), meta.TableName()
	logger := tctx.L().With(zap.String("database", dbName), zap.String("table", tableName))
	err = conn.QuerySQL(tctx, func(rows *sql.Rows) error {
		rowID++
		err = rows.Scan(&startKey, &decodedKey)
		if err != nil {
			return errors.Trace(err)
		}
		// first region's start key has no use. It may come from another table or might be invalid
		if rowID == 0 {
			return nil
		}
		if !startKey.Valid {
			logger.Debug("meet invalid start key", zap.Int("rowID", rowID))
			return nil
		}
		if !decodedKey.Valid {
			logger.Debug("meet invalid decoded start key", zap.Int("rowID", rowID), zap.String("startKey", startKey.String))
			return nil
		}
		pkVal, err2 := extractTiDBRowIDFromDecodedKey(tidbRowID, decodedKey.String)
		if err2 != nil {
			logger.Debug("cannot extract pkVal from decoded start key",
				zap.Int("rowID", rowID), zap.String("startKey", startKey.String), zap.String("decodedKey", decodedKey.String), log.ShortError(err2))
		} else {
			pkVals = append(pkVals, []string{pkVal})
		}
		return nil
	}, func() {
		pkFields = pkFields[:0]
		pkVals = pkVals[:0]
	}, tableRegionSQL, dbName, tableName)

⚠️ Potential issue | 🟠 Major

Reset the retry state correctly in selectTiDBTableRegion.

pkFields is computed before the query, so clearing it in the reset callback is wrong, and rowID also needs to be restored to -1 for a retried result set. As written, a successful retry can return boundaries with an empty handle-column list and stop skipping the first row.

🔁 Suggested retry-state fix
 	}, func() {
-		pkFields = pkFields[:0]
 		pkVals = pkVals[:0]
+		rowID = -1
 	}, tableRegionSQL, dbName, tableName)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dumpling/export/dump.go` around lines 1062 - 1101, The reset callback inside
selectTiDBTableRegion currently clears pkFields and doesn't restore rowID, which
is wrong because pkFields is computed before the query and rowID must be reset
for retries; change the reset callback to restore rowID = -1 and only clear
pkVals (and any per-run state), leaving pkFields untouched so retries keep the
computed handle-column list and the first-row skip behavior is preserved. Ensure
references to rowID, pkVals, and pkFields in the reset closure are updated
accordingly.

Comment thread dumpling/export/dump.go
Comment on lines +1906 to +1952
// Copyright 2020 PingCAP, Inc. Licensed under Apache-2.0.

package export

import (
	"bytes"
	"context"
	"database/sql"
	"database/sql/driver"
	"encoding/hex"
	"fmt"
	"math/big"
	"net"
	"slices"
	"strconv"
	"strings"
	"sync/atomic"
	"time"

	"github.com/coreos/go-semver/semver"
	// import mysql driver
	"github.com/go-sql-driver/mysql"
	"github.com/google/uuid"
	"github.com/pingcap/errors"
	"github.com/pingcap/failpoint"
	pclog "github.com/pingcap/log"
	"github.com/pingcap/tidb/br/pkg/summary"
	"github.com/pingcap/tidb/br/pkg/version"
	"github.com/pingcap/tidb/dumpling/cli"
	tcontext "github.com/pingcap/tidb/dumpling/context"
	"github.com/pingcap/tidb/dumpling/log"
	infoschema "github.com/pingcap/tidb/pkg/infoschema/context"
	"github.com/pingcap/tidb/pkg/objstore"
	"github.com/pingcap/tidb/pkg/objstore/storeapi"
	"github.com/pingcap/tidb/pkg/parser"
	"github.com/pingcap/tidb/pkg/parser/ast"
	"github.com/pingcap/tidb/pkg/parser/format"
	"github.com/pingcap/tidb/pkg/store/helper"
	"github.com/pingcap/tidb/pkg/tablecodec"
	"github.com/pingcap/tidb/pkg/util"
	"github.com/pingcap/tidb/pkg/util/codec"
	pd "github.com/tikv/pd/client"
	"github.com/tikv/pd/client/pkg/caller"
	gatomic "go.uber.org/atomic"
	"go.uber.org/zap"
	"golang.org/x/sync/errgroup"
)

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

sed -n '1900,1960p' dumpling/export/dump.go
printf '\n--- package/import markers ---\n'
rg -n '^(package export|import \()$' dumpling/export/dump.go

Repository: pingcap/tidb

Length of output: 1993


Remove the duplicate package export block and import declarations starting at line 1908.

The file contains a second package export declaration at line 1908 and a duplicate import block at line 1910. Go does not allow multiple package clauses or import blocks in a single file, so this will fail compilation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dumpling/export/dump.go` around lines 1906 - 1952, There is a duplicate
"package export" declaration and a repeated import block; remove the second
package clause and the duplicate import block (the redundant lines starting with
"package export" and the following import (...) section) so the file has only
one package statement and one import block; ensure any referenced symbols
(types/functions like Dump, ExportTask, or imports such as fmt, context,
github.com/go-sql-driver/mysql) are still covered by the remaining import block
and run go build to confirm the file compiles.

@yinsustart yinsustart closed this Apr 7, 2026

Labels

component/dumpling This is related to Dumpling of TiDB. release-note-none Denotes a PR that doesn't merit a release note. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Flaky test: TestTopRUPipelineInProcessIntegration in pkg/util/topsql/reporter

2 participants