fix: correct TP zero-copy put semantics #1685

Open
zxpdemonio wants to merge 6 commits into kvcache-ai:main from openanolis:cruz/fix_put_with_tp

Conversation

@zxpdemonio
Collaborator

Fix full-tensor semantics for TP zero-copy put APIs and deduplicate the shared buffer helper logic used by the tensor API tests.

Description

Treat the buffers passed to put_tensor_with_tp_from and
batch_put_tensor_with_tp_from as full-tensor zero-copy inputs: decode the
serialized buffer and reuse the existing TP split/write path internally.

Update the API reference and tensor API tests to match the corrected
behavior.
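The corrected contract can be illustrated with a small, dependency-free sketch (plain Python; the real split/write path lives in store_py.cpp, and `tp_split_2d` here is a hypothetical stand-in): the caller now hands over one contiguous buffer holding the full tensor, and the store splits it into `tp_size` shards along `split_dim` internally.

```python
# Illustration only: mimics the server-side TP split that
# put_tensor_with_tp_from now performs internally on the full tensor.
# All names are hypothetical; the real implementation is C++.

def tp_split_2d(flat, rows, cols, split_dim, tp_size):
    """Split a row-major 2-D tensor (given as a flat list) into
    tp_size contiguous shards along split_dim (0 = rows, 1 = cols)."""
    if split_dim == 0:
        assert rows % tp_size == 0
        step = rows // tp_size
        return [flat[r * step * cols:(r + 1) * step * cols]
                for r in range(tp_size)]
    else:
        assert cols % tp_size == 0
        step = cols // tp_size
        return [[flat[r * cols + c]
                 for r in range(rows)
                 for c in range(k * step, (k + 1) * step)]
                for k in range(tp_size)]

# A 4x4 tensor split across 2 ranks along dim 0: each rank gets 2 rows.
full = list(range(16))
shards = tp_split_2d(full, 4, 4, split_dim=0, tp_size=2)
print(shards[0])  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Before this fix, callers had to perform this split themselves and pass pre-split shards; now the full buffer is the input and the split is server-side.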

Module

  • Transfer Engine (mooncake-transfer-engine)
  • Mooncake Store (mooncake-store)
  • Mooncake EP (mooncake-ep)
  • Integration (mooncake-integration)
  • P2P Store (mooncake-p2p-store)
  • Python Wheel (mooncake-wheel)
  • PyTorch Backend (mooncake-pg)
  • Mooncake RL (mooncake-rl)
  • CI/CD
  • Docs
  • Other

Type of Change

  • Bug fix
  • New feature
  • Refactor
  • Breaking change
  • Documentation update
  • Other

How Has This Been Tested?

Checklist

  • I have performed a self-review of my own code.
  • I have formatted my own code using ./scripts/code_format.sh before submitting.
  • I have updated the documentation.
  • I have added tests to prove my changes are effective.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a bug in the zero-copy Tensor Parallelism (TP) put APIs, put_tensor_with_tp_from and batch_put_tensor_with_tp_from, by ensuring they correctly process full tensor inputs. The previous implementation incorrectly expected pre-split tensor shards, leading to semantic inconsistencies. The fix involves decoding the full tensor from the provided zero-copy buffer and then utilizing the existing internal TP splitting and storage mechanisms. Concurrently, the Python tensor API tests have been refactored to centralize buffer allocation and deallocation, enhancing the clarity and robustness of the test suite.

Highlights

  • Corrected Zero-Copy TP Put Semantics: The put_tensor_with_tp_from and batch_put_tensor_with_tp_from APIs now correctly interpret input buffers as full tensors, which are then internally decoded, split, and stored as shards, aligning with the intended full-tensor zero-copy behavior.
  • Updated API Documentation: The Python API reference has been updated to clearly state that the zero-copy Tensor Parallelism (TP) put methods expect complete tensors in the input buffer, rather than pre-split shards.
  • Refactored Test Helper Logic: Shared helper functions for managing registered buffers (allocate_registered_buffer_slots, release_registered_buffer_slots) were introduced in the tensor API tests, reducing code duplication and improving test maintainability.

Changelog
  • docs/source/python-api-reference/mooncake-store.md
    • Updated descriptions for put_tensor_with_tp_from and batch_put_tensor_with_tp_from to clarify that they expect full tensors, not pre-split shards.
    • Added split_dim parameter to batch_put_tensor_with_tp_from in the documentation.
  • mooncake-integration/store/store_py.cpp
    • Initialized TensorMetadata in PyTensorInfo struct to prevent uninitialized data.
    • Introduced validate_serialized_tensor_buffer, decode_serialized_tensor_buffer, and CastAddrs2Ptrs helper functions for robust buffer validation and type casting.
    • Modified batch_put_from to initialize final_results with ErrorCode::INVALID_PARAMS and update only successfully processed indices.
    • Refactored batch_put_from to utilize the newly added buffer validation and casting helpers.
    • Updated put_tensor_with_tp_from to decode the full tensor from the input buffer and delegate to the existing put_tensor_with_tp_impl for splitting and storing.
    • Updated batch_put_tensor_with_tp_from to decode full tensors from input buffers, perform validation, and use batch_put_tensor_with_tp_impl for batch processing.
    • Adjusted pybind11 definitions for put_tensor_with_tp_from and batch_put_tensor_with_tp_from to reflect the full tensor input semantics and added split_dim to the latter's signature.
  • scripts/test_tensor_api.py
    • Added RegisteredBufferSlots dataclass to encapsulate registered buffer information for better organization.
    • Introduced allocate_registered_buffer_slots and release_registered_buffer_slots methods to TensorAPITestCase for centralized and reusable buffer management.
    • Refactored test_04_tp_consistency, test_05_put_get_into, test_06_batch_put_get_into, test_07_put_get_into_with_tp, test_08_batch_put_get_into_with_tp, test_benchmark_04_batch_put_get_into_with_tp, and _test_dtype_roundtrip to leverage the new buffer helper functions, enhancing test clarity and reducing boilerplate.
    • Updated test_04_tp_consistency, test_07_put_get_into_with_tp, and test_08_batch_put_get_into_with_tp to correctly validate the new full-tensor zero-copy TP put semantics.
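The test-helper refactor described above follows a common pattern: collect a group of allocated buffers and their metadata in one dataclass, so allocation and release live in a single pair of helpers. A hypothetical, dependency-free sketch (the real helpers in scripts/test_tensor_api.py additionally register the buffers with the store):

```python
from dataclasses import dataclass, field

@dataclass
class RegisteredBufferSlots:
    # Hypothetical stand-in for the dataclass added in test_tensor_api.py:
    # keeps the backing buffers, their addresses, and sizes together.
    buffers: list = field(default_factory=list)
    addrs: list = field(default_factory=list)
    sizes: list = field(default_factory=list)

def allocate_slots(sizes):
    """Allocate one bytearray per requested size and record its metadata."""
    slots = RegisteredBufferSlots()
    for n in sizes:
        buf = bytearray(n)
        slots.buffers.append(buf)
        slots.addrs.append(id(buf))  # placeholder for a registered address
        slots.sizes.append(n)
    return slots

def release_slots(slots):
    """Drop all references so the buffers can be reclaimed."""
    slots.buffers.clear()
    slots.addrs.clear()
    slots.sizes.clear()
```

Centralizing allocation and release this way is what removes the per-test boilerplate the changelog mentions.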
Activity
  • The author, zxpdemonio, created this pull request to fix incorrect semantics in zero-copy Tensor Parallelism put APIs and to refactor related test helper logic.

@gemini-code-assist left a comment

Code Review

This pull request correctly updates the semantics for put_tensor_with_tp_from and batch_put_tensor_with_tp_from to handle full tensors instead of pre-split shards, which simplifies the client-side logic for zero-copy tensor parallelism puts. The changes are well-implemented, reusing the existing tensor splitting logic by decoding the tensor from the buffer first. The accompanying documentation and pybind11 wrapper updates are accurate.

The refactoring in test_tensor_api.py is a great improvement, introducing helper methods for buffer allocation and release, which makes the tests much cleaner and more robust.

I have one minor suggestion in store_py.cpp to improve consistency in how result vectors are initialized.
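The result-vector initialization mentioned here follows a standard error-aggregation pattern: pre-fill every slot of the batch result with an error code, then overwrite only the indices that pass validation and are written successfully. A dependency-free Python sketch (all names and the error value are hypothetical placeholders for the real ErrorCode values in store_py.cpp):

```python
INVALID_PARAMS = -1  # placeholder; the real codes live in ErrorCode
OK = 0

def batch_put(items, put_one):
    """Pre-fill results with INVALID_PARAMS, then overwrite each index
    whose item validates and is stored successfully."""
    results = [INVALID_PARAMS] * len(items)
    for i, item in enumerate(items):
        if item is None:      # stands in for serialized-buffer validation
            continue          # failed slot keeps INVALID_PARAMS
        results[i] = put_one(item)
    return results

print(batch_put([b"a", None, b"c"], lambda _: OK))  # [0, -1, 0]
```

The benefit of this shape is that a validation failure in one item can never leave its result slot uninitialized or accidentally marked as success.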

@codecov-commenter

codecov-commenter commented Mar 17, 2026

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

❌ Patch coverage is 0% with 60 lines in your changes missing coverage. Please review.

Files with missing lines Patch % Lines
mooncake-integration/store/store_py.cpp 0.00% 60 Missing ⚠️


@zxpdemonio force-pushed the cruz/fix_put_with_tp branch from 895ba34 to 487816b on March 24, 2026 at 11:19
@zxpdemonio force-pushed the cruz/fix_put_with_tp branch from 487816b to 0d123cb on March 24, 2026 at 11:58
zxpdemonio and others added 3 commits March 24, 2026 23:27
Ensure batch_put_tensor_with_tp_impl reports zero on fully successful shard writes so TP zero-copy Python tests don't fail with false INVALID_PARAMS results.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Keep the TP zero-copy benchmark focused on put-from semantics by removing an unsupported split_dim argument from batch_get_tensor_with_tp_into.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>