
Conversation

@razdoburdin (Contributor)

This PR introduces an optimization for CPU inference. For each tree, the top N levels are transformed into a compact array-based layout, which allows a branchless node-indexing rule: idx = 2 * idx + int(val < split_cond). To minimize memory overhead, this transformation from the standard tree structure to the array layout is performed on the fly for each block of data being processed. Even with this additional computation, the improved data locality of the cache-friendly array layout speeds up inference by up to ~2x (~1.4x on average).
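For illustration, here is a minimal sketch of the branchless traversal of the compacted top levels; the struct and member names are made up for this example and are not the PR's actual types.

  #include <vector>

  // Sketch only: compacted top levels of one tree, stored as a dense array.
  // Node idx has children 2*idx and 2*idx + 1 (1-based indexing, slot 0 unused).
  struct ArrayLayoutSketch {
    std::vector<float> split_cond_;   // split condition per node
    std::vector<int> split_index_;    // feature index per node
    int depth_;                       // number of compacted top levels

    // Walk the top levels for one dense row. The comparison result is folded
    // into the index arithmetic, so there is no data-dependent branch.
    int Traverse(const std::vector<float>& row) const {
      int idx = 1;  // 1-based root
      for (int level = 0; level < depth_; ++level) {
        float val = row[split_index_[idx]];
        idx = 2 * idx + static_cast<int>(val < split_cond_[idx]);
      }
      return idx;  // the caller continues in the standard tree representation
    }
  };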

@razdoburdin marked this pull request as draft on June 20, 2025.
@trivialfis (Member)

Thank you for the inference optimization. Please unmark the "draft" status and ping me when the PR is ready for testing.

@Vika-F left a comment

Cosmetic changes.

A possible next step would be to convert the trees into the array-based representation only once, rather than for each block of data.

@trivialfis (Member) left a comment

I'm still trying to understand the code and will give it a try later. In the meantime, could you please craft some specific unit tests for the new inference algorithm?

* We transform trees to the array layout for each block of data to avoid memory overhead.
* This makes the array layout inefficient for block_size == 1.
*/
const bool use_array_tree_layout = block_size > 1;
Member

What happens if this is a small online inference call? The input size could be a few samples per call.

@razdoburdin (Contributor, author)

The default (old) implementation will be used.

@razdoburdin (Contributor, author)

> I'm still trying to understand the code and will give it a try later. In the meantime, could you please craft some specific unit tests for the new inference algorithm?

I added some unit tests.

@trivialfis (Member) left a comment

I'm still trying to understand the code. In the meantime, let me do some refactoring over this week and next to accommodate the new optimization. We need a better structure to handle all of these:

  • Predict with scalar leaf.
  • Predict with vector leaf.
  • Array predict with scalar leaf.
  • Array predict with vector leaf.
  • Column split with scalar leaf.

I think I will split up the CPU predictor into multiple pieces.
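As a purely illustrative sketch of keeping those variants as separate small kernels rather than one monolithic function (all names below are hypothetical and do not describe the eventual refactoring):

  #include <cstdio>

  enum class LeafKind { kScalar, kVector };
  enum class Layout { kStandard, kArrayTopLevels };

  // Stub kernels; in a real predictor each would hold one specialized loop.
  void PredictScalar()            { std::puts("scalar leaf, standard layout"); }
  void PredictVector()            { std::puts("vector leaf, standard layout"); }
  void ArrayPredictScalar()       { std::puts("scalar leaf, array layout"); }
  void ArrayPredictVector()       { std::puts("vector leaf, array layout"); }
  void ColumnSplitPredictScalar() { std::puts("scalar leaf, column split"); }

  void Dispatch(bool column_split, Layout layout, LeafKind leaf) {
    if (column_split) {
      ColumnSplitPredictScalar();  // column split pairs with scalar leaves here
    } else if (layout == Layout::kArrayTopLevels) {
      if (leaf == LeafKind::kScalar) ArrayPredictScalar(); else ArrayPredictVector();
    } else {
      if (leaf == LeafKind::kScalar) PredictScalar(); else PredictVector();
    }
  }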

*/
std::array<bst_node_t, kNodesCount + 1> nidx_in_tree_;

static bool IsLeaf(const RegTree& tree, bst_node_t nidx) {
Member

Is there a benefit to this C++ overloading rather than the simpler tree.IsLeaf? How much faster is it?

@razdoburdin (Contributor, author)

I added the overload to handle both the RegTree and MultiTargetTree cases. Is there a better option?

Member

Use RegTree without extracting the multi-target tree when populating the buffer, and delegate the dispatching to RegTree::LeftChild(bst_node_t nidx) instead of using RegTree::Node::LeftChild. There's a check inside RegTree::LeftChild:

  [[nodiscard]] bst_node_t LeftChild(bst_node_t nidx) const {
    if (IsMultiTarget()) {
      return this->p_mt_tree_->LeftChild(nidx);
    }
    return (*this)[nidx].LeftChild();
  }
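For illustration, a hedged sketch of populating the top-level buffer through RegTree's dispatching accessors (IsLeaf and LeftChild are referenced in this thread; RightChild and kRoot are assumed to behave analogously); the function and variable names are made up for the example, not the PR's actual code:

  #include <utility>
  #include <vector>

  #include "xgboost/tree_model.h"  // RegTree, bst_node_t

  // Level-order walk of the top max_depth levels via RegTree's own accessors,
  // which dispatch to MultiTargetTree internally when needed.
  void PopulateTopLevelsSketch(const xgboost::RegTree& tree, int max_depth,
                               std::vector<xgboost::bst_node_t>* p_out) {
    std::vector<xgboost::bst_node_t> frontier{xgboost::RegTree::kRoot};
    for (int level = 0; level < max_depth && !frontier.empty(); ++level) {
      std::vector<xgboost::bst_node_t> next;
      for (auto nidx : frontier) {
        p_out->push_back(nidx);
        if (!tree.IsLeaf(nidx)) {
          next.push_back(tree.LeftChild(nidx));
          next.push_back(tree.RightChild(nidx));
        }
      }
      frontier = std::move(next);
    }
  }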

@razdoburdin (Contributor, author)

Done.

@trivialfis (Member)

I'm trying to clean up the CPU predictor. I will update this PR once that is finished.

@trivialfis (Member)

I need to fix a perf regression caused by the new ordinal encoder.

@trivialfis (Member)

> I need to fix a perf regression caused by the new ordinal encoder.

This has been fixed. I will look deeper into this PR.

@trivialfis (Member) commented on Aug 20, 2025

Thank you for expanding the tree layout. In the future (when you can prioritize it), do you think it's possible to create and store the layout inside the RegTree structure as an opt-in method call? My reasoning is as follows:

  • The existing RegTree and the multi-target tree already use a very similar layout, minus the dummy nodes. It might be easier/cleaner to do it there.
  • We can avoid complicating the predictor too much.
  • We can cache the result in the regtree structure to avoid repeated initialization.

You can define a std::unique_ptr<ArrayTree> inside the RegTree, initialized to nullptr. Define a method to create the array tree when needed, and reset it back to nullptr whenever any non-const method is called.
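A rough sketch of that lazy caching pattern; ArrayTree and the surrounding class are placeholders for the suggestion above, not existing XGBoost API:

  #include <memory>

  // Placeholder for the compact array-based layout discussed in this thread.
  struct ArrayTree {
    // flattened top levels: split conditions, feature indices, ...
  };

  class RegTreeSketch {  // stands in for RegTree in this illustration
   public:
    // Opt-in accessor: build the array layout on first use and cache it.
    ArrayTree const* GetArrayTree() const {
      if (!array_tree_) {
        array_tree_ = std::make_unique<ArrayTree>();  // built from this tree
      }
      return array_tree_.get();
    }

    // Any non-const (mutating) method invalidates the cached layout.
    void ExpandNode(/* split parameters */) {
      array_tree_.reset();
      // ... perform the actual tree mutation ...
    }

   private:
    mutable std::unique_ptr<ArrayTree> array_tree_;  // nullptr until requested
  };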

@razdoburdin (Contributor, author)

> Thank you for expanding the tree layout. In the future (when you can prioritize it), do you think it's possible to create and store the layout inside the RegTree structure as an opt-in method call? My reasoning is as follows:
>
>   • The existing RegTree and the multi-target tree already use a very similar layout, minus the dummy nodes. It might be easier/cleaner to do it there.
>   • We can avoid complicating the predictor too much.
>   • We can cache the result in the regtree structure to avoid repeated initialization.
>
> You can define a std::unique_ptr<ArrayTree> inside the RegTree, initialized to nullptr. Define a method to create the array tree when needed, and reset it back to nullptr whenever any non-const method is called.

Do you think the memory overhead (about 1 KB per tree) is acceptable for storing the layout? If so, that would be the natural next optimization step.

@trivialfis (Member)

> Do you think the memory overhead (about 1 KB per tree) is acceptable for storing the layout?

I think this should be fine, since the size of the layout is bounded by depth. The implementation here falls back to the original tree after a certain level is reached.
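For a rough sense of where the ~1 KB figure comes from, a back-of-the-envelope estimate; the 6-level cutoff and 8 bytes per node (float split condition plus int feature index) are assumptions for illustration, not values taken from this PR:

  #include <cstddef>
  #include <cstdio>

  int main() {
    constexpr int kTopLevels = 6;                                   // assumed cutoff
    constexpr std::size_t kNodes = (1u << (kTopLevels + 1)) - 1;    // 127 nodes
    constexpr std::size_t kBytesPerNode = sizeof(float) + sizeof(int);  // 8 bytes
    std::printf("%zu nodes, ~%zu bytes per tree\n", kNodes, kNodes * kBytesPerNode);
    // Prints: 127 nodes, ~1016 bytes per tree -- roughly the 1 KB figure above.
  }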

@razdoburdin (Contributor, author)

> > Do you think the memory overhead (about 1 KB per tree) is acceptable for storing the layout?
>
> I think this should be fine, since the size of the layout is bounded by depth. The implementation here falls back to the original tree after a certain level is reached.

Can we merge the current implementation and postpone buffering of the layout?

@trivialfis (Member)

> Can we merge the current implementation and postpone buffering of the layout?

We can. I will look into this PR.

@trivialfis (Member) left a comment

Thank you for the excellent optimization!

I can understand the code (mostly), and it should be cleaner after merging the layout into the RegTree. I will merge this PR once the CI is green.

@trivialfis merged commit 446e3b9 into dmlc:master on Sep 10, 2025
82 of 84 checks passed