Parallelize the filling of the std::vector in DMatrix #41
```diff
@@ -17,6 +17,8 @@ def py_str(string):
     """Convert C string back to Python string"""
     return string.decode("utf-8")
 
+def check_if_fast():
+    return True
 
 def _open_and_validate_recipe(recipe_path: pathlib.Path) -> Dict[str, Any]:
     """Ensure that the build recipe contains necessary fields"""
```

Reviewer comment on `check_if_fast`: Not sure if there's missing code here or if this was supposed to be removed before the PR was made.

Author reply: You are right, sorry. This commit was only included to check internally which version of tl2cgen we were running, and it was pushed to the PR because it was on the same branch; it should not be included.
Do we have benchmarks showing the impact of this change at various scales? How often is this code likely to get called inside a block that is already parallelized? You mentioned this was a bottleneck; can you give an example of a workload where it is a bottleneck?
Sorry for the late reply. In our application we do predictions at large scale: we run tl2cgen.Predictor.predict about 17,000 times on NumPy ndarrays of 100,000,000 sample points by 61 features. For this reason we use tl2cgen to compile the model from sklearn, convert the array to DMatrix, and then predict. However, we noticed that the DMatrix conversion took about 60% of the computing time and that it was not done in parallel. Our machines have 96 CPUs, so this has a big impact. After parallelizing the filling of the std::vector for the DMatrix, the time dropped drastically. For comparison you can see the following screenshot (100,000,000 sample points by 61 features input). Here an old version of Treelite was used for comparison (since it has similar performance and had the same bottleneck), while tl2cgen is shown with the parallel DMatrix.
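For context, here is a minimal sketch of the kind of change being discussed: filling a pre-sized std::vector from a dense row-major buffer with an OpenMP parallel-for. This is an illustrative example only, not the actual tl2cgen patch; the function name `FillParallel` and its signature are hypothetical.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch (not the tl2cgen code): copy a dense row-major
// float64 buffer (e.g. the data pointer of a NumPy array) into a
// std::vector in parallel. The vector is allocated once up front, so
// each thread writes a disjoint range of rows and no locking is needed.
// Build with OpenMP enabled (e.g. -fopenmp); without it the pragma is
// ignored and the loop runs serially.
std::vector<double> FillParallel(const double* src, std::size_t num_row,
                                 std::size_t num_col) {
  std::vector<double> dest(num_row * num_col);
  #pragma omp parallel for schedule(static)
  for (std::int64_t i = 0; i < static_cast<std::int64_t>(num_row); ++i) {
    const std::size_t offset = static_cast<std::size_t>(i) * num_col;
    for (std::size_t j = 0; j < num_col; ++j) {
      dest[offset + j] = src[offset + j];
    }
  }
  return dest;
}
```

Sizing the vector before the loop (rather than appending with push_back) is what makes the parallelization safe: each row maps to a fixed, non-overlapping range of the destination, so threads never contend for the same elements.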