Description
Describe the Bug
When paddle is imported after juliacall, a core dump occurs. Below is the minimal reproducible code:

```python
# paddlepaddle-gpu 3.0.0b1
import pysr
from pysr import PySRRegressor
import numpy as np
import paddle  # <---- Importing this *before* pysr prevents the core dump

X = np.random.randn(100, 10)
y = np.ones(X.shape[0])
model = PySRRegressor(
    progress=False,
    max_evals=10000,
    model_selection="accuracy",
    extra_sympy_mappings={},
    output_paddle_format=True,
)
model.fit(X, y)  # <----- core dumped
```

Similar import-order crashes have been reported in other frameworks.
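Until the root cause is fixed, a defensive check can catch the unsafe ordering before `paddle` is loaded. The helper below is a hypothetical sketch (not part of pysr or paddle) that inspects `sys.modules`, assuming the crash only occurs when `juliacall` is loaded first:

```python
import sys

def check_paddle_import_order():
    """Detect the import order reported to trigger the core dump:
    juliacall (pulled in by pysr) already loaded, paddle not yet loaded."""
    if "juliacall" in sys.modules and "paddle" not in sys.modules:
        return "unsafe: import paddle before pysr/juliacall"
    return "ok"

# Call this right before `import paddle` to fail loudly instead of crashing.
print(check_paddle_import_order())
```

In a fresh interpreter where neither package has been imported, the check reports `"ok"`; the workaround itself is simply to place `import paddle` above `import pysr`.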
Related Issues
- Certain import order triggers segmentation fault pytorch/pytorch#78829
- Using numba and julia (or any other LLVM loader?) at the same time leads to segfaults numba/numba#7857
- 【Hackathon 7th No.1】 Integrate PaddlePaddle as a New Backend MilesCranmer/PySR#704
Additional Supplementary Information
No response