
5090 CUDA compatibility issues #205

@kehan777

Description


python -c "import torch; print(torch.__version__); print(torch.cuda.is_available()); print(torch.cuda.get_device_capability())"
2.9.0+cu128
True
(12, 0)
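For reference, a quick way to check which compute capabilities the installed wheel actually ships kernels for (a minimal diagnostic sketch; the expected output in the comment is only my guess):

```python
import torch

# Architectures compiled into this PyTorch build (SASS and PTX targets).
print(torch.cuda.get_arch_list())          # should include 'sm_120' if Blackwell is supported
# Capability of the visible GPU; the RTX 5090 reports (12, 0), i.e. sm_120.
print(torch.cuda.get_device_capability(0))
```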
(ptxmm) adsb@MSI:/mnt/d/conda/protenix$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Aug_20_01:58:59_PM_PDT_2025
Cuda compilation tools, release 13.0, V13.0.88
Build cuda_13.0.r13.0/compiler.36424714_0
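One possible wrinkle: the wheel above is built against CUDA 12.8 (cu128) while the system nvcc is 13.0, and nvcc is what gets used when the custom kernels mentioned later in the log are JIT-compiled. A small sketch to print both side by side (the subprocess call is only illustrative):

```python
import subprocess
import torch

# CUDA toolkit PyTorch was compiled against (cu128 wheel -> '12.8').
print("torch.version.cuda:", torch.version.cuda)
# CUDA toolkit that would be invoked for JIT-built extensions.
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)
```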

(ptxmm) adsb@MSI:/mnt/d/conda/protenix$ export TORCH_CUDA_ARCH_LIST="8.9;12.0"
(ptxmm) adsb@MSI:/mnt/d/conda/protenix$ bash inference_demo.sh
Try to find the ccd cache data in the code directory for inference.
2025-10-06 16:13:55,702 [/mnt/d/conda/protenix/runner/inference.py:159] INFO main: Distributed environment: world size: 1, global rank: 0, local rank: 0
2025-10-06 16:13:55,702 [/mnt/d/conda/protenix/runner/inference.py:65] INFO root: LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
2025-10-06 16:13:55,812 [/mnt/d/conda/protenix/runner/inference.py:159] INFO main: env: /mnt/d/conda/protenix/cutlass-3.5.1
2025-10-06 16:13:55,812 [/mnt/d/conda/protenix/runner/inference.py:80] INFO root: The kernels will be compiled when DS4Sci_EvoformerAttention is called for the first time.
2025-10-06 16:13:55,812 [/mnt/d/conda/protenix/runner/inference.py:85] INFO root: The kernels will be compiled when fast_layernorm is called for the first time.
2025-10-06 16:13:55,812 [/mnt/d/conda/protenix/runner/inference.py:89] INFO root: Finished init ENV.
train scheduler 16.0
inference scheduler 16.0
Diffusion Module has 16.0
2025-10-06 16:13:58,242 [/mnt/d/conda/protenix/runner/inference.py:159] INFO main: Loading from /home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/./release_data/checkpoint/model_v0.2.0.pt, strict: True
2025-10-06 16:14:03,742 [/mnt/d/conda/protenix/runner/inference.py:159] INFO main: Sampled key: module.input_embedder.atom_attention_encoder.linear_no_bias_f.weight
2025-10-06 16:14:04,038 [/mnt/d/conda/protenix/runner/inference.py:159] INFO main: Finish loading checkpoint.
2025-10-06 16:14:04,042 [/mnt/d/conda/protenix/runner/inference.py:221] INFO main: Loading data from
./examples/example.json
2025-10-06 16:14:04,226 [/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/data/infer_data_pipeline.py:209] INFO protenix.data.infer_data_pipeline: Featurizing 7r6r...
2025-10-06 16:14:04,226 [/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/data/infer_data_pipeline.py:209] INFO protenix.data.infer_data_pipeline: Featurizing 7wux...
2025-10-06 16:14:04,227 [/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/data/infer_data_pipeline.py:209] INFO protenix.data.infer_data_pipeline: Featurizing 7pzb...
2025-10-06 16:14:10,904 [/mnt/d/conda/protenix/runner/inference.py:245] INFO main: [Rank 0 (1/3)] 7r6r: N_asym 3, N_token 245, N_atom 2529, N_msa 363
/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/model/layer_norm/layer_norm.py:50: FutureWarning: torch.cuda.amp.autocast(args...) is deprecated. Please use torch.amp.autocast('cuda', args...) instead.
with torch.cuda.amp.autocast(enabled=False):
2025-10-06 16:14:11,382 [/mnt/d/conda/protenix/runner/inference.py:271] INFO main: [Rank 0]7r6r CUDA error: no kernel image is available for execution on the device
Search for `cudaErrorNoKernelImageForDevice` in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Traceback (most recent call last):
  File "/mnt/d/conda/protenix/runner/inference.py", line 254, in infer_predict
    prediction = runner.predict(data)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
  File "/mnt/d/conda/protenix/runner/inference.py", line 148, in predict
    prediction, _, _ = self.model(
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/model/protenix.py", line 692, in forward
    pred_dict, log_dict, time_tracker = self.main_inference_loop(
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/model/protenix.py", line 321, in main_inference_loop
    pred_dict, log_dict, time_tracker = self._main_inference_loop(
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/model/protenix.py", line 382, in _main_inference_loop
    s_inputs, s, z = self.get_pairformer_output(
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/model/protenix.py", line 148, in get_pairformer_output
    s_inputs = self.input_embedder(
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/model/modules/embedders.py", line 72, in forward
    a, _, _, _ = self.atom_attention_encoder(
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/model/modules/transformer.py", line 837, in forward
    q_l = self.atom_transformer(
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/model/modules/transformer.py", line 482, in forward
    return self.diffusion_transformer(
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/model/modules/transformer.py", line 405, in forward
    a, s, z = checkpoint_blocks(
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/openfold_local/utils/checkpointing.py", line 85, in checkpoint_blocks
    return exec(blocks, args)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/openfold_local/utils/checkpointing.py", line 72, in exec
    a = wrap(block(*a))
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/model/modules/transformer.py", line 282, in forward
    attn_out = self.attention_pair_bias(
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/model/modules/transformer.py", line 201, in forward
    a = self.local_multihead_attention(
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/protenix/model/modules/transformer.py", line 131, in local_multihead_attention
    bias = self.linear_nobias_z(
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/adsb/conda/envs/ptxmm/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 134, in forward
    return F.linear(input, self.weight, self.bias)
torch.AcceleratorError: CUDA error: no kernel image is available for execution on the device
Search for `cudaErrorNoKernelImageForDevice` in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

2025-10-06 16:14:13,832 [/mnt/d/conda/protenix/runner/inference.py:245] INFO main: [Rank 0 (2/3)] 7wux: N_asym 10, N_token 1218, N_atom 9142, N_msa 1286
2025-10-06 16:14:14,173 [/mnt/d/conda/protenix/runner/inference.py:271] INFO main: [Rank 0]7wux CUDA error: no kernel image is available for execution on the device
Search for `cudaErrorNoKernelImageForDevice` in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
(Traceback identical to the 7r6r failure above, ending in F.linear.)

2025-10-06 16:14:14,178 [/mnt/d/conda/protenix/runner/inference.py:245] INFO main: [Rank 0 (3/3)] 7pzb: N_asym 8, N_token 600, N_atom 5222, N_msa 6502
2025-10-06 16:14:14,243 [/mnt/d/conda/protenix/runner/inference.py:271] INFO main: [Rank 0]7pzb CUDA error: no kernel image is available for execution on the device
Search for `cudaErrorNoKernelImageForDevice` in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
(Traceback identical to the 7r6r failure above, ending in F.linear.)
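For what it's worth, my understanding of the `export TORCH_CUDA_ARCH_LIST="8.9;12.0"` used above (please correct me if this is wrong) is sketched below:

```python
import os

# TORCH_CUDA_ARCH_LIST only affects kernels compiled from source (the
# DS4Sci_EvoformerAttention / fast_layernorm JIT builds mentioned in the
# log); as far as I can tell it does not add sm_120 images to the prebuilt
# wheel, which is where the failing F.linear lives. It must be set before
# the first call that triggers the JIT compile, e.g. at process start.
os.environ["TORCH_CUDA_ARCH_LIST"] = "12.0"
```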

(ptxmm) adsb@MSI:/mnt/d/conda/protenix$ python -c "import torch; x = torch.randn(1, 128, 256, device='cuda'); y = torch.nn.Linear(256, 256).cuda()(x); print(y)"
tensor([[[-0.5351, -0.3450, -0.0861, ..., -0.3420, 0.0343, -0.1446],
[ 0.5010, -0.1754, -1.2315, ..., -0.9490, 0.2061, 0.0160],
[ 0.0726, 0.2123, -0.0957, ..., 0.3553, 0.0089, 0.3244],
...,
[-0.0861, -0.5129, -0.2459, ..., -0.6243, 0.5106, -0.7058],
[-0.1798, -0.2492, -0.5570, ..., 0.7262, -0.0278, 0.1920],
[-0.2529, 0.1932, -0.6778, ..., -0.0812, 0.4013, 0.6989]]],
device='cuda:0', grad_fn=)
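The plain float32 Linear above works, so another check is whether the reduced-precision kernels are the ones missing on sm_120 (a sketch; I have not verified which dtype Protenix actually uses for these layers):

```python
import torch

x = torch.randn(1, 128, 256, device="cuda")
layer = torch.nn.Linear(256, 256).cuda()

# 'no kernel image' errors can be dtype-specific; float16/bfloat16 here are
# assumptions, not necessarily the dtypes Protenix runs the embedder in.
for dtype in (torch.float16, torch.bfloat16):
    y = layer.to(dtype)(x.to(dtype))
    print(dtype, y.shape, y.dtype)
```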

Hello, it looks like there is a compatibility problem with the 5090 (compute capability 12.0, i.e. sm_120). Could you please look into it based on the information above? Thank you!
