Description
I am unable to get PyTorch to recognize the CUDA cores on my NVIDIA Jetson AGX Orin. Despite multiple installation attempts using standard pip commands and official NVIDIA wheel URLs, the system either fails with a 404 error or installs a CPU-only version of Torch.
Environment / Hardware Specs
- Hardware: NVIDIA Jetson AGX Orin
- L4T Release: R35.4.1 (check via `cat /etc/nv_tegra_release`)
- JetPack Version: 5.1.2
- Python Version: 3.10 (Virtual Environment)
- Architecture: aarch64
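For anyone reproducing this, the JetPack version can be derived from the L4T release string in `/etc/nv_tegra_release`. A minimal sketch of that mapping (the regex and the version table are my assumptions, based on NVIDIA's published L4T-to-JetPack pairings, not an official API):

```python
import re
from typing import Optional

# Subset of NVIDIA's L4T -> JetPack pairings (assumption; extend as needed).
L4T_TO_JETPACK = {
    "35.4.1": "5.1.2",
    "35.3.1": "5.1.1",
    "35.2.1": "5.1",
}

def jetpack_from_l4t(release_line: str) -> Optional[str]:
    """Parse a /etc/nv_tegra_release line and map it to a JetPack version."""
    m = re.search(r"R(\d+)\s*\(release\),\s*REVISION:\s*([\d.]+)", release_line)
    if not m:
        return None
    l4t = f"{m.group(1)}.{m.group(2)}"
    return L4T_TO_JETPACK.get(l4t)

# Example line in the format /etc/nv_tegra_release actually uses:
line = "# R35 (release), REVISION: 4.1, GCID: 33958178, BOARD: t186ref"
print(jetpack_from_l4t(line))  # L4T R35.4.1 maps to JetPack 5.1.2
```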
The Problem
- Default Installation: Running `pip install torch` installs a generic manylinux wheel that does not support CUDA on Jetson hardware.
- Broken Links: Official NVIDIA documentation links for the JetPack 5.1.2 PyTorch wheels are currently returning `HTTP 404 Not Found`.
  - Failed URL: https://developer.download.nvidia.com/compute/redist/jp/v512/pytorch/torch-2.1.0a0+41361538.nv23.06-cp310-cp310-linux_aarch64.whl
- Index Issues: Using `--extra-index-url https://pypi.nvidia.com` often resolves to the highest version number available (e.g., 2.3.0+), which is incompatible with JetPack 5.x binaries, so CUDA detection fails.
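The index issue above is why letting pip take the newest version fails: the resolver has no notion of JetPack compatibility. A sketch of the selection logic that pinning has to do manually (the compatibility table here is an illustrative assumption, not an authoritative matrix):

```python
from typing import List, Optional

# Illustrative assumption: JetPack 5.x pairs with the 2.0/2.1 NVIDIA builds,
# while 2.3.0+ wheels target newer JetPack releases. Not an official matrix.
COMPATIBLE = {
    "5": {"2.0.0", "2.1.0"},
    "6": {"2.3.0", "2.4.0"},
}

def pick_torch_version(jetpack_major: str, available: List[str]) -> Optional[str]:
    """Return the highest available torch version compatible with this JetPack."""
    ok = COMPATIBLE.get(jetpack_major, set())
    candidates = [v for v in available if v in ok]
    if not candidates:
        return None
    return max(candidates, key=lambda v: tuple(map(int, v.split("."))))

# pip's default resolver would take 2.3.0 (the highest); on JetPack 5.x the
# compatible choice is 2.1.0, so the version must be pinned explicitly.
print(pick_torch_version("5", ["2.0.0", "2.1.0", "2.3.0"]))
```

In practice that means pinning the version in the install command (e.g. `pip install "torch==2.1.*" --extra-index-url https://pypi.nvidia.com`) rather than letting pip pick the latest.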
Current Error Output
```python
>>> import torch
>>> print(torch.cuda.is_available())
False
>>> print(torch.version.cuda)
None
>>> print(torch.cuda.device_count())
0
```
Steps Taken to Reproduce / Fix
- Created a clean virtual environment (`python3 -m venv env`).
- Attempted to install via NVIDIA's PyPI index: `pip install --extra-index-url https://pypi.nvidia.com torch`.
- Attempted manual wheel installation via `wget` (resulted in 404).
- Verified that system-level libraries (`libopenblas-base`, `libopenmpi-dev`) are installed.
- Exported `LD_LIBRARY_PATH` to include `/usr/local/cuda/lib64`.
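The manual-wheel attempt can be scripted so the JetPack tag, torch build string, and Python ABI stay in sync. This sketch only reconstructs the URL pattern from the failing link quoted above (which still returns 404 as of this report), so the download itself is left commented out:

```shell
#!/bin/sh
# Build the NVIDIA redist wheel URL from its components. The path layout is
# taken from the 404ing link in this report and may change on NVIDIA's side.
JP_TAG="v512"                          # JetPack 5.1.2
TORCH_VER="2.1.0a0+41361538.nv23.06"   # NVIDIA build string for torch 2.1.0
PY_ABI="cp310"                         # Python 3.10
WHEEL="torch-${TORCH_VER}-${PY_ABI}-${PY_ABI}-linux_aarch64.whl"
URL="https://developer.download.nvidia.com/compute/redist/jp/${JP_TAG}/pytorch/${WHEEL}"
echo "${URL}"
# wget "${URL}" && pip install "${WHEEL}"   # disabled: the link currently 404s
```

Keeping the components in separate variables makes it easy to retry with a different JetPack tag or ABI if NVIDIA reorganizes the redist tree.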
Expected Behavior
PyTorch should be able to utilize the Ampere GPU cores on the Orin, and `torch.cuda.is_available()` should return `True`.