
Conversation


@muxuezzz muxuezzz commented Sep 1, 2025

Related to #7785

This PR adds support for running ComfyUI on a single Ascend NPU card.

The key changes include:

  • Added configuration options to specify single NPU card usage
  • Modified device initialization logic to properly detect and utilize a single NPU
  • Updated relevant device mapping and resource allocation code to work with single NPU setup

This allows users with a single Ascend NPU to run ComfyUI without needing multi-card configuration, lowering the entry barrier for NPU users.

@Kosinkadink
Collaborator

Kosinkadink commented Sep 17, 2025

Is there currently an issue with Ascend NPUs not working properly if you have only one?

@muxuezzz
Author

Is there currently an issue with Ascend NPUs not working properly if you have only one?

No, this PR addresses the need to let users designate specific NPUs for ComfyUI to use when they have multiple Ascend NPUs. This need arises because Ascend NPUs are rarely deployed as single cards.

@Kosinkadink
Collaborator

Ah, gotcha, thank you for the explanation.

@Kosinkadink
Collaborator

Just spoke with comfy, could you change this PR so that instead of adding a new startup argument specific to Ascend, you reuse the cuda-device argument so that it sets the environment variable within the same if statement?

@muxuezzz
Author

Just spoke with comfy, could you change this PR so that instead of adding a new startup argument specific to Ascend, you reuse the cuda-device argument so that it sets the environment variable within the same if statement?

I've applied your suggestion. cuda-device now supports configuring both CUDA and NPU devices. Additionally, since model_management is now imported earlier, I moved the WARNING about importing torch up accordingly.

@Kosinkadink Kosinkadink added the Good PR This PR looks good to go, it needs comfy's final review. label Sep 18, 2025
@Kosinkadink
Collaborator

Kosinkadink commented Sep 19, 2025

To make this much more likely to get merged, since it now edits the logic for where pytorch deps get loaded, I will push a commit to this PR to simply make it always set the NPU environment variable, same as it does for HIP_VISIBLE_DEVICES even though that is for ROCm. The print statement does not need to say 'npu'. In the future we may say 'setting hardware device id' or something like that for the print statement, but the scope of this PR should be just to get the Ascend NPU device selection working.

…able without any other edits to master branch
@muxuezzz
Author

To make this much more likely to get merged, since it now edits the logic for where pytorch deps get loaded, I will push a commit to this PR to simply make it always set the NPU environment variable, same as it does for HIP_VISIBLE_DEVICES even though that is for ROCm. The print statement does not need to say 'npu'. In the future we may say 'setting hardware device id' or something like that for the print statement, but the scope of this PR should be just to get the Ascend NPU device selection working.

Thank you, this looks much cleaner! I'm considering whether to add detection code like subprocess.check_output(['where', 'nvidia-smi'], stderr=subprocess.STDOUT) to clarify the log info without importing torch.
