
Commit 40c7db6

[MM][Bugfix] Add MoE verification for multi-modal models (#3897)
### What this PR does / why we need it?

Fix #3891. The empty `moe_comm_method` in that issue is caused by an incorrect check for MoE models: `is_moe_model` only detects whether a text-only model is a MoE model, without considering multi-modal models, e.g., `VL` and `Omni`, whose MoE-related keys live in a nested sub-config.

This PR instead checks the config dict recursively for any key containing "expert", without inspecting the model architecture. It is worth noting that we can't verify a model by whether it contains a `FusedMoE` module, because `is_moe_model` is called before model loading, e.g., when updating the ACLGraph config during platform initialization.

- vLLM version: v0.11.0
- vLLM main: vllm-project/vllm@83f478b

Signed-off-by: shen-shanshan <[email protected]>
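A minimal sketch of the failure mode, assuming a hypothetical nested config layout (the `vision_config`/`text_config` split and the `num_experts` key below are illustrative; real multi-modal configs vary by model family):

```python
# Hypothetical multi-modal hf_config: the MoE marker ("num_experts")
# sits one level down inside the text backbone's sub-config.
hf_config = {
    "model_type": "some-vl-model",       # hypothetical
    "vision_config": {"hidden_size": 1152},
    "text_config": {"num_experts": 64},  # MoE key is not at the top level
}

# Old check: only inspects top-level keys, so it misses "num_experts".
old_result = any("experts" in key.lower() for key in hf_config)
assert old_result is False

# New check (mirrors the helper added in this commit): recurse through
# nested dicts looking for any key containing "expert".
def _is_contain_expert(config):
    if isinstance(config, dict):
        for k, v in config.items():
            if "expert" in str(k):
                return True
            if _is_contain_expert(v):
                return True
    return False

assert _is_contain_expert(hf_config) is True
```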
1 parent 892f1ee commit 40c7db6

1 file changed: +13 −3 lines

vllm_ascend/utils.py

Lines changed: 13 additions & 3 deletions
@@ -672,14 +672,24 @@ def prefill_context_parallel_enable() -> bool:
 
 
 def is_moe_model(vllm_config: VllmConfig):
+    """Checks if the model is a MoE model by config"""
     global _IS_MOE_MODEL
     if _IS_MOE_MODEL is None:
-        config = vllm_config.model_config.hf_config
-        _IS_MOE_MODEL = any('experts' in key.lower()
-                            for key in config.to_dict())
+        model_configs = vllm_config.model_config.hf_config.to_dict()
+        _IS_MOE_MODEL = _is_contain_expert(model_configs)
     return _IS_MOE_MODEL
 
 
+def _is_contain_expert(config: Any):
+    if isinstance(config, dict):
+        for k, v in config.items():
+            if "expert" in str(k):
+                return True
+            if _is_contain_expert(v):
+                return True
+    return False
+
+
 def weak_ref_tensor(tensor: Any) -> Any:
     """
     Create a weak reference to a tensor.
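For reference, a quick check of the new helper on illustrative inputs (hypothetical config fragments, not taken from the test suite):

```python
>>> _is_contain_expert({"text_config": {"num_experts": 8}})
True
>>> _is_contain_expert({"text_config": {"intermediate_size": 11008}})
False
```

Since `is_moe_model` memoizes its result in the module-level `_IS_MOE_MODEL`, the recursive walk over the config dict runs at most once per process.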
