Warning
hf-mem is still experimental and subject to major changes across releases, so keep in mind that breaking changes may occur until v1.0.0.
hf-mem is a CLI, written in Python, to estimate inference memory requirements for Hugging Face models. hf-mem is lightweight and depends only on httpx, as it pulls the Safetensors metadata via HTTP Range requests instead of downloading the weights. It's recommended to run it with uv for a better experience.
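To illustrate the mechanism, here is a minimal sketch (not hf-mem's actual code) of reading Safetensors metadata over HTTP Range requests with httpx; the checkpoint URL and file name are assumptions for the example:

```python
import json
import struct

import httpx

# Assumed single-file checkpoint; sharded models spread tensors across files.
url = "https://huggingface.co/google/embeddinggemma-300m/resolve/main/model.safetensors"

# The first 8 bytes of a Safetensors file hold the JSON header size (little-endian u64).
head = httpx.get(url, headers={"Range": "bytes=0-7"}, follow_redirects=True)
header_size = struct.unpack("<Q", head.content)[0]

# A second Range request fetches just the JSON header, never the tensor data.
resp = httpx.get(url, headers={"Range": f"bytes=8-{7 + header_size}"}, follow_redirects=True)
header = json.loads(resp.content)

# Each entry maps a tensor name to its dtype, shape, and byte offsets.
for name, info in header.items():
    if name != "__metadata__":
        print(name, info["dtype"], info["shape"])
```

Summing element counts times bytes-per-element over those entries is enough to estimate parameter memory without fetching a single weight.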
hf-mem lets you estimate the inference requirements to run any model from the Hugging Face Hub, including Transformers, Diffusers, and Sentence Transformers models, as well as any model with Safetensors-compatible weights.
Read more about hf-mem in this short-form post.
```bash
uvx hf-mem --model-id MiniMaxAI/MiniMax-M2
uvx hf-mem --model-id Qwen/Qwen-Image
uvx hf-mem --model-id google/embeddinggemma-300m
```
The --experimental flag enables KV Cache memory estimation for LLMs (...ForCausalLM) and VLMs (...ForConditionalGeneration). It also accepts a custom --max-model-len (defaults to the value in config.json), --batch-size (defaults to 1), and --kv-cache-dtype (defaults to auto, meaning the data type set in config.json under torch_dtype or dtype, or the one from quantization_config when applicable).
```bash
uvx hf-mem --model-id MiniMaxAI/MiniMax-M2 --experimental
```
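As a rough illustration of what such an estimate involves, below is the standard KV Cache size formula for a decoder-only transformer. This is a generic sketch under common assumptions (one K and one V tensor per layer, no cache quantization), not necessarily hf-mem's exact computation:

```python
# Generic KV cache estimate: 2 (K and V) x layers x KV heads x head dim
# x sequence length x batch size x bytes per element.
def kv_cache_bytes(
    num_layers: int,
    num_kv_heads: int,
    head_dim: int,
    max_model_len: int,
    batch_size: int = 1,
    dtype_bytes: int = 2,  # 2 bytes for float16/bfloat16
) -> int:
    return 2 * num_layers * num_kv_heads * head_dim * max_model_len * batch_size * dtype_bytes

# Hypothetical example: 32 layers, 8 KV heads of dim 128, 32k context, bfloat16.
print(f"{kv_cache_bytes(32, 8, 128, 32_768) / 1024**3:.1f} GiB")  # 4.0 GiB
```

The per-model values (number of layers, KV heads, head dimension) come from the model's config.json, which is why the defaults above are read from it.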
Optionally, you can add hf-mem as an agent skill, which allows a coding agent to discover and use it when provided as a SKILL.md.
More information can be found in the Anthropic Agent Skills documentation, including how to use them.
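For reference, a minimal SKILL.md might look like the following; the name, description, and instructions here are illustrative assumptions, not an official skill definition shipped with hf-mem:

```markdown
---
name: hf-mem
description: Estimate inference memory requirements for Hugging Face Hub models before running them.
---

# hf-mem

Run `uvx hf-mem --model-id <model-id>` to estimate how much memory a model
needs for inference. Add `--experimental` for KV Cache estimation on LLMs
and VLMs.
```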