
Improve Mistral models integration with llama.cpp #14737


Merged

19 commits merged into ggml-org:master from mistral_integration on Aug 11, 2025

Conversation

juliendenize
Copy link
Contributor

@juliendenize juliendenize commented Jul 17, 2025

Description

This PR aims to enhance the integration of Mistral models with llama.cpp by addressing several key issues and introducing new features. Here are the details:

Context

  • The current HF-to-GGUF conversion does not work directly for Mistral models because our weight format is vLLM-based. This means weights must first be converted to Hugging Face format and then to GGUF, which is not ideal and can lead to conversion errors if the first conversion is not done correctly. It also means that adding new models to the llama.cpp ecosystem requires first adding them to Transformers.
  • We do not support chat templates natively, which means chat templates are community-maintained and not guaranteed to work correctly.
  • We use mistral-common internally for tokenization and want the community to use it to unlock the full capabilities of our models. As mistral-common is a Python library, we have opened a PR to add a REST API via FastAPI to make it easier for users who are not in the Python ecosystem.

Using mistral-common with llama.cpp

We recommend that users only use the llama-server tool with the server's /completions route for now, as it is the only route that supports token input. We also advise users to set return_tokens=True in their requests so that mistral-common handles detokenization.

Added features

  1. Model conversion:

We have added a script, convert_mistral_to_gguf.py, that converts Mistral models to GGUF directly from Hugging Face.

  2. Model architecture:

We registered the Mistral architecture in llama.cpp to support Mistral models natively. This allows users to use Mistral models with llama.cpp without having to convert them to Hugging Face first.

Known Limitations:

Our approach does not support multimodality:

  • mistral-common handles processing multimodal data, but that data cannot be passed to llama.cpp via the /completions route.
  • llama.cpp only supports multimodality via chat templates, which we do not support.

This approach also requires users to use only the llama.cpp server's /completions route.

Example Code

To get started, install mistral-common using the following command:

pip install git+https://github.com/mistralai/mistral-common.git@improve_llama_cpp_integration[server]

(Optional) Convert the model

HF_TOKEN=... python convert_mistral_to_gguf.py \
    mistralai/Devstral-Small-2505 --remote --ctx-train 131072 --outtype bf16

Launch the mistral-common and llama.cpp servers

Launch the mistral-common server:

HF_TOKEN=... mistral_common mistralai/Devstral-Small-2505 --port 6000

Launch the llama.cpp server:

./build/bin/llama-server -m models/Devstral-Small-2505-Q4_K_M.gguf --port 8080

Use the servers

Here is a code snippet demonstrating how to use the new features:

import requests

# Local endpoints for the two servers started above.
mistral_common_url = "http://127.0.0.1:6000"
llama_cpp_url = "http://127.0.0.1:8080"

def tokenize(messages, url):
    # Ask the mistral-common server to apply the chat template and tokenize.
    response = requests.post(f"{url}/tokenize/messages", json=messages)
    return response.json()

def detokenize(tokens, url):
    # Convert generated tokens back into plain text.
    response = requests.post(f"{url}/detokenize", json={"tokens": tokens})
    return response.json()

def detokenize_message(tokens, url):
    # Convert generated tokens back into an assistant message.
    response = requests.post(f"{url}/detokenize", json={"tokens": tokens, "as_message": True})
    return response.json()

def generate(tokens, url):
    # Send token ids to the llama.cpp /completions route and request tokens
    # back so that mistral-common handles detokenization.
    response = requests.post(f"{url}/completions", json={
        "prompt": tokens,
        "stream": False,
        "return_tokens": True
    })
    return response.json()

messages = [
    {"role": "system", "content": "You are Devstral a cool coding agent that can help users with their coding needs."},
    {"role": "user", "content": "Who are you and what can you do?"}
]

tokens = tokenize(messages, mistral_common_url)
print(tokens)

generated = generate(tokens, llama_cpp_url)["tokens"]
print(generated)

detokenized = detokenize(generated, mistral_common_url)
print(detokenized)

detokenized_message = detokenize_message(generated, mistral_common_url)
print(detokenized_message)

Feedback and Contributions

We believe these changes will significantly improve the integration of Mistral models with llama.cpp and provide a better experience for our users. We welcome any feedback or suggestions to further enhance this integration. Also, as we have little experience with the llama.cpp codebase, we welcome any help to improve the integration and make sure we respect the codebase and the community.

@github-actions github-actions bot added the python python script changes label Jul 17, 2025
@ggerganov
Copy link
Member

Thanks for the contribution. From a developer perspective, it looks like a good approach to avoid any potential tokenization / formatting problems. In general, for all models, using a reference tokenizer instead of relying on llama.cpp is always recommended. From a usability standpoint, the requirement to start a separate tokenization server is a bit of a drawback, but I understand that correctness is of higher importance.

My understanding is that most chat template problems occur during the early days of the model release, and with time tend to get polished and fixed. So this approach would be a stable alternative during such periods of instability.

@ehoogeveen-medweb
Copy link

IIRC Mistral's architecture also makes use of sliding window attention (SWA), defaulting to a window size of 4096 tokens - though I don't know all the details (like which layers, if any, are full layers). It would be great if the window size could be stored in the GGUF file as well (e.g. as mistral.attention.sliding_window), and the model could eventually be hooked into llama.cpp's SWA support.
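
For illustration only, here is a minimal sketch of how the metadata key suggested above could be written with gguf-py's GGUFWriter. The key name mistral.attention.sliding_window and the 4096 default come from this comment, not from the PR, and the file/arch names are made up.

# Hypothetical sketch: store the suggested SWA window size as GGUF metadata.
# Not part of this PR; shown only to make the suggestion concrete.
import gguf

writer = gguf.GGUFWriter("devstral-metadata-demo.gguf", arch="llama")
writer.add_uint32("mistral.attention.sliding_window", 4096)  # assumed default from the comment
writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.close()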

@juliendenize juliendenize force-pushed the mistral_integration branch from b809a96 to 2865a25 on July 23, 2025 14:55
@juliendenize
Copy link
Contributor Author

Hey guys, many apologies for the delayed answer, and thanks a lot for your feedback.

@ggerganov

My understanding is that most chat template problems occur during the early days of the model release, and with time tend to get polished and fixed. So this approach would be a stable alternative during such periods of instability.

Exactly. What's cool with llama.cpp is that you support passing Jinja templates when serving, so once the templates are correct people can use them and drop the mistral-common server if they want. Very nice feature!

@ehoogeveen-medweb

Mistral's architecture also makes use of sliding window attention (SWA)

This is actually only used by quite old (by deep-learning standards ^^) models, so we didn't add support for it. Could it be a subsequent PR?

Regarding the PR:

  • I refactored a bit to remove the Mistral arch; it didn't add value, so we think this is less of a maintainability burden!
  • I tried to make the CI green, but I think that because we modified gguf-py files, some checks cannot pass since the CI installs the published package, AFAIU. Is that right?

Happy to answer more questions :)

@juliendenize juliendenize marked this pull request as ready for review July 23, 2025 22:19
@CISC
Copy link
Collaborator

CISC commented Jul 24, 2025

* I tried to make the CI green but I think that because we modified gguf-py files some checks cannot pass because it installs the published package AFAIU. Is it right ?

Partially, there's also a pydantic version conflict:
https://github.com/ggml-org/llama.cpp/actions/runs/16474192826/job/46571944794?pr=14737#step:4:291

@CISC
Copy link
Collaborator

CISC commented Jul 24, 2025

@juliendenize Please undo all the formatting/style changes, they are not relevant and add too much noise to the PR, will review afterwards. :)

@Kreijstal

This comment was marked as off-topic.

@juliendenize
Copy link
Contributor Author

Partially, there's also a pydantic version conflict:

Arf, would it be OK to bump the Pydantic requirement on your side, or is that a no? Was there a particular reason to stay at 2.6?

@juliendenize
Copy link
Contributor Author

@juliendenize Please undo all the formatting/style changes, they are not relevant and add too much noise to the PR, will review afterwards. :)

Done, sorry about that, my own formatter was on.

Is there a formatter or linter available for Python? I didn't find one in the contributing guidelines.
I've installed flake8, but it didn't flag anything when run locally.

@CISC
Copy link
Collaborator

CISC commented Jul 24, 2025

Arf, would it be ok to up the requirements for Pydantic in your side or is it a no ? Was there a particular reason to stay at 2.6 ?

Yes, I think it's OK; it's probably just the version that was available at the time.

Is there a formatter or linter available for Python ? Didn't find it in the contributing guidelines. I've installed flake8 but it didn't flag any thing when launched locally

We don't use a Python formatter, only flake8 linting.

@CISC
Copy link
Collaborator

CISC commented Jul 24, 2025

Pillow conflict, should be fine to update:
https://github.com/ggml-org/llama.cpp/actions/runs/16497338367/job/46646389345?pr=14737#step:4:305

@CISC
Copy link
Collaborator

CISC commented Jul 24, 2025

Right, now we are getting somewhere. :)
https://github.com/ggml-org/llama.cpp/actions/runs/16497849941/job/46648060083?pr=14737

Edit: The unbound errors are clearly handled at init and can be silenced by # pyright: ignore[reportPossiblyUnboundVariable]
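
For illustration, here is a tiny standalone example (not from this PR, names made up) of the possibly-unbound pattern and the inline suppression mentioned above:

# Hypothetical illustration: `value` is only assigned when `ready` is true,
# so pyright reports it as possibly unbound at the later use even though the
# guard makes the access safe; the diagnostic can be silenced inline.
import random

ready = random.random() > 0.5
if ready:
    value = 42

if ready:
    print(value)  # pyright: ignore[reportPossiblyUnboundVariable]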

@am17an
Copy link
Collaborator

am17an commented Jul 24, 2025

@juliendenize do you also plan to make changes to convert_mistral_to_gguf.py to have mappings for audio_tower.* for the mmproj, I guess it will be necessary for the new voxtral models?

@juliendenize
Copy link
Contributor Author

Right, now we are getting somewhere. :) https://github.com/ggml-org/llama.cpp/actions/runs/16497849941/job/46648060083?pr=14737

Edit: The unbound errors are clearly handled at init and can be silenced by # pyright: ignore[reportPossiblyUnboundVariable]

Tried to make things cleaner, sorry for the back and forth.

@juliendenize
Copy link
Contributor Author

@juliendenize do you also plan to make changes to convert_mistral_to_gguf.py to have mappings for audio_tower.* for the mmproj, I guess it will be necessary for the new voxtral models?

That would be cool indeed. I didn't personally work on Voxtral (the model side), so I might need some assistance, as I lack experience with audio models.

Is Voxtral already supported by llama.cpp? I assumed it is not, for now.

@am17an
Copy link
Collaborator

am17an commented Jul 24, 2025

@juliendenize do you also plan to make changes to convert_mistral_to_gguf.py to have mappings for audio_tower.* for the mmproj, I guess it will be necessary for the new voxtral models?

that would be cool indeed, I didn't work personally on Voxtral (for the model), so I might need some assistance as I lack experience in audio models.

Is voxtral already supported by llama.cpp ? I assumed that not for now.

Yeah, not for now, but I was trying to add support and ran into issues converting to GGUF. That should be easy to add after this PR is merged, so don't worry about it for now :)

@juliendenize
Copy link
Contributor Author

Ok so this: https://github.com/ggml-org/llama.cpp/actions/runs/16500995835/job/46660394829?pr=14737

is actually expected, because we haven't yet merged the corresponding PR in mistral-common:
Add a FastAPI app #113

We're in the process of merging it; I'm just adding a final feature, which is being able to call /v1/chat/completions to directly call the inference server (in this case llama.cpp !!). I'm moving as fast as possible on this.

@CISC
Copy link
Collaborator

CISC commented Jul 24, 2025

We're in the process of merging I'm just adding a final feature which is begin able to call /v1/chat/completions to directly call the inference server (in this case llama.cpp !!). I'm moving as fast as possible for this.

Ok, ping me when you're ready.

@ngxson
Copy link
Collaborator

ngxson commented Jul 24, 2025

I've just had a deeper look into this PR. One concern though: most of the code inside convert_mistral_to_gguf.py is copied from convert_hf_to_gguf.py, which can make it a bit tricky to maintain in the long term, especially the code for multimodal model conversion.

Just thinking, maybe it's better to bring them right into convert_hf_to_gguf.py? AFAIU most of the complicated code in this PR is dedicated to converting the tokenizer to GGUF.

Btw, I'm also working on converting Voxtral to GGUF. I thought that would be simple, but I'm currently stuck on the tokenizer. Trying a quick hack to copy some code from this PR; will see if it works.

@ngxson
Copy link
Collaborator

ngxson commented Jul 24, 2025

Ok so, as demoed in #14862, I think it might be better to merge everything into convert_hf_to_gguf. This has 2 big advantages:

  • Easier long-term maintenance, since we will have less duplicated code
  • Less confusion for end-users. Users who are not very familiar with llama.cpp may not understand that they need to use a dedicated script for Mistral models (or models fine-tuned from Mistral)

@CISC
Copy link
Collaborator

CISC commented Jul 24, 2025

Ok so as demo in #14862, I think it might be better to merge everything into convert_hf_to_gguf.

Sounds good to me.

@juliendenize
Copy link
Contributor Author

Hi @ngxson thanks for the review.

Just thinking, maybe it's better to bring them right into convert_hf_to_gguf.py?

The reason I split the two files was to avoid confusion about what is happening, because here we don't convert HF models.

It is indeed a lot of copy-paste from convert_hf_to_gguf.py, but with lots of overriding. We could probably use subclasses, but we would end up overriding whole methods and using few super() calls. Though I'm not entirely sure about that, as I decided to decouple things really early and didn't keep track of it. I can probably achieve something better. Maybe the first step would be to import from convert_hf_to_gguf.py. Then, if you have a strong opinion about merging the two, it could be done more easily.

@ngxson
Copy link
Collaborator

ngxson commented Jul 25, 2025

The reason I split the two files was to avoid confusion of what is happening, because here we don't convert hf models.

Hmm, FYI, convert_hf_to_gguf already supports some non-HF models, so I think it's probably fine to merge these two.

We can also add an additional flag like --non-hf if needed, but I think it's ultimately unnecessary.

From the perspective of convert_hf_to_gguf, not many things make a Mistral model different from an HF model:

  • The tensors are stored inside consolidated.safetensors --> easy to support
  • Hyperparams are stored inside params.json instead of config.json --> also easy to support
  • Tokenizer conversion --> a bit tricky, but as demoed in mtmd : add support for Voxtral #14862, it is feasible

So overall I still think merging everything into convert_hf_to_gguf is not that complicated and will be the better choice.
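
To make the comparison concrete, here is a rough sketch (not code from this PR; the helper name is made up) of how a converter could tell a native Mistral checkpoint apart from an HF one, based on the file-layout differences listed above:

# Hypothetical helper, not part of convert_hf_to_gguf.py: a native Mistral
# checkpoint ships params.json and consolidated.safetensors, while an HF
# checkpoint ships config.json.
from pathlib import Path


def looks_like_native_mistral(model_dir: Path) -> bool:
    has_params = (model_dir / "params.json").is_file()
    has_consolidated = (model_dir / "consolidated.safetensors").is_file()
    has_hf_config = (model_dir / "config.json").is_file()
    return (has_params or has_consolidated) and not has_hf_config


if __name__ == "__main__":
    import sys
    print(looks_like_native_mistral(Path(sys.argv[1])))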

@juliendenize
Copy link
Contributor Author

Thanks @ngxson, I started the refactoring. I have a bug that I need to fix (probably on Monday), which is why I didn't push yet, but I prefer notifying you guys as I don't want to go silent again. You were right, there are very few changes to make!

BTW @CISC, we merged the PR and made the release of mistral-common.

FYI, we also released Magistral GGUF (https://huggingface.co/mistralai/Magistral-Small-2507-GGUF) thanks to this PR; it seems to work very smoothly with mistral-common and llama.cpp.

@juliendenize juliendenize force-pushed the mistral_integration branch from 10ef34f to 9b5d9a8 on July 29, 2025 12:52
@juliendenize
Copy link
Contributor Author

Thanks, it looks better now.

Based on your second paragraph I didn't rebase, but lmk if you want me to do it.

Yes please do a rebase. I thought about doing it myself but turns out I still haven't had time.

Done :)

Comment on lines +2253 to +2260
valid_prefixes = (
    "multi_modal_projector.",
    "vision_tower.",
    "vision_encoder.",
    "vision_language_adapter.",
    "patch_merger.",
    "pre_mm_projector_norm",
)
Copy link
Collaborator

@ngxson ngxson Jul 29, 2025


A bit out of scope, but this list can be extracted into a static const inside MmprojModel.TENSOR_PREFIXES

I will do that in another PR, just writing a note here so I won't forget it
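
Purely as an illustration of the suggested follow-up (not code from this PR; the method shown is a made-up usage example), the extraction could look roughly like this:

# Hypothetical sketch of the refactor suggested above: the prefix tuple
# becomes a class-level constant on MmprojModel instead of a local variable.
class MmprojModel:
    TENSOR_PREFIXES: tuple[str, ...] = (
        "multi_modal_projector.",
        "vision_tower.",
        "vision_encoder.",
        "vision_language_adapter.",
        "patch_merger.",
        "pre_mm_projector_norm",
    )

    def is_mmproj_tensor(self, name: str) -> bool:
        # Assumed usage: filter tensors that belong to the multimodal projector.
        return name.startswith(self.TENSOR_PREFIXES)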

Copy link
Collaborator

@ngxson ngxson left a comment


Looks good overall, nice contribution!

We can merge once the 2 pending comments are all resolved.

Copy link
Contributor Author

@juliendenize juliendenize left a comment


I think I answered your comments.

Regarding the remote option, I think it would have been nice to allow remote download for the Mistral format, but I reverted it as requested.

Regarding chat templates, I created a method for it that should be easily extensible.

Comment on lines 118 to +120
  remote_tensors = gguf.utility.SafetensorRemote.get_list_tensors_hf_model(remote_hf_model_id)
  self.tensor_names = set(name for name in remote_tensors.keys())
- for name, remote_tensor in gguf.utility.SafetensorRemote.get_list_tensors_hf_model(remote_hf_model_id).items():
+ for name, remote_tensor in remote_tensors.items():
Copy link
Contributor Author


I left this change here (while removing the mistral-format handling), since remote_tensors was not used in the for loop but evaluated again.

Copy link
Contributor Author


With the mistral-format case removed, though, mistral-format models can no longer be downloaded from HF.

Comment on lines +7820 to +7851
@staticmethod
def get_community_chat_template(vocab: MistralVocab, templates_dir: Path):
    assert TokenizerVersion is not None, "mistral_common is not installed"
    assert isinstance(vocab.tokenizer, (Tekkenizer, SentencePieceTokenizer)), (
        f"Expected Tekkenizer or SentencePieceTokenizer, got {type(vocab.tokenizer)}"
    )

    if vocab.tokenizer.version == TokenizerVersion.v1:
        return "mistral-v1"
    elif vocab.tokenizer.version == TokenizerVersion.v3 and vocab.tokenizer_type == MistralTokenizerType.spm:
        return "mistral-v3"
    elif vocab.tokenizer.version == TokenizerVersion.v3 and vocab.tokenizer_type == MistralTokenizerType.tekken:
        return "mistral-v3-tekken"
    elif vocab.tokenizer.version == TokenizerVersion.v7 and vocab.tokenizer_type == MistralTokenizerType.spm:
        return "mistral-v7"
    elif vocab.tokenizer.version == TokenizerVersion.v7 and vocab.tokenizer_type == MistralTokenizerType.tekken:
        return "mistral-v7-tekken"
    elif vocab.tokenizer.version == TokenizerVersion.v11:
        template_file = "Mistral-Small-3.2-24B-Instruct-2506.jinja"
    elif vocab.tokenizer.version == TokenizerVersion.v13:
        template_file = "unsloth-mistral-Devstral-Small-2507.jinja"
    else:
        raise ValueError(f"Unknown tokenizer type: {vocab.tokenizer_type} and version {vocab.tokenizer.version}")

    template_path = templates_dir / template_file
    if not template_path.exists():
        raise FileNotFoundError(f"Template file not found: {template_path}")

    with open(template_path, "r", encoding="utf-8") as f:
        template = f.read()

    return template
Copy link
Contributor Author


This should handle the chat template defaults.

@broadbit-hu
Copy link

broadbit-hu commented Aug 2, 2025

Thanks for this contribution!

A small remark: the current "good enough" Tekken pre-tokenizer should also be checked for Unicode characters (e.g. Finnish, Thai, Hungarian):

            case LLAMA_VOCAB_PRE_TYPE_TEKKEN:
                // original regex from tokenizer.json
                // "[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
                regex_exprs = {
                    "[^\\r\\n\\p{L}\\p{N}]?((?=[\\p{L}])([^a-z]))*((?=[\\p{L}])([^A-Z]))+|[^\\r\\n\\p{L}\\p{N}]?((?=[\\p{L}])([^a-z]))+((?=[\\p{L}])([^A-Z]))*|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+",
                };
                break;

@juliendenize
Copy link
Contributor Author

@CISC @ngxson gentle ping to know if there is anything more I can do to help get this one merged.

Son alerted me that he'll be busy, so I understand if this PR is on hold for now. Just so you know, next week I'll be on vacation!

@ngxson
Copy link
Collaborator

ngxson commented Aug 5, 2025

Hey @juliendenize, I'll do a final review a bit later this week. Since we already have 2 approvals on this PR, I think it's pretty much ready to be merged.

@CISC
Copy link
Collaborator

CISC commented Aug 5, 2025

@juliendenize Sorry, been pretty busy, @ngxson can merge when ready.

@broadbit-hu
Copy link

            case LLAMA_VOCAB_PRE_TYPE_TEKKEN:
                // original regex from tokenizer.json
                // "[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
                regex_exprs = {
                    "[^\\r\\n\\p{L}\\p{N}]?((?=[\\p{L}])([^a-z]))*((?=[\\p{L}])([^A-Z]))+|[^\\r\\n\\p{L}\\p{N}]?((?=[\\p{L}])([^a-z]))+((?=[\\p{L}])([^A-Z]))*|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+",
                };
                break;

@juliendenize What do you think about these differences in the Tekken tokenizer that affect Unicode characters?

@juliendenize
Copy link
Contributor Author

@broadbit-hu
I'm not sure I understand the question; I think it is handled here, if that was the request.

If the request is to improve it, that is out of scope right now (at least to me); I'm not sure there have been requests to improve it or performance issues raised.

@broadbit-hu
Copy link

broadbit-hu commented Aug 5, 2025

@broadbit-hu Not sure to understand the question, I think it is handled here if this was the request.

If the request is to improve, it is out of scope rn (at least to me), not sure there was requests to improve or performance issues raised.

@juliendenize I apologize, I'll be a bit more specific.

llama-vocab.cpp handles the Tekken pre-tokenizer here: https://github.com/ggml-org/llama.cpp/blob/master/src/llama-vocab.cpp#L381

The original regex in Mistral's tokenizer.json:

[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+

The code in llama.cpp that handles regular expressions has limitations with Unicode characters, so the original regex was replaced with this:

[^\\r\\n\\p{L}\\p{N}]?((?=[\\p{L}])([^a-z]))*((?=[\\p{L}])([^A-Z]))+|[^\\r\\n\\p{L}\\p{N}]?((?=[\\p{L}])([^a-z]))+((?=[\\p{L}])([^A-Z]))*|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+

Could you check the differences between your own (maybe vLLM) environment and the llama.cpp pre-tokenizer to see whether they cause significant differences in inference (for example with Mistral NeMo, Mistral Small, etc.)?
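
For what it's worth, here is a minimal sketch (not part of this PR) of how the two splits could be compared offline, assuming the third-party regex package, which supports \p{...} Unicode properties; the sample strings are arbitrary.

# Hypothetical comparison of the original Tekken regex and the llama.cpp
# replacement on Unicode text, using the third-party `regex` package.
import regex

ORIGINAL = (
    r"[^\r\n\p{L}\p{N}]?[\p{Lu}\p{Lt}\p{Lm}\p{Lo}\p{M}]*[\p{Ll}\p{Lm}\p{Lo}\p{M}]+"
    r"|[^\r\n\p{L}\p{N}]?[\p{Lu}\p{Lt}\p{Lm}\p{Lo}\p{M}]+[\p{Ll}\p{Lm}\p{Lo}\p{M}]*"
    r"|\p{N}| ?[^\s\p{L}\p{N}]+[\r\n/]*|\s*[\r\n]+|\s+(?!\S)|\s+"
)
REPLACEMENT = (
    r"[^\r\n\p{L}\p{N}]?((?=[\p{L}])([^a-z]))*((?=[\p{L}])([^A-Z]))+"
    r"|[^\r\n\p{L}\p{N}]?((?=[\p{L}])([^a-z]))+((?=[\p{L}])([^A-Z]))*"
    r"|\p{N}| ?[^\s\p{L}\p{N}]+[\r\n/]*|\s*[\r\n]+|\s+(?!\S)|\s+"
)

def pretokenize(pattern: str, text: str) -> list[str]:
    # Full-match spans, mimicking how the pre-tokenizer splits the text.
    return [m.group() for m in regex.finditer(pattern, text)]

for text in ["Hyvää päivää!", "Jó napot kívánok!", "สวัสดีครับ"]:
    a = pretokenize(ORIGINAL, text)
    b = pretokenize(REPLACEMENT, text)
    print(f"{text!r}: identical={a == b}\n  original:    {a}\n  replacement: {b}")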

@CISC
Copy link
Collaborator

CISC commented Aug 10, 2025

@ngxson This should be merged before #14810

Copy link
Collaborator

@ngxson ngxson left a comment


@CISC please merge when you're ready

@CISC CISC merged commit a3a7874 into ggml-org:master Aug 11, 2025
5 checks passed