docs: improve documentation on exporting models from Colab #4184

Sagargupta16 wants to merge 2 commits into unslothai:main from
Conversation
Add comprehensive guide covering how to save fine-tuned models from Google Colab, including saving to Google Drive, pushing to Hugging Face Hub, GGUF export for Ollama/llama.cpp, and downloading large files. Closes unslothai#1615 Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Use ### headings instead of bold text for subheadings
- Fix save_pretrained_gguf argument (use "my_model" not "my_model_gguf")
- Replace hardcoded GGUF filename with <model-name> placeholder

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the project's documentation by introducing a detailed guide on exporting fine-tuned models from Google Colab. This new resource provides users with critical information and best practices for persisting their models, preventing data loss due to Colab's ephemeral nature, and facilitating deployment to various platforms such as the Hugging Face Hub or local inference engines. The guide addresses common challenges and offers practical solutions, making the model export process more robust and user-friendly.

Highlights
Changelog
Activity
Code Review
This pull request adds a comprehensive and well-written guide on exporting models from Google Colab. The documentation is clear, detailed, and covers a wide range of methods and common issues, which will be very helpful for users. I've made a few minor suggestions to improve consistency and clarity in some of the examples. Overall, this is an excellent addition to the documentation.
Note: Security Review has been skipped due to the limited scope of the PR.
```python
# Save as GGUF with q8_0 quantization (fast conversion, good quality)
model.save_pretrained_gguf(
    "my_model",
    tokenizer=tokenizer,
    quantization_method="q8_0",
)
```
The Python example for saving a GGUF file uses q8_0 quantization, but the subsequent Ollama Modelfile example refers to a Q4_K_M.gguf file. This inconsistency can be confusing.
To make the guide more consistent, I suggest updating the Python example to use q4_k_m, which is also noted as the recommended method for most use cases in the quantization table.
```diff
-# Save as GGUF with q8_0 quantization (fast conversion, good quality)
+# Save as GGUF with q4_k_m quantization (recommended for most use cases)
 model.save_pretrained_gguf(
     "my_model",
     tokenizer=tokenizer,
-    quantization_method="q8_0",
+    quantization_method="q4_k_m",
 )
```
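To see why the two examples must agree, the filename convention the review relies on can be sketched as a hypothetical helper; the function and the base-model name are illustrative, not part of the PR:

```python
# Illustrative helper (not from the PR): llama.cpp-style GGUF exports are
# conventionally named after the base model plus the upper-cased quant tag,
# which is why quantization_method="q4_k_m" pairs with a Q4_K_M.gguf file.
def gguf_filename(base_model: str, quantization_method: str) -> str:
    """Return the conventional GGUF filename for a base model and quant method."""
    return f"{base_model}.{quantization_method.upper()}.gguf"

print(gguf_filename("Llama-3-8B-Instruct", "q4_k_m"))
# → Llama-3-8B-Instruct.Q4_K_M.gguf
```

Under this (assumed) convention, a `q8_0` export would instead produce a `Q8_0.gguf` file, which is the mismatch the review flags.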
```bash
# Create a Modelfile
echo 'FROM ./my_model_gguf/<model-name>.Q4_K_M.gguf' > Modelfile
```
The placeholder <model-name> is ambiguous. It's better to use <base-model-name> to clarify that it refers to the original model's name (e.g., Llama-3-8B-Instruct), not the name provided to save_pretrained_gguf. This will help users avoid errors when creating the Modelfile. This also applies to other parts of the document where <model-name> is used as a placeholder for the GGUF filename.
```diff
-echo 'FROM ./my_model_gguf/<model-name>.Q4_K_M.gguf' > Modelfile
+echo 'FROM ./my_model_gguf/<base-model-name>.Q4_K_M.gguf' > Modelfile
```
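For readers following along, a minimal Ollama Modelfile built on the corrected placeholder might look like the sketch below; the concrete base-model filename and the temperature value are illustrative assumptions, not taken from the guide:

```
# Illustrative Modelfile (base-model filename assumed, not from the PR)
FROM ./my_model_gguf/Llama-3-8B-Instruct.Q4_K_M.gguf
PARAMETER temperature 0.7
```

The model would then be registered with `ollama create my-model -f Modelfile` and run with `ollama run my-model`.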
```bash
# On your local machine
pip install huggingface_hub
huggingface-cli download your-username/my-model-gguf --local-dir ./my-model
```
There's an inconsistency in the suggested --local-dir for downloading GGUF models. Here it's ./my-model, but in the "Quick Reference" section (line 544), it's ./my-model-gguf. Using ./my-model-gguf is more descriptive and consistent with the repository name your-username/my-model-gguf. I recommend using it here as well for clarity.
```diff
-huggingface-cli download your-username/my-model-gguf --local-dir ./my-model
+huggingface-cli download your-username/my-model-gguf --local-dir ./my-model-gguf
```
Summary
- Adds `docs/exporting_models_from_colab.md` covering all model export methods from Google Colab
- Closes #1615
What's covered