
Commit e07c524

Merge pull request #210 from tushar2407/docs
docs: made changes to general rules in contribution guide
2 parents f315380 + ae1199c commit e07c524

File tree

4 files changed: +10 −9 lines changed


README.md

Lines changed: 6 additions & 6 deletions
```diff
@@ -33,13 +33,11 @@ With `xTuring` you can,
 
 <br>
 
-## 🌟 INT4 fine-tuning and generation with LLaMA LoRA
-
-We are excited to announce the latest enhancement to our `xTuring` library: INT4 fine-tuning and generation integration. With this update, you can fine-tune LLMs like LLaMA with LoRA architecture in INT4 precision with less than `6 GB` of VRAM. This breakthrough significantly reduces memory requirements and accelerates the fine-tuning process, allowing you to achieve state-of-the-art performance with less computational resources.
-
-More information about INT4 fine-tuning and benchmarks can be found in the [INT4 README](examples/int4_finetuning/README.md).
-
-You can check out the [LLaMA INT4 fine-tuning example](examples/int4_finetuning/LLaMA_lora_int4.ipynb) to see how it works.
+## 🌟 What's new?
+We are excited to announce the latest enhancements to our `xTuring` library: Falcon LLM integration and generic model support. With this update, you can use and fine-tune the Falcon-7B model off the shelf, off the shelf with INT8 precision, with the LoRA architecture, or with the LoRA architecture in INT8 precision. Moreover, if you do not find the model you want to run in the models list, you can still use `xTuring` through the new `GenericModel` wrapper. This integration lets you test and fine-tune any new model on xTuring without waiting for it to be integrated.
+
+You can check the [Falcon LoRA INT8 working example](examples/falcon/falcon_lora_int8.py) to see how it works.
+Also, you can check the [GenericModel working example](examples/generic/generic_model.py) to see how it works.
 
 <br>
 
@@ -147,6 +145,8 @@ model = BaseModel.load("x/distilgpt2_lora_finetuned_alpaca")
 - [x] Added fine-tuned checkpoints for some models to the hub
 - [x] INT4 LLaMA LoRA fine-tuning demo
 - [x] INT4 LLaMA LoRA fine-tuning with INT4 generation
+- [x] Support for a generic model wrapper
+- [x] Support for Falcon-7B model
 - [ ] Evaluation of LLM models
 - [ ] Support for Stable Diffusion
```

docs/docs/contributing/general_rules.md

Lines changed: 2 additions & 1 deletion
````diff
@@ -26,9 +26,10 @@ To contribute to xTuring, follow these steps:
 git clone https://github.com/<YOUR_USERNAME>/xturing.git
 ```
 
-3. Create a new branch for your changes
+3. Create a new branch for your changes, starting from the `dev` branch.
 
 ```bash
+git checkout dev
 git checkout -b <BRANCH_NAME>
 ```
 
````
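The guide's new step branches off `dev` before creating a working branch. A minimal sketch of that flow, simulated in a throwaway local repository so the commands can run anywhere (`my-feature` stands in for the `<BRANCH_NAME>` placeholder; in practice you would clone your xturing fork instead of `git init`):

```shell
# Create a scratch repository to stand in for a cloned fork.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=you@example.com -c user.name=You \
    commit -q --allow-empty -m "init"

# The flow from the guide: branch off dev, then create the working branch.
git checkout -q -b dev          # the integration branch contributions start from
git checkout -q -b my-feature   # <BRANCH_NAME>: your working branch, based on dev
git branch --show-current       # prints: my-feature
```

Because the working branch is created while `dev` is checked out, it starts from `dev`'s tip rather than from the default branch.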

pyproject.toml

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 [project]
 name = "xturing"
-version = "0.1.3"
+version = "0.1.4"
 description = "Fine-tuning, evaluation and data generation for LLMs"
 
 authors = [
```

src/xturing/__about__.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -1 +1 @@
-__version__ = "0.1.3"
+__version__ = "0.1.4"
```
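The release bump touches two files that must agree: `pyproject.toml` and `src/xturing/__about__.py`. A small, dependency-free sketch of a consistency check, using inline copies of the two snippets from this diff (a regex is used instead of a TOML parser purely to keep the sketch self-contained):

```python
import re

# Inline copies of the two version declarations this commit updates.
pyproject_toml = '''
[project]
name = "xturing"
version = "0.1.4"
'''
about_py = '__version__ = "0.1.4"'

# Extract the version string from each file's contents.
toml_version = re.search(r'(?m)^version\s*=\s*"([^"]+)"', pyproject_toml).group(1)
module_version = re.search(r'__version__\s*=\s*"([^"]+)"', about_py).group(1)

# Both declarations should carry the same release number.
assert toml_version == module_version == "0.1.4"
print("versions in sync:", toml_version)
```

Running such a check in CI would catch a future bump that updates one file but not the other.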

0 commit comments