diff --git a/README.md b/README.md
index 7f33fd7..b164567 100644
--- a/README.md
+++ b/README.md
@@ -38,16 +38,16 @@
 ```bash
 # 7B
-bash scripts/make-7b.sh
+bash run.sh 7b
 
-# OR 13B
-bash scripts/make-13b.sh
+# 13B
+bash run.sh 13b
 
-# OR 7B Chinese
-bash scripts/make-7b-cn.sh
+# 7B Chinese
+bash run.sh 7b-cn
 
-# OR 7B Chinese 4bit
-bash scripts/make-7b-cn-4bit.sh
+# 7B Chinese 4bit
+bash run.sh 7b-cn-4bit
 ```
 
 2. Pick the command that fits your needs and download the LLaMA2 or Chinese models from HuggingFace:
@@ -127,13 +127,13 @@ meta-llama
 ```bash
 # 7B
-bash scripts/run-7b.sh
-# OR 13B
-bash scripts/run-13b.sh
-# OR Chinese 7B
-bash scripts/run-7b-cn.sh
-# OR Chinese 7B 4BIT
-bash scripts/run-7b-cn-4bit.sh
+bash run.sh 7b
+# 13B
+bash run.sh 13b
+# Chinese 7B
+bash run.sh 7b-cn
+# Chinese 7B 4BIT
+bash run.sh 7b-cn-4bit
 ```
 
 Once the model is running, open `http://localhost:7860` or `http://<your-ip>:7860` in your browser and start playing.
diff --git a/README_EN.md b/README_EN.md
index 49c9809..acaad3b 100644
--- a/README_EN.md
+++ b/README_EN.md
@@ -39,16 +39,16 @@ Get started quickly and locally with the 7B or 13B models, using Docker.
 ```bash
 # 7B
-bash scripts/make-7b.sh
+bash run.sh 7b
 
-# OR 13B
-bash scripts/make-13b.sh
+# 13B
+bash run.sh 13b
 
-# OR 7B Chinese
-bash scripts/make-7b-cn.sh
+# 7B Chinese
+bash run.sh 7b-cn
 
-# OR 7B Chinese 4bit
-bash scripts/make-7b-cn-4bit.sh
+# 7B Chinese 4bit
+bash run.sh 7b-cn-4bit
 ```
 
 2. Download LLaMA2 or Chinese models from HuggingFace.
@@ -128,13 +128,13 @@ meta-llama
 ```bash
 # 7B
-bash scripts/run-7b.sh
-# OR 13B
-bash scripts/run-13b.sh
-# OR Chinese 7B
-bash scripts/run-7b-cn.sh
-# OR Chinese 7B 4BIT
-bash scripts/run-7b-cn-4bit.sh
+bash run.sh 7b
+# 13B
+bash run.sh 13b
+# Chinese 7B
+bash run.sh 7b-cn
+# Chinese 7B 4BIT
+bash run.sh 7b-cn-4bit
 ```
 
 Enjoy! Open `http://localhost:7860` or `http://<your-ip>:7860` and play with LLaMA2.
diff --git a/run.sh b/run.sh
new file mode 100644
index 0000000..6dde540
--- /dev/null
+++ b/run.sh
@@ -0,0 +1,79 @@
+#!/bin/bash
+
+me_path="$(dirname "$(readlink -f "$0")")"
+
+case "$1" in
+7b)
+    image_name=soulteary/llama2
+    tag_name=7b
+    docker_file=docker/Dockerfile.7b
+    # MetaAI LLaMA2 models (10~14GB vRAM)
+    mod_url=https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
+    mod_path=meta-llama
+    ;;
+13b)
+    image_name=soulteary/llama2
+    tag_name=13b
+    docker_file=docker/Dockerfile.13b
+    # MetaAI LLaMA2 models (10~14GB vRAM)
+    mod_url=https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
+    mod_path=meta-llama
+    ;;
+7b-cn)
+    image_name=soulteary/llama2
+    tag_name=7b-cn
+    docker_file=docker/Dockerfile.7b-cn
+    # Chinese LLaMA2 (10~14GB vRAM)
+    mod_url=https://huggingface.co/LinkSoul/Chinese-Llama-2-7b
+    mod_path=LinkSoul
+    ;;
+7b-cn-4bit)
+    image_name=soulteary/llama2
+    tag_name=7b-cn-4bit
+    docker_file=docker/Dockerfile.7b-cn-4bit
+    # Chinese LLaMA2 4BIT (5GB vRAM)
+    mod_url=https://huggingface.co/soulteary/Chinese-Llama-2-7b-4bit
+    mod_path=soulteary
+    ;;
+*)
+    cat <
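The new `run.sh` collapses four near-identical `make-*`/`run-*` scripts into one entry point that dispatches on its first argument. A minimal, self-contained sketch of that dispatch pattern (the `select_model` helper and the `image:tag` return strings are illustrative, not part of the repository; the real script also sets a Dockerfile path and a model URL per case):

```shell
#!/bin/bash
# Sketch of run.sh-style dispatch: map a model name argument to an image tag,
# and fall through to a usage message for anything unrecognized.
select_model() {
  case "$1" in
  7b)          echo "soulteary/llama2:7b" ;;
  13b)         echo "soulteary/llama2:13b" ;;
  7b-cn)       echo "soulteary/llama2:7b-cn" ;;
  7b-cn-4bit)  echo "soulteary/llama2:7b-cn-4bit" ;;
  *)
    # Unknown argument: print usage to stderr and signal failure.
    echo "usage: run.sh <7b|13b|7b-cn|7b-cn-4bit>" >&2
    return 1
    ;;
  esac
}

select_model 7b   # prints soulteary/llama2:7b
```

Centralizing the per-model differences in one `case` statement means adding a new variant is a single new branch rather than another copied script.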