How can I load the model directly from local storage? #46

Open
JACKzhuz opened this issue Aug 3, 2023 · 1 comment
JACKzhuz commented Aug 3, 2023

How can I load the model directly from local storage? I found that deploying with the provided docker command re-downloads the model from Hugging Face. Is there no way to build the docker container so that it loads the model straight from local files?


kerneltravel commented Aug 3, 2023

You can load a model directly from local files with the llama.cpp command line, or programmatically through its Python bindings.
For the former, see the command reference (link):

make -j && ./main -m ./models/7B/ggml-model-q4_0.bin -p "Building a website can be done in 10 simple steps:" -n 512
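Here -m points at the local model file (a quantized GGML checkpoint), -p supplies the prompt, and -n caps the number of tokens to generate.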

For the latter, see the programming reference (link):

>>> from llama_cpp import Llama
>>> llm = Llama(model_path="./models/7B/ggml-model.bin")
>>> output = llm("Q: Name the planets in the solar system? A: ", max_tokens=32, stop=["Q:", "\n"], echo=True)
>>> print(output)
{
  "id": "cmpl-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "object": "text_completion",
  "created": 1679561337,
  "model": "./models/7B/ggml-model.bin",
  "choices": [
    {
      "text": "Q: Name the planets in the solar system? A: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto.",
      "index": 0,
      "logprobs": None,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 28,
    "total_tokens": 42
  }
}
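The call returns an OpenAI-style completion dict; the generated text is in output["choices"][0]["text"].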

Both approaches load the model directly from local files. Note that they apply when the model consists of a single file.

If the model is split across multiple files, as with https://huggingface.co/LinkSoul/Chinese-Llama-2-7b, which ships several:
pytorch_model-00001-of-00003.bin
pytorch_model-00002-of-00003.bin
pytorch_model-00003-of-00003.bin
then you can use the usual Python way of loading Hugging Face models:

from transformers import AutoModel, AutoConfig
from pathlib import Path
import os

model_name = "model name"  # e.g. "LinkSoul/Chinese-Llama-2-7b"
cache_dir = "custom path"  # can be under the current directory: cache_dir = str(Path(os.getcwd()) / "models" / "7B")

config = AutoConfig.from_pretrained(model_name, cache_dir=cache_dir)
model = AutoModel.from_pretrained(model_name, config=config, cache_dir=cache_dir)

By setting the cache_dir parameter to the path you want, you can keep the model files in a location of your choosing. Make sure the path exists and that you have the appropriate read/write permissions on it.
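If you want later loads to be guaranteed offline, here is a minimal sketch (the model name and cache path are placeholders carried over from above): passing local_files_only=True makes transformers resolve everything from the local cache and raise an error instead of contacting huggingface.co.

from transformers import AutoModel, AutoTokenizer
from pathlib import Path
import os

model_name = "LinkSoul/Chinese-Llama-2-7b"  # example repo from above
cache_dir = str(Path(os.getcwd()) / "models" / "7B")  # assumed cache location

# Resolve the model and tokenizer purely from the local cache;
# fails fast if the files are not already present.
tokenizer = AutoTokenizer.from_pretrained(model_name, cache_dir=cache_dir, local_files_only=True)
model = AutoModel.from_pretrained(model_name, cache_dir=cache_dir, local_files_only=True)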

For further reading: link
