Replies: 1 comment
Did you solve this issue?
(crew) caojianao@Frog:~/llamacpp/llama.cpp$ python convert_hf_to_gguf.py ~/llamacpp/semikong-8b
INFO:hf-to-gguf:Loading model: semikong-8b
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: loading model part 'model-00001-of-00003.safetensors'
INFO:hf-to-gguf:token_embd.weight, torch.bfloat16 --> F16, shape = {4096, 128256}
INFO:hf-to-gguf:blk.0.attn_norm.weight, torch.bfloat16 --> F32, shape = {4096}
Traceback (most recent call last):
  File "/home/caojianao/llamacpp/llama.cpp/convert_hf_to_gguf.py", line 4074, in <module>
    main()
  File "/home/caojianao/llamacpp/llama.cpp/convert_hf_to_gguf.py", line 4068, in main
    model_instance.write()
  File "/home/caojianao/llamacpp/llama.cpp/convert_hf_to_gguf.py", line 387, in write
    self.prepare_tensors()
  File "/home/caojianao/llamacpp/llama.cpp/convert_hf_to_gguf.py", line 1597, in prepare_tensors
    super().prepare_tensors()
  File "/home/caojianao/llamacpp/llama.cpp/convert_hf_to_gguf.py", line 280, in prepare_tensors
    for new_name, data in ((n, d.squeeze().numpy()) for n, d in self.modify_tensors(data_torch, name, bid)):
                                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/caojianao/llamacpp/llama.cpp/convert_hf_to_gguf.py", line 1566, in modify_tensors
    return [(self.map_tensor_name(name), data_torch)]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/caojianao/llamacpp/llama.cpp/convert_hf_to_gguf.py", line 200, in map_tensor_name
    raise ValueError(f"Can not map tensor {name!r}")
ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.g_idx'
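A `g_idx` tensor is a GPTQ quantization artifact: GPTQ-exported checkpoints store `g_idx`, `qweight`, `qzeros`, and `scales` tensors alongside (or instead of) the plain float weights, and `convert_hf_to_gguf.py` has no mapping for those names, hence the `Can not map tensor` error. A minimal sketch for checking whether a checkpoint is affected, assuming the standard `model.safetensors.index.json` layout (the helper name and path are illustrative, not part of llama.cpp):

```python
import json

# Tensor-name suffixes typical of GPTQ-quantized checkpoints.
GPTQ_SUFFIXES = (".g_idx", ".qweight", ".qzeros", ".scales")

def find_gptq_tensors(weight_map):
    """Return tensor names that look like GPTQ quantization artifacts."""
    return sorted(name for name in weight_map if name.endswith(GPTQ_SUFFIXES))

if __name__ == "__main__":
    # The index file sits in the model directory passed to the converter,
    # e.g. ~/llamacpp/semikong-8b/model.safetensors.index.json.
    with open("model.safetensors.index.json") as f:
        index = json.load(f)
    for name in find_gptq_tensors(index["weight_map"]):
        print(name)
```

If this prints anything, the converter is being fed quantized weights; the usual fix is to convert the original full-precision (bf16/fp16) checkpoint instead and quantize afterwards with llama.cpp's own tooling.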