Issue with converting VLM model "InternVL2_5-1B" #1073
Comments
@endomorphosis please try to uninstall the flash_attention_2 package.
I would, but that is unlikely to work within the design parameters of my project: a peer-to-peer model server and endpoint aggregator that is platform-agnostic and model-agnostic.
If this is indeed meant to be treated like an executable, why not bake all of its dependencies in, so that this sort of package conflict won't happen?
Hi @endomorphosis, is there a solution to this yet? I encountered the same problem while converting the jina-embedding-v3 model.
I did not find a fix for this CLI command, but there is another way to convert the model: OpenVINO traces the TorchScript code as it is evaluated and then converts it to OpenVINO IR (a sketch follows below). See e.g. https://github.com/endomorphosis/ipfs_accelerate_py/blob/212c5ad39db2f8d60c3e0230f0025e25c72cf6c2/ipfs_accelerate_py/worker/openvino_utils.py#L197
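A minimal sketch of that tracing-based path, assuming the model loads via transformers with trust_remote_code and that the tokenizer outputs match its forward signature; a real VLM export like InternVL2_5-1B would also need image inputs (e.g. pixel_values) in the example, and the output filename here is illustrative:

```python
# Sketch: convert a PyTorch model to OpenVINO IR by letting openvino
# trace the graph from an example input, instead of using optimum-cli.
import torch
import openvino as ov
from transformers import AutoModel, AutoTokenizer

model_id = "OpenGVLab/InternVL2_5-1B"  # illustrative; any traceable model

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
model.eval()

# The example input drives the trace; for a VLM you would also have to
# supply pixel_values (assumption: text-only inputs suffice for this sketch).
example_input = dict(tokenizer("hello world", return_tensors="pt"))

with torch.no_grad():
    ov_model = ov.convert_model(model, example_input=example_input)

ov.save_model(ov_model, "model.xml")  # writes model.xml + model.bin
```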
devel@workstation:/tmp$ optimum-cli export openvino --model OpenGVLab/InternVL2_5-1B InternVL2_5-1B --trust-remote-code >> /tmp/openvino.txt