Issues: DAMO-NLP-SG/Video-LLaMA
Does Video-LLaMA or any fine-tuned version support dtypes such as float16 or bfloat16 during inference? Thanks
#176 opened Feb 21, 2025 by luentong
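For the half-precision question above, the usual Hugging Face / PyTorch pattern is to pass a `torch_dtype` when loading the model. The sketch below shows only that generic pattern; whether Video-LLaMA's own loading scripts expose a dtype option is not confirmed here, and the model name is a placeholder for whatever LLM backbone is used.

```python
# Minimal sketch: loading a causal LM in bfloat16/float16 for inference.
# "meta-llama/Llama-2-7b-hf" is a stand-in backbone, not Video-LLaMA's
# confirmed checkpoint; adapt the name/path to your own weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # or torch.float16 on GPUs without bf16 support
    device_map="auto",
)

inputs = tokenizer("Describe the video.", return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```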
The config file is stored locally, but it still raises OSError: Can't load tokenizer for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing all relevant files for a BertTokenizer tokenizer.
#172 opened Sep 15, 2024 by Asmallsoldier
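A common workaround for this OSError is to point the tokenizer at the local directory that actually holds the `bert-base-uncased` files, rather than relying on the Hub name. A minimal sketch, assuming a hypothetical local path and that the directory contains the tokenizer files (vocab.txt, tokenizer_config.json, ...):

```python
# Minimal sketch: loading the BERT tokenizer from a local copy instead of
# downloading from the Hub. "/path/to/bert-base-uncased" is a placeholder.
from transformers import BertTokenizer

local_dir = "/path/to/bert-base-uncased"  # hypothetical local path
tokenizer = BertTokenizer.from_pretrained(local_dir)

print(tokenizer.tokenize("loading a tokenizer from a local directory"))
```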
Do you have plans to release Video-LLaMA checkpoints with LLaMA 3.1?
#171 opened Aug 19, 2024 by ShramanPramanick
Problem running demo: Loading checkpoint shards never finishes
#165 opened Jun 10, 2024 by jpssoares
finetune-billa7b-zh inference error: shape '[-1, 136]' is invalid for input of size 137
#161 opened May 16, 2024 by len2618187