Config fixes for VLLMModel #472
base: main
Conversation
@@ -191,7 +194,7 @@ def _create_auto_tokenizer(self, config: VLLMModelConfig, env_config: EnvConfig)
     config.pretrained,
     tokenizer_mode="auto",
     trust_remote_code=config.trust_remote_code,
-    tokenizer_revision=config.revision,
+    revision=config.revision + (f"/{config.subfolder}" if config.subfolder is not None else ""),
Important fix: the argument is indeed revision
https://github.com/vllm-project/vllm/blob/04139ade599eedd493ce8effcda7ceabb57f2fb5/vllm/transformers_utils/tokenizer.py#L92
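For context, a minimal sketch of what the call looks like after this change. The config fields follow the diff above, and vllm's `get_tokenizer()` signature is assumed from the linked commit (where the keyword is `revision`, not `tokenizer_revision`):

```python
# Sketch only, assuming the VLLMModelConfig fields shown in the diff and
# vllm's get_tokenizer() as of the linked commit.
from vllm.transformers_utils.tokenizer import get_tokenizer

def _create_auto_tokenizer(config):
    # Fold the optional subfolder into the revision string, mirroring the
    # pattern used in lighteval's transformers backend (cited below).
    revision = config.revision
    if config.subfolder is not None:
        revision = f"{config.revision}/{config.subfolder}"
    return get_tokenizer(
        config.pretrained,
        tokenizer_mode="auto",
        trust_remote_code=config.trust_remote_code,
        revision=revision,
    )
```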
Thanks for the update re: revision - but I'm not sure why you're appending the subfolder to the path, however?
I noticed this pattern in other implementations, so it probably makes sense to standardize while we're at it
lighteval/src/lighteval/models/transformers/transformers_model.py
Lines 505 to 507 in fdb12f4

tokenizer = AutoTokenizer.from_pretrained(
    model_name if tokenizer_name is None else tokenizer_name,
    revision=revision + (f"/{subfolder}" if subfolder is not None else ""),
Pinging @NathanHB to confirm whether it's applicable to vllm.
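If this pattern is adopted in both backends, a tiny shared helper could keep the concatenation in one place. The helper name below is hypothetical, not part of the codebase:

```python
from typing import Optional

def resolve_revision(revision: str, subfolder: Optional[str] = None) -> str:
    # Hypothetical helper, for illustration only: combines a git revision and
    # an optional repo subfolder into the single `revision` string passed to
    # from_pretrained() / get_tokenizer().
    return revision + (f"/{subfolder}" if subfolder is not None else "")

# resolve_revision("main", "tokenizer") -> "main/tokenizer"
# resolve_revision("main")              -> "main"
```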
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@anton-l Hope this fixes the error below: ImportError: You are trying to use an VLLM model, for which you need …
hi! did you try …
@NathanHB in my case it was an environment issue (conda/venv); resolved it. Thanks!
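For anyone hitting the same ImportError, a quick sanity check that the interpreter you launch lighteval with can actually see vllm (generic Python, not specific to this thread):

```python
# Confirm which Python is running and whether vllm is importable in it --
# a common failure mode when conda/venv environments get mixed up.
import importlib.util
import sys

print(sys.executable)  # the interpreter actually being used
if importlib.util.find_spec("vllm") is None:
    print("vllm is not installed in this environment")
else:
    import vllm
    print(f"found vllm {vllm.__version__}")
```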
Just re-adding some tiny but useful features from the base model back to VLLM
LMK if you notice anything else!