Config fixes for VLLMModel #472

Open
wants to merge 1 commit into main
Conversation

anton-l (Member) commented on Dec 20, 2024

Just re-adding some tiny but useful features from the base model back to VLLM

LMK if you notice anything else!

@@ -191,7 +194,7 @@ def _create_auto_tokenizer(self, config: VLLMModelConfig, env_config: EnvConfig)
     config.pretrained,
     tokenizer_mode="auto",
     trust_remote_code=config.trust_remote_code,
-    tokenizer_revision=config.revision,
+    revision=config.revision + (f"/{config.subfolder}" if config.subfolder is not None else ""),
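
For reference, a quick sketch of the string this expression builds; the values below are illustrative, not taken from the PR:

# Illustrative values only: what the combined revision string
# evaluates to with and without a subfolder configured.
revision = "main"
subfolder = "tokenizer"

print(revision + (f"/{subfolder}" if subfolder is not None else ""))
# -> main/tokenizer

subfolder = None
print(revision + (f"/{subfolder}" if subfolder is not None else ""))
# -> main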
Member commented:
Thanks for the update re: revision - not sure why you're appending the subfolder to the path, however?

anton-l (Member Author) commented on Jan 9, 2025

I noticed this pattern in other implementations, so it probably makes sense to standardize while we're at it:

tokenizer = AutoTokenizer.from_pretrained(
    model_name if tokenizer_name is None else tokenizer_name,
    revision=revision + (f"/{subfolder}" if subfolder is not None else ""),
)

Pinging @NathanHB to check whether it's applicable to vllm.

HuggingFaceDocBuilderDev (Collaborator) commented:
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

kzos commented on Jan 6, 2025

@anton-l Hope this fixes the error below, which I hit despite having both vllm and ray installed:

ImportError: You are trying to use an VLLM model, for which you need vllm and ray, which are not available in your environment. Please install them using pip, pip install vllm ray

NathanHB (Member) commented on Jan 7, 2025

Hi! Did you try pip install lighteval[vllm]?

kzos commented on Jan 8, 2025

@NathanHB in my case it was an environment issue (conda/venv). Resolved it, thanks!
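
For anyone hitting the same ImportError, a minimal sanity check (generic Python, not part of lighteval) that the interpreter you're actually running can see vllm and ray:

import importlib.util
import sys

# Which Python is actually running? Mixed conda/venv setups often point
# somewhere other than the environment where vllm/ray were installed.
print(sys.executable)

# Report where each package would be imported from, if found at all.
for pkg in ("vllm", "ray"):
    spec = importlib.util.find_spec(pkg)
    print(pkg, "->", spec.origin if spec else "NOT FOUND")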
