After fine-tuning based on the Finetuning with Adapter tutorial, a new weights file `lit_model.pth.adapter_v2` was generated. How can we merge `lit_model.pth.adapter_v2` with the original model weights file and convert the result to Hugging Face Transformers weights for deployment?

Something similar to `litgpt merge_lora path/to/lora/checkpoint_dir`.
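For context, here is a rough sketch of what I had in mind: load the base checkpoint and the adapter-only weights and save them as a single combined `lit_model.pth`. This is just an illustration, not a supported litgpt command, and the paths below are placeholders from my setup.

```python
# Minimal sketch (not an official litgpt command): combine the base checkpoint
# with the adapter_v2 weights produced by fine-tuning into one state dict.
# The directory paths are placeholders.
from pathlib import Path

import torch

base_dir = Path("checkpoints/org/base-model")            # placeholder: original model checkpoint dir
finetune_dir = Path("out/finetune/adapter-v2/final")      # placeholder: fine-tuning output dir

# Base weights plus the adapter-only weights saved during fine-tuning.
base_state = torch.load(base_dir / "lit_model.pth", map_location="cpu")
adapter_state = torch.load(finetune_dir / "lit_model.pth.adapter_v2", map_location="cpu")

# Adapter v2 adds new parameters (gating factors, biases, prefix embeddings)
# rather than low-rank deltas, so "merging" here only means combining the two
# state dicts into one file that the adapter GPT model class can load.
merged_state = {**base_state, **adapter_state}
torch.save(merged_state, finetune_dir / "lit_model.pth")
```

What I am unsure about is the second step: unlike LoRA, the adapter parameters are extra modules rather than deltas on existing weights, so they presumably cannot be folded into the base weights, and I do not know whether the standard `litgpt convert_from_litgpt` conversion to Hugging Face format can handle them.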