# RLHF-Reward-Modeling

- This is a fork of RLHF-Reward-Modeling
  - supports models that can handle Japanese
  - supports Unsloth, which reduces VRAM usage during training and accelerates training
  - supports wandb logging

## Support

- Model support
  - google/gemma-2b-it
  - llm-jp/llm-jp-3-1.8b-instruct
- Dataset support
  - hendrydong/preference_700K
  - xxxx

## Environment setup

```shell
git clone https://github.com/ohashi3399/RLHF-Reward-Modeling.git && cd RLHF-Reward-Modeling
```

## Bradley-Terry-RM

```shell
export HUGGINGFACE_API_KEY=<Your HUGGINGFACE_API token>
export WANDB_API_KEY=<Your WANDB_API token>
source setup.sh && cd bradley-terry-rm
source tune_bt_rm.sh
```
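For background on what the script above trains: a Bradley-Terry reward model is fit on preference pairs (such as those in hendrydong/preference_700K) by minimizing the negative log-likelihood that the chosen response scores higher than the rejected one. The sketch below shows that per-pair loss in plain Python; it is an illustration of the objective, not code from this repository.

```python
import math

def bradley_terry_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-likelihood that the chosen response outranks the rejected one.

    Under the Bradley-Terry model, P(chosen > rejected) = sigmoid(r_c - r_r).
    """
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A larger reward margin for the chosen response yields a smaller loss.
print(bradley_terry_loss(2.0, 0.0) < bradley_terry_loss(0.5, 0.0))  # True
```

During training, the two rewards come from the same model scoring the chosen and rejected responses of each preference pair, and this loss is averaged over the batch.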