
small-doge

visitors arXiv huggingface License: Apache-2.0

Small Doges is under construction; let's develop together! 🐕🐕🐕


News: 🎉🎉🎉 We now support the full training process for the pre-trained Doge-Base, the instruction fine-tuned Doge-Instruct, and the reasoning fine-tuned Doge-R1. Please refer to the guide!

  • This project aims to train a series of dynamic and fast small models from scratch, with the fastest training run taking only 3 hours! You can train the tiny language model Doge-20M in just 13M! 🚀
  • The small doge series is extremely lightweight: the smallest version is about $\frac{1}{7800}$ the size of GPT-3, and we strive to make even an ordinary personal GPU capable of fast inference and even training. 🏎️
  • We provide full-stage code for dataset preprocessing, pre-training, supervised fine-tuning, reinforcement-learning preference alignment, visual multimodal VLM (under development), and reasoning fine-tuning R1 (under development). 🧪
  • Standing on the shoulders of giants lets us see further. We hope the small doge series of small models can offer researchers more ideas and contribute to the road toward Embodied Artificial General Intelligence. 🤖

Tip

We aim to use open-source tools and frameworks as much as possible to simplify the whole process from data processing to model training, so that beginners can easily understand and use it. 🤗

[Demo: Doge-60M-Instruct running fast inference in a Streamlit app on a laptop with an 11th-gen Intel i7 CPU]

About

This project aims to develop a series of dynamic and fast small models and promote their application in embodied intelligence, especially in resource-constrained environments with real-time response requirements, thereby advancing practical use in downstream fields.

Tip

As of 2025-02-02, the small doge series has completed pre-training of 2 models; even the smallest, at 20M parameters, is capable of smooth conversation!

| Model | tokens | max_train_steps | accumulate_steps | learning_rate | scheduler | warmup_ratio | decay_ratio | weight_decay | min_lr_rate |
|---|---|---|---|---|---|---|---|---|---|
| Doge-20M | 4B | 8,000 | 256 | 8e-3 | warmup_stable_decay | 0.1 | 0.1 | 0.01 | 0.0 |
| Doge-60M | 16B | 16,000 | 512 | 6e-3 | warmup_stable_decay | 0.1 | 0.1 | 0.01 | 0.0 |
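
As a sanity check, these token budgets are consistent with steps × gradient accumulation × the 2048-token context length used for pre-training: $8{,}000 \times 256 \times 2048 \approx 4\text{B}$ tokens for Doge-20M and $16{,}000 \times 512 \times 2048 \approx 16\text{B}$ tokens for Doge-60M.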

The following two models are currently in pre-training; researchers with the compute to spare are welcome to help (a poor man's cry for help)! 🙏

| Model | tokens | max_train_steps | accumulate_steps | learning_rate | scheduler | warmup_ratio | decay_ratio | weight_decay | min_lr_rate |
|---|---|---|---|---|---|---|---|---|---|
| Doge-160M | 32B | 24,000 | 768 | 4e-3 | warmup_stable_decay | 0.1 | 0.1 | 0.01 | 0.0 |
| Doge-320M | 64B | 32,000 | 1024 | 2e-3 | warmup_stable_decay | 0.1 | 0.1 | 0.01 | 0.0 |

[Figure: Doge architecture]

As shown in the figure, the sequence-transformation part of the Doge architecture uses Dynamic Mask Attention, which can be understood as self-attention with a value-state-dependent mask during training and as a state space without past-state decay during inference, addressing the problem of existing Transformers or SSMs getting lost in long text. The state-transformation part of Doge uses a Cross Domain Mixture of Experts, which consists of dense linear layers and sparse embedding layers; sparse parameters can be added to continue training from dense weight checkpoints without retraining the entire model, reducing the cost of continuously iterating on the model. In addition, Doge uses RMSNorm and residual connections with learnable parameters to adapt the gradient range of deep models.
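
As a rough illustration of the dynamic-mask idea, here is a minimal sketch (this is not the authors' implementation; the tensor shapes and the way the value-dependent bias is produced are assumptions made for illustration):

```python
import torch
import torch.nn.functional as F

def dynamic_mask_attention(q, k, v, dyn_bias, causal_mask):
    """Toy sketch of value-conditioned masking.

    q, k, v:     (batch, heads, seq, head_dim)
    dyn_bias:    (batch, heads, seq) additive mask logits, assumed to come from
                 a small learned projection of the value states (not shown)
    causal_mask: (seq, seq) boolean lower-triangular mask
    """
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    # The value-dependent bias softly down-weights keys the model deems
    # unimportant, so attention stays selective on long sequences.
    scores = scores + dyn_bias.unsqueeze(-2)
    scores = scores.masked_fill(~causal_mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v
```

The point is only that the mask is itself a function of the current content, rather than a fixed pattern chosen in advance.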

Dynamic Mask Attention Module

[Figure: Dynamic Mask Attention module]

Cross Domain Mixture of Experts Module

[Figure: Cross Domain Mixture of Experts module]

Requirements

Our codebase requires the following environment:

  • Windows or Linux
  • NVIDIA GPU
  • Python 3.10+
  • PyTorch 2.0+
  • CUDA 11.8+

We highly recommend that you install the latest version of PyTorch and CUDA for optimal performance.

Of course, you can also use the open-source Docker PyTorch image to avoid the hassle of configuring the environment.

```bash
docker pull nvcr.io/nvidia/pytorch:24.12-py3
docker run --privileged --gpus all -it --name PyTorch --shm-size=32g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 -v <your code path>:/workspace -v <your datasets path>:/workspace/Doge/datasets nvcr.io/nvidia/pytorch:24.12-py3
```

  • pip install transformers: The core framework for all subsequent work.
  • pip install datasets sentencepiece boto3: Used to download and process datasets.
  • pip install accelerate: Used for distributed training.
  • pip install trl: Used for fine-tuning with reinforcement learning.

Installation

```bash
git clone https://github.com/SmallDoges/small-doge.git
cd small-doge
pip install -e .
```

Quick Start

We have written a notebook and a training guide that demonstrate the entire process of dataset processing, model training, and model evaluation. You can also use the released models directly. If you are interested, please read the notebook or training guide in detail; they contain the specific steps and details!
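
If you just want to chat with a released checkpoint, a minimal Transformers sketch looks like this (the model id and the need for `trust_remote_code` are assumptions; check the released model cards for exact usage):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SmallDoge/Doge-20M-Instruct"  # assumed Hugging Face id; see the released models
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Build a chat-formatted prompt and generate a short reply.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hi, how are you doing today?"}],
    add_generation_prompt=True,
    return_tensors="pt",
)
outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```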

Models Released

Doge-CheckPoint

[Figure: wsd_scheduler learning-rate curve]

Doge uses wsd_scheduler as its training scheduler, dividing the learning rate into three stages: warmup, stable, and decay. This allows us to continue training on any new dataset from any checkpoint in the stable stage without training spikes.

Here are the initial learning rates required to continue training at each checkpoint:

| Model | Learning Rate | Schedule | Warmup Steps | Stable Steps |
|---|---|---|---|---|
| Doge-20M | 8e-3 | wsd_scheduler | 800 | 6400 |
| Doge-60M | 6e-3 | wsd_scheduler | 1600 | 12800 |
| Doge-160M | 4e-3 | wsd_scheduler | 2400 | 19200 |
| Doge-320M | 2e-3 | wsd_scheduler | 3200 | 25600 |
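
To make the schedule concrete, here is a minimal sketch of a warmup-stable-decay learning rate (linear warmup and decay are assumptions; the step counts follow warmup_ratio = decay_ratio = 0.1 from the tables above, e.g. 800 warmup + 6,400 stable + 800 decay steps for Doge-20M):

```python
def wsd_lr(step, max_lr, total_steps, warmup_ratio=0.1, decay_ratio=0.1, min_lr_rate=0.0):
    """Warmup-stable-decay learning rate (a sketch; linear ramps assumed)."""
    warmup_steps = int(total_steps * warmup_ratio)
    decay_steps = int(total_steps * decay_ratio)
    stable_end = total_steps - decay_steps
    if step < warmup_steps:                      # linear warmup to the peak LR
        return max_lr * step / max(warmup_steps, 1)
    if step < stable_end:                        # constant plateau
        return max_lr
    min_lr = max_lr * min_lr_rate                # linear decay to min_lr
    return max_lr - (max_lr - min_lr) * min((step - stable_end) / max(decay_steps, 1), 1.0)

# Doge-20M: peak LR 8e-3 over 8,000 steps -> still in the stable phase at step 4,000.
print(wsd_lr(4_000, max_lr=8e-3, total_steps=8_000))  # 0.008
```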

Doge-SLM

Pre-Training:

| Model | Training Data | Steps | Content Length | Tokens | LR | Batch Size | Precision |
|---|---|---|---|---|---|---|---|
| Doge-20M | HuggingFaceTB/smollm-corpus | 8k | 2048 | 4B | 8e-3 | 0.5M | bfloat16 |
| Doge-60M | HuggingFaceTB/smollm-corpus | 16k | 2048 | 16B | 6e-3 | 1M | bfloat16 |

Evaluation:

| Model | MMLU | TriviaQA | ARC-E | ARC-C | PIQA | HellaSwag | OBQA | Winogrande | tokens/s on CPU |
|---|---|---|---|---|---|---|---|---|---|
| Doge-20M | 25.43 | 0.03 | 36.83 | 22.78 | 58.38 | 27.25 | 25.60 | 50.20 | 142 |
| Doge-60M | 26.41 | 0.18 | 50.46 | 25.34 | 61.43 | 31.45 | 28.00 | 50.75 | 62 |

All evaluations are done using five-shot settings, without additional training on the benchmarks.

SFT:

| Model | Training Data | Epochs | Content Length | LR | Batch Size | Precision |
|---|---|---|---|---|---|---|
| Doge-20M-Instruct-SFT | HuggingFaceTB/smoltalk | 2 | 2048 | 8e-4 | 0.25M | bfloat16 |
| Doge-60M-Instruct-SFT | HuggingFaceTB/smoltalk | 2 | 2048 | 6e-4 | 0.25M | bfloat16 |
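
A minimal TRL-based SFT sketch matching this recipe might look as follows (the checkpoint id, the smoltalk subset name, and some argument names are assumptions and vary across trl versions; the training guide's scripts are the source of truth):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model_id = "SmallDoge/Doge-20M"  # assumed base checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# The "all" subset name is an assumption; smoltalk ships several configurations.
dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(
        output_dir="Doge-20M-Instruct-SFT",
        num_train_epochs=2,
        learning_rate=8e-4,
        bf16=True,
    ),
    train_dataset=dataset,
    processing_class=tokenizer,  # older trl versions take `tokenizer=` instead
)
trainer.train()
```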

DPO:

| Model | Training Data | Epochs | Content Length | LR | Batch Size | Precision |
|---|---|---|---|---|---|---|
| Doge-20M-Instruct | HuggingFaceH4/ultrafeedback_binarized | 2 | 1024 | 8e-5 | 0.125M | bfloat16 |
| Doge-60M-Instruct | HuggingFaceH4/ultrafeedback_binarized | 2 | 1024 | 6e-5 | 0.125M | bfloat16 |
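
The corresponding DPO step with TRL, again as a hedged sketch (the checkpoint id and dataset split are assumptions):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "SmallDoge/Doge-20M-Instruct-SFT"  # assumed SFT checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

trainer = DPOTrainer(
    model=model,  # with no ref_model given, a frozen reference copy is created internally
    args=DPOConfig(
        output_dir="Doge-20M-Instruct",
        num_train_epochs=2,
        learning_rate=8e-5,
        bf16=True,
    ),
    train_dataset=dataset,
    processing_class=tokenizer,  # older trl versions take `tokenizer=` instead
)
trainer.train()
```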

Environment:

  • Image: nvcr.io/nvidia/pytorch:24.12-py3
  • Hardware: 1x NVIDIA RTX 4090
  • Software: Transformers, TRL

Expectations

Important

  • If you find this project helpful, please consider giving it a star ⭐!

  • Due to time and expertise constraints, there may be omissions in the project. Feel free to submit your insights through Issues or PRs to help improve it; your support is the driving force behind the project's continuous progress! 😊

  • One person can go fast, but a group of people can go further. If you have already trained a new small-doge model, feel free to share your model weights, training recipes, evaluation results, and other relevant information in Discussions or Issues. It could be a new small-doge version for a specific downstream task or vertical domain, such as sentiment recognition, medical, psychological, financial, or legal Q&A. It could also be extended training, such as exploring new small-doge versions with longer context, more parameters, or larger datasets. Your sharing will greatly promote the development of the community! 🚀🚀🚀

Star History

Star History Chart

Citation

If you use this codebase, or otherwise find our work valuable, please cite our paper:

@misc{shi2024wonderfulmatrices,
      title={Wonderful Matrices: Combining for a More Efficient and Effective Foundation Model Architecture}, 
      author={Jingze Shi and Bingheng Wu},
      year={2024},
      eprint={2412.11834},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2412.11834}, 
}