MosaicBERT-Softmax1

A test of the Attention Is Off By One hypothesis. MosaicML claims that, with their recipe, you can pretrain BERT from scratch for $20, so I test the hypothesis by generalizing their BERT implementation to use my implementation of Flash Attention with SoftmaxN.
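For reference, SoftmaxN adds a constant n to the softmax denominator: softmax_n(x)_i = exp(x_i) / (n + Σ_j exp(x_j)). With n = 1 this is the "softmax1" of the hypothesis, which lets an attention head assign (near) zero total weight by driving all scores strongly negative. The sketch below is a plain PyTorch reference of that formula, not the fused Flash Attention kernel used here; the function name softmax_n is illustrative.

```python
import torch

def softmax_n(x: torch.Tensor, n: float = 1.0, dim: int = -1) -> torch.Tensor:
    """softmax_n(x)_i = exp(x_i) / (n + sum_j exp(x_j)).

    n = 0 recovers the ordinary softmax; n = 1 is "softmax1".
    Shift by max(x, 0) for numerical stability, which also accounts
    for the implicit extra logit of value 0 in the denominator.
    """
    m = x.amax(dim=dim, keepdim=True).clamp(min=0.0)
    exp_x = torch.exp(x - m)
    return exp_x / (n * torch.exp(-m) + exp_x.sum(dim=dim, keepdim=True))
```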

Training Dataset

I train on the Colossal Clean Crawled Corpus (C4). Mosaic used 78.6% of the 'en' subset of C4 for their pretraining, and MosaicBERT reached the original BERT's average GLUE score of 79.6 in 21.4% of its total training time. Therefore, to save resources, I use 16.8% of the 'en' subset of C4 for training, which corresponds to 61,358,080 samples, or 125 GiB. In my opinion, the main reason outliers did not appear in my previous test of the hypothesis is that the dataset I used was only 178 MB, which was too small; using a significantly larger dataset rules out that explanation.
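As a rough illustration (not this repo's actual data pipeline, which follows MosaicML's streaming setup), one way to take a fixed number of C4 'en' documents is to stream the split from the Hugging Face Hub and truncate it; the 61,358,080 figure above is the target sample count.

```python
from datasets import load_dataset

# Illustrative sketch only: stream C4 'en' and keep the first N documents.
NUM_SAMPLES = 61_358_080  # 16.8% of the 'en' subset, per the note above

c4_subset = load_dataset("allenai/c4", "en", split="train", streaming=True)
c4_subset = c4_subset.take(NUM_SAMPLES)

# Peek at a few documents to sanity-check the stream.
for i, example in enumerate(c4_subset):
    if i >= 3:
        break
    print(example["text"][:80])
```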
