llama2.zig

This is a reimplementation of llama2.c in Zig, and a toy project for me to explore Zig.

This repo leans toward a direct implementation to keep things simple and easy to understand (at least for myself).

If you are looking for a stable and fast implementation, please consider checking out cgbur/llama2.zig and clebert/llama2.zig!

Requirements

  • zig: 0.11.0

Build

# XXX: Currently the build has to look up `ztracy` even though it is a
# development-only dependency, so you have to fetch the submodule once.
# $ git submodule update --init --recursive

$ zig build -Doptimize=ReleaseFast

Usage

Almost all arguments in llama2.c are supported, except those related to chat mode:

# For stories15M, remember to download the model and tokenizer first:
# $ wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin -P models
# $ wget https://github.com/karpathy/llama2.c/raw/master/tokenizer.bin -P models

$ ./zig-out/bin/run models/stories15M.bin \
    -z models/tokenizer.bin -t 0.8 -n 256 -i "One day, Lily met a Shoggoth"

(If you want to compare the output with llama2.c, remember to specify an identical seed.)
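As a sketch of such a comparison, assuming llama2.c's `-s` seed flag carries over (it is not chat-mode related, so it should be among the supported arguments, but verify against the argument parser):

```shell
# Hypothetical side-by-side run with a fixed seed (42 is arbitrary):
./zig-out/bin/run models/stories15M.bin \
    -z models/tokenizer.bin -t 0.8 -n 256 -s 42 \
    -i "One day, Lily met a Shoggoth" > out_zig.txt

# The same arguments against llama2.c's ./run, then compare:
# ./run models/stories15M.bin -z models/tokenizer.bin -t 0.8 -n 256 -s 42 \
#     -i "One day, Lily met a Shoggoth" > out_c.txt
# diff out_zig.txt out_c.txt
```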

Tests

Running the tests currently requires PyTorch, which is used to load the checkpoint and verify that the weights are mapped correctly.
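One way to set up that dependency, assuming pip is available (a minimal sketch; any other PyTorch install works too):

```shell
# Install PyTorch into a virtual environment so the test suite can
# load `stories15M.pt`, without touching the system Python:
python -m venv .venv
. .venv/bin/activate
pip install torch
```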

# Remember to download the model `stories15M.pt` (PyTorch model) first:
# $ wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.pt -P models

$ zig test tests.zig

Development

If you want to profile the code, please fetch the submodules:

$ git submodule update --init --recursive

Then build the code with tracy enabled:

$ zig build -Doptimize=ReleaseFast -Duse_tracy=true

For further details, please check out docs/INSTALL.md.