
CUDA: compress-mode size #12029

Open · wants to merge 1 commit into master

Conversation

@Green-Sky Green-Sky (Collaborator) commented Feb 22, 2025

CUDA 12.8 added an option to specify stronger compression for binaries.

I ran some tests in CI with the new Ubuntu CUDA 12.8 docker image:

**89-real arch**

In this scenario, it appears compression is not applied by default at all?

| mode | ggml-cuda.so |
|---|---|
| none | 64M |
| speed (default) | 64M |
| balanced | 64M |
| size | 18M |

**60;61;70;75;80 arches**

| mode | ggml-cuda.so |
|---|---|
| none | 994M |
| speed (default) | 448M |
| balanced | 368M |
| size | 127M |

I did not test the runtime load overhead this should incur, but for most ggml-cuda use cases the processes are long(er) lived, so the trade-off seems reasonable to me.
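For reference, a minimal sketch of how a build could opt into the stronger compression (assumptions: this uses nvcc 12.8's `--compress-mode` flag directly; the actual option name and wiring in the ggml CMake files may differ):

```cmake
# Sketch: pass nvcc's compression flag only when the CUDA toolkit supports it.
# --compress-mode=size is accepted by nvcc from CUDA 12.8 onward; the version
# guard keeps builds against older toolkits working unchanged.
if (CMAKE_CUDA_COMPILER_VERSION VERSION_GREATER_EQUAL "12.8")
    string(APPEND CMAKE_CUDA_FLAGS " --compress-mode=size")
endif()
```

Guarding on the compiler version rather than unconditionally appending the flag avoids breaking CI jobs that still build with pre-12.8 toolkits.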

@github-actions github-actions bot added the Nvidia GPU and ggml labels Feb 22, 2025
@Green-Sky Green-Sky marked this pull request as ready for review February 24, 2025 12:21