Split pkgci_shark_ai.yml for (short / long-running) X (cpu / gpu) #883

Open
renxida opened this issue Jan 29, 2025 · 0 comments


Since pkgci builds packages once before running tests (#780), we can now download those prebuilt packages and run multiple test jobs against them in parallel.

In particular, it would be nice to have GPU tests for shortfin, as well as a split between quick tests that run small / toy models and slow tests that run full-sized models.

This would also enable parallel execution between, e.g., open Llama and Meta Llama.
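The (short / long-running) X (cpu / gpu) split could be expressed as a job matrix. A minimal sketch of what the split workflow might look like, assuming hypothetical runner labels, an artifact name, and pytest markers (none of these are confirmed by the current workflow):

```yaml
# Hypothetical sketch only: job names, runner labels, the artifact name,
# and the pytest markers are illustrative assumptions.
name: PkgCI shark-ai

on:
  workflow_call:

jobs:
  test:
    strategy:
      fail-fast: false
      matrix:
        duration: [short, long]
        backend: [cpu, gpu]
    # Assumed runner labels; a GPU runner label would need to exist in the org.
    runs-on: ${{ matrix.backend == 'gpu' && 'nvidia-gpu' || 'ubuntu-24.04' }}
    steps:
      - uses: actions/checkout@v4
      # Reuse the packages built once by pkgci instead of rebuilding per job.
      - name: Download prebuilt packages
        uses: actions/download-artifact@v4
        with:
          name: snapshot-packages   # assumed artifact name
      # Select tests by assumed markers, e.g. pytest.mark.short / pytest.mark.gpu.
      - name: Run tests
        run: pytest -m "${{ matrix.duration }} and ${{ matrix.backend }}"
```

With `fail-fast: false`, a failure in the long-running GPU job would not cancel the quick CPU job, so fast signal on toy models stays independent of full-model runs.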
