
Building System: Enable device code compression (Lin/Win) #1148

Merged: 12 commits merged into main from fy/combine-kernel-lib, Jan 9, 2025

Conversation

@fengyuan14 (Contributor) commented Dec 9, 2024

This is the first stage of aligning our binary distribution with CUDA, which ships all device kernels in a single libtorch_cuda.so. This first stage combines all of our kernel libraries into one, so after this commit we will have two libraries, libtorch_xpu.so and libtorch_xpu_ops_sycl_kernel.so; the commit also helps keep the binary size on par with CUDA. In the next stage, we will

  1. Combine all XPU libraries into one, libtorch_xpu.so.
  2. Revert the Lin/Win divergence.
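
For context on the compression piece, here is a minimal CMake sketch of how device code compression could be wired in. It assumes the DPC++/icpx toolchain accepts the --offload-compress flag, and the option name (USE_XPU_KERNEL_COMPRESSION) and target name (torch_xpu_ops_sycl_kernel) are illustrative; this is not the actual torch-xpu-ops build logic.

    # Sketch only, not the actual torch-xpu-ops build logic.
    # Guard compression behind a cache option and pass the (assumed) DPC++
    # flag --offload-compress at compile and link time of the SYCL kernel
    # library, so the device images embedded in the .so are compressed.
    option(USE_XPU_KERNEL_COMPRESSION "Compress SYCL device code images" ON)

    if(USE_XPU_KERNEL_COMPRESSION)
      # Requires a DPC++/icpx toolchain new enough to understand the flag.
      target_compile_options(torch_xpu_ops_sycl_kernel PRIVATE --offload-compress)
      target_link_options(torch_xpu_ops_sycl_kernel PRIVATE --offload-compress)
    endif()

Compressed device images are typically decompressed by the runtime when the kernel bundle is first needed, so the on-disk library size shrinks without changing kernel behavior.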

@fengyuan14 changed the title from "DONOTMERGE: Evaluate device code compression" to "Building System: Enable device code compression (Lin/Win)" on Dec 26, 2024
@fengyuan14 marked this pull request as ready for review on December 26, 2024 01:49
  else()
-   set(AOT_TARGETS "pvc,xe-lpg,ats-m150")
+   set(AOT_TARGETS "pvc,xe-lpg,dg2-g10,xe2-lpg,xe2-hpg")
A contributor commented on this change:
As we discussed before, let's remove BMG from the default Linux AOT targets. Since Linux server GPUs run the LTS driver, including BMG would break the build on the LTS driver.
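
For readers less familiar with the variable under discussion: AOT_TARGETS is a comma-separated Intel GPU device list that is normally forwarded to the SYCL ahead-of-time compilation step. A hedged sketch of that wiring follows, using the documented -fsycl-targets=spir64_gen / -Xs "-device ..." pattern; the exact flag assembly and variable names in torch-xpu-ops may differ.

    # Sketch: turn the comma-separated AOT_TARGETS list into SYCL AOT flags.
    # spir64_gen selects ahead-of-time compilation for Intel GPUs, and the
    # -device list is handed to the offline compiler (ocloc). The device list
    # below is illustrative; per the review note, BMG (xe2-hpg) is left out
    # of the default Linux list.
    set(AOT_TARGETS "pvc,xe-lpg,dg2-g10" CACHE STRING "Intel GPU AOT device list")

    set(SYCL_AOT_FLAGS "-fsycl-targets=spir64_gen -Xs \"-device ${AOT_TARGETS}\"")
    message(STATUS "SYCL AOT flags: ${SYCL_AOT_FLAGS}")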

@fengyuan14 added this pull request to the merge queue on Jan 9, 2025
Merged via the queue into main with commit 36a710a on Jan 9, 2025
3 checks passed
@fengyuan14 deleted the fy/combine-kernel-lib branch on January 9, 2025 02:25
Labels: none yet
Projects: none yet
3 participants