Parallel compilation? #205
It would be even faster if pre-compiled binaries could be used: googlecolab/colabtools#4256
It is technically much more challenging to use pre-compiled binaries, and they would have to be restricted to a small subset of platforms, maybe x86_64 Linux only. OTOH, parallel compilation is (probably?) easy to switch on, and every platform benefits greatly from it. Btw., I just compiled duckdb-r on s390x Linux in qemu, and it took more than 24 hours (!).
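Switching on parallel compilation for a from-source install mostly comes down to knowing how many make jobs the machine can run. A minimal, hedged sketch of the detection step (the fallback of 2 matches the core count mentioned for CRAN later in this thread; it is not duckdb-specific code):

```shell
# Detect how many parallel compile jobs this machine supports.
# `getconf _NPROCESSORS_ONLN` works on Linux and macOS; fall back to 2
# if detection fails.
NJOBS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 2)
echo "would compile with: make -j${NJOBS}"
```

A user can already get this effect today by setting e.g. `MAKEFLAGS=-j4` before `R CMD INSTALL`; the question in this thread is whether the package should do it by default.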
Thanks. Shouldn't this be a setting in …
How?
Because people don't know about this. If the user has a setting that enables it, sure. If there is no setting, then I would use two processors on CRAN, and maybe two, maybe more elsewhere.
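The policy sketched above (a user-visible setting, with a conservative default of two jobs) could look like this in a configure script. `DUCKDB_R_JOBS` is a hypothetical setting name invented for illustration, not an existing option:

```shell
# Hypothetical setting name: honor an explicit user choice if given,
# otherwise default to 2 jobs (the number reportedly allowed on CRAN).
NJOBS="${DUCKDB_R_JOBS:-2}"
echo "PKG_MAKE_JOBS=${NJOBS}"
```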
I have …
Arrow does build in parallel by default for me, but IDK if it uses …
@thisisnic: Does this ring a bell? How does the arrow R package achieve multicore compilation by default, regardless of user settings?
I think this is where we make that happen, dependent on the …
But the …
Hmm, I'm unsure of the exact details, but @jonkeane will know |
Yeah, this is what happens |
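One common pattern for packages that compile in parallel by default (the thread suggests arrow does something along these lines, though the exact mechanism is not shown here) is a configure script that detects the core count and substitutes it into `src/Makevars` from a template. File names and the `@jobs@` placeholder below are assumptions for illustration:

```shell
# Sketch of a configure-time template substitution (paths and
# placeholder name are assumed, not taken from arrow or duckdb).
mkdir -p /tmp/pkgdemo/src
printf 'PKG_MAKE_JOBS = @jobs@\n' > /tmp/pkgdemo/src/Makevars.in

# Detect cores, then render the template into the real Makevars.
NJOBS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 2)
sed "s/@jobs@/${NJOBS}/" /tmp/pkgdemo/src/Makevars.in > /tmp/pkgdemo/src/Makevars
cat /tmp/pkgdemo/src/Makevars
```

The advantage of doing this in configure rather than in the Makevars itself is that the detection logic can be arbitrarily platform-specific without complicating the make layer.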
AFAIR it is allowed to use two processor cores on CRAN when installing a package. Would it be possible to leverage this and compile duckdb in parallel? That would cut the current ~24 minutes compilation time in half.
Additionally, if `NOT_CRAN` (or some other env var) is set, then we could use even more processors, and would (ideally) cut the compilation time to ~6 minutes on 4 processors, or even ~3 minutes on 8 processors.