Make static linking for `nanobind_extension` optional #7
Results in a `@nanobind//:libnanobind` target producing a `libnanobind.a` file.
Notes: I added the optimization flags. This means that we could in theory omit these flags here and bank on the assumption that the user compiles in "opt" mode. With the linker flags, I'm not sure they can be omitted as easily. What's missing is the definition of these flags in userland. cc @wjakob, maybe you could take a look as well if you're interested and have time. |
This is purely an optimization thing. If Bazel adds these flags in optimization mode, then you don't need to do so on top. |
I'll leave them out of the compiler options, then. I still think the linker options are necessary, so I'll keep them for now and share how they influence the outputs. By the way, for my understanding: The linker response file approach on Mac should not be necessary for |
You need the linker response file when |
Oh, thanks for clarifying. That would mean that it's currently missing in the actual bindings extension (notice there are no …); see lines 44 to 53 in 97e3db2.
I only see green CI though. Why does this still work? I don't think it will just fall back to (FWIW, Google Benchmark MacOS extensions are also green, so this seems robust for some reason.) |
Can you capture the actual linker command used and post them here? |
It's possible that everything might be fine if you only use nanobind code in the example binding without accessing the C API explicitly. Try adding something like |
uh, something bad may be happening as well: what if Bazel links against the libpython of a specific Python version? Then it obviously won't complain about missing symbols, but you will end up with a built extension that is extremely difficult to redistribute to others. |
Trying on the google/benchmark repo, invocation:

```shell
bazel build -s //bindings/python/google_benchmark:_benchmark \
  --compilation_mode=opt \
  --cxxopt=-std=c++17 \
  --@rules_python//python/config_settings:python_version=3.12 \
  --@nanobind_bazel//:py-limited-api=cp312
```

The linker command looks inconspicuous (also not very helpful, as I can't see the linked libs for some reason):
I think I'm doing something wrong here, because no libs show up at all. Also: in the compile commands of the nanobind parts, I see … BTW: Do you have a one-liner for a module-wide binding using e.g. |
You don't need to bind the function at all, you can just add it to the code that runs when the module is loaded. E.g.

```cpp
NB_MODULE(my_ext, m) {
    (void) PyUnicode_FromString("foo");
}
```
|
Addendum: Output of
|
Addendum 2: The module imports just fine with |
Sorry, I didn't grasp the output. The linker command above is correct, I just failed to understand that the single argument there is in fact a file containing the options. This is from a stable ABI invocation on the nanobind_example:
Seems like it does in fact link in |
Great, looks like it's all there 👍 |
Sweet! I think I got it now. One last question: Do I need to link |
The question really only arises when building libnanobind as a shared library. And regarding that, a higher-level comment: you really don't ever want to link …

What I would do is to capture the compiler and linker command lines and compare, to see if they produce the same set of flags as nanobind's CMake interface. You could systematically capture this (e.g. via CI) on the three platforms, for both static and dynamic libnanobind builds. |
Sure, I can set up a CI job for that. Your answer makes perfect sense, but doesn't that mean that the Python libs in the linker param file I shared above shouldn't be there? |
Oh right: it says |
Imagine, for example, that you tried to import such an extension library into a process that has Python statically linked in. The Blender 3D modeler with its Python plugin system would be a good example. The dynamic linker will then try to find another Python shared library and link it into the process, which leads to a mixture of one initialized and one uninitialized interpreter in the same process. It will segfault immediately. On both macOS and Linux, you need to leave CPython symbols "dangling", because they are provided by the process that dynamically imports the extension. |
Thank you, the dynamic linker mention finally made it click. Luckily, that linker param file came out of a dirty build where I added the current Python libs as a direct dependency - on current master, they are left out and no Python libraries appear at all in the linker command, so right now it's correct. |
Based on the CMake compile commands analysis of `nanobind_example`, the `-fno-strict-aliasing` and `-fvisibility=hidden` flags are always added to the builds in static mode.
…by default

This is practically equivalent to the previous configuration, since the flags were largely the same. What's still left to be figured out is whether the Python libs need to go into the `nanobind_extension` target or into the nanobind deps.
(force-pushed from 8b0995d to 2058166)
I believe that the latest change solves the outstanding issues: It
I'd consider the separate |
Hi @nicholasjng, I don't know enough about bazel to understand all of the nuances. I am concerned about the following point though:
in the CMake version of the build system, the static libnanobind is the default. A PyPI- or Conda-distributed extension will need to ship nanobind along in practice; you would not be able to rely on a precompiled libnanobind. This first part is mainly cosmetic in nature. But there are also two practical implications:
So this is all to say that there is some value in preserving a static libnanobind as the default in your bazel-based build system. Really the only reason where shared makes sense is if you have a really complex extension that contains multiple extension libraries that are compiled and linked separately. They can then share the libnanobind parts instead of having them replicated multiple times. |
Thank you for the comment. The following summarizes my thoughts and findings:
|
Let's say that you're building a huge package (e.g. Tensorflow, for lack of imagination) using nanobind. Quite likely, that package doesn't just have a single Python extension library but a whole bunch of them that are conditionally loaded (e.g. one for CUDA, one for TPUs, etc.). In this case, your package could have the following contents:
These binary extension files ( Note also that this library was compiled with the |
Alright. Sounds like this is either a The |
That would be great! You are right that the nanobind_example is a bit on the simple side. |