
Harness bit-scan intrinsics to accelerate iteration #6

Open

JacksonAllan opened this issue Jun 2, 2024 · 1 comment

JacksonAllan commented Jun 2, 2024

Hash tables that store dense metadata in a separate array can usually harness the compiler intrinsics that correspond to bit-scan instructions to scan for the next occupied bucket many buckets at a time. Specifically, we can scan a group of 64 / bits_of_metadata_per_bucket buckets at once in the ideal case that the starting position is the beginning of a bucket group. This technique is used, for example, by absl, Martinus’ robin_hood, and Verstable/CC (which can sadly only scan four buckets at a time). Because khashl uses only one bit of metadata per bucket, it seems like the perfect candidate for this optimization.
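To give a flavor of the trick, here’s a minimal sketch, assuming a khashl-style bitmap in which bit i of the used array marks bucket i as occupied (find_first_occupied is a hypothetical name, not part of khashl’s API, and this version is GCC/Clang-only, like my proof of concept):

#include <stdint.h>

static inline uint32_t find_first_occupied(const uint64_t *used, uint32_t n_buckets)
{
    uint32_t w, n_words = (n_buckets + 63) >> 6;
    for (w = 0; w < n_words; ++w)
        if (used[w]) /* at least one of these 64 buckets is occupied */
            return (w << 6) + (uint32_t)__builtin_ctzll(used[w]); /* index of the lowest set bit */
    return n_buckets; /* no occupied bucket: the end position */
}

Each iteration of the loop dispenses with an entire group of 64 buckets at once, rather than testing them one by one.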

I’ve created a proof of concept for GCC and Clang on little-endian architectures here. The repository is private, but I’ve invited you to it. Here are the results for a uint32_t-key, uint32_t-value table displayed in graph form, courtesy of Google Sheets:

[Graph: Iteration speed for 0 to 2²⁴ × 0.75 keys in a table with 2²⁴ buckets (reserved at the time of table creation)]
(The horizontal axis also corresponds to the number of keys in the table.)

Evidently, the gains are pretty massive. When the table is almost empty, the intrinsics-accelerated version is about 10 times faster. At a load factor of 0.5, it’s about twice as fast. The gains taper off at about 0.75, which is coincidentally khashl’s default maximum load factor.

Implementing this change involves a few steps:

  • Using a 64-bit integer, rather than a 32-bit integer, as the datatype stored in the used array. This change isn’t strictly necessary, but it allows us to scan up to 64 buckets, rather than 32, at a time. In practice, the performance difference would probably only be apparent when the current load factor is extremely low and many 32-bucket groups are entirely vacant.
  • Allocating an extra unit of metadata at the end of the metadata array and setting its first bit. This acts as a stopper that automatically halts the scanning at kh_end. (A less performant option would be to manually check whether we’ve reached the end of the metadata array and, if so, return an end iterator.)
  • Actually implementing the accelerated iteration functions for GCC/Clang and MSVC, taking endianness into account, and providing fallbacks for other compilers (a rough sketch follows this list).
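For concreteness, here’s a rough sketch of how these steps could fit together (kh_ctz64 and kh_next_bucket are hypothetical names, not existing khashl API, and the non-intrinsic fallback is only illustrative):

#include <stdint.h>
#if defined(_MSC_VER)
#include <intrin.h>
#endif

/* Count the trailing zeros of a nonzero 64-bit word. */
static inline unsigned kh_ctz64(uint64_t x)
{
#if defined(__GNUC__) || defined(__clang__)
    return (unsigned)__builtin_ctzll(x);
#elif defined(_MSC_VER) && (defined(_M_X64) || defined(_M_ARM64))
    unsigned long i;
    _BitScanForward64(&i, x);
    return (unsigned)i;
#else
    unsigned n = 0; /* generic fallback for other compilers */
    while (!(x & 1)) x >>= 1, ++n;
    return n;
#endif
}

/* Advance from bucket i to the next occupied bucket. Relies on the stopper
   word described above (an extra metadata word past the end with its first
   bit set), so the loop needs no bounds check: it halts at kh_end on its own. */
static inline uint32_t kh_next_bucket(const uint64_t *used, uint32_t i)
{
    uint32_t w;
    uint64_t m;
    ++i;
    w = i >> 6;
    m = used[w] & (~(uint64_t)0 << (i & 63)); /* clear the bits of buckets before i within its group */
    while (!m) m = used[++w]; /* skip each entirely vacant 64-bucket group in a single step */
    return (w << 6) + kh_ctz64(m);
}

Note that because this sketch stores the metadata directly as native 64-bit words (the first step above), byte order never enters the picture; the endianness handling I mentioned matters when wider loads are instead layered over khashl’s existing 32-bit used array.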

If you’re interested, I could probably do these things and make a pull request. Testing on a big-endian machine is pretty much impossible these days, though.

khash (as opposed to khashl) could, I think, also benefit from this optimization, although I'm not sure whether you’re even still developing it (in parallel with khashl).

attractivechaos (Owner) commented

Thank you so much! This makes a lot of sense. According to the commit history, I haven't touched khash.h for six years. I have been mostly using khashl in my recent projects, though most users are still on the older khash.

I will incorporate your improvements at some point, along with a fix to the missing malloc checks you mentioned in #5. Thanks again!

BTW, you can now change the load factor of khashl at compile time by defining kh_max_count before inclusion, like this:

#define kh_max_count(cap) ((cap) * 0.85) /* rehash once the element count exceeds 85% of the capacity */
#include "khashl.h"

Nonetheless, plain hash tables like khashl may not work well at high load factors anyway.
