Compute and use the initial string offset when building nested large string cols with chunked parquet reader #17702

base: branch-25.02
Conversation
Computes and uses `str_offset` when reading nested large string cols with the chunked Parquet reader.
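For context, here is a minimal sketch of driving libcudf's chunked Parquet reader so that a small `chunk_read_limit` produces multiple output table chunks per subpass, the scenario this PR targets. The file path and byte limits are illustrative assumptions, not values taken from this PR.

```cpp
#include <cudf/io/parquet.hpp>
#include <string>

void read_in_chunks(std::string const& path)
{
  auto const options =
    cudf::io::parquet_reader_options::builder(cudf::io::source_info{path}).build();

  // chunk_read_limit is in bytes; keeping it deliberately tiny forces several
  // output chunks per subpass. pass_read_limit = 0 means unlimited input passes.
  auto reader = cudf::io::chunked_parquet_reader(
    /*chunk_read_limit=*/1024, /*pass_read_limit=*/0, options);

  while (reader.has_next()) {
    auto chunk = reader.read_chunk();  // cudf::io::table_with_metadata
    // ... consume chunk.tbl ...
  }
}
```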
CC: @etseidl would love your review here as well if possible!
```cpp
/**
 * @brief Converts string sizes to offsets if this is not a large string column. Otherwise,
 * atomically updates the initial string offset to be used during large string column construction
 */
template <int block_size>
```
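To make the two paths in that doc comment concrete, here is a hypothetical sketch. The kernel name, signature, single-block assumption, and the use of `cub::BlockScan` and `cuda::atomic_ref` are illustrative choices, not cudf's actual implementation.

```cpp
#include <cub/cub.cuh>
#include <cuda/atomic>
#include <cstddef>
#include <cstdint>

// Hypothetical sketch only; assumes a single block with num_rows <= block_size.
template <int block_size>
__global__ void sizes_to_offsets_or_accumulate(int32_t* d_sizes,  // in/out: per-row string sizes
                                               int num_rows,
                                               bool is_large_string,
                                               std::size_t* d_initial_str_offset)
{
  using BlockScan = cub::BlockScan<int32_t, block_size>;
  __shared__ typename BlockScan::TempStorage temp;

  auto const tid     = static_cast<int>(threadIdx.x);
  int32_t const size = (tid < num_rows) ? d_sizes[tid] : 0;

  int32_t offset{};
  int32_t block_total{};
  BlockScan(temp).ExclusiveSum(size, offset, block_total);

  if (!is_large_string) {
    // Small-string path: overwrite each size with its exclusive prefix sum,
    // i.e. the row's offset.
    if (tid < num_rows) { d_sizes[tid] = offset; }
  } else if (tid == 0) {
    // Large-string path: the offsets column is built later, so just atomically
    // accumulate this chunk's byte total into the shared initial string offset.
    cuda::atomic_ref<std::size_t, cuda::thread_scope_device> ref{*d_initial_str_offset};
    ref.fetch_add(static_cast<std::size_t>(block_total), cuda::memory_order_relaxed);
  }
}
```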
`has_lists` can't be a tparam anymore as it is not known at compile time when called from `page_delta_decode.cu`. Also, we are only using it minimally at L120.
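A hypothetical before/after sketch of the change being described; the function names and bodies are illustrative, not cudf's actual code.

```cpp
#include <cstdint>

// Before: has_lists was a template parameter, so the branch was resolved at
// compile time.
template <int block_size, bool has_lists>
__device__ void convert_lengths_to_offsets_v1(int32_t* lengths, int num_values)
{
  if constexpr (has_lists) { /* list-aware bookkeeping */ }
  // ... common size-to-offset conversion ...
}

// After: page_delta_decode.cu only knows the flag at runtime, so it becomes an
// ordinary argument and the (minimal) list handling is a plain branch.
template <int block_size>
__device__ void convert_lengths_to_offsets_v2(int32_t* lengths, int num_values, bool has_lists)
{
  if (has_lists) { /* list-aware bookkeeping */ }
  // ... common size-to-offset conversion ...
}
```

Since every thread in the block sees the same `has_lists` value, the runtime branch is uniform and should cost little relative to the compile-time version.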
Looks like a reasonable solution. As we discussed offline, it would be nice to figure out a way to do this the same way as for small strings, but it's not worth holding a fix up for. Thanks @mhaseeb123.
Benchmark Results

TL;DR: Virtually no difference in performance observed before or after this PR.

Benchmark:

```sh
export LIBCUDF_LARGE_STRINGS_THRESHOLD=1  # Set to a small number
./PARQUET_READER_NVBENCH --devices 0 --benchmark parquet_read_long_strings --timeout 10
```

Setup:

```
RMM memory resource = pool
CUIO host memory resource = pinned_pool

# Devices

## [0] `NVIDIA RTX 5880 Ada Generation`
* SM Version: 890 (PTX Version: 860)
* Number of SMs: 110
* SM Default Clock Rate: 18446744071874 MHz
* Global Memory: 11669 MiB Free / 48632 MiB Total
* Global Memory Bus Peak: 960 GB/sec (384-bit DDR @10001MHz)
* Max Shared Memory: 100 KiB/SM, 48 KiB/Block
* L2 Cache Size: 98304 KiB
* Maximum Active Blocks: 24/SM
* Maximum Active Threads: 1536/SM, 1024/Block
* Available Registers: 65536/SM, 65536/Block
* ECC Enabled: No
```
/ok to test
Description

Closes #17692.

This PR enables computing the `str_offset` required to correctly compute the offsets column for nested large string columns with the chunked Parquet reader when `chunk_read_limit` is small, resulting in multiple output table chunks per subpass.

Checklist