
ODS14: Store (and return via *_LENGTH functions) blob length longer than 4GB #8428

Open · wants to merge 1 commit into master
Conversation

@dyemanov (Member) commented Feb 3, 2025

This PR contains only an ODS-related change (to be merged together with #8401 and #8340). The blh layout has been slightly modified so that the blob header size is preserved.

The code requires more changes to support bigger BLOBs properly; an appropriate PR will be offered later.

…. This is only an ODS-related change; more will follow later.
@dyemanov (Member, Author) commented Feb 3, 2025

Answering Vlad's comment inside the commit:

> I highly doubt this is required.
> IMHO, we should document max blob length as 2 or 4GB and add corresponding checks in code.
> This must be properly discussed first.

IMHO, we must store and retrieve blobs of arbitrary size. Given that they are officially (as documented) allowed up to 128GB (IIRC), I see no good excuse to shrink the documented limit, provided that supporting it costs us nearly nothing.

That said, I'd seriously question the need to process (inside the engine) blobs bigger than 2/4GB. Currently we have a number of functions that read the whole blob contents into memory for processing, which can easily lead to an out-of-memory condition even for 4GB blobs. So I'd agree to add some maximum length for blobs being processed and throw an error for longer ones. I'm just not sure whether this should happen everywhere or only in the specific places that cannot process blobs chunk-by-chunk.
