Truncated core file when COMP_COMPRESSION is set to "true" #165
Comments
Hi. I'm having problems with big dumps too; they can't be read with gdb.
Let me know how you get on.
Looks like a lot has been added to zip as it's now on version 2.2.0, so a PR with a bump would be appreciated. Thanks.
Hi again. We were using version 8.6.0, and even when the flag was set to "false", the dumps were uploaded compressed and the big ones were corrupt.
We are already using v8.10.0, but that doesn't solve the problem and we still need to disable the compression to make this work. @pereyra-m it would be helpful if you could share more details about the core file sizes you have tried and the configuration values you are using.
We don't have any special configuration, and we noticed the corruption when the dumps were larger than roughly 1 GB.
Maybe in your case it's something else.
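For anyone who wants to reproduce the size threshold locally, here is a throwaway sketch (an assumption on our side: any crashing program with a large enough resident set should do) that produces a core dump well above 1 GB:

```rust
fn main() {
    // Allocate ~2 GiB so the resulting core comfortably exceeds the ~1 GB
    // size where corruption was observed (assumes a 64-bit target).
    let mut big = vec![0u8; 2 * 1024 * 1024 * 1024];
    // Touch every page so the memory is actually resident and lands in the core.
    for i in (0..big.len()).step_by(4096) {
        big[i] = 1;
    }
    println!("allocated {} bytes, aborting to trigger a core dump", big.len());
    std::process::abort();
}
```

Run it with core dumps enabled and the handler configured as usual, then check whether gdb can open the resulting file.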
We are seeing this issue without compression as well in certain scenarios.
Can you set the composer log level to Debug?
And once the issue arises, provide the output of … Thanks
Hello IBM-CDH team, this response is on behalf of @amikugup; here are the requested debug logs. We have deleted certain entries from the file because we felt they contained our setup and proprietary details. Do let us know if we removed any relevant information from the composer.log.

During further debugging of this issue, we concluded that it might not be related to the IBM-CDH. Instead, the problem seems to be related to the Linux pipe: the kernel writes core-dump data very fast and the CDC is unable to consume it at the same speed, causing the pipe to overflow, so the CDC misses a portion of the data.

We were unable to find a way to increase the Linux pipe size, since it appears to be a read-only parameter according to ulimit. If you know of any method to increase the pipe size, please share it with us; we would like to try it and see whether it helps prevent the core file from being truncated.
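For what it's worth, pipe capacity on Linux isn't the value ulimit reports; a process can query and request a larger buffer on the pipe it reads from with fcntl, bounded for unprivileged processes by /proc/sys/fs/pipe-max-size. Below is a minimal sketch, assuming the collector reads the dump on stdin and uses the libc crate; this is not the handler's actual code, and the target size is arbitrary:

```rust
use std::io;

// Query the current capacity of the pipe on fd 0 and ask the kernel to grow it.
// Requires Linux >= 2.6.35; the kernel rounds the request up, and unprivileged
// processes are capped by /proc/sys/fs/pipe-max-size (1 MiB by default).
fn grow_stdin_pipe(target_bytes: i32) -> io::Result<(i32, i32)> {
    unsafe {
        let before = libc::fcntl(0, libc::F_GETPIPE_SZ);
        if before < 0 {
            return Err(io::Error::last_os_error());
        }
        let after = libc::fcntl(0, libc::F_SETPIPE_SZ, target_bytes);
        if after < 0 {
            return Err(io::Error::last_os_error());
        }
        Ok((before, after))
    }
}

fn main() {
    match grow_stdin_pipe(1 << 20) {
        Ok((before, after)) => eprintln!("pipe capacity: {before} -> {after} bytes"),
        Err(e) => eprintln!("could not resize pipe: {e}"),
    }
}
```

Whether this actually helps here is unclear, since a writer into a full pipe normally blocks rather than dropping data, but it is a cheap experiment, and a process with CAP_SYS_RESOURCE can go beyond the pipe-max-size cap.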
I agree it's likely the issue is upstream, as I am not seeing a … Can you confirm the following, please? I don't think it's the page size, as I would expect the OS to block until it's read, but I would need to read the kernel core-dump code to confirm.
Hello @No9, thanks for your response! Host OS: "Oracle Linux Server 9.3"
OK, this is progress. I think the core dump will take precedence over all signals. I've not tested this on Oracle Linux Server at all and don't have access to one with a k8s config, so I can only suggest ideas at this stage.
Original issue description
We are observing a strange issue with the IBM core dump handler: we get a truncated core file when the COMP_COMPRESSION flag is set to "true". gdb complains about the truncated file, and the core file is close to 900 MB while gdb expects a core file of about 3 GB.
We didn't see any such issue when we turned off the compression; we got a full core file and gdb is happy.
Is this a known issue with the compression flag?
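For context on how a file can end up truncated on the reading side, here is a minimal sketch (not the handler's actual code; the output file name and buffer size are made up) of draining a core-dump pipe from stdin. read() on a pipe can return fewer bytes than requested, so a consumer that stops at the first short read, or swallows a read error while compressing, writes exactly the kind of truncated core gdb complains about:

```rust
use std::fs::File;
use std::io::{self, Read, Write};

fn main() -> io::Result<()> {
    let mut stdin = io::stdin().lock();
    let mut out = File::create("core.raw")?; // a compressor would wrap this writer
    let mut buf = [0u8; 64 * 1024];
    let mut total: u64 = 0;
    loop {
        // Short reads are normal on a pipe; keep reading until EOF.
        let n = stdin.read(&mut buf)?;
        if n == 0 {
            break; // EOF: the kernel has closed its end of the pipe
        }
        out.write_all(&buf[..n])?;
        total += n as u64;
    }
    out.flush()?;
    eprintln!("wrote {total} bytes from the core-dump pipe");
    Ok(())
}
```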