[1.21.1] I/O bottlenecks when saving SavedData #556
Comments
Also related to AllTheMods/ATM-10#1474
So digging into this further, it appears to be an issue with SavedData in general, rather than Lootr specifically, but because Lootr probably has a larger quantity of SavedData, it shows up more prominently. I was inclined to think it was some sort of I/O bottleneck. It's also relevant to note that NeoForge patches the SavedData::save method to use an atomic write system, although I doubt this has any impact, as it ends up calling the exact same methods as Minecraft's default implementations.

As you can see from this image, 99% of the tick time is taken up with saving, but only 63% is taken up by Lootr; the rest appears to be Minecraft's own SavedData.

It's worth noting that similar issues have been reported since 1.20.1 on Paper and Spigot servers, such as a watchdog crash on a Spigot server from March (I had a second report, but I realize now it's a duplicate of the first one).

The slow-down is demonstrably coming from the I/O operations themselves rather than the serialization, meaning I don't think there is much I can do about this from Lootr's end, especially without knowing more about the I/O of this person's server.

EDIT: I've commented on the linked issue. I'd like to find out exactly how much total Lootr data there is versus the overall size of the data folder, as, in theory, each file should be kilobytes or less.
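For reference, the "atomic write" approach mentioned above generally means writing to a temporary sibling file and then renaming it over the target. The sketch below is a standalone illustration of that pattern using plain `java.nio`; it is not NeoForge's actual patch or Minecraft's code, and the class and file names are hypothetical. The point is that the serialization cost is separate from the write-and-rename cost, which is where the profiler shows the time going.

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public final class AtomicSaveSketch {
    /**
     * Write the serialized bytes to a temporary sibling file, then move it
     * over the target so readers never see a half-written file.
     */
    public static void save(Path target, byte[] serialized) throws IOException {
        Path temp = target.resolveSibling(target.getFileName() + ".tmp");
        Files.write(temp, serialized); // the bulk of the I/O cost is here
        try {
            Files.move(temp, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            // Some filesystems (e.g. certain network mounts on managed hosts)
            // don't support atomic moves; fall back to a plain replace.
            Files.move(temp, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```

Either way, the same underlying write calls are made, which is consistent with the observation that the patch shouldn't change the I/O cost.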
I can send you a backup of the world save for testing if you want, @noobanidus
That would be appreciated!
I wanted to chip in with my experience; I've been having the same problem. This server uses managed hosting (Bluehost), and I can see that most of the Lootr data files are only a few hundred bytes each. I acknowledge this is a general issue with saving, but in the meantime, is there any way to reduce the amount of Lootr saved data? For example, if I were to enable loot refresh, would that help the problem or make it worse?
Loot refresh wouldn't change anything in this instance, as it doesn't change the number of data files.

How often is your server being restarted? There may be a large number of Lootr data files, but they should only be loaded from disk (and thus be eligible to be saved to disk) when the relevant chest is being opened and its contents modified. Depending on what version you're running, I can possibly offer a version of Lootr that unloads saved data, which should reduce the number of files it's trying to save.

That said, even a large number of loaded chests shouldn't result in them being saved, as that requires the data to be marked as dirty, which generally only happens when:
It is possible that something weird is causing the files to be marked as such without any actual changes being made. You could try changing the
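To illustrate the dirty-flag mechanism described above, here is a simplified, standalone sketch of the pattern. This is not Minecraft's SavedData class or Lootr's code; the class and method names are illustrative only.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Simplified stand-in for the saved-data dirty-flag pattern: an entry is only
 * written back to disk during a world save if something marked it dirty since
 * the last save.
 */
public abstract class DirtyTrackedData {
    private boolean dirty;

    /** Called by game logic whenever the contents actually change. */
    public void setDirty() {
        this.dirty = true;
    }

    /** Serializes the current state; the expensive part should be the disk write, not this. */
    protected abstract byte[] serialize();

    /** Invoked for every loaded entry during a world save; clean entries are skipped entirely. */
    public void saveIfDirty(Path file) throws IOException {
        if (!dirty) {
            return; // nothing changed since the last save, so no I/O at all
        }
        Files.write(file, serialize());
        dirty = false;
    }
}
```

If something were calling the equivalent of setDirty() without a real change, every loaded entry would be rewritten on every world save, which would match the kind of save-time spike shown in the profile.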
Every time the world gets saved, once it gets to saving the Lootr data it causes such a lag spike that players get timed out.

https://spark.lucko.me/LWWOywafuo