Before going into TinyLFU, I want to clarify the admission policies you mentioned in your question.
As for using ML: you can extend allocator/NvmAdmissionPolicy.h and override the accept function. In accept, you can extract features from the item and call the ML model to produce a binary admit/reject decision. We don't have code to share since the ML model is proprietary, but let me know if you have any questions.
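For illustration, here is a minimal sketch of what such a policy could look like. The class `MlAdmissionPolicy`, the `MyModel` wrapper, the feature choices, and the 0.5 threshold are all made-up placeholders, and the exact virtual to override (`acceptImpl` vs. `accept`) can differ between CacheLib versions, so check allocator/NvmAdmissionPolicy.h in your checkout:

```cpp
#include <vector>

#include <folly/Range.h>

#include "cachelib/allocator/NvmAdmissionPolicy.h"

// Hypothetical model wrapper; swap in your real inference library.
struct MyModel {
  float predict(const std::vector<float>& features) const {
    // Stand-in logic: favor admitting smaller objects.
    return features.size() > 1 && features[1] < 4096.0f ? 1.0f : 0.0f;
  }
};

template <typename CacheT>
class MlAdmissionPolicy
    : public facebook::cachelib::NvmAdmissionPolicy<CacheT> {
 public:
  using Item = typename CacheT::Item;
  using ChainedItemIter = typename CacheT::ChainedItemIter;

 protected:
  // Called for each item on its way to the NVM (tier-2) cache.
  bool acceptImpl(const Item& item,
                  folly::Range<ChainedItemIter> /*chainedItems*/) override {
    // Featurize the item, score it, admit iff the score clears the threshold.
    return model_.predict(extractFeatures(item)) > threshold_;
  }

 private:
  // Placeholder featurization: key length and object size. A real policy
  // would likely use richer signals (access history, TTL, ...).
  static std::vector<float> extractFeatures(const Item& item) {
    return {static_cast<float>(item.getKey().size()),
            static_cast<float>(item.getSize())};
  }

  MyModel model_;
  float threshold_{0.5f};
};
```

You would then hand an instance to the cache when you set it up, e.g. via the allocator config's setNvmCacheAdmissionPolicy (verify the exact setter name against your CacheLib version).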
-
Hello,
I am working on tier-2 cache admission for my research, and I see that CacheLib currently supports the following types of tier-2 admission policies: random, dynamic-random, and reject-first.
I also saw that TinyLFU, which is a useful cache admission policy, is not usable for the NVM cache. Is that because the NVM cache does not evict a page every time it admits one, and does replacement at region granularity rather than page granularity?
I want to evaluate different workloads using the admission policies that CacheLib currently supports, and to implement an ML cache admission policy that we can compare against those baselines (random, dynamic-random, reject-first).
Can anybody point me to resources on how to integrate ML models into CacheLib for use as a tier-2 admission policy?