[linux-lvm] Caching policy in machine learning context
zdenek.kabelac at gmail.com
Mon Feb 13 14:33:53 UTC 2017
On 13.2.2017 at 15:19, Jonas Degrave wrote:
> I am on kernel version 4.4.0-62-generic. I cannot upgrade to kernel 4.9, as it
> did not play nice with
> CUDA-drivers: https://devtalk.nvidia.com/default/topic/974733/nvidia-linux-driver-367-57-and-up-do-not-install-on-kernel-4-9-0-rc2-and-higher/
> Yes, I understand the cache needs repeated usage of blocks, but my question
> is basically: how many? And can I lower that number?
> In our use case, a user basically reads a certain group of 100GB of data
> completely, about 100 times. Then another user logs in and reads a different
> group of data about 100 times. But after a couple of such users, I observe
> that only 20GB in total has been promoted to the cache, even though the
> cache is 450GB and could easily fit all the data one user would need.
> So, I come to the conclusion that I need a more aggressive policy.
> I now have a reported hit rate of 19.0%, even though there is so little data
> on the volume that 73% of it would fit in the cache. I could probably solve
> this issue by making the caching policy more aggressive, and I am looking
> for a way to do that.
There are two 'knobs'. One is 'sequential_threshold' - the cache tries to
avoid promoting 'long' sequential reads into the cache, so if you do 100G
reads, these likely meet that criterion and are skipped from promotion
(and I think this one is not configurable for smq).
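With the older 'mq' policy the threshold is exposed as a tunable, so as a
rough sketch (the value is only an illustration, given in 512-byte sectors,
and it assumes the same vg/cachedlv name used below) you could try:

lvchange --cachesettings 'sequential_threshold=2048' vg/cachedlv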
The other is 'migration_threshold', which limits the bandwidth used for
migrating data to and from the cache device. You can try changing its value:
lvchange --cachesettings migration_threshold=10000000 vg/cachedlv
(check with dmsetup status)
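To check whether a setting took effect and how the cache behaves, you can
read the kernel status line or the lvs cache fields (the dm device name is
a guess based on the vg/cachedlv example above, and the extra lvs fields
need a reasonably recent lvm2):

dmsetup status vg-cachedlv
lvs -o+cache_policy,cache_settings,cache_read_hits,cache_read_misses vg/cachedlv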
Not sure, though, how much of this is configurable with the smq cache policy.
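If smq turns out not to accept such settings, one thing you could experiment
with - purely a sketch, not something I have verified for your setup - is
switching the LV to the older 'mq' policy, which does expose promotion
tunables; lowering the promote adjustments makes promotion more aggressive:

lvchange --cachepolicy mq vg/cachedlv
lvchange --cachesettings 'read_promote_adjustment=1 write_promote_adjustment=2' vg/cachedlv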