<div dir="ltr">Hi,<div><br></div><div>We are a group of scientists who work on reasonably sized datasets (10-100 GB). Because we had trouble managing our SSDs (everyone likes to have their data on the SSD), I set up a caching system in which the 500 GB SSD caches the 4 TB HDD. This way, everybody has their data virtually on the SSD, and only the first pass through a dataset is slow; afterwards the data is cached and reads are fast.</div><div><br></div><div>I used lvm-cache for this. Yet the (only) smq policy seems very reluctant to promote data to the cache, whereas what we need is for data to be promoted essentially on the first read: if someone is working on a dataset on this machine, they will most likely go over it a couple of hundred times in the following hours.</div><div><br></div><div>Right now, after a week of testing lvm-cache with the smq policy, it looks like this:</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">jdgrave@kat:~$ sudo ./lvmstats <br>start 0<br>end 7516192768<br>segment_type cache<br>md_block_size 8<br>md_utilization 14353/1179648<br>cache_block_size 128<br>cache_utilization 7208960/7208960<br>read_hits 19954892<br>read_misses 84623959<br>read_hit_ratio 19.08%<br>write_hits 672621<br>write_misses 7336700<br>write_hit_ratio 8.40%<br>demotions 151757<br>promotions 151757<br>dirty 0<br>features 1</blockquote><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> jdgrave@kat:~$ sudo ./lvmcache-statistics.sh <br>-------------------------------------------------------------------------<br>LVM [2.02.133(2)] cache report of found device /dev/VG/lv<br>-------------------------------------------------------------------------<br>- 
Cache Usage: 100.0% - Metadata Usage: 1.2%<br>- Read Hit Rate: 19.0% - Write Hit Rate: 8.3%<br>- Demotions/Promotions/Dirty: 151757/151757/0<br>- Feature arguments in use: writeback <br>- Core arguments in use : migration_threshold 2048 smq 0 <br> - Cache Policy: stochastic multiqueue (smq)<br>- Cache Metadata Mode: rw<br>- MetaData Operation Health: ok</blockquote><div><br></div><div>The number of promotions has been very low, even though the read hit rate is also poor. This is with a 450 GB cache and currently only 614 GB of data on the cached device. A read hit rate below 20%, when even caching a random subset of the data would have achieved about 73% (450 GB / 614 GB), is not what I had hoped for.</div><div><br></div><div>Is there a way to make the caching much more aggressive? Are there settings I can tweak?</div><div><br clear="all"><div><div class="gmail_signature"><div dir="ltr"><div><div dir="ltr">Yours sincerely,<br><br>Jonas<br></div></div></div></div></div>
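P.S. In case it is relevant, here is roughly how the cache was created, plus the only promotion-related knob I have found so far. Device and VG names are placeholders, and I have not verified that raising migration_threshold actually makes smq promote more aggressively:

```shell
# Sketch of the setup (placeholder names: VG, /dev/sdb = SSD, lv = cached LV).
# Create a cache pool on the SSD and attach it to the existing LV:
lvcreate --type cache-pool -L 450G -n cpool VG /dev/sdb
lvconvert --type cache --cachepool VG/cpool VG/lv

# migration_threshold (in 512-byte sectors) caps how much data dm-cache
# will migrate; the default of 2048 (1 MiB) might be throttling promotions.
# Raising it, for example:
lvchange --cachesettings 'migration_threshold=16384' VG/lv

# Check the active policy and settings:
lvs -o +cache_policy,cache_settings VG/lv
```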
</div></div>