[dm-devel] Severe performance degradation for dm-cache mq since c86c3070

Andrey Korolyov andrey at xdel.ru
Sat Sep 12 22:15:06 UTC 2015


On Fri, Sep 11, 2015 at 11:32 PM, Andrey Korolyov <andrey at xdel.ru> wrote:
>> This was running a smaller test than yours, with only 2G per thread.
>
>> Have you tried playing with the migration_threshold tunable?  With
>> random IO such as this we really want the cache to populate
>> initially but then do no churning.  Could you give me the status
>> line for the caches once you've run the tests, please?  This would
>> tell me how many blocks are being demoted.
>
> Please CC me always, as I am not subscribed to dm-devel.
>
> Short runs tend to show better results on smq, but real-world
> applications mostly have long-running workloads with a sustained
> cache usage pattern, where the old algorithm shows the best results.
> I'll give the tunables a try and post the results separately. Please
> take a look at the attached results - at a given cache size smq
> outperforms all the other algorithms, but both the new mq and smq
> perform poorly once the cache fills up and starts issuing demotions.
> Status outputs are given right after test completion. The hardware
> platform was changed for this test, so absolute values can differ
> from the ones in the original message.
>
> The r/w ratio, block size and random pattern were selected to
> represent a typical SDS workload on top of the cache but, as I
> pointed out earlier, it is a rather inaccurate simulation - the hit
> curve is impossible to reproduce with fio.
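
For the record, an illustrative fio invocation of this general shape
(the exact ratio, block size and queue depth used in the tests are
not reproduced here, so treat these values as placeholders):

  # placeholder device and parameters - not the exact test job
  fio --name=sds-sim --filename=/dev/mapper/my_cache \
      --direct=1 --ioengine=libaio --iodepth=32 \
      --rw=randrw --rwmixread=70 --bs=4k \
      --runtime=600 --time_based --numjobs=4 --group_reporting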

Replying to myself - increasing migration_threshold does not make the
situation any better, and reducing it gives a worse result than the
default value. As far as I can see, the 'new' mq suffers from
processing promoted blocks immediately on update, whereas the 'old'
one delays their processing (not in a very good manner either, since
a large plateau of blocks which have every right to be demoted stays
in the front cache in this case).
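
For anyone who wants to repeat the check, a minimal sketch of reading
the counters and adjusting the tunable (the device name and the
threshold value here are illustrative):

  # The demotions and promotions counters come right after the four
  # read/write hit/miss counters in the cache status line.
  dmsetup status my_cache

  # Set the migration throttle; the value is in 512-byte sectors
  # (204800 sectors = 100 MiB of in-flight migration).
  dmsetup message my_cache 0 migration_threshold 204800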



