[dm-devel] Significantly dropped dm-cache performance in 4.13 compared to 4.11

Stefan Ring stefanrin at gmail.com
Mon Nov 13 17:31:14 UTC 2017


On Thu, Nov 9, 2017 at 4:15 PM, Stefan Ring <stefanrin at gmail.com> wrote:
> On Tue, Nov 7, 2017 at 3:41 PM, Joe Thornber <thornber at redhat.com> wrote:
>> On Fri, Nov 03, 2017 at 07:50:23PM +0100, Stefan Ring wrote:
>>> It strikes me as odd that the amount read from the spinning disk is
>>> actually more than what comes out of the combined device in the end.
>>
>> This suggests dm-cache is trying to promote way too much.
>> I'll try to reproduce the issue; your setup sounds pretty straightforward.
>
> I think it's actually the most straightforward setup you can get ;).
>
> I've also tested kernel 4.12 in the meantime, which behaves just like
> 4.13. So the difference in behavior seems to have been introduced
> somewhere between 4.11 and 4.12.
>
> I've also done a plain dd from the dm-cache disk to /dev/null a few
> times, which wrote enormous amounts of data to the SSD. My poor SSD
> has received as many writes during the last week as it had to endure
> during the entire previous year.
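
For reference, the dd test was essentially the following. Here
/dev/mapper/cached stands in for my actual dm-cache device, sdb for the
SSD, and the sectors-written counter is field 10 of /proc/diskstats:

    # linear read of the whole combined device
    dd if=/dev/mapper/cached of=/dev/null bs=1M status=progress

    # sample before and after the read: total sectors written to the SSD
    awk '$3 == "sdb" { print $10 }' /proc/diskstats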

Do you think it would make a difference if I removed and recreated the cache?
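
If I do, the steps would presumably look something like this (assuming an
LVM front end and made-up volume names; a raw dmsetup stack would need the
equivalent table reload instead):

    # detach and delete the cache pool, flushing dirty blocks to the origin
    lvconvert --uncache vg/origin

    # recreate the pool on the SSD and attach it again
    lvcreate --type cache-pool -L 100G -n cpool vg /dev/sdb
    lvconvert --type cache --cachepool vg/cpool vg/origin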

I don't want to fry my SSD any further. I've just copied several large
files into the dm-cached ZFS dataset, and while reading them back
immediately afterwards, the SSD again started writing at a crazy rate.
My understanding is that linear reads should rarely be promoted to the
cache device, but that is absolutely not what I'm experiencing.
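
The promotion counters can at least be watched directly: the cache
target's status line (see Documentation/device-mapper/cache.txt) reports
read hits/misses, write hits/misses, demotions and promotions. Here
"cached" is a placeholder for the device name:

    watch -n1 'dmsetup status cached'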



