[linux-lvm] Performance penalty for 4k requests on thin provisioned volume

Dale Stephenson dalestephenson at mac.com
Thu Sep 14 14:32:43 UTC 2017


> On Sep 14, 2017, at 7:13 AM, Zdenek Kabelac <zkabelac at redhat.com> wrote:
> 
> Dne 14.9.2017 v 12:57 Gionatan Danti napsal(a):
>> On 14/09/2017 11:37, Zdenek Kabelac wrote:
>>> Sorry my typo here - is NOT ;)
>>> 
>>> 
>>> Zdenek
>> Hi Zdenek,
>> as the only variable is the LVM volume type (fat/thick vs thin), why is the thin volume slower than the thick one?
>> I mean: all other things being equal, what is holding back the thin volume?
> 
Gionatan has hit on the heart of my concern.  I ordinarily expect a minimal performance hit from remapping, and that’s clearly the case with the thick volume.  But it’s not the case with thin, even though I’ve already fully provisioned it.  Why is thin so much slower, and what can I do to make it faster?
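(For clarity, by “4k requests” I mean a direct-I/O random-write job of roughly the following shape; the device name and job parameters here are illustrative placeholders rather than my exact test:

  # Illustrative 4k random-write job against the volume (destroys data on the target)
  fio --name=4kwrite --filename=/dev/vg/thinvol --rw=randwrite --bs=4k \
      --ioengine=libaio --direct=1 --iodepth=32 --numjobs=1 \
      --runtime=60 --time_based

The same job against the thick volume and the thin volume is where the gap shows up.)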

> So few more question:
> 
> What is '/dev/sdb4'  ? - is it also some fast SSD ?
> 
Correct, it’s an SSD identical to the ones used in the array.

> - just checking to be sure your metadata device is not placed on a rotational storage device…
> 
It is not in this case.
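(For anyone following along, the placement can be confirmed with something like the following; “vg” is a placeholder VG name:

  # Show which physical devices back the pool's data and metadata sub-LVs
  lvs -a -o lv_name,devices vg
  # The [pool_tmeta] line should list the SSD (/dev/sdb4 here), not a rotational device.

)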

> What is your thin-pool chunk size - is it 64K ?

In this case it is 512k, because I believed that to be the optimal chunk size for a 512k RAID stripe.
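(For reference, the chunk size can be read back, or set explicitly at pool creation, along these lines; the names and sizes are placeholders:

  # Read the pool's current chunk size
  lvs -o lv_name,chunk_size vg/pool
  # Create a thin pool with an explicit 512k chunk size
  lvcreate --type thin-pool -L 100G --chunksize 512k -n pool vg

)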

> - if your raise thin-pool chunk size up - is it getting any better ?
> 

I have only tested direct to device at 64k and 512k, and 512k performs better.  It is not obvious to me why 512k *should* perform better when all the requests are only 4k, except that a given amount of metadata would obviously map to a larger provisioned size.  (That is, assuming no contiguous-chunk indicators are used; in this particular case I would expect the thin provisioning to be completely contiguous.)
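(The metadata-size intuition can be sanity-checked with thin_metadata_size from thin-provisioning-tools; the pool size and thin count below are placeholders, not my actual configuration:

  # Estimate metadata needed for a 1 TiB pool with one thin LV, at two chunk sizes
  thin_metadata_size --block-size=64k  --pool-size=1t --max-thins=1 --unit=m
  thin_metadata_size --block-size=512k --pool-size=1t --max-thins=1 --unit=m

A larger chunk size means fewer mappings and a smaller metadata working set, though that alone doesn’t explain the gap against the thick volume.)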

Thank you for your help.
Dale Stephenson
