[lvm-devel] thin vol write performance variance

Lakshmi Narasimhan Sundararajan lsundararajan at purestorage.com
Wed Dec 8 17:08:48 UTC 2021


Hi Zdenek,
Thanks for the reply.
I have confirmed that zeroing of thin chunks is disabled.
I have also followed up on the thread with additional details I collected
on the same write performance issue.
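
For reference, the pool-level zeroing flag can be checked along these
lines (the VG/pool names below are placeholders, not the actual names
from my setup):

  # report the zeroing flag ('zero' field) of the thin pool
  lvs -o lv_name,zero vg00/thinpool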

The summary is that, for buffered IO, far more IOs appear to be queued
than the configured limit (queue/nr_requests) allows.
The thin dm device is not honouring this limit and has so much excess IO
in flight that any sync IO eventually stalls for a very long time.
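
For reference, the kind of check that shows this mismatch looks roughly
like the following (the dm minor number is a placeholder; the thin
volume's dm-N name can be found with 'dmsetup ls' or 'lsblk'):

  # configured request limit on the thin dm device
  cat /sys/block/dm-5/queue/nr_requests
  # read/write IOs currently in flight on the same device
  cat /sys/block/dm-5/inflight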

The details are in the thread. Can you confirm whether this is a known
issue, and what workaround you would suggest?
Failing that, could you point me to possible areas to explore?

Regards

On Mon, Dec 6, 2021 at 7:27 PM Zdenek Kabelac <zdenek.kabelac at gmail.com>
wrote:

> On 06. 12. 21 at 7:11 Lakshmi Narasimhan Sundararajan wrote:
> > Bumping this thread; any inputs would be appreciated.
> >
> >>>>> Do you measure writes while provisioning thin chunks or on an
> >>>>> already provisioned device?
> >>>>>
> >>>>
> >>>> Hi Zdenek,
> >>>> These are traditional HDDs. Both the thin pool data/metadata reside on
> >>>> the same set of drive(s).
> >>>> I understand where you are going with this; I will look further into
> >>>> defining the hardware/disk setup before I bring it to your attention.
> >>>>
> >>>> This run was not on an already provisioned device. I do see improved
> >>>> performance of the same volume after the first write.
> >>>> I understand this perf gain to come from avoiding that overhead in
> >>>> the subsequent run, where no mappings need to be established.
> >>>>
> >>>> But you mentioned zeroing of provisioned blocks as an issue.
> >>>> 1/ For lvcreate, the man page reports that -Z only controls the
> >>>> first 4K block, and also implies it is a MUST, otherwise the fs may
> >>>> hang. So we are using it. Are you saying this controls zeroing of
> >>>> each chunk that's mapped to the thin volume?
>
> Yes - for 'lvcreate --type thin-pool', the option -Z controls 'zeroing' of
> the thin-pool's thin volumes.
>
> The bigger the chunks are, the bigger the impact of zeroing will be.
>
>
> >>>>
> >>>> 2/ As for zeroing all the data chunks mapped to the thin volume,
> >>>> the only reference I could find is thin_pool_zero in lvm.conf,
> >>>> which is enabled by default. So are you suggesting I disable this?
>
> If you don't need it, disable it. It is somewhat more secure to have it
> enabled, but if you use a filesystem like ext4 on top, zeroing doesn't
> help, as a user of the filesystem cannot read unwritten data. However,
> if you read the device as root, you might be able to read 'stale' data
> from the unwritten parts of provisioned chunks.
>
>
> >>>>
> >>>> Please confirm the above items. I will come back with more precise
> >>>> information on the details you had requested.
> >>>>
>
>
> As a side note, the metadata device should preferably sit on a different
> spindle (SSD, NVMe, ...), as it is a high-bandwidth device and might
> frequently collide with your _tdata volume writes.
>
> Regards
>
> Zdenek
>
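
For reference, and following the two points quoted above, the
corresponding commands look roughly like this (all VG/LV/device names
below are placeholders):

  # disable zeroing of newly provisioned chunks at pool creation time
  lvcreate --type thin-pool -Zn -L 100G -n thinpool vg00
  # or turn it off on an existing pool; it only affects chunks
  # provisioned after the change
  lvchange -Zn vg00/thinpool

  # move only the pool's hidden metadata sub-LV from the HDD PV to a
  # faster PV, e.g. an NVMe device
  pvmove -n thinpool_tmeta /dev/sdb /dev/nvme0n1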

