[lvm-devel] thin vol write performance variance
Zdenek Kabelac
zdenek.kabelac at gmail.com
Mon Dec 6 13:57:24 UTC 2021
On 06. 12. 21 at 7:11, Lakshmi Narasimhan Sundararajan wrote:
> Bumping this thread, any inputs would be appreciated.
>
>>>>> Do you measure writes while provisioning thin chunks or on already provisioned
>>>>> device?
>>>>>
>>>>
>>>> Hi Zdenek,
>>>> These are traditional HDDs. Both the thin pool data/metadata reside on
>>>> the same set of drive(s).
>>>> I understand where you are going with this; I will look further into
>>>> defining the hardware/disk setup before I bring it to your attention.
>>>>
>>>> This run was not on an already provisioned device. I do see improved
>>>> performance on the same volume after the first write.
>>>> I understand this perf gain to be due to the overhead that is avoided
>>>> during subsequent runs, where no mappings need to be established.
>>>>
>>>> But, you mentioned zeroing of provisioned blocks as an issue.
>>>> 1/ The lvcreate man page says -Z controls zeroing of only the first
>>>> 4K block, and also implies this is a MUST, otherwise the fs may hang.
>>>> So we are using this. Are you saying this option controls zeroing of
>>>> each chunk that's mapped to the thin volume?
Yes - for 'lvcreate --type thin-pool', the option -Z controls 'zeroing' of the
thin-pool's thin volumes, i.e. zeroing of each newly provisioned chunk.
The bigger the chunks are, the bigger the impact of zeroing will be.
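As a rough sketch (VG/LV names, sizes and chunk size below are only
placeholders), creating a pool with chunk zeroing disabled and checking the
flag could look like:

  # create a thin-pool with zeroing of newly provisioned chunks disabled
  lvcreate --type thin-pool -L 100G --chunksize 256K -Zn -n pool vg
  # verify the zeroing flag on the pool
  lvs -o name,zero vg/pool
  # thin volume on top of the pool
  lvcreate --type thin -V 1T --thinpool pool -n thinvol vg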
>>>>
>>>> 2/ On the other point, about zeroing all the data chunks mapped to the
>>>> thin volume: the only reference I could find is in lvm.conf, under
>>>> thin_pool_zero, which is enabled by default.
>>>> So are you suggesting I disable this?
If you don't need it - disable it. It is somewhat more secure to have it
enabled - but if you use a filesystem like ext4 on top, zeroing doesn't help
much, as a user of the filesystem cannot read unwritten data anyway. However,
if you read the device directly as root, you might be able to read 'stale'
data from the unwritten parts of provisioned chunks.
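For illustration (the vg/pool name is a placeholder), you could disable it
either globally in lvm.conf or per pool:

  # lvm.conf - default for newly created thin pools
  allocation {
      thin_pool_zero = 0
  }

  # or switch zeroing off on an existing pool
  lvchange -Zn vg/pool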
>>>>
>>>> Please confirm the above items. I will come back with more precise
>>>> information on the details you had requested.
>>>>
As a side note - the metadata device should preferably sit on a different
spindle (SSD, NVMe, ...), as this is a high-bandwidth device and its I/O might
frequently collide with your _tdata volume writes.
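One way to arrange that (device paths and sizes here are purely illustrative)
is to create the data and metadata LVs on specific PVs and then join them:

  # data LV on the HDD, metadata LV on the fast device
  lvcreate -n pool -L 1T vg /dev/sdb
  lvcreate -n pool_meta -L 4G vg /dev/nvme0n1
  # combine them into a thin-pool (pool_meta becomes vg/pool_tmeta)
  lvconvert --type thin-pool --poolmetadata vg/pool_meta vg/pool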
Regards
Zdenek