[linux-lvm] Higher than expected metadata usage?
zkabelac at redhat.com
Tue Mar 27 12:52:25 UTC 2018
On 27.3.2018 at 13:05, Gionatan Danti wrote:
> On 27/03/2018 12:39, Zdenek Kabelac wrote:
>> And last but not least comment - when you pointed out 4MB extent usage -
>> it's relatively huge chunk - and if the 'fstrim' wants to succeed - those
>> 4MB blocks fitting thin-pool chunks needs to be fully released. >
>> So i.e. if there are some 'sparse' filesystem metadata blocks places - they
>> may prevent TRIM to successeed - so while your filesystem may have a lot of
>> free space for its data - the actually amount if physically trimmed space
>> can be much much smaller.
>> So beware if the 4MB chunk-size for a thin-pool is good fit here....
>> The smaller the chunk is - the better change of TRIM there is...
> Sure, I understand that. Anyway, please note that 4MB chunk size was
> *automatically* chosen by the system during pool creation. It seems to me that
> the default is to constrain the metadata volume to be < 128 MB, right?
Yes - by default lvm2 'targets' to fit the metadata into this 128MB size.
Obviously there is nothing like 'one size fits all' - so it's really up to the
user to think about the use-case and pick better parameters than the defaults.
The 128MB size is picked so that the metadata easily fits in RAM.
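To illustrate how a 128MB metadata target leads to a 4MB chunk size, here is a
rough back-of-the-envelope sketch. The 64-bytes-per-chunk mapping cost and the
64KiB chunk alignment are my own simplifying assumptions for illustration, not
exact lvm2 internals (the real estimate comes from tools like
thin_metadata_size):

```python
MIB = 2**20
TIB = 2**40

BYTES_PER_MAPPING = 64    # assumed rough per-chunk metadata cost, not an lvm2 constant
CHUNK_ALIGN = 64 * 1024   # thin-pool chunk sizes are multiples of 64KiB

def min_chunk_size(pool_bytes, metadata_bytes=128 * MIB):
    """Smallest chunk size that lets metadata_bytes address the whole pool."""
    max_chunks = metadata_bytes // BYTES_PER_MAPPING
    chunk = -(-pool_bytes // max_chunks)            # ceiling division
    return -(-chunk // CHUNK_ALIGN) * CHUNK_ALIGN   # round up to 64KiB multiple

# Under these assumptions, an 8TiB pool with the default 128MB
# metadata target ends up with 4MiB chunks:
print(min_chunk_size(8 * TIB) // MIB)  # -> 4
```

This is consistent with the 4MB chunk size being picked automatically on a
multi-terabyte pool; a smaller pool would have received smaller chunks.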
>> For heavily fragmented XFS even 64K chunks might be a challenge....
> True, but chunk size *always* is a performance/efficiency tradeoff. Making a
> 64K chunk-sized volume will end with even more fragmentation for the
> underlying disk subsystem. Obviously, if many snapshots are expected, a small
> chunk size is the right choice (CoW filesystems such as BTRFS and ZFS face
> similar problems, by the way).
Yep - the smaller the chunk is, the smaller the 'max' size of data device that
can be supported, since there is a finite number of chunks you can address from
the maximal metadata size, which is ~16GB and can't get any bigger.
The bigger the chunk is - the less sharing between snapshots happens, but it gets
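The same kind of rough arithmetic shows how the ~16GiB metadata ceiling caps
the addressable pool size per chunk size. Again, the 64-bytes-per-mapping
figure is an illustrative assumption of mine, not an exact lvm2 constant:

```python
GIB = 2**30
TIB = 2**40

BYTES_PER_MAPPING = 64    # assumed rough per-chunk metadata cost
MAX_METADATA = 16 * GIB   # thin-pool metadata hard limit (~16GiB)

def max_pool_size(chunk_size):
    """Largest data device addressable before metadata hits its ~16GiB cap."""
    max_chunks = MAX_METADATA // BYTES_PER_MAPPING
    return max_chunks * chunk_size

print(max_pool_size(64 * 1024) // TIB)     # 64KiB chunks -> 16 (TiB)
print(max_pool_size(4 * 2**20) // 2**50)   # 4MiB chunks  -> 1  (PiB)
```

So under these assumptions, 64K chunks top out around the tens-of-TiB range,
while 4MB chunks can address pools orders of magnitude larger - the flip side
of the TRIM and snapshot-sharing downsides discussed above.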