[linux-lvm] Running thin_trim before activating a thin pool

Zdenek Kabelac zdenek.kabelac at gmail.com
Sun Jan 30 17:56:43 UTC 2022


On 30. 01. 22 at 18:30, Demi Marie Obenour wrote:
> On Sun, Jan 30, 2022 at 12:18:32PM +0100, Zdenek Kabelac wrote:
>> On 30. 01. 22 at 2:20, Demi Marie Obenour wrote:
>>> On Sat, Jan 29, 2022 at 10:40:34PM +0100, Zdenek Kabelac wrote:
>>>> On 29. 01. 22 at 21:09, Demi Marie Obenour wrote:
>>>>> On Sat, Jan 29, 2022 at 08:42:21PM +0100, Zdenek Kabelac wrote:
>>>>>> On 29. 01. 22 at 19:52, Demi Marie Obenour wrote:

>> Discard of thins itself is AFAIC pretty fast - unless you have massively
>> sized thin devices with many GiB of metadata - obviously you cannot process
>> this amount of metadata in nanoseconds (and there are prepared kernel
>> patches to make it even faster)
> 
> Would you be willing and able to share those patches?

They always land in the upstream kernel once they have been validated and
tested (recent kernels already contain many speed enhancements).

> 
>> What is the problem is the speed of discard of physical devices.
>> You could actually try to feel difference with:
>> lvchange --discards passdown|nopassdown thinpool
> 
> In Qubes OS I believe we do need the discards to be passed down
> eventually, but I doubt it needs to be synchronous.  Being able to run
> the equivalent of `fstrim -av` periodically would be amazing.  I’m
> CC’ing Marek Marczykowski-Górecki (Qubes OS project lead) in case he
> has something to say.

You could easily run individual blkdiscards for your thin LVs in parallel...
For most modern drives, though, it's somewhat a waste of time...
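
A minimal sketch of what that might look like (the VG name 'vg', the
pool name 'thinpool' and the LV names are only examples - and note that
blkdiscard releases every block of the device, so only use it on thin
LVs whose contents you no longer need):

    # Check how the pool currently handles discards, and optionally
    # switch off pass-down to see how much of the cost comes from the
    # physical devices:
    lvs -o name,discards vg/thinpool
    lvchange --discards nopassdown vg/thinpool

    # Discard several no-longer-needed thin LVs in parallel:
    for lv in vg/old-vm-1 vg/old-vm-2 vg/old-vm-3; do
        blkdiscard "/dev/$lv" &
    done
    wait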

Such trimming tools should be used when they solve a real problem;
running them just for fun is only a waste of energy and performance...

> 
>> Also it's very important to keep metadata on fast storage device (SSD/NVMe)!
>> Keeping metadata on same hdd spindle as data is always going to feel slow
>> (in fact it's quite pointless to talk about performance and use hdd...)
> 
> That explains why I had such a horrible experience with my initial
> (split between NVMe and HDD) install.  I would not be surprised if some
> or all of the metadata volume wound up on the spinning disk.

With lvm2 the user can always 'pvmove' any LV to any desired PV.
There is not yet any 'smart' logic to do this automatically.
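
For example (the PV and LV names here are only placeholders), moving the
hidden metadata LV of a thin pool 'vg/thinpool' from an HDD PV to an
NVMe PV in the same VG could look roughly like this:

    # vgextend vg /dev/nvme0n1p2    # only if the NVMe PV is not in the VG yet
    pvmove -n thinpool_tmeta /dev/sdb1 /dev/nvme0n1p2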

>>> add support for efficient snapshots of data stored on a VDO volume, and
>>> to have multiple volumes on top of a single VDO volume.  Furthermore,
>>
>> We hope we will add some direct 'snapshot' support to VDO so users will not
>> need to combine both technologies together.
> 
> Does that include support for splitting a VDO volume into multiple,
> individually-snapshottable volumes, the way thin works?

Yes - that's the plan - to have multiple VDO LVs in a single VDOPool.

Regards

Zdenek



