[linux-lvm] Running thin_trim before activating a thin pool

Zdenek Kabelac zdenek.kabelac at gmail.com
Sun Jan 30 11:18:32 UTC 2022


Dne 30. 01. 22 v 2:20 Demi Marie Obenour napsal(a):
> On Sat, Jan 29, 2022 at 10:40:34PM +0100, Zdenek Kabelac wrote:
>> Dne 29. 01. 22 v 21:09 Demi Marie Obenour napsal(a):
>>> On Sat, Jan 29, 2022 at 08:42:21PM +0100, Zdenek Kabelac wrote:
>>>> Dne 29. 01. 22 v 19:52 Demi Marie Obenour napsal(a):
>>>>> Is it possible to configure LVM2 so that it runs thin_trim before it
>>>>> activates a thin pool?  Qubes OS currently runs blkdiscard on every thin
>>>>> volume before deleting it, which is slow and unreliable.  Would running
>>>>> thin_trim during system startup provide a better alternative?
>>>>
>>>> Hi
>>>>
>>>>
>>>> Nope there is currently no support from lvm2 side for this.
>>>> Feel free to open RFE.
>>>
>>> Done: https://bugzilla.redhat.com/show_bug.cgi?id=2048160
>>>
>>>
>>
>> Thanks
>>
>> Although your use-case Thinpool on top of VDO is not really a good plan and
>> there is a good reason behind why lvm2 does not support this device stack
>> directly (aka thin-pool data LV as VDO LV).
>> I'd say you are stepping on very very thin ice...
> 
> Thin pool on VDO is not my actual use-case.  The actual reason for the
> ticket is slow discards of thin devices that are about to be deleted;

Hi

Discard of thins itself is AFAIK pretty fast - unless you have massively sized 
thin devices with many GiB of metadata; obviously that amount of metadata 
cannot be processed in nanoseconds (and there are kernel patches in 
preparation to make it even faster).

The real problem is the speed of discard on the physical devices.
You could actually try to feel the difference with:
lvchange --discards passdown|nopassdown thinpool
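To make the comparison concrete, here is a hedged sketch (the VG/pool/LV names vg0, tpool and thinvol are examples, not from the thread):

```shell
# Stop passing discards down to the physical device, so only the
# thin-pool mappings are dropped (usually much faster):
lvchange --discards nopassdown vg0/tpool

# Time a discard of a thin volume with this setting:
time blkdiscard /dev/vg0/thinvol

# Re-enable passdown afterwards if you need the physical device
# to actually receive the TRIM:
lvchange --discards passdown vg0/tpool
```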

Also, it's very important to keep the metadata on a fast storage device (SSD/NVMe)!
Keeping metadata on the same HDD spindle as the data is always going to feel slow
(in fact it's quite pointless to talk about performance while using an HDD...)
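A hedged sketch of placing metadata on fast storage (device paths /dev/sdb, /dev/nvme0n1 and names vg0/tpool/tmeta are illustrative assumptions):

```shell
# Put both PVs in one VG, then pin data to the HDD and
# metadata to the NVMe PV by listing PVs explicitly:
vgcreate vg0 /dev/sdb /dev/nvme0n1
lvcreate -L 500G -n tpool vg0 /dev/sdb       # data LV on the HDD
lvcreate -L 2G   -n tmeta vg0 /dev/nvme0n1   # metadata LV on the NVMe

# Combine them into a thin pool:
lvconvert --type thin-pool --poolmetadata vg0/tmeta vg0/tpool
```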

> you can find more details in the linked GitHub issue.  That said, now I
> am curious why you state that dm-thin on top of dm-vdo (that is,
> userspace/filesystem/VM/etc ⇒ dm-thin data (*not* metadata) ⇒ dm-vdo ⇒
> hardware/dm-crypt/etc) is a bad idea.  It seems to be a decent way to

Out-of-space recoveries are ATM much harder than we would like.

So as long as the user can maintain free space in both VDO and the thin-pool, 
it's OK. Once the user runs out of space, recovery is a pretty hard task (and 
there is a reason we have support...)
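For keeping an eye on that free space, a hedged sketch (vg0/tpool is a hypothetical pool name; the lvm.conf values are examples):

```shell
# Watch pool fill levels so you never hit the hard-to-recover
# out-of-space state:
lvs -o lv_name,data_percent,metadata_percent vg0/tpool

# lvm2 can also auto-extend the pool; in lvm.conf (activation section):
#   thin_pool_autoextend_threshold = 80
#   thin_pool_autoextend_percent = 20
```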

> add support for efficient snapshots of data stored on a VDO volume, and
> to have multiple volumes on top of a single VDO volume.  Furthermore,

We hope to add some direct 'snapshot' support to VDO so users will not need to 
combine both technologies.

Thin is more oriented towards extreme speed.
VDO is more about 'compression & deduplication' - so space efficiency.

Combining the two tends to undermine the advantages of each.

> https://access.redhat.com/articles/2106521#vdo recommends exactly this
> use-case.  Or am I misunderstanding you?

There are many paths to Rome...
So as mentioned above - you need to pick performance or space efficiency.
And since you want to write your own thin volume managing software, I'm 
guessing you care a lot about performance (so do we - but within the given 
constraints that limit us to some level)...

>> Also I assume you have already checked performance of discard on VDO, but I
>> would not want to run this operation frequently on any larger volume...
> 
> I have never actually used VDO myself, although the documentation does
> warn about this.

It was purely related to the initial BZ description, which cares a lot about 
thin discard performance - and the following comment adds VDO discard into the 
same equation... :)

Regards

Zdenek




More information about the linux-lvm mailing list