[linux-lvm] exposing snapshot block device

Gionatan Danti g.danti at assyoma.it
Wed Oct 23 14:37:38 UTC 2019


On 23/10/19 14:59, Zdenek Kabelac wrote:
> Dne 23. 10. 19 v 13:08 Gionatan Danti napsal(a):
>> Talking about thin snapshot, an obvious performance optimization which 
>> seems to not be implemented is to skip reading source data when 
>> overwriting in larger-than-chunksize blocks.
> 
> Hi
> 
> There is no such optimization possible for old snapshots.
> You would need to write ONLY to snapshots.
> 
> As soon as you start to write to the origin, you have to read the
> original data from the origin and copy it to the COW storage; once that
> is finished, you can overwrite the origin data area with your incoming write I/O.
> 
> This is simply never going to work fast ;) - the fast way is thin-pool...
> 
> Old snapshots were designed for 'short' lived snapshots (so you can take
> a backup of volume which is not being modified underneath).
> 
> Any ideas for improving this old snapshot target will sooner or later 
> end up at thin-pool anyway :)  (we went down this road many
> years back in time...)
> 
> 
>> For example, consider a completely filled 64k chunk thin volume (with 
>> thinpool having ample free space). Snapshotting it and writing a 4k 
>> block on origin 
> 
> There is no support for snapshots of snapshots with old snaps...
> It would be extremely slow to use...
> 
>> However, my testing shows that source chunks are always read, even 
>> when completely overwritten.
>>
>> Am I missing something?
> 
> Yep - you would always need to jump to your 'snapshot' - so instead of
> keeping the 'origin' on  major:minor  - it would need to become a 'snapshot'...
> A seriously complex concept to work with - especially when there is a 
> thin-pool...
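
The copy-before-write sequence described above can be sketched as a toy
model. This is purely illustrative Python, not dm-snapshot kernel code; the
chunk size and the in-memory "devices" are assumptions for the sketch:

```python
# Toy model of the old-snapshot (dm-snapshot) copy-before-write path.
# The origin is a bytearray; the COW store is a dict of copied-out chunks.

CHUNK = 64 * 1024  # illustrative chunk size, an assumption for this sketch

def write_to_origin(origin: bytearray, cow: dict, offset: int, data: bytes) -> None:
    """Every write to the origin first copies the touched chunks to COW."""
    first = offset // CHUNK
    last = (offset + len(data) - 1) // CHUNK
    for chunk in range(first, last + 1):
        if chunk not in cow:                              # not yet copied out
            start = chunk * CHUNK
            cow[chunk] = bytes(origin[start:start + CHUNK])  # READ of origin
    origin[offset:offset + len(data)] = data              # only then overwrite

origin = bytearray(b"A" * (4 * CHUNK))
cow = {}
# Even a write covering a whole chunk still copies the old data out first,
# because the snapshot must keep the pre-write content.
write_to_origin(origin, cow, 0, b"B" * CHUNK)
```

This is why the old target cannot skip the read: the snapshot's correctness
depends on the original data landing in the COW store before the origin is
overwritten.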

Hi, I was speaking about *thin* snapshots here. Restating the example 
given above for clarity:

"For example, consider a completely filled 64k chunk thin volume (with 
thinpool having ample free space). Snapshotting it and writing a 4k 
block on origin will obviously cause a read of the original 64k chunk, 
an in-memory change of the 4k block and a write of the entire modified 
64k block to a new location. But writing, say, a 1 MB block should *not* 
cause the same read on source: after all, the read data will be 
immediately discarded, overwritten by the changed 1 MB block."
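
The expected optimization can be sketched as a toy model of breaking the
sharing of a single chunk. Again this is illustrative Python under assumed
semantics, not dm-thin code: a write that covers the whole chunk could
allocate and write a fresh chunk without reading the shared copy, while a
partial write genuinely needs the read/modify/write:

```python
CHUNK = 64 * 1024  # thin-pool chunk size from the example above

def write_shared_chunk(shared: bytes, offset_in_chunk: int, data: bytes):
    """Break sharing of one chunk after a snapshot.

    Returns (new_chunk_contents, did_read_shared_copy).
    """
    if offset_in_chunk == 0 and len(data) >= CHUNK:
        # Whole chunk overwritten: the old contents are irrelevant,
        # so the shared copy never needs to be read.
        return data[:CHUNK], False
    # Partial overwrite: read the shared chunk, patch it, write it back.
    buf = bytearray(shared)                              # READ of shared chunk
    buf[offset_in_chunk:offset_in_chunk + len(data)] = data
    return bytes(buf), True

shared = b"A" * CHUNK
_, read_needed = write_shared_chunk(shared, 0, b"B" * (1024 * 1024))  # 1 MB write
_, read_needed2 = write_shared_chunk(shared, 4096, b"C" * 4096)       # 4 KB write
```

In the first call the read is skipped; in the second it cannot be. The
observation below is that dm-thin appears to take the read path in both cases.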

I would expect such a large-block rewrite on a *thin* snapshot origin not 
to trigger a read/modify/write, but it really does.

Is this low-hanging fruit, or is there a more fundamental problem that 
prevents avoiding the read/modify/write in this case?

Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8



