[linux-lvm] exposing snapshot block device

Tomas Dalebjörk tomas.dalebjork at gmail.com
Wed Oct 23 11:24:31 UTC 2019


I have tested FusionIO together with old thick snapshots.
I created the thick snapshot on a separate, old traditional SATA drive, just
to check whether it could be used as a snapshot target for high-performance
disks like a FusionIO card.
For those who don't know FusionIO: these cards can handle 150,000-250,000
IOPS.

And to be honest, I couldn't bottleneck the SATA disk I used as the thick
snapshot target.
The reason is simple:
- thick snapshots use sequential writes

If I had been using thin snapshots, the writes would most likely have been
more randomized on disk, which would have required more spindles to cope
with the load.
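
For illustration, the setup I describe above would look roughly like this
(volume group, LV names, sizes and the SATA partition are just placeholders,
not my actual configuration):

  # old-style (thick) snapshot, with its COW area allocated on the slow SATA PV
  lvcreate -s -L 50G -n lv_fio_snap vg_fast/lv_fio /dev/sdb1

  # thin snapshot for comparison; it lives in the same thin pool as the
  # origin, so its writes go wherever the pool allocates new chunks
  lvcreate -s -n lv_thin_snap vg_fast/lv_thin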

Anyhow;
I am still eager to hear how to use an external device to import snapshots.
And when I say "import", I am not talking about a copy-back; I mean using
the snapshot as something to read data from.
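
Just to be concrete about the "read data from" part: with an ordinary local
snapshot this is simply (placeholder names again):

  lvchange -ay vg_fast/lv_fio_snap
  mount -o ro /dev/vg_fast/lv_fio_snap /mnt/restore

What I am asking about is doing the equivalent when the snapshot data lives
on an external device.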

Regards Tomas

On Wed, 23 Oct 2019 at 13:08, Gionatan Danti <g.danti at assyoma.it> wrote:

> On 23/10/19 12:46, Zdenek Kabelac wrote:
> > Just a few 'comments' - it's not really comparable - the efficiency of
> > thin-pool metadata outperforms the old snapshots in a BIG way (there is
> > no point talking about snapshots that take just a couple of MiB)
>
> Yes, this matches my experience.
>
> > There is also BIG difference about the usage of old snapshot origin and
> > snapshot.
> >
> > COW of the old snapshot effectively cuts performance in half if you
> > write to the origin.
>
> If used without a non-volatile RAID controller, 1/2 is generous - I
> measured performance as low as 1/5 (with a fat snapshot).
>
> Talking about thin snapshots, an obvious performance optimization which
> seems not to be implemented is to skip reading the source data when
> overwriting in larger-than-chunksize blocks.
>
> For example, consider a completely filled 64k chunk thin volume (with
> thinpool having ample free space). Snapshotting it and writing a 4k
> block on origin will obviously cause a read of the original 64k chunk,
> an in-memory change of the 4k block and a write of the entire modified
> 64k block to a new location. But writing, say, a 1 MB block should *not*
> cause the same read on source: after all, the read data will be
> immediately discarded, overwritten by the changed 1 MB block.
>
> However, my testing shows that source chunks are always read, even when
> completely overwritten.
>
> Am I missing something?
>
> --
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.danti at assyoma.it - info at assyoma.it
> GPG public key ID: FF5F32A8
>
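
Regarding the whole-chunk overwrite question above: one rough way to check
whether origin chunks are read during a full-chunk overwrite would be
something like the following, assuming a thin LV vg_fast/lv_thin whose pool
data device is /dev/sdc (all names are placeholders):

  # take a thin snapshot so that writes to the origin trigger COW
  lvcreate -s -n lv_thin_snap vg_fast/lv_thin

  # in another terminal, watch reads vs writes on the pool's data device
  iostat -x 1

  # overwrite far more than one chunk at a time
  dd if=/dev/zero of=/dev/vg_fast/lv_thin bs=1M count=1024 oflag=direct

If whole-chunk overwrites skipped the copy, the read rate on /dev/sdc should
stay near zero during the dd; as Gionatan reports, it does not.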

