[linux-lvm] Why is the performance of my lvmthin snapshot so poor

Gionatan Danti g.danti at assyoma.it
Thu Jun 16 19:50:52 UTC 2022


Il 2022-06-16 18:19 Demi Marie Obenour ha scritto:
> Also heavy fragmentation can make journal replay very slow, to the 
> point
> of taking days on spinning hard drives.  Dave Chinner explains this 
> here:
> https://lore.kernel.org/linux-xfs/20220509230918.GP1098723@dread.disaster.area/.

Thanks, the linked thread was very interesting.

> Also poor out-of-space handling and unbounded worst-case latency.

Very true.

> Is this still a problem on NVMe storage?  HDDs will not really be fast
> no matter what one does, at least unless there is a write-back cache
> that can convert random I/O to sequential I/O.  Even that only helps
> much if your working set fits in cache, or if your workload is
> write-mostly.

One of ZFS's key features is transforming random writes into sequential 
ones. With the right recordsize, and coupled with prefetch, compressed 
ARC and L2ARC, even an HDD pool can be surprisingly usable.
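As a rough sketch (the pool and dataset names are made up, and the right values depend entirely on the workload), the tunables above map to standard `zfs set` properties:

```shell
# Hypothetical HDD-backed dataset "tank/data".
# Larger records favor sequential HDD throughput on streaming workloads;
# compression lets the (compressed) ARC hold more logical data.
zfs set recordsize=128K tank/data
zfs set compression=lz4 tank/data
zfs set primarycache=all tank/data   # cache both data and metadata
```

These are ordinary dataset properties, so they can be changed per-dataset without rebuilding the pool; recordsize changes only affect newly written blocks.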

For NVMe pools you should use a much smaller recordsize to avoid 
read/write amplification, but not smaller than 16K, so as not to impair 
compression efficiency (unless you are storing mostly incompressible 
data). That said, for pure NVMe storage (no compression or other data 
transformations) I think XFS, possibly with direct I/O, is the fastest 
choice by a factor of two.
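For illustration (again with hypothetical dataset and mount names), the NVMe-side tuning and the direct I/O comparison could look like:

```shell
# Hypothetical NVMe-backed dataset: smaller records limit read/write
# amplification on small random I/O, while 16K still leaves compression
# enough data per block to be effective.
zfs set recordsize=16K tank/nvme-db
zfs set compression=lz4 tank/nvme-db

# On the XFS side, direct I/O bypasses the page cache entirely;
# a quick (and crude) way to exercise it is dd with oflag=direct:
dd if=/dev/zero of=/mnt/xfs/testfile bs=1M count=1024 oflag=direct
```

A real database would open its files with O_DIRECT itself; the dd line is just a simple way to see the raw direct-write path on an XFS mount.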

> It does not exist yet.  Joe Thornber would be the person to ask
> regarding any plans to create it.

OK - I was hoping I had missed something, but that is not the case.
Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8
