[linux-lvm] Looking ahead - tiering with LVM?

Gionatan Danti g.danti at assyoma.it
Wed Sep 9 19:44:22 UTC 2020


On 2020-09-09 20:47, John Stoffel wrote:
> This assumes you're tiering whole files, not at the per-block level
> though, right?

The tiered approach I developed and maintained in the past tiered whole 
files, yes. For any LVM-based tiering, we are speaking about block-level 
tiering (as LVM itself has no concept of files).

> Do you have numbers?  I'm using DM_CACHE on my home NAS server box,
> and it *does* seem to help, but only in certain cases.   I've got a
> 750gb home directory LV with an 80gb lv_cache writethrough cache
> setup.  So it's not great on write heavy loads, but it's good in read
> heavy ones, such as kernel compiles where it does make a difference.

How much space is available for tiering vs. caching varies with your 
setup. However, storage tiers are generally at least 5-10X apart in size 
(i.e., 1 TB of SSD for 10 TB of HDD). Hence my gut feeling that tiering 
would not be drastically better than lvm cache. But hey - I reserve 
the right to be totally wrong ;)
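
For reference, a setup like the one you describe can be reproduced 
roughly as follows. This is only a minimal sketch: the VG name "vg0", 
the LV name "home" and the device paths are hypothetical, and the 
--cachevol syntax needs a reasonably recent LVM.

  # Carve an 80G cache LV out of the SSD (hypothetical PV):
  lvcreate -L 80G -n home_cache vg0 /dev/nvme0n1p1

  # Attach it to the slow LV in writethrough mode:
  lvconvert --type cache --cachevol vg0/home_cache \
            --cachemode writethrough vg0/home

  # Later, check how the cache is being used:
  lvs -o+cache_total_blocks,cache_used_blocks,cache_dirty_blocks vg0/home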

> So it's not only the caching being per-file or per-block, but how the
> actual cache is done?  writeback is faster, but less reliable if you
> crash.  Writethrough is slower, but much more reliable.

A writeback cache surely is more prone to failure than a writethrough 
cache. The golden rule is that a writeback cache should live on a 
mirrored device (with device-level power-loss-protected write cache if 
sync write speed is important).
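
For instance, something along these lines (again a sketch with 
hypothetical VG/LV/device names; whether a raid1 LV is accepted as the 
cache volume depends on your LVM version):

  # Mirror the cache LV across two SSDs before trusting it with
  # writeback data:
  lvcreate --type raid1 -m 1 -L 80G -n home_cache vg0 \
           /dev/nvme0n1p1 /dev/nvme1n1p1
  lvconvert --type cache --cachevol vg0/home_cache \
            --cachemode writeback vg0/home

  # Or, on an already-cached LV, switch mode only after the cache
  # device has been made redundant:
  lvchange --cachemode writeback vg0/home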

But this is somewhat orthogonal to the original question: block-level 
tiering itself increases the chances of data loss (i.e., losing the SSD 
component will ruin the entire filesystem), so you should use a mirrored 
(or parity-protected) device for tiering as well.

Regards.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8



