[linux-lvm] when bringing dm-cache online, consumes all memory and reboots
g.danti at assyoma.it
Tue Mar 24 11:37:51 UTC 2020
Il 2020-03-24 10:43 Zdenek Kabelac ha scritto:
> By default we require migration threshold to be at least 8 chunks big.
> So with big chunks like 2MiB in size - gives you 16 MiB of required I/O.
> So if you e.g. read 4K from disk - it may cause I/O load of a 2MiB
> chunk block promotion into cache - so you can see the math here...
Hi Zdenek, I am not sure I follow your description of
migration_threshold. From the dm-cache kernel doc:
"Migrating data between the origin and cache device uses bandwidth.
The user can set a throttle to prevent more than a certain amount of
migration occurring at any one time. Currently we're not taking any
account of normal io traffic going to the devices. More work needs
doing here to avoid migrating during those peak io moments.
For the time being, a message "migration_threshold <#sectors>"
can be used to set the maximum number of sectors being migrated,
the default being 2048 sectors (1MB)."
Can you explain in more detail what migration_threshold really
accomplishes? Is it a "max bandwidth cap" setting, or something more?
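If it is just a throttle, it should at least be tunable at runtime. As a
sketch (not tested here), one could adjust it either through the raw
device-mapper message interface mentioned in the kernel doc quoted above,
or through LVM's --cachesettings; "cachedev" and "vg/lv" below are
hypothetical names:

```shell
# Raw dm-cache target: set migration_threshold to 32768 sectors (16 MiB),
# i.e. 8 chunks of 2 MiB, the minimum Zdenek describes.
# "cachedev" is a hypothetical device-mapper device name.
dmsetup message cachedev 0 migration_threshold 32768

# Same tuning via LVM for a hypothetical cached LV vg/lv:
lvchange --cachesettings 'migration_threshold=32768' vg/lv

# Inspect the cache target's current status line:
dmsetup status cachedev
```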
> If the main workload is to read whole device over & over again likely
> no caching will enhance your experience and you may simply need fast
From what I understand, the OP wants to cache filesystem metadata to
speed up rsync directory traversal. So a cache device should definitely
be useful; although dm-cache is "blind" with regard to data vs metadata,
the latter should be a good candidate for hotspot promotion.
For reference, I have a ZFS system used for exactly such a workload
(backup with rsnapshot, which uses rsync and hardlinks to create
deduplicated backups), and setting cache=metadata (rather than "all", i.e.
data and metadata) gives a very noticeable boost to rsync traversal.
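For the ZFS side, I assume the knob referred to above is the primarycache
dataset property (the ARC); the analogous property for an L2ARC cache
device is secondarycache. A sketch, with "tank/backup" as a hypothetical
dataset name:

```shell
# Keep only metadata in ARC for the backup dataset, so directory
# traversal stays hot and bulk file data does not evict it.
zfs set primarycache=metadata tank/backup

# If an L2ARC cache device is attached, the analogous property:
zfs set secondarycache=metadata tank/backup

# Verify the current settings:
zfs get primarycache,secondarycache tank/backup
```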
Assyoma S.r.l. - www.assyoma.it 
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8