[linux-lvm] vgchange -a memory consumption
daniel.stodden at citrix.com
Sat Jul 12 05:57:31 UTC 2008
I'm running lvm2-2.02.26.
Found that vgchange -a y/n generates remarkable memory consumption on
volume groups with larger numbers of small volumes.
Tried on a VG with 1024 LVs of 8MB each. Interestingly, the type of
operation does not seem to matter much. Notably, deactivating an already
unavailable volume group generates similarly high memory pressure as
actually deactivating active volumes does.
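
In case anyone wants to reproduce, something along these lines should do.
Disk and VG names are just placeholders, and GNU time's -v output is only
one convenient way to capture the peak RSS:

    # scratch PV/VG with 1024 small LVs
    pvcreate /dev/sdb
    vgcreate testvg /dev/sdb
    for i in $(seq 1 1024); do
        lvcreate -L 8M -n "lv$i" testvg
    done

    # peak memory shows up as "Maximum resident set size"
    /usr/bin/time -v vgchange -ay testvg
    /usr/bin/time -v vgchange -an testvg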
Looking at the code, I've only gained a partial understanding of where
exactly the memory is spent -- and for what. This much I believe I do
understand; maybe someone can enlighten me:
Iterating through all the volumes pushes the data segment up by about
750k -- per iteration. The memory allocated per volume never seems to be
released again. So, together with the memory locking performed in
lock_vol(), it's only a matter of installed RAM and volume count before
the OOM killer kicks in.
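
The growth is easy to watch from outside by sampling the data segment and
locked memory of the running command -- a rough sketch, assuming the
testvg setup above and a single vgchange process:

    vgchange -an testvg &
    pid=$!
    while kill -0 "$pid" 2>/dev/null; do
        # VmData grows per LV, VmLck reflects the locked pages
        grep -E 'VmData|VmLck' "/proc/$pid/status" 2>/dev/null || break
        sleep 1
    done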
_vg_read() seems to play a role. Apparently it is replayed twice for each
LV (once for the lock, then again for the unlock).
To a rather outside observer like me, the path taken to get there seems
strange. The LV is handed over as a UUID string to lock_vol(). In the
'-an' case, this is passed on to lv_deactivate, which will (re-)load both
the VG and LV metadata in order to get at the respective in-memory
structures.
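
A crude way to confirm the repeated metadata reads from the outside,
assuming strace is available and /dev/sdb is the PV from the example
above, is to count how often the device gets opened during a single
deactivation pass:

    strace -f -e trace=open -o /tmp/vgchange.trace vgchange -an testvg
    grep -c '/dev/sdb' /tmp/vgchange.trace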
So what I don't really get is: Why is that data reread? Especially the VG
metadata. Or am I missing something? Second: Why isn't that memory freed
after returning from the lock/unlock call?
But most importantly: Could this be fixed?