[PATCH] [linux-lvm] vgchange -a memory consumption

Daniel Stodden daniel.stodden at citrix.com
Wed Jul 16 16:48:12 UTC 2008


In the hope that somebody finds the time to comment, here is a patch for
the issue described in my original post. I'd just like to see the problem
resolved in future versions; suggestions are very welcome. A rough sketch
of the approach follows below.
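
For the archives, here is a minimal sketch of the idea, assuming only the
public libdevmapper pool calls (dm_pool_create/dm_pool_empty/
dm_pool_destroy). The structure and function names are abbreviated from
the real _activate_lvs_in_vg(), the list-iteration macro may differ
between versions, and error handling is omitted -- so this illustrates the
approach, not the attached patch itself:

/*
 * Sketch: give the activation loop its own scratch pool so that per-LV
 * metadata allocations made through cmd->mem do not accumulate for the
 * lifetime of the whole vgchange command.
 */
static int _activate_lvs_in_vg_sketch(struct cmd_context *cmd,
                                      struct volume_group *vg, int activate)
{
	struct lv_list *lvl;
	struct dm_pool *saved_mem = cmd->mem;
	struct dm_pool *scratch;
	int count = 0;

	if (!(scratch = dm_pool_create("lv_activation_scratch", 4 * 1024)))
		return 0;

	dm_list_iterate_items(lvl, &vg->lvs) {
		/* Point cmd->mem at the scratch pool for this one LV. */
		cmd->mem = scratch;

		if (activate ? activate_lv(cmd, lvl->lv)
			     : deactivate_lv(cmd, lvl->lv))
			count++;

		/* Restore the command pool and throw away everything the
		 * activation just allocated, before the next iteration. */
		cmd->mem = saved_mem;
		dm_pool_empty(scratch);
	}

	dm_pool_destroy(scratch);

	return count;
}

Emptying a single scratch pool per iteration (rather than creating and
destroying one per LV) keeps the pool's chunks around for reuse; either
variant should keep peak memory bounded by a single LV's metadata rather
than the whole VG's.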

Thanks.

Daniel


On Mon, 2008-07-14 at 23:19 -0700, Daniel Stodden wrote:
> Hey Alasdair,
> 
> thanks a lot for the prompt reply.
> 
> 
>  On Sat, 2008-07-12 at 17:51 +0100, Alasdair G Kergon wrote:
> > On Fri, Jul 11, 2008 at 10:57:31PM -0700, Daniel Stodden wrote:
> > > I'm running, lvm2-2.02.26.
> >  
> > Don't bother investigating that version - stuff got changed.
> > Update to the latest release (or CVS) and try again.
> > 
> > > Why is that data reread? 
> > 
> > Because the two parts of the code are designed to be independent.  - The
> > so-called "activation" code sits behind an API in a so-called "locking"
> > module.  There's a choice of locking modules, and some send the requests
> > around a cluster of machines - remote machines will only run the
> > activation code and manage the metadata independently.  We just pass
> > UUIDs through the cluster communication layer, never metadata itself.
> 
> Oooh - kay. I've only been looking at the _file..() operations. In the
> clustered version that makes much more sense.
> 
> > > Second: why isn't that memory freed after returning from
> > > activate_lv?
> >  
> > It's released after processing the whole command.  If there are cases
> > where too much is still being held while processing in the *current*
> > version of the code, then yes, you might be able to free parts of it
> > sooner.
> 
> I've been running on CVS today. The situation appears to have improved,
> but only slightly. Still way too much memory going down the drain.
> 
> BTW: Did CVS change the memlocking policy? I just noticed that I can run
> beyond physical RAM now. Is that a bug or a feature?
> 
> I had a very long look at the path down activate/deactivate() in general
> and the dm storage allocator in particular. If I nail a separate per-LV
> pool over the cmd_context in _activate_lvs_in_vg() and empty it once per
> cycle, things slow down a little [1], but the general problem vanishes. 
> 
> Now, overriding cmd->mem isn't exactly beautiful. Any better
> suggestions? I need this fixed. And soon. :}
> 
> Second is revisions: I suppose something like the above would also work
> as a patch against elderly source RPMs, such as the .26 I mentioned in
> my original post. Any tips on this? I'd consider upgrading, but I've
> seen your advice against that on Debian's Launchpad, at least regarding
> .38 and .39. Which is hip?
> 
> So far, thank you very much again.
> 
> Best,
> Daniel
> 
> [1] For a stack-like allocator, I think dm_pool_free() generates a
> rather scary number of individual brk()s while rewinding. But that's
> certainly not a functional issue, and I may, again, be mistaken. (A
> small illustration of the pattern I mean follows the quoted mail.)
> 
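
To illustrate footnote [1]: a minimal stand-alone example of the
stack-like behaviour of the dm_pool allocator, using only the public
libdevmapper calls (build with -ldevmapper; the sizes and the number of
allocations are arbitrary, chosen just to make the rewind visible):

#include <libdevmapper.h>

int main(void)
{
	struct dm_pool *mem;
	void *mark;
	int i;

	if (!(mem = dm_pool_create("rewind-example", 1024)))
		return 1;

	/* Remember an early allocation as a rewind point. */
	if (!(mark = dm_pool_alloc(mem, 64)))
		return 1;

	/* Pile a long run of objects on top of it. */
	for (i = 0; i < 10000; i++)
		dm_pool_alloc(mem, 512);

	/*
	 * dm_pool_free() releases 'mark' and everything allocated after
	 * it, i.e. it rewinds the whole run in one call; internally that
	 * means walking and releasing chunk after chunk, which is where
	 * the per-chunk free()/brk() traffic mentioned in [1] would come
	 * from.
	 */
	dm_pool_free(mem, mark);

	dm_pool_destroy(mem);
	return 0;
}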
-------------- next part --------------
A non-text attachment was scrubbed...
Name: lvm2-vgchangemem.diff
Type: text/x-patch
Size: 611 bytes
Desc: not available
URL: <http://listman.redhat.com/archives/linux-lvm/attachments/20080716/e075a9cc/attachment.bin>

