[linux-lvm] what creates the symlinks in /dev/<volgroup> ?
zkabelac at redhat.com
Fri Jun 24 11:00:03 UTC 2016
Dne 23.6.2016 v 20:02 Chris Friesen napsal(a):
> On 06/23/2016 11:21 AM, Zdenek Kabelac wrote:
>> Dne 23.6.2016 v 18:35 Chris Friesen napsal(a):
>>> [root at centos7 centos]# vgscan --mknodes
>>> Configuration setting "snapshot_autoextend_percent" invalid. It's not part
>>> of any section.
>>> Configuration setting "snapshot_autoextend_threshold" invalid. It's not part
>>> of any section.
>> fix your lvm.conf (uncomment sections)
>>> Reading all physical volumes. This may take a while...
>>> Found volume group "chris-volumes" using metadata type lvm2
>>> Found volume group "centos" using metadata type lvm2
>>> Found volume group "cinder-volumes" using metadata type lvm2
>>> The link /dev/chris-volumes/chris-volumes-pool should have been created by
>> Ok - there seems to be an internal bug in lvm2 - it incorrectly hints at
>> link creation for this case.
>> There should not be a /dev/vg/pool link - the device is correctly marked
>> for udev, but incorrectly for udev validation.
>> However, the bug is not all that important - the link merely points
>> to a 'wrapper' device - and eventually we will resolve the problem even without
>> this extra device in the table.
> The problem it causes for me is that when I run "vgchange -an
> chris-volumes", it leaves /dev/chris-volumes containing a broken symlink,
> because udev doesn't remove the symlink added by vgscan.
Yep - as said - under normal circumstances users should NOT run 'vgmknodes',
as the created links will not be known/visible to udev - so this behavior is
expected.
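Regarding the "not part of any section" warnings earlier in the thread: those two settings are only valid inside the activation section of lvm.conf, so placing them at top level (e.g. by uncommenting the option but not the enclosing section) triggers exactly that error. A minimal fragment for reference - the 70/20 values are illustrative, not a recommendation:

```
activation {
	# Auto-extend a snapshot once it reaches 70% usage,
	# growing it by 20% of its size each time.
	snapshot_autoextend_threshold = 70
	snapshot_autoextend_percent = 20
}
```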
> This causes the LVM OCF script in the "resource-agents" package to break,
> because it is using the existence of the /dev/vg directory as a proxy for
> whether the volume group is active (or really as you said earlier, whether
> there are active volumes within the volume group).
> I reported this as a bug to the "resource-agents" package developers, and they
> said that they can't actually call lvm commands in their "status" routines
> because there have been cases where clustered LVM hung when querying status,
> causing the OCF script to hang and monitoring to fail.
> Ultimately I'll see if I can work around it by not calling "vgscan --mknodes".
yes please, start with this one...
'vgmknodes' is really meant for recovering from some unique urgent problem - not
for execution from a script every hour...
But yes - lvm2 will need to fix link creation for a pool-in-use...
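Since the resource-agents maintainers avoid lvm commands in their status routine (clustered LVM queries have been seen to hang), one alternative is to read device-mapper names straight from sysfs. A hedged sketch - this is not what resource-agents actually does, and the function names are my own:

```shell
#!/bin/sh
# Sketch: test whether a VG has any active LVs without running any lvm
# command, by scanning device-mapper names exposed in sysfs.

# device-mapper doubles every '-' inside a VG (or LV) name, then joins VG
# and LV with a single '-':  VG "chris-volumes" + LV "lv0" becomes the dm
# name "chris--volumes-lv0".
dm_vg_prefix() {
	printf '%s-' "$(printf '%s' "$1" | sed 's/-/--/g')"
}

vg_has_active_lvs() {
	prefix=$(dm_vg_prefix "$1")
	for name_file in /sys/class/block/dm-*/dm/name; do
		[ -e "$name_file" ] || continue
		# Caveat: a plain prefix match can false-positive when one
		# VG name is a hyphen-prefix of another; good enough for a
		# sketch, not for production.
		case "$(cat "$name_file")" in
			"$prefix"*) return 0 ;;
		esac
	done
	return 1
}
```

Unlike the /dev/vg directory check, this does not depend on symlinks that udev may or may not have cleaned up after 'vgchange -an'.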
> Originally it was added in to fix some problems, but that was a while back so
> things may behave properly now.