[linux-lvm] auto_activation_volume_list in lvm.conf not honored
sb at plzk.de
Fri Dec 2 07:07:10 UTC 2016
I have now tried to set it up with clvmd, but it fails with:
Dec 2 07:09:46 vm1 LVM(vg): ERROR: connect() failed on local socket: No such file or directory
  Internal cluster locking initialisation failed.
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  Reading all physical volumes. This may take a while...
  Found volume group "vm1-vg" using metadata type lvm2
  Skipping clustered volume group vg
Dec 2 07:09:46 vm1 LVM(vg): ERROR: connect() failed on local socket: No such file or directory
  Internal cluster locking initialisation failed.
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  Skipping clustered volume group vg
Dec 2 07:09:46 vm1 crmd: notice: process_lrm_event: LRM operation vg_start_0 (call=22, rc=1, cib-update=26, confirmed=true) unknown error
Dec 2 07:09:46 vm1 crmd: warning: status_from_rc: Action 23 (vg_start_0) on vm1 failed (target: 0 vs. rc: 1): Error
Dec 2 07:09:46 vm1 crmd: warning: update_failcount: Updating failcount for vg on vm1 after failed start: rc=1 (update=INFINITY, time=1480658986)
Dec 2 07:09:46 vm1 attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-vg (INFINITY)
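The "connect() failed on local socket" error usually means the clvmd daemon is not running, or lvm.conf is still configured for file-based locking. A minimal sketch of what a clvmd-based setup assumes (paths and values from the standard lvm.conf defaults; verify against your distribution):

```shell
# /etc/lvm/lvm.conf -- switch from file-based to clustered locking
#
#   global {
#       locking_type = 3    # 3 = built-in clustered locking via clvmd
#   }
#
# clvmd itself must be running (normally started by the cluster stack,
# e.g. as a systemd unit or cman/pacemaker-managed service) on every
# node BEFORE any LVM command touches the clustered VG:
systemctl status clvmd
```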
If I do a cleanup of the resource, it is started.
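For reference, the cleanup that clears the INFINITY fail-count and lets the cluster retry the start looks like this (resource name `vg` taken from the logs above; the command differs between the crmsh and pcs frontends):

```shell
# with crmsh
crm resource cleanup vg

# or with pcs
pcs resource cleanup vg
```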
Any help is greatly appreciated.
> From: David Teigland <teigland at redhat.com>
> Sent: Mon 28 November 2016 23:14
> To: Stefan Bauer <sb at plzk.de>
> CC: linux-lvm at redhat.com
> Subject: Re: [linux-lvm] auto_activation_volume_list in lvm.conf not honored
> On Fri, Nov 25, 2016 at 10:30:39AM +0100, Zdenek Kabelac wrote:
> > On 25.11.2016 at 10:17, Stefan Bauer wrote:
> > > Hi Peter,
> > >
> > > as I said, we have a master/slave setup _without_ concurrent write/read. So I do not see a reason why I should take care of locking, as only one node activates the volume group at a time.
> > >
> > > That should be fine - right?
> > Nope, it's not.
> > Every activation, for instance, DOES validate all resources and takes ACTION
> > when something is wrong.
> > Sorry, but there is NO way to do this properly without a locking manager.
> > Many lvm2 users try to be 'innovative' and use it in a
> > lock-less way - this seems to work most of the time, until the moment some
> > disaster happens - and then lvm2 is blamed for the data loss..
> > Interestingly, they never stop to think about why we invested so much time in a
> > locking manager when there is such an 'easy fix' in their eyes...
> > IMHO lvmlockd is a relatively low-resource/overhead solution worth
> > exploring if you don't like clvmd...
> Stefan, as Zdenek points out, even reading VGs on shared storage is not
> entirely safe, because lvm may attempt to fix/repair things on disk while
> it is reading (this becomes more likely if one machine reads while another
> is making changes). Using some kind of locking or clustering (lvmlockd or
> clvm) is a solution.
> Another fairly new option is to use "system ID", which assigns one host as
> the owner of the VG. This avoids the problems mentioned above with
> reading->fixing. But, system ID on its own cannot be used dynamically.
> If you want to fail-over the VG between hosts, the system ID needs to be
> changed, and this needs to be done carefully (e.g. by a resource manager
> or something that takes fencing into account).
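A failover via system ID then amounts to reassigning the owner. A hedged sketch, assuming `system_id_source = "uname"` in lvm.conf on both hosts and placeholder names (`vg`, `standby-host`):

```shell
# on the current owner, hand the VG over to the standby host:
vgchange --systemid standby-host vg

# on the standby host, the VG is now local and can be activated:
vgchange -ay vg
```

Note that if the old owner has died, the new host cannot simply run `vgchange --systemid` against a foreign VG; it has to override the system ID check, which is exactly why this should be driven by a resource manager with fencing. See lvmsystemid(7) for the takeover procedure.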
> See also https://www.redhat.com/archives/linux-lvm/2016-November/msg00022.html
> linux-lvm mailing list
> linux-lvm at redhat.com
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/