[linux-lvm] [lvmlockd] Reload configuration while lvmlockd and sanlock running

Damon Wang damon.devops at gmail.com
Thu Apr 26 06:16:31 UTC 2018


Tried it and it worked, thanks a lot :-D

Besides, if only the host_id is changed, it seems to take effect immediately
for a new VG, or for an existing VG after stopping and restarting its
lockspace, as you said:

[root at dev1-2 ~]# vgs
  WARNING: Not using lvmetad because config setting use_lvmetad=0.
  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).
  Reading VG pool2 without a lock.
  Reading VG pool4 without a lock.
  VG                               #PV #LV #SN Attr   VSize    VFree
  ff35ecc8217543e0a5be9cbe935ffc84   1  55   0 wz--ns <198.00g 147.48g
  pool2                              1   0   0 wz--ns  <10.00g  <9.75g
  pool4                              1   6   0 wz--ns  <39.75g <12.17g

# the lockspace for pool4 has not been started yet

[root at dev1-2 ~]# sanlock client status
daemon aa68f03c-ab45-484a-ae45-61ac147e939b.dev1-2
p -1 helper
p -1 listener
p 43884 lvmlockd
p -1 status
s lvm_ff35ecc8217543e0a5be9cbe935ffc84:49:/dev/mapper/ff35ecc8217543e0a5be9cbe935ffc84-lvmlock:0

# the host id used for ff35ecc8217543e0a5be9cbe935ffc84 is 49

[root at dev1-2 ~]# lvmconfig --type diff
local {
host_id=49
}
global {
use_lvmetad=0
use_lvmlockd=1
}
devices {
issue_discards=1
}

# this matches our lvm config, which sets host_id to 49

[root at dev1-2 ~]# sed -i 's/.*host_id.*/host_id=94/g' /etc/lvm/lvmlocal.conf
[root at dev1-2 ~]# lvmconfig --type diff
local {
host_id=94
}
global {
use_lvmetad=0
use_lvmlockd=1
}
devices {
issue_discards=1
}

# now we change the host_id to 94

[root at dev1-2 ~]# vgchange --lock-start pool4
  WARNING: Not using lvmetad because config setting use_lvmetad=0.
  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).
  VG pool4 starting sanlock lockspace
  Starting locking.  Waiting for sanlock may take 20 sec to 3 min...

# start the lockspace for pool4

[root at dev1-2 ~]# vgs
  WARNING: Not using lvmetad because config setting use_lvmetad=0.
  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).
  Reading VG pool2 without a lock.
  VG                               #PV #LV #SN Attr   VSize    VFree
  ff35ecc8217543e0a5be9cbe935ffc84   1  55   0 wz--ns <198.00g 147.48g
  pool2                              1   0   0 wz--ns  <10.00g  <9.75g
  pool4                              1   6   0 wz--ns  <39.75g <12.17g

[root at dev1-2 ~]# sanlock client status
daemon aa68f03c-ab45-484a-ae45-61ac147e939b.dev1-2
p -1 helper
p -1 listener
p 43884 lvmlockd
p 43884 lvmlockd
p -1 status
s lvm_pool4:94:/dev/mapper/pool4-lvmlock:0
s lvm_ff35ecc8217543e0a5be9cbe935ffc84:49:/dev/mapper/ff35ecc8217543e0a5be9cbe935ffc84-lvmlock:0

# as shown above, the host id for pool4 is 94, while the host id for
# ff35ecc8217543e0a5be9cbe935ffc84 is still 49

If we stop the lockspace of ff35ecc8217543e0a5be9cbe935ffc84 and then start
it again, its host id will also change to 94, as sketched below.
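
A minimal sketch of that stop/start (not taken from the session above),
assuming none of the LVs in that VG are in use; it uses the --lockstop /
--lockstart spellings from Dave's mail, and the hyphenated --lock-start form
used above should behave the same on this version:

  # deactivate all LVs in the VG first (they must not be in use)
  vgchange -an ff35ecc8217543e0a5be9cbe935ffc84
  # leave the old lockspace that was joined with host id 49
  vgchange --lockstop ff35ecc8217543e0a5be9cbe935ffc84
  # rejoin using the host_id now in /etc/lvm/lvmlocal.conf (94);
  # sanlock may again take 20 sec to 3 min
  vgchange --lockstart ff35ecc8217543e0a5be9cbe935ffc84
  # reactivate whatever LVs are needed
  vgchange -ay ff35ecc8217543e0a5be9cbe935ffc84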

2018-04-25 23:00 GMT+08:00 David Teigland <teigland at redhat.com>:
>> AFAIK we can't restart lvmlockd or sanlock without side effects -- all
>> locks must be released before sanlock shuts down, and if wdmd is enabled,
>> the host may reboot after a while...
>>
>> So is there any way to reload the configuration of sanlock and lvmlockd? I
>> don't want to reboot my host...
>
> Deactivate LVs in shared VGs, stop the shared VGs (vgchange --lockstop),
> stop lvmlockd, stop sanlock, stop wdmd, make config changes, then start
> everything again.  Dave
>
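
Spelling out Dave's sequence above as commands -- a rough sketch, not run
here; the systemd unit names (lvmlockd, sanlock, wdmd) are my assumption
and may differ per distribution, and every shared VG on the host needs the
same treatment (only pool4 shown):

  # per shared VG: deactivate its LVs and stop its lockspace
  vgchange -an pool4
  vgchange --lockstop pool4
  # stop the daemons: lvmlockd before sanlock, sanlock before wdmd
  # (unit names are an assumption -- check your distribution)
  systemctl stop lvmlockd
  systemctl stop sanlock
  systemctl stop wdmd
  # ... make the config changes, e.g. edit /etc/lvm/lvm.conf and
  # /etc/lvm/lvmlocal.conf ...
  # start everything again in the reverse order
  systemctl start wdmd
  systemctl start sanlock
  systemctl start lvmlockd
  # per shared VG: rejoin the lockspace and reactivate LVs
  vgchange --lockstart pool4
  vgchange -ay pool4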



