[linux-lvm] Q: active/inactive/imported/exported group ?
gerhard.fuernkranz at mchp.siemens.de
Tue Nov 23 15:28:08 UTC 1999
mauelsha at u9etz.ez-darmstadt.telekom.de wrote:
> If you don't change your IO configuration vgscan will not produce
> different LV minors anyway.
> > Any comments?
> This is an option.
> I'd rather let vgscan deal with 'sticky' minors based
> on existing lvmtab entries and have it use only free minors for new VGs.
The problem is again the cluster. In addition to shared volumes I
may also have local volumes on each host. Each host will usually
see the same set of shared volumes, but the sets of each host's
local volumes may differ. I also want a shared volume to get the
same minor number on every host in the cluster, so that NFS server
failover from one host to another works.
So if I add new (shared) volumes on host A, it is not guaranteed
that the new minor number host A assigns to the volume is not already
in use on host B. With my sticky volume table I'd use the following
procedure to create a new shared volume:
1. find a free minor number (manually)
2. enter it to the table on every host in the cluster (manually)
3. then create the volume: vgcreate on the 1st host / vgscan on all
   other hosts - vgcreate/vgscan will assign the number from the table.
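Step 1 above amounts to picking a minor that is unused on every host at once. A minimal sketch of that selection, assuming the per-host sets of used minors have already been collected (hypothetical data; function name is illustrative, not an LVM tool):

```python
def find_free_shared_minor(used_minors_per_host, max_minor=255):
    """Return the lowest minor unused on all hosts, or None if exhausted."""
    # A minor is usable for a shared volume only if no host uses it.
    used_everywhere = set().union(*used_minors_per_host.values())
    for minor in range(max_minor + 1):
        if minor not in used_everywhere:
            return minor
    return None

used = {
    "hostA": {0, 1, 2},   # host A: local + shared volumes
    "hostB": {0, 1, 3},   # host B: its local volumes differ
}
print(find_free_shared_minor(used))  # 4: the lowest minor free on both hosts
```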
Of course my sticky table can also be integrated in lvmtab.
A different approach could be:
- I've seen in the pvdata output, that the volume descriptor
already contains a major/minor number.
- So the desired minor number could reside in the volume descriptor
on the disk together with a sticky flag for the volume.
- such a sticky volume could e.g. be created with
  "lvcreate ... --sticky=<minor> ...",
  (which only succeeds if the minor number is not already in use -
  either by a currently active volume or another sticky volume).
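The uniqueness check behind the proposed "--sticky=<minor>" option (a hypothetical flag, not real lvcreate syntax) could look like this sketch:

```python
def create_sticky_volume(minor, active_minors, sticky_minors):
    """Reserve `minor` as sticky; fail if an active or sticky volume has it."""
    if minor in active_minors or minor in sticky_minors:
        raise ValueError(f"minor {minor} already in use")
    sticky_minors.add(minor)   # record the reservation in the sticky set
    return minor

active = {0, 1}   # minors of currently active volumes (assumed input)
sticky = {5}      # minors already reserved as sticky (assumed input)
create_sticky_volume(3, active, sticky)    # succeeds, reserves minor 3
# create_sticky_volume(5, active, sticky)  # would fail: 5 is already sticky
```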
- vgscan will
1. go through the currently active volumes
2. through the sticky volumes and try to assign exactly
the sticky minor number
3. all other volumes and assign a free minor number
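The three vgscan passes above can be sketched as follows (assumed semantics; the data layout is illustrative, not LVM's on-disk format):

```python
def assign_minors(active, sticky, others, max_minor=255):
    """active: {name: minor}, sticky: {name: minor}, others: [name]."""
    assignment = dict(active)            # pass 1: active volumes keep theirs
    used = set(assignment.values())
    for name, minor in sticky.items():   # pass 2: sticky minors, exactly
        if minor in used:
            raise RuntimeError(f"sticky minor {minor} for {name} is taken")
        assignment[name] = minor
        used.add(minor)
    for name in others:                  # pass 3: lowest free minor
        minor = next(m for m in range(max_minor + 1) if m not in used)
        assignment[name] = minor
        used.add(minor)
    return assignment

result = assign_minors({"lv_act": 0}, {"lv_shared": 7}, ["lv_new"])
print(result)  # {'lv_act': 0, 'lv_shared': 7, 'lv_new': 1}
```

Running the sticky pass before the free-minor pass is what keeps a shared volume's minor stable across hosts even when each host's local volumes differ.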
> If you guarantee that, you must already have a cluster manager... 8*)