[linux-lvm] Fails to create LVM volume on top of RAID1 after upgrading lvm2 to v2.02.180

David Teigland teigland at redhat.com
Wed Oct 24 14:47:36 UTC 2018


On Tue, Oct 23, 2018 at 08:23:06PM -0600, Gang He wrote:
> Teigland <teigland at redhat.com> wrote:
> > On Mon, Oct 22, 2018 at 08:19:57PM -0600, Gang He wrote:
> >>   Process: 815 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 (code=exited, status=5)
> >>
> >> Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: Not using device /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
> >> Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
> >> Oct 22 07:34:56 linux-dnetctw lvm[815]:   Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
> > 
> > I'd try disabling lvmetad, I've not been testing these with lvmetad on.
> You mean I should have the user disable lvmetad?

yes
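
For reference, on a systemd-based install that usually means turning it off
in lvm.conf and stopping the daemon; a minimal sketch (exact unit names can
vary by distribution):

    # /etc/lvm/lvm.conf, "global" section
    use_lvmetad = 0

    # stop and mask the daemon so nothing restarts it
    systemctl stop lvm2-lvmetad.socket lvm2-lvmetad.service
    systemctl mask lvm2-lvmetad.socket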

> > We may need to make pvscan read both the start and end of every disk to
> > handle these md 1.0 components, and I'm not sure how to do that yet
> > without penalizing every pvscan.
> What can we do for now? It looks like more code needs to be added to implement this logic.
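
(Background for anyone following the thread: md metadata 1.0 keeps its
superblock near the end of the device rather than at the start, so a scan
that only reads the first sectors of each disk never notices that the
partition is an md component.  Purely as an illustration, and not lvm2
code -- mdadm's exact offset calculation differs -- an end-of-device check
could look roughly like this:)

    #!/usr/bin/python
    # Hypothetical sketch: look for an md v1.0 superblock near the end of
    # a block device.  The v1.0 superblock sits in roughly the last 8-12 KiB,
    # 4 KiB aligned, and starts with the magic 0xa92b4efc (little-endian).
    import os, struct, sys

    MD_SB_MAGIC = 0xa92b4efc

    def has_md10_superblock(path):
        fd = os.open(path, os.O_RDONLY)
        try:
            size = os.lseek(fd, 0, os.SEEK_END)
            # probe the 4 KiB aligned offsets in the last 12 KiB of the device
            start = max(0, size - 12 * 1024) & ~0xfff
            for off in range(start, size, 4096):
                os.lseek(fd, off, os.SEEK_SET)
                buf = os.read(fd, 4)
                if len(buf) == 4 and struct.unpack("<I", buf)[0] == MD_SB_MAGIC:
                    return True
            return False
        finally:
            os.close(fd)

    if __name__ == "__main__":
        for dev in sys.argv[1:]:
            print("%s: %s" % (dev, has_md10_superblock(dev)))

The penalty mentioned above is exactly that extra seek and read at the tail
of every device on every scan.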

Excluding component devices in global_filter is always the most direct way
of solving problems like this.  (I still hope to find a solution that
doesn't require that.)
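
Something along these lines in the devices section of lvm.conf would do it;
/dev/sdb2 is taken from the log above, the second component name is just a
placeholder, substitute the real one:

    # /etc/lvm/lvm.conf, "devices" section
    # Reject the raw component partitions so that only the assembled
    # /dev/md126 is scanned.  /dev/sdc2 is a placeholder for the other
    # mirror leg.
    global_filter = [ "r|^/dev/sdb2$|", "r|^/dev/sdc2$|", "a|.*|" ]

Depending on the distribution you may also need to rebuild the initrd so the
filter is in effect for the pvscan that runs during early boot.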



