[linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180

Gang He ghe at suse.com
Wed Oct 24 02:23:06 UTC 2018


Hello David,

I am sorry, I do not quite follow your reply.

>>> On 2018/10/23 at 23:04, in message <20181023150436.GB8413 at redhat.com>, David
Teigland <teigland at redhat.com> wrote:
> On Mon, Oct 22, 2018 at 08:19:57PM -0600, Gang He wrote:
>>   Process: 815 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 (code=exited, status=5)
>> 
>> Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: Not using device /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
>> Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
>> Oct 22 07:34:56 linux-dnetctw lvm[815]:   Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
> 
> I'd try disabling lvmetad, I've not been testing these with lvmetad on.
Do you mean that I should ask the user to disable lvmetad?
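For reference, my understanding of "disable lvmetad" is roughly the following (just a sketch; the systemd unit names are the ones our lvm2 package ships and may differ elsewhere):

    # /etc/lvm/lvm.conf
    global {
        use_lvmetad = 0
    }

    # stop the daemon so lvm commands scan devices directly
    systemctl disable --now lvm2-lvmetad.socket lvm2-lvmetad.service

Is that what you had in mind, or only setting use_lvmetad = 0?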

> We may need to make pvscan read both the start and end of every disk to
> handle these md 1.0 components, and I'm not sure how to do that yet
> without penalizing every pvscan.
What can we do for now? It looks like more code needs to be added to implement this logic.
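In the meantime, a possible workaround on the user's side (my own sketch, not something in lvm2 itself) would be to confirm that /dev/sdb2 really is an md 1.0 member with its superblock at the end of the device, and then reject the raw members in lvm.conf so that only /dev/md126 is scanned:

    # md 1.0 keeps the superblock at the end of the member device;
    # check which metadata version the member actually carries
    mdadm --examine /dev/sdb2 | grep -i version

    # /etc/lvm/lvm.conf -- reject the RAID1 leg, accept everything else
    # (sdb2 is taken from the log above; the other leg would need the
    # same treatment)
    devices {
        global_filter = [ "r|^/dev/sdb2$|", "a|.*|" ]
    }

Would that be an acceptable stopgap until pvscan can check the end of the device itself?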

Thanks
Gang

> 
> Dave




