[linux-lvm] lvm raid5 : drives all present but vg/lvm will not assemble
Andrew Falgout
digitalw00t at gmail.com
Sat Mar 21 03:22:04 UTC 2020
This started on a Raspberry Pi 4 running Raspbian. I moved the disks to my
Fedora 31 system, which is running the latest updates and kernel. When I
hit the same issue there, I knew it wasn't Raspbian.
I've reached the end of my rope on this. The disks are there, all three are
accounted for, and the LVM metadata on them can be seen. But the VG refuses
to activate, reporting I/O errors.
[root@hypervisor01 ~]# pvs
  PV         VG                Fmt  Attr PSize    PFree
  /dev/sda1  local_storage01   lvm2 a--  <931.51g        0
  /dev/sdb1  local_storage01   lvm2 a--  <931.51g        0
  /dev/sdc1  local_storage01   lvm2 a--  <931.51g        0
  /dev/sdd1  local_storage01   lvm2 a--  <931.51g        0
  /dev/sde1  local_storage01   lvm2 a--  <931.51g        0
  /dev/sdf1  local_storage01   lvm2 a--  <931.51g <931.51g
  /dev/sdg1  local_storage01   lvm2 a--  <931.51g <931.51g
  /dev/sdh1  local_storage01   lvm2 a--  <931.51g <931.51g
  /dev/sdi3  fedora_hypervisor lvm2 a--    27.33g   <9.44g
  /dev/sdk1  vg1               lvm2 a--    <7.28t        0
  /dev/sdl1  vg1               lvm2 a--    <7.28t        0
  /dev/sdm1  vg1               lvm2 a--    <7.28t        0
[root@hypervisor01 ~]# vgs
  VG                #PV #LV #SN Attr   VSize  VFree
  fedora_hypervisor   1   2   0 wz--n- 27.33g <9.44g
  local_storage01     8   1   0 wz--n- <7.28t <2.73t
  vg1                 3   1   0 wz--n- 21.83t      0
[root@hypervisor01 ~]# lvs
  LV        VG                Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  root      fedora_hypervisor -wi-ao---- 15.00g
  swap      fedora_hypervisor -wi-ao----  2.89g
  libvirt   local_storage01   rwi-aor--- <2.73t                                   100.00
  gluster02 vg1               Rwi---r--- 14.55t
The one in question is the vg1/gluster02 LV.
I try to activate the VG:
[root@hypervisor01 ~]# vgchange -ay vg1
  device-mapper: reload ioctl on (253:19) failed: Input/output error
  0 logical volume(s) in volume group "vg1" now active
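The reload error itself is generic; the underlying cause (a bad dm-raid superblock, an unreadable sector, and so on) normally shows up in the kernel log. A minimal sketch for surfacing it, assuming standard Linux logging tools are available:

```shell
# Pull recent device-mapper / RAID messages out of the kernel log.
# Falls back to journalctl on systems where dmesg is restricted.
if dmesg >/dev/null 2>&1; then
    dmesg | grep -iE 'device-mapper|raid|i/o error' | tail -n 20
else
    journalctl -k --no-pager 2>/dev/null | grep -iE 'device-mapper|raid|i/o error' | tail -n 20
fi
```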
I've captured the debugging output from:
vgchange -ay vg1 -vvvv -dddd
lvchange -ay --partial vg1/gluster02 -vvvv -dddd
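For reference, LVM writes its -vvvv trace to stderr, so redirecting that to a file gives something easy to attach or paste. A minimal sketch (the /tmp paths are arbitrary examples, and the commands are guarded so it is harmless on a box without the LVM tools):

```shell
# Collect the LVM debug traces into /tmp/lvm-debug for sharing.
LOG_DIR=/tmp/lvm-debug
mkdir -p "$LOG_DIR"
if command -v vgchange >/dev/null 2>&1; then
    # -vvvv/-dddd output goes to stderr, hence the 2> redirects.
    vgchange -ay vg1 -vvvv -dddd 2> "$LOG_DIR/vgchange-vg1.log"
    lvchange -ay --partial vg1/gluster02 -vvvv -dddd 2> "$LOG_DIR/lvchange-gluster02.log"
fi
echo "logs (if any) are in $LOG_DIR"
```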
Just not sure where I should dump the data for people to look at. Is there
a way to tell the md system to ignore the metadata, since there wasn't an
actual disk failure, and rebuild the metadata from what is in the LVM? Or
can I even get the LV mounted so I can pull the data off?
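One point worth noting (my understanding, not confirmed in this thread): an LVM raid5 LV is driven by the dm-raid target, and its RAID superblocks live in LVM-managed rmeta sub-LVs, so mdadm and the md metadata tools don't apply here; inspection and degraded activation go through LVM itself. A sketch of mostly read-only diagnostic steps, using standard lvm2 commands (needs root on the affected host, so it's guarded):

```shell
# Inspect the RAID layout, back up the text metadata, then try degraded activation.
if command -v lvs >/dev/null 2>&1; then
    # Show hidden sub-LVs (rimage/rmeta) and which PV each sits on.
    lvs -a -o lv_name,lv_attr,lv_size,devices,lv_health_status vg1
    # Save a text copy of the VG metadata before changing anything.
    vgcfgbackup -f /tmp/vg1-metadata.txt vg1
    # Degraded activation tolerates one failed/missing raid5 leg.
    vgchange -ay --activationmode degraded vg1
else
    echo "LVM tools not installed; run this on the affected host"
fi
```

If degraded activation succeeds, the filesystem can be mounted read-only to pull the data off before attempting any repair.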
Any help is appreciated. If I can save the data, great. I'm tossing this
out to the community to see if anyone else has an idea of what I can do.
./digitalw00t