<div class="gmail_quote"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><br>------------------------------<br><br>Message: 6<br>Date: Mon, 10 Dec 2007 21:42:45 -0500
<br>From: "Null EXE" <<a href="mailto:nullexe@gmail.com">nullexe@gmail.com</a>><br>Subject: [linux-lvm] lvm raid failure<br>To: <a href="mailto:linux-lvm@redhat.com">linux-lvm@redhat.com</a><br>Message-ID:
<br> <<a href="mailto:1f31f75c0712101842q4d71c03bu75926f1deb026a26@mail.gmail.com">1f31f75c0712101842q4d71c03bu75926f1deb026a26@mail.gmail.com</a>><br>Content-Type: text/plain; charset="iso-8859-1"<br>
<br>Hi everyone,<br><br><br>I have been running an Edgy fileserver for about 6 months now. When I came<br>home from work last week I found my files all inaccessible. After<br>investigation I found issues with my PV. I would like to recover my data and
<br>replace the drives if possible; any help is greatly appreciated!<br><br>I have one system drive and six storage drives, all raid1. The storage layout<br>is:<br>2x160GB /dev/md0<br>2x250GB /dev/md1<br>2x250GB /dev/md2<br>After investigation I found problems with my LVM. pvdisplay showed:
<br>Couldn't find device with uuid 'xjWU5M-G3WB-tZjB-Tyrk-Q7rN-Yide-1FEVh3'.<br><br>I ran the command:<br>sudo pvcreate --uuid "xjWU5M-G3WB-tZjB-Tyrk-Q7rN-Yide-1FEVh3" --restorefile<br>/etc/lvm/archive/fileserver_00006.vg /dev/md2
<br><br>This seemed to recover /dev/md2. I re-ran pvdisplay and got<br>Couldn't find device with uuid '1ATm8s-oxKG-nz0p-z1QA-a0s4-od9T-AMZRoo'.<br><br>I figured I could run the same command on md1,<br>sudo pvcreate --uuid "1ATm8s-oxKG-nz0p-z1QA-a0s4-od9T-AMZRoo" --restorefile
<br>/etc/lvm/archive/fileserver_00006.vg /dev/md1<br><br>and got the message:<br>Couldn't find device with uuid '1ATm8s-oxKG-nz0p-z1QA-a0s4-od9T-AMZRoo'.<br>Device /dev/md1 not found (or ignored by filtering).
<br><br>After re-executing the above command with -vvv, I get the following message<br>at the bottom of the output:<br>#device/dev-io.c:439 Opened /dev/md1 RO<br>#device/dev-io.c:264 /dev/md1: size is 0 sectors
<br>#filters/filter.c:106 /dev/md1: Skipping: Too small to hold a PV<br>#device/dev-io.c:485 Closed /dev/md1<br>#pvcreate.c:81 Device /dev/md1 not found (or ignored by filtering).<br><br>Here is my pvdisplay. Again, any help is greatly appreciated.
<br>***START***<br><br>Couldn't find device with uuid '1ATm8s-oxKG-nz0p-z1QA-a0s4-od9T-AMZRoo'.<br>Couldn't find device with uuid '1ATm8s-oxKG-nz0p-z1QA-a0s4-od9T-AMZRoo'.<br>--- Physical volume ---
<br>PV Name /dev/md0<br>VG Name fileserver<br>PV Size 149.05 GB / not usable 0<br>Allocatable yes<br>PE Size (KByte) 4096<br>Total PE 38156<br>Free PE 9996<br>Allocated PE 28160<br>PV UUID pZV1Ff-Y7fu-S8m1-tVFn-fOMJ-VRls-fLFEov
<br><br>--- Physical volume ---<br>PV Name unknown device<br>VG Name fileserver<br>PV Size 232.88 GB / not usable 0<br>Allocatable yes (but full)<br>PE Size (KByte) 4096<br>Total PE 59618<br>Free PE 0<br>Allocated PE 59618
<br>PV UUID 1ATm8s-oxKG-nz0p-z1QA-a0s4-od9T-AMZRoo<br><br>--- Physical volume ---<br>PV Name /dev/md2<br>VG Name fileserver<br>PV Size 232.88 GB / not usable 0<br>Allocatable yes<br>PE Size (KByte) 4096<br>Total PE 59618<br>
Free PE 9156<br>Allocated PE 50462<br>PV UUID xjWU5M-G3WB-tZjB-Tyrk-Q7rN-Yide-1FEVh3<br><br>------------------------------<br><br>Message: 7<br>Date: Tue, 11 Dec 2007 09:20:29 +0100<br>From: Luca Berra <<a href="mailto:bluca@comedia.it">
bluca@comedia.it</a>><br>Subject: Re: [linux-lvm] lvm raid failure<br>To: <a href="mailto:linux-lvm@redhat.com">linux-lvm@redhat.com</a><br>Message-ID: <<a href="mailto:20071211082029.GA1535@percy.comedia.it">20071211082029.GA1535@percy.comedia.it
</a>><br>Content-Type: text/plain; charset=us-ascii; format=flowed<br><br>On Mon, Dec 10, 2007 at 09:42:45PM -0500, Null EXE wrote:<br>>get the following message:<br>>#device/dev-io.c:439 Opened /dev/md1 RO
<br>>#device/dev-io.c:264 /dev/md1: size is 0 sectors<br>>#filters/filter.c:106 /dev/md1: Skipping: Too small to hold a PV<br>>#device/dev-io.c:485 Closed /dev/md1<br>>#pvcreate.c:81 Device /dev/md1 not found (or ignored by filtering).
<br>><br>>Here is my pvdisplay. Again, any help is greatly appreciated.<br>You should investigate what's wrong at the md layer; LVM seems to be<br>just a victim.<br><br>Check:<br>/proc/mdstat<br>kernel logs<br>mdadm.conf
<br>mdadm -Es<br><br>L.<br><br>--<br>Luca Berra -- <a href="mailto:bluca@comedia.it">bluca@comedia.it</a><br> Communication Media & Services S.r.l.<br> /"\<br> \ / ASCII RIBBON CAMPAIGN<br> X AGAINST HTML MAIL
<br> / \<br><br><br></blockquote></div> ***<br>/proc/mdstat<br>Personalities : [raid1] <br>md2 : active raid1 dm-5[1]<br> 244195904 blocks [2/1] [_U]<br> <br>md1 : inactive hdj1[0]<br> 244195904 blocks super non-persistent
<br> <br>md0 : active raid1 hdc1[0]<br> 156288256 blocks [2/1] [U_]<br> <br>unused devices: <none><br><br>Should my dm-5 be displayed here, or should it be a /dev/hd[a-z] device?<br><br>***<br>mdadm -Es<br>
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=eeb0befe:9e50d574:974f5eae:ccb8e527<br>ARRAY /dev/md2 level=raid1 num-devices=2 UUID=1e5c1ab5:a5a06f84:d722583d:ca0f8cad<br>ARRAY /dev/md1 level=raid1 num-devices=2 UUID=41fc507e:7293a47c:7f18baeb:89dd5958
<br><br>***<br>mdadm.conf<br>DEVICE partitions<br><br>/proc/partitions<br> 3 0 10022040 hda<br> 3 1 9574708 hda1<br> 3 2 1 hda2<br> 3 5 441756 hda5<br> 22 0 156290904 hdc<br>
22 1 156288321 hdc1<br> 22 64 156290904 hdd<br> 22 65 156288321 hdd1<br> 56 0 244198584 hdi<br> 56 1 244196001 hdi1<br> 56 64 244198584 hdj<br> 56 65 244196001 hdj1<br> 57 0 244198584 hdk
<br> 57 1 244196001 hdk1<br> 57 64 244198584 hdl<br> 57 65 244196001 hdl1<br> 9 0 156288256 md0<br> 9 2 244195904 md2<br> 253 0 9574708 dm-0<br> 253 1 441756 dm-1<br> 253 2 156288321 dm-2
<br> 253 3 156288321 dm-3<br> 253 4 244196001 dm-4<br> 253 5 244196001 dm-5<br> 253 6 244196001 dm-6<br> 253 7 244196001 dm-7<br><br><br>***<br>kernel logs<br>Nov 28 22:58:46 ark-server kernel: [42949384.540000
] raid1: raid set md1 active with 1 out of 2 mirrors<br>Nov 28 22:58:46 ark-server kernel: [42949384.560000] md: md2 stopped.<br>Nov 28 22:58:46 ark-server kernel: [42949384.560000] md: bind<hdj1><br>Nov 28 22:58:46 ark-server kernel: [
42949384.560000] md: hdl1 has same UUID but different superblock to hdj1<br>Nov 28 22:58:46 ark-server kernel: [42949384.560000] md: hdl1 has different UUID to hdj1<br>Nov 28 22:58:46 ark-server kernel: [42949384.560000] md: export_rdev(hdl1)
<br><br>***<br>Looking at all of this: when I set up the array, I remember my devices being ordered hd[cdefgh]; now I'm seeing md1 trying to use hd[jl]. Is it a problem that the drives were automatically assigned new letters?
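<br><br>On the last question, a minimal sketch (not from the original thread): md identifies array members by the UUID stored in each member's superblock, not by device letter, so the hd[cdefgh] to hd[jl] renaming alone should not break assembly. With `DEVICE partitions` in mdadm.conf, every partition is scanned and matched against those UUIDs; pinning explicit ARRAY lines by UUID makes the mapping visible. The `scan` text below is copied from the `mdadm -Es` output above, so this only illustrates extracting the array-to-UUID map:<br><br>

```shell
# Sketch, assuming the `mdadm -Es` output quoted earlier in this thread.
# On the live system the variable would be replaced by `mdadm -Es` itself.
scan='ARRAY /dev/md0 level=raid1 num-devices=2 UUID=eeb0befe:9e50d574:974f5eae:ccb8e527
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=1e5c1ab5:a5a06f84:d722583d:ca0f8cad
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=41fc507e:7293a47c:7f18baeb:89dd5958'

# Print "device UUID" pairs -- the identities md uses at assembly time,
# independent of which hd* letters the kernel hands out on a given boot.
printf '%s\n' "$scan" | awk '{sub("UUID=", "", $NF); print $2 "\t" $NF}'
```

<br>The "hdl1 has same UUID but different superblock to hdj1" kernel message is the part that still needs investigation; comparing the two members' superblocks with `mdadm --examine /dev/hdj1 /dev/hdl1` (run as root on the real box) would show which one has stale metadata before anything is re-added.<br>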
<br><br>