[linux-lvm] LVM issues after replacing linux mdadm RAID5 drive
L. M. J
linuxmasterjedi at free.fr
Wed Apr 30 20:57:28 UTC 2014
On 26 April 2014 20:47:44 CEST, "L.M.J" <linuxmasterjedi at free.fr> wrote:
>
>What do you think about this command:
>
>
> ~# dd if=/dev/md0 bs=512 count=255 skip=1 of=/tmp/md0.txt
>
> [..]
> physical_volumes {
>     pv0 {
>         id = "5DZit9-6o5V-a1vu-1D1q-fnc0-syEj-kVwAnW"
>         device = "/dev/md0"
>         status = ["ALLOCATABLE"]
>         flags = []
>         dev_size = 7814047360
>         pe_start = 384
>         pe_count = 953863
>     }
> }
> logical_volumes {
>
>     lvdata {
>         id = "JiwAjc-qkvI-58Ru-RO8n-r63Z-ll3E-SJazO7"
>         status = ["READ", "WRITE", "VISIBLE"]
>         flags = []
>         segment_count = 1
> [..]
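The raw dd trick above can be wrapped in a small helper. This is a sketch, not an LVM tool: it just dumps the first 1 MiB of the PV (where LVM2 keeps its text metadata by default) and strips NUL bytes so the descriptor greps cleanly. The paths in the usage lines are examples; the read is strictly read-only.

```shell
# Sketch: dump the LVM2 metadata area (first 1 MiB by default) of a PV or
# image file and strip NUL bytes so the text descriptor can be grepped.
dump_lvm_meta() {
    # $1 = device or image file, $2 = output text file
    dd if="$1" bs=512 count=2048 2>/dev/null | tr -d '\000' > "$2"
}

# Usage on the array in this thread (reads only; writes nothing to the PV):
# dump_lvm_meta /dev/md0 /tmp/md0-meta.txt
# grep -n 'physical_volumes' /tmp/md0-meta.txt
```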
>
>I presume I still have data on my broken RAID5.
>
>I ran pvcreate --restorefile and vgcfgrestore.
>I can now see my two LVs, but my EXT4 filesystems appear empty: df
>reports plausible disk usage, and a read-only fsck finds tons of
>errors.
>
>Is there a way to recover my data from the EXT4 filesystems?
>
>
>
>
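For reference, the restore sequence described above usually looks like the sketch below. It defaults to a dry run that only prints the commands. The PV UUID is the one from the metadata dump earlier in this thread; the VG name `lvm-raid` and the backup path under /etc/lvm/backup are assumptions based on the device names seen here and must be verified on the real system.

```shell
# Dry-run sketch of the pvcreate/vgcfgrestore sequence. Set DRY_RUN=0 only
# once the UUID, VG name, and backup file have been double-checked.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

# UUID from the metadata dump in this thread; VG name and backup path
# are assumptions.
run pvcreate --uuid 5DZit9-6o5V-a1vu-1D1q-fnc0-syEj-kVwAnW \
    --restorefile /etc/lvm/backup/lvm-raid /dev/md0
run vgcfgrestore -f /etc/lvm/backup/lvm-raid lvm-raid
run vgchange -ay lvm-raid
```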
>On Fri, 18 Apr 2014 23:14:17 +0200,
>"L.M.J" <linuxmasterjedi at free.fr> wrote:
>
>> On Thu, 17 Apr 2014 15:33:48 -0400,
>> Stuart Gathman <stuart at gathman.org> wrote:
>>
>> Thanks for your answer
>>
>> > Fortunately, your fsck was read only. At this point, you need to
>> > crash/halt your system with no shutdown (to avoid further writes to
>> > the mounted filesystems).
>> > Then REMOVE the new drive. Start up again, and add the new drive
>> > properly.
>>
>> RAID5 reassembly: started with the 2 original drives
>>
>> ~# mdadm --assemble --force /dev/md0 /dev/sdc1 /dev/sdd1
>>
>> md0 status is normal, missing the new drive sdb1:
>> ~# cat /proc/mdstat
>> md0 : active raid5 sdc1[0] sdd1[1]
>> 3907023872 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
>>
>>
>> > You should check stuff out READ ONLY. You will need fsck (READ ONLY
>> > at first), and at least some data has been destroyed.
>> > If the data is really important, you need to copy the two old drives
>> > somewhere before you do ANYTHING else. Buy two more drives! That
>> > will let you recover from any more mistakes typing Create instead
>> > of Assemble or Manage. (Note that --assume-clean warns you that you
>> > really need to know what you are doing!)
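The "copy the two old drives first" advice can be done with plain dd (or GNU ddrescue if the drives themselves have read errors). A sketch, with example destination paths; each image needs as much free space as the member partition it copies:

```shell
# Sketch: image a RAID member before attempting any repair, as advised above.
image_member() {
    # $1 = source device (or file), $2 = destination image.
    # conv=noerror,sync continues past read errors and pads unreadable
    # blocks, so offsets in the image stay aligned with the source.
    dd if="$1" of="$2" bs=4M conv=noerror,sync
}

# Example paths (assumptions; /mnt/backup needs ~2 TB free per member):
# image_member /dev/sdc1 /mnt/backup/sdc1.img
# image_member /dev/sdd1 /mnt/backup/sdd1.img
```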
>>
>> I tried a read-only fsck:
>>
>> ~# fsck -n -v -f /dev/lvm-raid/lvmp3
>> fsck from util-linux-ng 2.17.2
>> e2fsck 1.41.11 (14-Mar-2010)
>> Resize inode not valid. Recreate? no
>> Pass 1: Checking inodes, blocks, and sizes
>> Inode 7, i_blocks is 114136, should be 8. Fix? no
>> Inode 786433 is in use, but has dtime set. Fix? no
>> Inode 786433 has imagic flag set. Clear? no
>> Inode 786433 has compression flag set on filesystem without
>> compression support. Clear? no
>> Inode 786433 has INDEX_FL flag set but is not a directory.
>> Clear HTree index? no
>> HTREE directory inode 786433 has an invalid root node.
>> Clear HTree index? no
>> Inode 786433, i_blocks is 4294967295, should be 0. Fix? no
>> [...]
>> Directory entry for '.' in ... (11) is big.
>> Split? no
>> Missing '.' in directory inode 262145.
>> Fix? no
>> Invalid inode number for '.' in directory inode 262145.
>> Fix? no
>> Directory entry for '.' in ... (262145) is big.
>> Split? no
>> Directory inode 917506, block #0, offset 0: directory corrupted
>> Salvage? no
>> e2fsck: aborted
>>
>>
>> Sounds bad. What should I do now?
>>
>> _______________________________________________
>> linux-lvm mailing list
>> linux-lvm at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-lvm
>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>
Help please :-(
--
May the open source be with you my young padawan.
Sent from my phone; please excuse the brevity.