[linux-lvm] Removing disk from raid LVM
Libor Klepáč
libor.klepac at bcom.cz
Thu Mar 12 21:32:30 UTC 2015
Hello John,
just a quick question; I'll respond to the rest later.
I tried to read data from one of the old LVs.
To be precise, I tried to read the rimage_* devices directly.
# dd if=vgPecDisk2-lvBackupPc_rimage_0 of=/mnt/tmp/0 bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.802423 s, 13.1 MB/s
# dd if=vgPecDisk2-lvBackupPc_rimage_1 of=/mnt/tmp/1 bs=10M count=1
dd: reading `vgPecDisk2-lvBackupPc_rimage_1': Input/output error
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00582503 s, 0.0 kB/s
# dd if=vgPecDisk2-lvBackupPc_rimage_2 of=/mnt/tmp/2 bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.110792 s, 94.6 MB/s
# dd if=vgPecDisk2-lvBackupPc_rimage_3 of=/mnt/tmp/3 bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.336518 s, 31.2 MB/s
As you can see, three parts are OK (and the output files do contain *some* data),
but one rimage is missing; there is a symlink to the dm-33 device node, but
reading it gives an I/O error.
Is there a way to kick this rimage out and use the three remaining rimages?
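For reference, the supported route for evicting a failed image from a raid LV is lvconvert --repair, which rebuilds the failed rimage/rmeta pair onto a spare PV. A minimal sketch, assuming a free PV exists in the VG (the guard just skips the command on systems without root or the lvm2 tools):

```shell
# Sketch only: lvconvert --repair replaces the failed rimage/rmeta
# sub-LVs of a raid LV with fresh copies rebuilt onto a spare PV.
if [ "$(id -u)" -eq 0 ] && command -v lvconvert >/dev/null 2>&1; then
    lvconvert --repair vgPecDisk2/lvBackupPc
    repair_status="attempted"
else
    repair_status="skipped"    # needs root and the lvm2 tools
fi
echo "repair: $repair_status"
```

Without a spare PV to rebuild onto, the repair has nothing to write the replacement image to, so freeing or adding a PV first would be the prerequisite.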
The LV was activated with:
# lvchange -ay --partial -v vgPecDisk2/lvBackupPc
Configuration setting "activation/thin_check_executable" unknown.
PARTIAL MODE. Incomplete logical volumes will be processed.
Using logical volume(s) on command line
Activating logical volume "lvBackupPc" exclusively.
activation/volume_list configuration setting not defined: Checking only host tags for vgPecDisk2/lvBackupPc
Loading vgPecDisk2-lvBackupPc_rmeta_0 table (253:29)
Suppressed vgPecDisk2-lvBackupPc_rmeta_0 (253:29) identical table reload.
Loading vgPecDisk2-lvBackupPc_rimage_0 table (253:30)
Suppressed vgPecDisk2-lvBackupPc_rimage_0 (253:30) identical table reload.
Loading vgPecDisk2-lvBackupPc_rmeta_1 table (253:33)
Suppressed vgPecDisk2-lvBackupPc_rmeta_1 (253:33) identical table reload.
Loading vgPecDisk2-lvBackupPc_rimage_1 table (253:34)
Suppressed vgPecDisk2-lvBackupPc_rimage_1 (253:34) identical table reload.
Loading vgPecDisk2-lvBackupPc_rmeta_2 table (253:35)
Suppressed vgPecDisk2-lvBackupPc_rmeta_2 (253:35) identical table reload.
Loading vgPecDisk2-lvBackupPc_rimage_2 table (253:36)
Suppressed vgPecDisk2-lvBackupPc_rimage_2 (253:36) identical table reload.
Loading vgPecDisk2-lvBackupPc_rmeta_3 table (253:37)
Suppressed vgPecDisk2-lvBackupPc_rmeta_3 (253:37) identical table reload.
Loading vgPecDisk2-lvBackupPc_rimage_3 table (253:108)
Suppressed vgPecDisk2-lvBackupPc_rimage_3 (253:108) identical table reload.
Loading vgPecDisk2-lvBackupPc table (253:109)
device-mapper: reload ioctl on failed: Invalid argument
# dmesg says:
[747203.140882] device-mapper: raid: Failed to read superblock of device at position 1
[747203.149219] device-mapper: raid: New device injected into existing array without 'rebuild' parameter specified
[747203.149906] device-mapper: table: 253:109: raid: Unable to assemble array: Invalid superblocks
[747203.150576] device-mapper: ioctl: error adding target to table
[747227.051339] device-mapper: raid: Failed to read superblock of device at position 1
[747227.062519] device-mapper: raid: New device injected into existing array without 'rebuild' parameter specified
[747227.063612] device-mapper: table: 253:109: raid: Unable to assemble array: Invalid superblocks
[747227.064667] device-mapper: ioctl: error adding target to table
[747308.206650] quiet_error: 62 callbacks suppressed
[747308.206652] Buffer I/O error on device dm-34, logical block 0
[747308.207383] Buffer I/O error on device dm-34, logical block 1
[747308.208069] Buffer I/O error on device dm-34, logical block 2
[747308.208736] Buffer I/O error on device dm-34, logical block 3
[747308.209383] Buffer I/O error on device dm-34, logical block 4
[747308.210020] Buffer I/O error on device dm-34, logical block 5
[747308.210647] Buffer I/O error on device dm-34, logical block 6
[747308.211262] Buffer I/O error on device dm-34, logical block 7
[747308.211868] Buffer I/O error on device dm-34, logical block 8
[747308.212464] Buffer I/O error on device dm-34, logical block 9
[747560.283263] quiet_error: 55 callbacks suppressed
[747560.283267] Buffer I/O error on device dm-34, logical block 0
[747560.284214] Buffer I/O error on device dm-34, logical block 1
[747560.285059] Buffer I/O error on device dm-34, logical block 2
[747560.285633] Buffer I/O error on device dm-34, logical block 3
[747560.286170] Buffer I/O error on device dm-34, logical block 4
[747560.286687] Buffer I/O error on device dm-34, logical block 5
[747560.287151] Buffer I/O error on device dm-34, logical block 6
Libor
On Thu 12 March 2015 13:20:07 John Stoffel wrote:
> Interesting, so maybe it is working, but from the info you've provided,
> it's hard to know what happened. I think it might be time to do some
> testing with loopback devices: set up four 100MB disks, put them into a
> VG, and create some LVs on top with the RAID5 setup. Then you can see
> what happens when you remove a disk, either with 'vgreduce' or by
> stopping the VG, removing a single PV, and restarting the VG.
>
> Thinking back on it, I suspect the problem was your vgcfgrestore. You
> really, really didn't want to do that, because you lied to the system.
> Instead of four data disks with good info, you now had three good disks
> and one blank disk. But you told LVM that the fourth disk was just fine,
> so it started to use it. So I bet that when you read from an LV, it
> tried to spread the load out and read from all four disks, so you'd get
> good, good, nothing, good data, which totally screwed things up.
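John's stale-parity point can be illustrated with a toy XOR example (plain byte values in shell arithmetic, not LVM's actual on-disk layout):

```shell
# Toy RAID5-style parity over three data values: parity = a XOR b XOR c.
a=65; b=66; c=67
parity=$(( a ^ b ^ c ))

# Losing one disk is fine: its data is recomputed from parity + survivors.
rebuilt=$(( parity ^ b ^ c ))
echo "rebuilt=$rebuilt"    # 65 again, i.e. 'a' was recovered

# But a blank replacement disk (all zeroes) injected without a rebuild
# makes reconstruction mix stale parity with garbage: the result is
# neither the old data nor anything meaningful.
blank=0
wrong=$(( parity ^ blank ^ c ))
echo "wrong=$wrong"        # 3, not 65
```

This is why the array has to rebuild (rewrite parity or data on the new disk) before the stripe contents can be trusted again.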
>
> Sometimes you were OK, I bet, because the parity data was on the bad
> disk, but other times it wasn't, so those LVs got corrupted because 1/3
> of their data was now garbage. You never let LVM rebuild the data by
> refreshing the new disk.
>
> Instead you probably should have done a vgreduce and then vgextend
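The loopback experiment John suggests above could be sketched like this (vgTest, lvTest, and the loop device numbers are made-up names; the guard skips everything on systems without root or the lvm2 tools):

```shell
run_demo() {
    # four 100 MB backing files, attached as loop devices
    for i in 0 1 2 3; do
        truncate -s 100M "/tmp/pv$i.img"
        losetup "/dev/loop1$i" "/tmp/pv$i.img"
    done
    vgcreate vgTest /dev/loop10 /dev/loop11 /dev/loop12 /dev/loop13
    # raid5 over four PVs: three data stripes plus rotating parity
    lvcreate --type raid5 -i 3 -L 120M -n lvTest vgTest
    # simulate losing one disk, then try a partial activation
    vgchange -an vgTest
    losetup -d /dev/loop11
    vgchange -ay --partial vgTest
}

if [ "$(id -u)" -eq 0 ] && command -v vgcreate >/dev/null 2>&1; then
    run_demo
    demo_status="ran"
else
    demo_status="skipped"    # needs root and the lvm2 tools
fi
echo "loopback demo: $demo_status"
```

Running the removal both ways (vgreduce versus detaching the PV while the VG is down) on this throwaway VG would show which path reproduces the "Invalid superblocks" failure without risking real data.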