[dm-devel] Improving dm-mirror as a final year project

Miklos Vajna vmiklos at ulx.hu
Thu Feb 17 10:49:26 UTC 2011


On Wed, Feb 16, 2011 at 06:12:19PM +0100, Miklos Vajna <vmiklos at ulx.hu> wrote:
> > The basic component that covers RAID456 is available upstream, as you  
> > saw.  I have an additional set of ~12 (reasonably small) patches that  
> > add RAID1 and superblock/bitmap support.  These patches are not yet  
> > upstream nor are they in any RHEL product.
> 
> Then what is the recommended platform to hack dm-raid? I have RHEL6 at
> the moment. Is it OK to try to cherry-pick the single commit from
> upstream + apply your patches or is it better to install rawhide where
> the kernel is already 2.6.38rc5 (as far as I see) and only apply your
> patches there?
>
> (...) 
> 
> > For convenience, I've attached the patches I'm working on (quilt  
> > directory) and the latest gime_raid.pl script.

I tried these on Fedora 15 with mixed results.

I prepared a kernel source tree:

----
$ wget http://download.fedora.redhat.com/pub/fedora/linux/development/15/source/SRPMS/kernel-2.6.38-0.rc4.git0.2.fc15.src.rpm
$ rpm -Uvh kernel-2.6.38-0.rc4.git0.2.fc15.src.rpm 2>&1 | grep -v mockb
$ cd ~/rpmbuild/SPECS
# yum install xmlto asciidoc elfutils-devel 'perl(ExtUtils::Embed)' # build-depends for kernel
$ rpmbuild -bp --target=`uname -m` kernel.spec 2>&1 | tee prep.log
----
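
(Instead of listing the build dependencies by hand, yum-builddep from yum-utils
can pull them in from the source RPM; a minimal alternative, assuming yum-utils
is installed:)

----
# yum install yum-utils
# yum-builddep kernel-2.6.38-0.rc4.git0.2.fc15.src.rpm
----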

Then continue the build manually:

----
$ cd ~/rpmbuild/BUILD/kernel-2.6.37.fc15/linux-2.6.37.i686
$ perl -p -i -e "s/^EXTRAVERSION.*/EXTRAVERSION = -0.rc4.git7.1.fc15.i686.PAE/" Makefile

$ cp /usr/src/kernels/2.6.38-0.rc4.git7.1.fc15.i686.PAE/Module.symvers .

$ make prepare
$ make modules_prepare
$ make M=drivers/md
# cp drivers/md/*.ko /lib/modules/2.6.38-0.rc4.git7.1.fc15.i686.PAE/extra/
# depmod -a
----
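
As a quick sanity check that the freshly built modules are the ones depmod will
pick up (plain modinfo, nothing patch-specific; this assumes depmod prefers
extra/ over the in-tree modules, which is what the cp above relies on):

----
$ modinfo -n raid456    # expect the path under .../extra/ after depmod
$ modinfo -n dm-raid
----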

Apply the patches:

----
$ git quiltimport --author "Jonathan Brassow <jbrassow at redhat.com>" --patches ~/dm-raid-patches/
----

Build, install and reload the modules, then try it:
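
(The rebuild, install and reload step just repeats the module build from above;
a minimal sketch, assuming nothing still holds the old modules so they can be
swapped out:)

----
$ make M=drivers/md
# cp drivers/md/*.ko /lib/modules/2.6.38-0.rc4.git7.1.fc15.i686.PAE/extra/
# depmod -a
# modprobe -r dm_raid raid456 raid1   # unload the stock copies, if loaded
# modprobe raid1
# modprobe raid456
# modprobe dm_raid
----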

----
# ~vmiklos/dm-raid-patches/gime_raid.pl raid4 /dev/sd[bcdef]1
RAID type    : raid4
Block devices: /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
Device size  : 770868BB
# ls /dev/mapper/
control  raid  vg_diploma1f-lv_root  vg_diploma1f-lv_swap
----

No hanging! (It hung before I applied your patches.)
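
(In case it is useful for comparing against your setup: the mapping can also be
inspected from userspace with the standard dmsetup queries; output omitted here:)

----
# dmsetup table raid     # echoes back the table line the script loaded
# dmsetup status raid    # target-specific status
----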

However, almost everything else fails. Here is what I tried:

(I reverted the virtual machine to a snapshot before each attempt.)
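
For reference, all the script does is pipe a device-mapper table line into
dmsetup; the raid1 attempt below boils down to the command shown here. My
reading of the dm-raid target arguments (so treat the field names as an
assumption): start sector, length in sectors, the "raid" target name, the raid
level, the number of extra raid parameters, the number of devices, then one
<metadata dev> <data dev> pair per device, with "-" meaning no metadata device.

----
# echo "0 192717 raid raid1 0 2 - /dev/sdb1 - /dev/sdc1" | dmsetup create raid
----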

Creating a raid1:

----
# ~vmiklos/dm-raid-patches/gime_raid.pl raid1 /dev/sd[bc]1
RAID type    : raid1
Block devices: /dev/sdb1 /dev/sdc1
Device size  : 192717BB

Message from syslogd@diploma1-f at Feb 17 11:19:57 ...
 kernel:[ 1395.143044] Oops: 0000 [#1] SMP

(...)

sh: line 1:  4209 Done                    echo 0 192717 raid raid1 0 2 - /dev/sdb1 - /dev/sdc1
      4210 Killed                  | dmsetup create raid
Failed to create "raid":
  0 192717 raid raid1 0 2 - /dev/sdb1 - /dev/sdc1
----

dmesg:

----
[ 1269.709043] bio: create slab <bio-1> at 1
[ 1269.733414] md/raid1:mdX: not clean -- starting background reconstruction
[ 1269.734568] md/raid1:mdX: active with 2 out of 2 mirrors
[ 1269.752416] BUG: unable to handle kernel NULL pointer dereference at 000001f4
[ 1269.753039] IP: [<c070c3d0>] md_integrity_register+0x29/0xdc
[ 1269.753039] *pdpt = 000000000c77c001 *pde = 000000000f5be067 *pte = 0000000000000000 
[ 1269.753039] Oops: 0000 [#1] SMP 
[ 1269.753039] last sysfs file: /sys/module/raid1/initstate
[ 1269.753039] Modules linked in: dm_raid raid1 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx sunrpc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables snd_ens1371 gameport snd_rawmidi snd_ac97_codec ac97_bus snd_seq snd_seq_device snd_pcm snd_timer ppdev vmw_balloon snd pcnet32 mii soundcore snd_page_alloc i2c_piix4 parport_pc i2c_core parport ipv6 mptspi mptscsih mptbase scsi_transport_spi [last unloaded: raid456]
[ 1269.753039] 
[ 1269.753039] Pid: 4227, comm: dmsetup Not tainted 2.6.38-0.rc4.git7.1.fc15.i686.PAE #1 440BX Desktop Reference Platform/VMware Virtual Platform
[ 1269.753039] EIP: 0060:[<c070c3d0>] EFLAGS: 00010246 CPU: 0
[ 1269.753039] EIP is at md_integrity_register+0x29/0xdc
[ 1269.753039] EAX: 00000000 EBX: cf8dd398 ECX: 00000000 EDX: 00000000
[ 1269.753039] ESI: cf8dd00c EDI: 00000000 EBP: cc79dd4c ESP: cc79dd34
[ 1269.753039]  DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
[ 1269.753039] Process dmsetup (pid: 4227, ti=cc79c000 task=cf8e25b0 task.ti=cc79c000)
[ 1269.753039] Stack:
[ 1269.753039]  cc780f68 cc780f68 cf8dd01c cf8dd00c cc780f68 0000000c cc79dd74 d10b7d46
[ 1269.753039]  d10ba952 d10ba775 00000002 00000002 000000e0 cf8dd01c cf8dd00c cf8dd01c
[ 1269.753039]  cc79ddfc c070f8cf 00000001 c098c29a 00000001 00000021 c040b453 cf8dd394
[ 1269.753039] Call Trace:
[ 1269.753039]  [<d10b7d46>] run+0x24c/0x256 [raid1]
[ 1269.753039]  [<c070f8cf>] md_run+0x4f0/0x7ab
[ 1269.753039]  [<c040b453>] ? do_softirq+0x8c/0x92
[ 1269.753039]  [<c0420d64>] ? smp_apic_timer_interrupt+0x6b/0x78
[ 1269.753039]  [<c0435ad9>] ? __might_sleep+0x29/0xe4
[ 1269.753039]  [<c042f5a4>] ? should_resched+0xd/0x27
[ 1269.753039]  [<c07e603e>] ? _cond_resched+0xd/0x21
[ 1269.753039]  [<d10f5ff7>] raid_ctr+0x8b4/0x8d4 [dm_raid]
[ 1269.753039]  [<c07177f5>] dm_table_add_target+0x165/0x202
[ 1269.753039]  [<c04d4f7f>] ? vfree+0x25/0x27
[ 1269.753039]  [<c071715b>] ? alloc_targets+0x8c/0xb1
[ 1269.753039]  [<c071a05c>] table_load+0x233/0x242
[ 1269.753039]  [<c0719aa8>] dm_ctl_ioctl+0x1af/0x1ed
[ 1269.753039]  [<c043474f>] ? pick_next_task_fair+0x85/0x8d
[ 1269.753039]  [<c0719e29>] ? table_load+0x0/0x242
[ 1269.753039]  [<c07198f9>] ? dm_ctl_ioctl+0x0/0x1ed
[ 1269.753039]  [<c04f9c4f>] do_vfs_ioctl+0x451/0x482
[ 1269.753039]  [<c0491ca0>] ? __call_rcu+0xdb/0xe1
[ 1269.753039]  [<c0501713>] ? mntput_no_expire+0x28/0xbd
[ 1269.753039]  [<c05017c6>] ? mntput+0x1e/0x20
[ 1269.753039]  [<c04f4f4f>] ? path_put+0x1a/0x1d
[ 1269.753039]  [<c04f9cc7>] sys_ioctl+0x47/0x60
[ 1269.753039]  [<c040969f>] sysenter_do_call+0x12/0x28
[ 1269.753039] Code: 5d c3 55 89 e5 57 56 53 83 ec 0c 3e 8d 74 26 00 89 c6 8b 5e 10 8d 40 10 89 45 f0 31 c0 3b 5d f0 0f 84 b0 00 00 00 8b 56 30 31 ff <83> ba f4 01 00 00 00 0f 85 9e 00 00 00 eb 38 8b 43 6c a8 02 75 
[ 1269.753039] EIP: [<c070c3d0>] md_integrity_register+0x29/0xdc SS:ESP 0068:cc79dd34
[ 1269.753039] CR2: 00000000000001f4
[ 1269.898378] ---[ end trace 49f34abab1d4a1b8 ]---
----

Creating and then deleting a raid4:

----
root@diploma1-f:~# ~vmiklos/dm-raid-patches/gime_raid.pl raid4 /dev/sd[bcdef]1
RAID type    : raid4
Block devices: /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
Device size  : 770868BB

root@diploma1-f:~# dmsetup remove raid
Killed
----

dmesg:

----
[ 1289.594670] bio: create slab <bio-1> at 1
[ 1289.617603] md/raid:mdX: not clean -- starting background reconstruction
[ 1289.623386] md/raid:mdX: device sdf1 operational as raid disk 4
[ 1289.634279] md/raid:mdX: device sde1 operational as raid disk 3
[ 1289.634889] md/raid:mdX: device sdd1 operational as raid disk 2
[ 1289.635479] md/raid:mdX: device sdc1 operational as raid disk 1
[ 1289.636071] md/raid:mdX: device sdb1 operational as raid disk 0
[ 1289.661945] md/raid:mdX: allocated 5265kB
[ 1289.663995] md/raid:mdX: raid level 5 active with 5 out of 5 devices, algorithm 4
[ 1289.665539] RAID conf printout:
[ 1289.665630]  --- level:5 rd:5 wd:5
[ 1289.665742]  disk 0, o:1, dev:sdb1
[ 1289.665759]  disk 1, o:1, dev:sdc1
[ 1289.665765]  disk 2, o:1, dev:sdd1
[ 1289.665770]  disk 3, o:1, dev:sde1
[ 1289.665776]  disk 4, o:1, dev:sdf1
[ 1289.685823] md: resync of RAID array mdX
[ 1289.686486] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[ 1289.687117] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[ 1289.701594] md: using 128k window, over a total of 96256 blocks.
[ 1289.892256] attempt to access beyond end of device
[ 1289.893552] sdc1: rw=0, want=193160, limit=192717
[ 1289.895278] attempt to access beyond end of device
[ 1289.895768] sdf1: rw=0, want=193160, limit=192717
[ 1289.896795] attempt to access beyond end of device
[ 1289.897354] sde1: rw=0, want=193160, limit=192717
[ 1289.897991] attempt to access beyond end of device
[ 1289.898565] sdd1: rw=0, want=193160, limit=192717
[ 1289.899524] attempt to access beyond end of device
[ 1289.900051] sdb1: rw=0, want=193160, limit=192717
[ 1289.901523] md/raid:mdX: Disk failure on sdf1, disabling device.
[ 1289.901528] md/raid:mdX: Operation continuing on 4 devices.
[ 1289.909029] md/raid:mdX: Disk failure on sde1, disabling device.
[ 1289.909033] md/raid:mdX: Operation continuing on 3 devices.
[ 1289.917651] md/raid:mdX: Disk failure on sdd1, disabling device.
[ 1289.917656] md/raid:mdX: Operation continuing on 2 devices.
[ 1289.918748] md/raid:mdX: Disk failure on sdc1, disabling device.
[ 1289.918752] md/raid:mdX: Operation continuing on 1 devices.
[ 1289.933445] md/raid:mdX: Disk failure on sdb1, disabling device.
[ 1289.933449] md/raid:mdX: Operation continuing on 0 devices.
[ 1289.936280] Buffer I/O error on device dm-2, logical block 192673
[ 1289.937184] Buffer I/O error on device dm-2, logical block 192672
[ 1289.954961] Buffer I/O error on device dm-2, logical block 192673
[ 1289.955619] Buffer I/O error on device dm-2, logical block 192672
[ 1289.956650] Buffer I/O error on device dm-2, logical block 192712
[ 1289.960572] Buffer I/O error on device dm-2, logical block 192713
[ 1289.961338] Buffer I/O error on device dm-2, logical block 192712
[ 1289.962174] Buffer I/O error on device dm-2, logical block 192713
[ 1289.963016] Buffer I/O error on device dm-2, logical block 0
[ 1289.977325] Buffer I/O error on device dm-2, logical block 1
[ 1290.812860] md: mdX: resync done.
[ 1290.813983] md: checkpointing resync of mdX.
[ 1290.822997] md: resync of RAID array mdX
[ 1290.831231] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[ 1290.832595] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[ 1290.833594] md: using 128k window, over a total of 96256 blocks.
[ 1290.834244] md: resuming resync of mdX from checkpoint.
[ 1290.834877] md: mdX: resync done.
[ 1290.850001] RAID conf printout:
[ 1290.850062]  --- level:5 rd:5 wd:0
[ 1290.850070]  disk 0, o:0, dev:sdb1
[ 1290.850076]  disk 1, o:0, dev:sdc1
[ 1290.850081]  disk 2, o:0, dev:sdd1
[ 1290.850086]  disk 3, o:0, dev:sde1
[ 1290.850091]  disk 4, o:0, dev:sdf1
[ 1293.215726] BUG: sleeping function called from invalid context at arch/x86/mm/fault.c:1081
[ 1293.216017] in_atomic(): 0, irqs_disabled(): 1, pid: 4250, name: dmsetup
[ 1293.216017] Pid: 4250, comm: dmsetup Not tainted 2.6.38-0.rc4.git7.1.fc15.i686.PAE #1
[ 1293.216017] Call Trace:
[ 1293.216017]  [<c0435b8d>] ? __might_sleep+0xdd/0xe4
[ 1293.216017]  [<c07ea191>] ? do_page_fault+0x179/0x30c
[ 1293.216017]  [<c0465112>] ? tick_dev_program_event+0x2f/0x137
[ 1293.216017]  [<c04e0b72>] ? kmem_cache_free+0x67/0x94
[ 1293.216017]  [<c0535dfd>] ? release_sysfs_dirent+0x79/0x8c
[ 1293.216017]  [<c07ea018>] ? do_page_fault+0x0/0x30c
[ 1293.216017]  [<c07e7d97>] ? error_code+0x67/0x6c
[ 1293.216017]  [<c07e726b>] ? _raw_spin_lock_irqsave+0x15/0x27
[ 1293.216017]  [<c05c7d92>] ? __disk_unblock_events+0x23/0x9e
[ 1293.216017]  [<c042f5a4>] ? should_resched+0xd/0x27
[ 1293.216017]  [<c05c91bc>] ? disk_unblock_events+0x1b/0x1d
[ 1293.216017]  [<c0511052>] ? blkdev_put+0xbb/0xe7
[ 1293.216017]  [<c0716f35>] ? close_dev+0x30/0x3a
[ 1293.216017]  [<c0716f61>] ? dm_put_device+0x22/0x33
[ 1293.216017]  [<d10f56e7>] ? context_free+0x58/0x73 [dm_raid]
[ 1293.216017]  [<d10f5740>] ? raid_dtr+0x3e/0x41 [dm_raid]
[ 1293.216017]  [<c0717550>] ? dm_table_destroy+0x5b/0xcf
[ 1293.216017]  [<c0469a28>] ? arch_local_irq_save+0x12/0x17
[ 1293.216017]  [<c0715ae3>] ? __dm_destroy+0xfa/0x1c2
[ 1293.216017]  [<c0716412>] ? dm_destroy+0x12/0x14
[ 1293.216017]  [<c071a146>] ? dev_remove+0xdb/0xe5
[ 1293.216017]  [<c05dae00>] ? copy_to_user+0x12/0x4b
[ 1293.216017]  [<c0719aa8>] ? dm_ctl_ioctl+0x1af/0x1ed
[ 1293.216017]  [<c058fae7>] ? newary+0x10a/0x11c
[ 1293.216017]  [<c071a06b>] ? dev_remove+0x0/0xe5
[ 1293.216017]  [<c07198f9>] ? dm_ctl_ioctl+0x0/0x1ed
[ 1293.216017]  [<c04f9c4f>] ? do_vfs_ioctl+0x451/0x482
[ 1293.216017]  [<c0491ca0>] ? __call_rcu+0xdb/0xe1
[ 1293.216017]  [<c0501713>] ? mntput_no_expire+0x28/0xbd
[ 1293.216017]  [<c05017c6>] ? mntput+0x1e/0x20
[ 1293.216017]  [<c04f4f4f>] ? path_put+0x1a/0x1d
[ 1293.216017]  [<c04f9cc7>] ? sys_ioctl+0x47/0x60
[ 1293.216017]  [<c040969f>] ? sysenter_do_call+0x12/0x28
[ 1293.216017] BUG: unable to handle kernel NULL pointer dereference at 0000080c
[ 1293.216017] IP: [<c07e726b>] _raw_spin_lock_irqsave+0x15/0x27
[ 1293.216017] *pdpt = 000000000f7cb001 *pde = 000000000c661067 *pte = 0000000000000000 
[ 1293.216017] Oops: 0002 [#1] SMP 
[ 1293.216017] last sysfs file: /sys/devices/virtual/block/dm-2/range
[ 1293.216017] Modules linked in: dm_raid raid1 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx sunrpc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables snd_ens1371 gameport snd_rawmidi snd_ac97_codec ac97_bus snd_seq snd_seq_device snd_pcm snd_timer ppdev vmw_balloon snd pcnet32 mii soundcore snd_page_alloc i2c_piix4 parport_pc i2c_core parport ipv6 mptspi mptscsih mptbase scsi_transport_spi [last unloaded: raid456]
[ 1293.216017] 
[ 1293.216017] Pid: 4250, comm: dmsetup Not tainted 2.6.38-0.rc4.git7.1.fc15.i686.PAE #1 440BX Desktop Reference Platform/VMware Virtual Platform
[ 1293.216017] EIP: 0060:[<c07e726b>] EFLAGS: 00010082 CPU: 0
[ 1293.216017] EIP is at _raw_spin_lock_irqsave+0x15/0x27
[ 1293.216017] EAX: 00000282 EBX: 0000080c ECX: 00000083 EDX: 00000100
[ 1293.216017] ESI: cf98be00 EDI: 00000001 EBP: cf5b9dfc ESP: cf5b9df8
[ 1293.216017]  DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
[ 1293.216017] Process dmsetup (pid: 4250, ti=cf5b8000 task=cfa77110 task.ti=cf5b8000)
[ 1293.216017] Stack:
[ 1293.216017]  00000800 cf5b9e18 c05c7d92 c042f5a4 0000080c cbc31400 cbc31410 00000083
[ 1293.216017]  cf5b9e20 c05c91bc cf5b9e34 c0511052 cda488c0 cccc5000 00000000 cf5b9e40
[ 1293.216017]  c0716f35 cda488c0 cf5b9e4c c0716f61 cccc5000 cf5b9e60 d10f56e7 cccc5000
[ 1293.216017] Call Trace:
[ 1293.216017]  [<c05c7d92>] __disk_unblock_events+0x23/0x9e
[ 1293.216017]  [<c042f5a4>] ? should_resched+0xd/0x27
[ 1293.216017]  [<c05c91bc>] disk_unblock_events+0x1b/0x1d
[ 1293.216017]  [<c0511052>] blkdev_put+0xbb/0xe7
[ 1293.216017]  [<c0716f35>] close_dev+0x30/0x3a
[ 1293.216017]  [<c0716f61>] dm_put_device+0x22/0x33
[ 1293.216017]  [<d10f56e7>] context_free+0x58/0x73 [dm_raid]
[ 1293.216017]  [<d10f5740>] raid_dtr+0x3e/0x41 [dm_raid]
[ 1293.216017]  [<c0717550>] dm_table_destroy+0x5b/0xcf
[ 1293.216017]  [<c0469a28>] ? arch_local_irq_save+0x12/0x17
[ 1293.216017]  [<c0715ae3>] __dm_destroy+0xfa/0x1c2
[ 1293.216017]  [<c0716412>] dm_destroy+0x12/0x14
[ 1293.216017]  [<c071a146>] dev_remove+0xdb/0xe5
[ 1293.216017]  [<c05dae00>] ? copy_to_user+0x12/0x4b
[ 1293.216017]  [<c0719aa8>] dm_ctl_ioctl+0x1af/0x1ed
[ 1293.216017]  [<c058fae7>] ? newary+0x10a/0x11c
[ 1293.216017]  [<c071a06b>] ? dev_remove+0x0/0xe5
[ 1293.216017]  [<c07198f9>] ? dm_ctl_ioctl+0x0/0x1ed
[ 1293.216017]  [<c04f9c4f>] do_vfs_ioctl+0x451/0x482
[ 1293.216017]  [<c0491ca0>] ? __call_rcu+0xdb/0xe1
[ 1293.216017]  [<c0501713>] ? mntput_no_expire+0x28/0xbd
[ 1293.216017]  [<c05017c6>] ? mntput+0x1e/0x20
[ 1293.216017]  [<c04f4f4f>] ? path_put+0x1a/0x1d
[ 1293.216017]  [<c04f9cc7>] sys_ioctl+0x47/0x60
[ 1293.216017]  [<c040969f>] sysenter_do_call+0x12/0x28
[ 1293.216017] Code: 0f 95 c0 0f b6 c0 c3 55 89 e5 3e 8d 74 26 00 e8 06 28 c8 ff 5d c3 55 89 e5 53 3e 8d 74 26 00 89 c3 e8 b0 27 c8 ff ba 00 01 00 00 <3e> 66 0f c1 13 38 f2 74 06 f3 90 8a 13 eb f6 5b 5d c3 55 89 e5 
[ 1293.216017] EIP: [<c07e726b>] _raw_spin_lock_irqsave+0x15/0x27 SS:ESP 0068:cf5b9df8
[ 1293.216017] CR2: 000000000000080c
[ 1293.216017] ---[ end trace 49f34abab1d4a1b8 ]---
----

Creating a raid4 and creating a filesystem on it:

----
root@diploma1-f:~# ~vmiklos/dm-raid-patches/gime_raid.pl raid4 /dev/sd[bcdef]1
RAID type    : raid4
Block devices: /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
Device size  : 770868BB

root@diploma1-f:~# mkfs.ext4 /dev/mapper/raid
mke2fs 1.41.14 (22-Dec-2010)
----

and it hangs here.
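
If it helps, next time it hangs I can also grab a blocked-task dump via sysrq
(standard kernel facility, nothing specific to the patches), which should show
where mkfs and the md sync thread are stuck:

----
# echo 1 > /proc/sys/kernel/sysrq    # make sure sysrq is enabled
# echo w > /proc/sysrq-trigger       # dump blocked (D-state) tasks to dmesg
----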

dmesg:

----
[ 1270.954599] bio: create slab <bio-1> at 1
[ 1270.963550] md/raid:mdX: not clean -- starting background reconstruction
[ 1270.987463] md/raid:mdX: device sdf1 operational as raid disk 4
[ 1270.989895] md/raid:mdX: device sde1 operational as raid disk 3
[ 1270.990513] md/raid:mdX: device sdd1 operational as raid disk 2
[ 1270.991101] md/raid:mdX: device sdc1 operational as raid disk 1
[ 1270.992075] md/raid:mdX: device sdb1 operational as raid disk 0
[ 1271.028680] md/raid:mdX: allocated 5265kB
[ 1271.047135] md/raid:mdX: raid level 5 active with 5 out of 5 devices, algorithm 4
[ 1271.048513] RAID conf printout:
[ 1271.048604]  --- level:5 rd:5 wd:5
[ 1271.048749]  disk 0, o:1, dev:sdb1
[ 1271.048768]  disk 1, o:1, dev:sdc1
[ 1271.048773]  disk 2, o:1, dev:sdd1
[ 1271.048779]  disk 3, o:1, dev:sde1
[ 1271.048784]  disk 4, o:1, dev:sdf1
[ 1271.079401] md: resync of RAID array mdX
[ 1271.080482] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[ 1271.081111] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[ 1271.082639] md: using 128k window, over a total of 96256 blocks.
[ 1271.261775] attempt to access beyond end of device
[ 1271.262542] sdc1: rw=0, want=193160, limit=192717
[ 1271.263924] attempt to access beyond end of device
[ 1271.264467] sdf1: rw=0, want=193160, limit=192717
[ 1271.265172] attempt to access beyond end of device
[ 1271.265747] sde1: rw=0, want=193160, limit=192717
[ 1271.266436] attempt to access beyond end of device
[ 1271.266946] sdd1: rw=0, want=193160, limit=192717
[ 1271.267872] attempt to access beyond end of device
[ 1271.281551] sdb1: rw=0, want=193160, limit=192717
[ 1271.282992] md/raid:mdX: Disk failure on sdf1, disabling device.
[ 1271.282996] md/raid:mdX: Operation continuing on 4 devices.
[ 1271.284807] md/raid:mdX: Disk failure on sde1, disabling device.
[ 1271.284811] md/raid:mdX: Operation continuing on 3 devices.
[ 1271.285962] md/raid:mdX: Disk failure on sdd1, disabling device.
[ 1271.285966] md/raid:mdX: Operation continuing on 2 devices.
[ 1271.300746] md/raid:mdX: Disk failure on sdc1, disabling device.
[ 1271.300750] md/raid:mdX: Operation continuing on 1 devices.
[ 1271.302077] md/raid:mdX: Disk failure on sdb1, disabling device.
[ 1271.302081] md/raid:mdX: Operation continuing on 0 devices.
[ 1271.322206] Buffer I/O error on device dm-2, logical block 192673
[ 1271.323102] Buffer I/O error on device dm-2, logical block 192672
[ 1271.326465] Buffer I/O error on device dm-2, logical block 192672
[ 1271.338964] Buffer I/O error on device dm-2, logical block 192673
[ 1271.339820] Buffer I/O error on device dm-2, logical block 192712
[ 1271.340476] Buffer I/O error on device dm-2, logical block 192713
[ 1271.341118] Buffer I/O error on device dm-2, logical block 192712
[ 1271.341700] Buffer I/O error on device dm-2, logical block 192713
[ 1271.356296] Buffer I/O error on device dm-2, logical block 0
[ 1271.356867] Buffer I/O error on device dm-2, logical block 1
[ 1272.188477] md: mdX: resync done.
[ 1272.189634] md: checkpointing resync of mdX.
[ 1272.192828] md: resync of RAID array mdX
[ 1272.193363] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[ 1272.194090] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[ 1272.195083] md: using 128k window, over a total of 96256 blocks.
[ 1272.195723] md: resuming resync of mdX from checkpoint.
[ 1272.209475] md: mdX: resync done.
[ 1272.210697] RAID conf printout:
[ 1272.210709]  --- level:5 rd:5 wd:0
[ 1272.210716]  disk 0, o:0, dev:sdb1
[ 1272.210721]  disk 1, o:0, dev:sdc1
[ 1272.210726]  disk 2, o:0, dev:sdd1
[ 1272.210731]  disk 3, o:0, dev:sde1
[ 1272.210736]  disk 4, o:0, dev:sdf1
----

Are these failures expected, or does the above work for you?

Thanks.



