Failed Drives in Software RAID
Shane Archer
shane-fedora at rctech.net
Tue Jan 18 21:10:30 UTC 2005
Hi all,
I am trying to set up a software RAID5 array on FC3, and I'm having a heckuva time.
The computer is configured like this:
/boot and / are on a drive that is plugged into the motherboard's SATA0 port.
Three 160GB WD drives are plugged into a Promise SATA150TX2 controller.
The motherboard is an Intel SE7520AF2 server board.
The problem is, whenever I go to create the array, one drive always gets
reported as failed. I have done repeated surface scans on the drives (using
both fsck and badblocks) and I am quite certain the drives are alright. On
top of that, when the array starts building, the system eventually locks
up, without fail.
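For reference, the badblocks scans were non-destructive read-only passes along these lines (device names assume the Promise drives show up as sdb through sdd; the smartctl check is an extra step I could also try, assuming smartmontools is installed):

```shell
# Read-only surface scan of one drive; -s shows progress, -v reports
# any unreadable blocks. Run as root, repeat for sdc and sdd.
badblocks -sv /dev/sdb

# Quick SMART health summary as a second opinion (smartmontools package).
smartctl -H /dev/sdb
```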
So the question becomes, where might I best troubleshoot this? Could it be
a software problem? A hardware problem? Has anybody had problems with the
Promise card in FC3?
Here's the mdadm command I am using:
# mdadm -Cv /dev/md0 -l5 -n3 -c128 /dev/sdb /dev/sdc /dev/sdd
And here's the log after running it:
Jan 17 21:57:47 xtreme kernel: md: bind<sdb>
Jan 17 21:57:47 xtreme kernel: md: bind<sdc>
Jan 17 21:57:47 xtreme kernel: md: bind<sdd>
Jan 17 21:57:47 xtreme kernel: raid5: automatically using best checksumming function: pIII_sse
Jan 17 21:57:47 xtreme kernel: pIII_sse : 2896.000 MB/sec
Jan 17 21:57:47 xtreme kernel: raid5: using function: pIII_sse (2896.000 MB/sec)
Jan 17 21:57:47 xtreme kernel: md: raid5 personality registered as nr 4
Jan 17 21:57:47 xtreme kernel: raid5: device sdc operational as raid disk 1
Jan 17 21:57:47 xtreme kernel: raid5: device sdb operational as raid disk 0
Jan 17 21:57:47 xtreme kernel: raid5: allocated 3162kB for md0
Jan 17 21:57:47 xtreme kernel: raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 2
Jan 17 21:57:47 xtreme kernel: RAID5 conf printout:
Jan 17 21:57:47 xtreme kernel: --- rd:3 wd:2 fd:1
Jan 17 21:57:47 xtreme kernel: disk 0, o:1, dev:sdb
Jan 17 21:57:47 xtreme kernel: disk 1, o:1, dev:sdc
Jan 17 21:57:47 xtreme kernel: RAID5 conf printout:
Jan 17 21:57:47 xtreme kernel: --- rd:3 wd:2 fd:1
Jan 17 21:57:47 xtreme kernel: disk 0, o:1, dev:sdb
Jan 17 21:57:47 xtreme kernel: disk 1, o:1, dev:sdc
Jan 17 21:57:47 xtreme kernel: disk 2, o:1, dev:sdd
Jan 17 21:57:47 xtreme kernel: md: syncing RAID array md0
Jan 17 21:57:47 xtreme kernel: md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
Jan 17 21:57:47 xtreme kernel: md: using maximum available idle IO bandwith (but not more than 200000 KB/sec) for reconstruction.
Jan 17 21:57:47 xtreme kernel: md: using 128k window, over a total of 156249856 blocks.
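For completeness, the build progress and per-device state can be watched while it syncs, which should at least show how far it gets before the lockup hits:

```shell
# Overall array and resync status as seen by the md driver.
cat /proc/mdstat

# Detailed per-device state of the new array.
mdadm -D /dev/md0
```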
And here's the output from mdadm -E afterwards:
# mdadm -E /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 00.90.01
UUID : 43e05773:9c581b8a:23c8cdb0:d71025ac
Creation Time : Mon Jan 17 21:57:47 2005
Raid Level : raid5
Device Size : 156249856 (149.01 GiB 160.00 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Update Time : Mon Jan 17 21:57:47 2005
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 1
Spare Devices : 1
Checksum : 11683037 - correct
Events : 0.1
Layout : left-symmetric
Chunk Size : 128K
      Number   Major   Minor   RaidDevice   State
this       0       8      16            0   active sync   /dev/sdb
           0       0       8      16    0   active sync   /dev/sdb
           1       1       8      32    1   active sync   /dev/sdc
           2       2       0       0    2   faulty removed
           3       3       8      48    3   spare         /dev/sdd
From the output above, it would appear that /dev/sdd is the dead drive.
However, if I unplug that drive, reboot the system, and run the same mdadm
-Cv command with only two drives, it still reports 1 working, 1 failed.
So...hardware problem?
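One thing I still plan to try is wiping any leftover superblocks from earlier create attempts before re-running the command, in case stale metadata is confusing mdadm (warning: this destroys the md metadata on each listed drive):

```shell
# Clear old md superblocks left behind by previous create attempts.
mdadm --zero-superblock /dev/sdb
mdadm --zero-superblock /dev/sdc
mdadm --zero-superblock /dev/sdd
```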
Thanks for any help,
Shane
More information about the fedora-list mailing list