FC4 RAID5 failed

毛睿 maorui2k at 163.com
Sat Jan 28 08:05:53 UTC 2006


I got a strange problem with FC4 software RAID5.
I have two RAID5 arrays in my FC4 box. One contains 6 partitions (/dev/hd[cdefgh]1), the other contains 8 partitions (/dev/hd[ab]3 + /dev/hd[cdefgh]2). Both worked fine before.
After I replaced one failed disk, a strange problem appeared. I removed the failed disk and added the new one. The resync went fine and finished after a few hours, and /proc/mdstat looked normal. But after I rebooted the box, both RAID5 arrays were in degraded mode again: the new disk had been kicked out. I never ran into this with RH9. I have tried many times, with the same result every time. I can manually stop and start the arrays, and the superblocks and /proc/mdstat all look fine, but whenever I reboot, the new disk is kicked out again. I can guarantee the new disk is good. In /var/log/messages I don't see any error during shutdown, and during boot the RAID startup doesn't even touch the replaced disk.
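For reference, this is roughly the sequence I used with mdadm to replace the disk and check the arrays afterwards. The device names below are only an example (md0 standing in for one of my arrays and hdh1 for the replaced partition):

    # remove the failed member, then add the replacement partition
    mdadm --manage /dev/md0 --remove /dev/hdh1
    mdadm --manage /dev/md0 --add /dev/hdh1

    # watch the resync and confirm the array state afterwards
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # inspect the superblock written on the new partition
    mdadm --examine /dev/hdh1

Everything above reports a clean, fully synced array right up until the reboot.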
 
Has anybody ever run into the same problem?

