Revisit of U320 and U160 SCSI bus problems with FC3t2.91

Bill Cronk ngc4013 at cox.net
Mon Oct 11 00:59:42 UTC 2004


I have consistently had problems booting systems that have a JBOD RAID 
box and FC2 or FC3t2.91. The problem is always the same, and the 
workaround I figured out is only temporary.

The servers are Tyan dual Athlon 2200+ to 2800+ MP motherboards. A 
couple of models have the U160 and a couple have the U320 SCSI bus on 
board. In all cases the installation boots fine without the RAID 
configured. I found that I could boot the boxes, then configure RAID0 
or RAID5, and everything would work very well until I performed a 
reboot. On rebooting, the boot cycle drops into single-user mode with 
errors relating to the RAID configuration in the /etc/raidtab file. 
Mind you, the configuration itself works just fine, since I already had 
it running when I first set it up.
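
For anyone unfamiliar with raidtools, a minimal /etc/raidtab for a 
three-disk RAID5 looks roughly like this (the device names here are 
placeholders, not the actual layout on these boxes):

raiddev /dev/md0
        raid-level              5
        nr-raid-disks           3
        persistent-superblock   1
        chunk-size              64
        device                  /dev/sdb1
        raid-disk               0
        device                  /dev/sdc1
        raid-disk               1
        device                  /dev/sdd1
        raid-disk               2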

So when I set out to install FC3t2.91, I figured the same issues might 
not be present, since there is now mdadm to use instead of the old 
raidtools. Again I installed FC3t2.91 (BTW, in all cases the FC 
installs are full installs), then set out to set up the RAID. Once I 
figured out what was needed to properly configure the /etc/mdadm.conf 
file and set up the RAID5 configuration I wished to use, I started the 
RAID and it ran fine, as expected. The RAID was accessible and 
writable, and I even did an NFS export which could be accessed by other 
computers. Then I rebooted, and the same type of failure occurred, 
dropping me into single-user mode stating that there was a problem with 
the /dev/md0 configuration.
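
For anyone wanting to reproduce this, a setup along these lines should 
be enough (the device names are examples, not my exact configuration):

mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1

with an /etc/mdadm.conf such as:

DEVICE /dev/sd[bcd]1
ARRAY /dev/md0 level=raid5 num-devices=3 devices=/dev/sdb1,/dev/sdc1,/dev/sdd1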

My workaround:

What I discovered I could do was simple, but it is not really a fix for 
the problem. I found that if I removed the mount point from the 
/etc/fstab file:

/dev/md0  /export/db-f8017_raid5  reiserfs  defaults  0 0

and instead created a script file in /root containing the following 
four lines:

mdadm -A /dev/md0
mdadm --detail /dev/md0
mount -t reiserfs /dev/md0 /export/db-f8017_raid5
df

The first line is close, but may be missing something, since the actual 
files are on my computers at work. The second line is just a test 
reference point showing that the RAID is active. The third line is my 
mount, while the fourth shows the active mounts.

The script uses the same /etc/mdadm.conf file, and the mount point is 
exactly the way it was originally written in /etc/fstab. The system 
boots flawlessly, and once logged in I can execute the script; the RAID 
comes online and works perfectly.
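
A slightly cleaner variant of the same workaround (not something I have 
tested, just a thought) would be to leave the entry in /etc/fstab but 
mark it noauto so the boot cycle skips it, then assemble and mount by 
name from the script:

/dev/md0  /export/db-f8017_raid5  reiserfs  noauto  0 0

mdadm -A /dev/md0
mount /export/db-f8017_raid5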

So now I ask the questions... Why? Any further info needed? Can someone 
else set up a RAID box and recreate the problem, to confirm that there 
are issues in FC2 and FC3 with RAID during the boot cycle? So far it 
happens with 144GB, 324GB, 657GB, and 2TB RAID boxes configured as 
either RAID0 or RAID5, on internal U160 or U320 SCSI ports and on an 
add-on U320 Adaptec card.

I think there is an ordering issue in the boot cycle: the system tries 
to mount the RAID before the RAID configuration has set the array up.
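
One way to test that theory (this is a guess, not something I have 
confirmed) would be to make sure the boot-time code can identify the 
arrays on its own, for example by regenerating the ARRAY lines from the 
on-disk superblocks and then checking what the kernel sees:

mdadm --examine --scan >> /etc/mdadm.conf
cat /proc/mdstat

It may also be worth checking whether the partitions are type fd (Linux 
raid autodetect) in fdisk, since the kernel only auto-assembles arrays 
at boot when they are marked that way.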

Bill



