
scary RAID-1 problems



    I had a close call with my Promise SATA RAID-1 drives today.
The machine had been stable for several weeks, configured
with software RAID-1 across two identical SATA drives. Both drives
have RAID-1 partitions for /var, /boot, /, swap and /home.
Earlier this week I followed the instructions at...

http://www.dirigo.net/tuxTips/avoidingProblems/GrubMdMbr.php

to allow the second SATA drive to be bootable if the first
drive failed. I followed the instructions...

device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit

within grub. The machine seemed fine through several reboots.
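For anyone following along, here is my (possibly imperfect) understanding
of what those four grub-shell commands do, restated with comments:

```
# Temporarily remap BIOS drive hd0 to the second SATA drive, so the
# stage1 written below will look for stage2 on that same drive when
# it ends up booting as the first disk.
device (hd0) /dev/sdb
# Point grub at the first partition (/boot) on the remapped drive.
root (hd0,0)
# Install stage1 into the MBR of the remapped hd0, i.e. /dev/sdb.
setup (hd0)
quit
```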
Today, however, I found it frozen and had to do a hard reboot.
The machine then refused to load the boot loader, claiming stage2
was missing. I was able to get 'linux rescue' mode to
find the RAID-1 partitions and mount them as /mnt/sysimage.
However, after a 'chroot /mnt/sysimage', running
'grub-install /dev/sda' couldn't reinstall the
boot loader on the MBR because grub complained that the BIOS
didn't have md devices.
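
For the record, the failing sequence from the rescue shell looked
roughly like this (reconstructed from memory):

```
# After 'linux rescue' found the arrays and mounted them:
chroot /mnt/sysimage
grub-install /dev/sda    # failed with the md-devices complaint
```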
    At this point I tried another reboot, and fortunately
the boot loader worked this time and I
was able to boot into the kernel. Running 'cat /proc/mdstat'
shows that md1, the / partition, is resyncing. My questions
are...

1) Are those instructions for making the second drive bootable
incorrect?
2) How exactly is one supposed to reinstall the MBR for a
RAID-1 array from within linux rescue mode?

I wonder if I should have done...

device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
quit

from within grub, rather than just running 'grub-install /dev/sda'?
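
While md1 resyncs, I am watching the progress line in /proc/mdstat.
As a small illustration, the snippet below pulls the percentage out
of a made-up mdstat excerpt (the array members and numbers here are
invented for the example, not copied from my machine):

```shell
# Hypothetical /proc/mdstat excerpt during a resync; real output
# comes from 'cat /proc/mdstat'.
mdstat='md1 : active raid1 sdb3[1] sda3[0]
      10485696 blocks [2/2] [UU]
      [==>..................]  resync = 12.5% (1310720/10485696) finish=10.2min speed=21845K/sec'

# Pull the resync percentage out of the progress line.
pct=$(printf '%s\n' "$mdstat" | grep -o 'resync = [0-9.]*%' | grep -o '[0-9.]*')
echo "resync progress: ${pct}%"
# prints: resync progress: 12.5%
```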
                               Jack


