mdadm udev rules don't appear to work correctly
Robin Laing
Robin.Laing at drdc-rddc.gc.ca
Mon Mar 9 19:54:04 UTC 2009
Bruno Wolff III wrote:
> Since upgrading to mdadm 3 I have had to remove the udev rules file for mdadm,
> as they were having arrays set up twice during the udev step, resulting in
> booting not completing.
> Even with them removed, if something happens to an array that drops an
> element, both parts result in an array being created instead of leaving
> the failed elements alone. However, this doesn't stop the boot process
> from completing.
> Gnome has been fubar'd since the mass rebuild, making it a bit harder to
> fill in bug reports (and a lot of other stuff), but I'll eventually
> get around to filing a real bug report.
> [root at cerberus ~]# cat /proc/mdstat
> Personalities : [raid1]
> md126 : active raid1 sdb3[1]
> 41945600 blocks [2/1] [_U]
>
> md127 : active raid1 sdb6[1]
> 149500736 blocks [2/1] [_U]
>
> md0 : active raid1 sda1[0] sdb1[1]
> 264960 blocks [2/2] [UU]
>
> md3 : active raid1 sda2[0] sdb2[1]
> 41945600 blocks [2/2] [UU]
>
> md4 : active raid1 sda6[0]
> 149500736 blocks [2/1] [U_]
>
> md1 : active raid1 sda5[0] sdb5[1]
> 10482304 blocks [2/2] [UU]
>
> md2 : active raid1 sda3[0]
> 41945600 blocks [2/1] [U_]
>
> unused devices: <none>
>
> The md126 and md127 arrays shouldn't exist. They started showing up after
> sdb3 and sdb6 had issues during a boot that got them kicked out of their
> arrays.
> With the mdadm udev rules file in place I also get an md128.
>
This thread hit home this weekend. My RAID drives decided to show up
like the above: some arrays had both drives working, others just one.
When I was setting up F10, I had problems with the RAID arrays, and it
took some time to figure out that I needed to create the mdadm.conf
file manually. That is what it took to get rid of the md126/md127-style
numbers that had started to appear.
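[For reference, a minimal sketch of creating the mdadm.conf mentioned above; the exact generation command is not given in the thread, but mdadm can scan the existing superblocks and emit matching ARRAY lines. UUIDs below are placeholders, not from the original mail.]

```shell
# Scan member devices for md superblocks and append the resulting
# ARRAY lines to /etc/mdadm.conf so assembly uses fixed names/UUIDs
# instead of whatever udev auto-assembles (md126/md127-style names).
echo "DEVICE partitions" > /etc/mdadm.conf
mdadm --examine --scan >> /etc/mdadm.conf

# The file then contains entries along these lines (UUIDs illustrative):
#   DEVICE partitions
#   ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
#   ARRAY /dev/md1 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```

With explicit ARRAY entries present, arrays kept their intended minor numbers across boots.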
As long as I tried to depend on udev alone, I couldn't create RAID 1 arrays.
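[A hedged sketch of cleaning up stray arrays like the md126/md127 in the mdstat listing above; these commands are not from the thread, and device names (md126, md2, sdb3) are taken from Bruno's output purely as an example. Note that --zero-superblock destroys the RAID metadata on the member, so use it only on a device you intend to re-add.]

```shell
# Tear down the spuriously auto-assembled array.
mdadm --stop /dev/md126

# Optionally clear the stale superblock on the kicked-out member
# so it is not auto-assembled into its own array again.
mdadm --zero-superblock /dev/sdb3

# Re-add the member to the original, now-degraded array;
# the kernel then resyncs it from the surviving mirror.
mdadm /dev/md2 --add /dev/sdb3
```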
--
Robin Laing