mdadm udev rules don't appear to work correctly

Bruno Wolff III bruno at wolff.to
Fri Mar 6 17:21:19 UTC 2009


Since upgrading to mdadm 3 I have had to remove the udev rules file for mdadm,
as it was setting arrays up twice during the udev step, which kept booting
from completing.
Even with the rules removed, if something happens that drops an element
from an array, both halves end up assembled as separate arrays instead of
the failed element being left alone. However, this does not stop the boot
process from completing.
Gnome has been fubar'd since the mass rebuild, making it a bit harder to
fill in bug reports (and do a lot of other stuff), but I'll eventually
get around to filing a real bug report.
[root@cerberus ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdb3[1]
      41945600 blocks [2/1] [_U]

md127 : active raid1 sdb6[1]
      149500736 blocks [2/1] [_U]

md0 : active raid1 sda1[0] sdb1[1]
      264960 blocks [2/2] [UU]

md3 : active raid1 sda2[0] sdb2[1]
      41945600 blocks [2/2] [UU]

md4 : active raid1 sda6[0]
      149500736 blocks [2/1] [U_]

md1 : active raid1 sda5[0] sdb5[1]
      10482304 blocks [2/2] [UU]

md2 : active raid1 sda3[0]
      41945600 blocks [2/1] [U_]

unused devices: <none>
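The [_U] and [U_] markers above are what flag the degraded arrays. A quick sketch for pulling those names out of /proc/mdstat (sample data is inlined here so it runs anywhere; on the affected box you'd pipe in `cat /proc/mdstat` instead):

```shell
# Hypothetical helper: list degraded md arrays by looking for a status
# string containing an underscore (e.g. [_U] or [U_]) in mdstat output.
# A small sample is inlined; substitute real /proc/mdstat on the machine.
mdstat_sample='md126 : active raid1 sdb3[1]
      41945600 blocks [2/1] [_U]
md0 : active raid1 sda1[0] sdb1[1]
      264960 blocks [2/2] [UU]'

degraded=$(printf '%s\n' "$mdstat_sample" | awk '
  /^md/ { name = $1 }                 # remember the current array name
  /\[[U_]*_[U_]*\]/ { print name }    # status has an underscore -> degraded
')
echo "$degraded"
```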

The md126 and md127 arrays shouldn't exist. They started showing up after
sdb3 and sdb6 had issues during a boot that got them kicked out of their
arrays.
With the mdadm udev rules file in place I also get an md128.
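For what it's worth, one way to clean up the stray half-arrays by hand would be along these lines. This is only a sketch: the pairings (sdb3 back into md2, sdb6 back into md4) are my reading of the mdstat output above and should be verified with `mdadm --examine` on each partition first, so the script just prints the commands instead of running them:

```shell
# Dry-run recovery sketch: stop the stray half-arrays, then re-add the
# kicked members to their original arrays. The md/partition pairings
# are assumptions from the mdstat listing; check superblocks with
# `mdadm --examine /dev/sdb3` etc. before running any of these for real.
cmds='mdadm --stop /dev/md126
mdadm --stop /dev/md127
mdadm /dev/md2 --add /dev/sdb3
mdadm /dev/md4 --add /dev/sdb6'

# Only echo the commands; run them by hand once verified.
printf '%s\n' "$cmds"
```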



