2.6.16-1.2080_FC5 kernel panic (nv raid0, x86_64 architecture)
Terry Kemp
tkemp at mer-med.com
Thu Apr 6 23:28:34 UTC 2006
On Thu, 2006-04-06 at 09:21 -0400, Debbie Deutsch wrote:
> Terry Kemp wrote:
> > On Wed, 2006-04-05 at 20:15 -0400, Debbie Deutsch wrote:
>
> [SNIP]
>
> >> In any case, here is what my /etc/fstab file says. (Note that I have
> >> adjusted the white spaces to help with readability.)
> >>
> >> /dev/VolGroup00/LogVol00 / ext3 defaults 1 1
> >> LABEL=/boot /boot ext3 defaults 1 2
> >> devpts /dev/pts devpts gid=5,mode=620 0 0
> >> tmpfs /dev/shm tmpfs defaults 0 0
> >> /dev/VolGroup00/LogVol02 /home ext3 defaults 1 2
> >> proc /proc proc defaults 0 0
> >> /dev/VolGroup00/LogVol03 /shared ext3 defaults 1 2
> >> sysfs /sys sysfs defaults 0 0
> >> /dev/VolGroup00/LogVol01 swap swap defaults 0 0
>
> [SNIP]
>
> >
> > OK our problems are a bit different (but probably attributed to the same
> > kernel issue).
> > Is this software Raid0?
> > Can you post the results of fdisk -l
>
> The output of fdisk -l is as follows:
>
> Disk /dev/sda: 320.0 GB, 320072933376 bytes
> 255 heads, 63 sectors/track, 38913 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Device Boot Start End Blocks Id System
> /dev/sda1 * 1 13 104391 83 Linux
> /dev/sda2 14 77826 625032922+ 8e Linux LVM
>
> Disk /dev/sdb: 320.0 GB, 320072933376 bytes
> 255 heads, 63 sectors/track, 38913 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> After providing the above results, fdisk complains that "Disk /dev/sdb
> doesn't contain a valid partition table". That's not surprising.
> /dev/sda and /dev/sdb are the two hard drives that together comprise my
> system's RAID array. It's RAID 0. Although I have never before delved
> into how partition information is written to hard drives in a RAID 0
> array, it seems logical that it would go on the first drive and not be
> duplicated on the other(s).
>
> Just for fun, I also ran fdisk -l on the RAID device itself
> (/dev/mapper/nvidia_ehbjhcdb). Here is its output. This time there was
> no error message.
>
> Disk /dev/mapper/nvidia_ehbjhcdb: 640.1 GB, 640145864704 bytes
> 255 heads, 63 sectors/track, 77826 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Device Boot Start End Blocks Id System
> /dev/mapper/nvidia_ehbjhcdb1 * 1 13 104391 83 Linux
> /dev/mapper/nvidia_ehbjhcdb2 14 77826 625032922+ 8e Linux LVM
>
> This looks fine to me, but as I have mentioned before, I am not an
> expert when it comes to how Linux structures and stores partition
> information.
>
> Once again, thanks for your help. It's very much appreciated.
>
> Debbie
Sorry for the delay in responding... digest mode and a crashed Winblows
server (RAID problem, hehe).
With LVM on RAID 0 you have a hard one to resolve!
I do remember having some problems with the device mapper on
2.6.15-1.2054_FC5. On 2.6.16-1.2080_FC5 there is
no /dev/mapper/nvidiaxxxxxx, but if I reboot into the original install
(2054) kernel I see /dev/mapper/nvidia_abaaggda. I am sure this is where
the problem is, but whether it's the kernel or dmraid I never found out.
Having RAID 1, I was able to boot off one of the RAID disks and ended up
backing out of RAID altogether. I did have a RAID 0 swap partition, and
that would definitely not come up with the new kernel (or even a vanilla
kernel I built).
In my search for answers I stumbled across this...
http://www.fedoraforum.org/forum/archive/index.php/t-96108.html
Maybe it will help.
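On your earlier point about where the partition table lives: logical sector 0
of a RAID 0 array always maps to the start of the first disk, which is why
fdisk sees a table on /dev/sda but complains about /dev/sdb. A rough sketch of
the striping arithmetic (the stripe size and disk count here are just
illustrative assumptions, not your actual array's values):

```python
def raid0_sector(lba, n_disks=2, stripe_sectors=128):
    """Map a logical sector of a RAID 0 array to (disk index, sector on that disk)."""
    stripe = lba // stripe_sectors   # which stripe the logical sector falls in
    disk = stripe % n_disks          # stripes rotate round-robin across the disks
    offset = (stripe // n_disks) * stripe_sectors + lba % stripe_sectors
    return disk, offset

# Logical sector 0 (where the partition table sits) lands at the start of
# disk 0, so only the first drive carries a valid-looking table.
print(raid0_sector(0))    # (0, 0)
print(raid0_sector(128))  # (1, 0) -- second stripe starts on the second disk
```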
Terry
More information about the fedora-list mailing list