[Linux-cluster] GFS 6.0 crashing x86_64 machine

micah nerren mnerren at paracel.com
Fri Aug 6 23:31:55 UTC 2004


On Fri, 2004-08-06 at 15:35, Michael Conrad Tadpol Tilstra wrote:
> On Fri, Aug 06, 2004 at 03:03:39PM -0700, micah nerren wrote:
> > We have used nolock instead of gulm, still on the pool device over the
> > HBA, and received a crash. Attached are two traces of the crashes. We
> > edited the code, sprinkling printk()s throughout, to get some output.
> > 
> > Using lock_nolock instead of lock_gulm still crashes, but slightly
> > differently. See koops-nolock.txt
> 
> er, you might want to double-check this run; looking at the oops and
> logging, it looks like it is still trying to use gulm.
> 
> the line:
>  Gulm v6.0.0 (built Aug  5 2004 16:27:11) installed
> in the file: koops-nolock.txt
> leads me to believe this, along with the lock_gulm symbols in the oops.
> 
> I could be imagining things too...
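
Good catch. To rule that out this round, I checked which lock modules were
actually resident before each mount, along these lines (the module list is
from our GFS 6.0 build, so exact names may vary):

  # see which GFS-related modules are loaded
  lsmod | egrep 'gfs|gulm|nolock|pool'
  # drop a stale lock_gulm from the earlier run, then load lock_nolock
  rmmod lock_gulm
  modprobe lock_nolock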

OK: using a disk attached via Fibre Channel to a single machine through an
LSI HBA, I can create and mount a GFS file system with lock_nolock, without
a pool. A start!

Console log:

GFS: fsid=(8,2).0: Joined cluster. Now mounting FS...
GFS: fsid=(8,2).0: jid=0: Trying to acquire journal lock...
GFS: fsid=(8,2).0: jid=0: Looking at journal...
GFS: fsid=(8,2).0: jid=0: Done
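
For reference, that file system was created and mounted more or less like
this (the device name is from my setup; with lock_nolock no cluster:fsname
lock table is needed, which is why the fsid above is just the device numbers):

  gfs_mkfs -p lock_nolock -j 1 /dev/sda2
  mount -t gfs /dev/sda2 /mnt/gfs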

I then tried lock_nolock on a pool device, and that worked as well:

GFS: fsid=hopkins:gfs01.0: Joined cluster. Now mounting FS...
GFS: fsid=hopkins:gfs01.0: jid=0: Trying to acquire journal lock...
GFS: fsid=hopkins:gfs01.0: jid=0: Looking at journal...
GFS: fsid=hopkins:gfs01.0: jid=0: Done
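
That run was set up roughly as follows (the pool name and config file name
are from my setup; the lockproto mount option overrides the lock protocol
that gfs_mkfs recorded in the superblock):

  # write the pool labels from the config file, then activate the pool
  pool_tool -c gfs01.cfg
  pool_assemble -a
  # mount with lock_nolock regardless of what is in the superblock
  mount -t gfs /dev/pool/gfs01 /mnt/gfs -o lockproto=lock_nolock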

So the crash appears to be specific to lock_gulm.

Anything else I should try?

I really appreciate all your help in debugging this!

Thanks,

Micah



