[Linux-cluster] groupd SEGFAULT
Ray Van Dolson
rvandolson at esri.com
Thu May 7 19:01:07 UTC 2009
I'm trying to set up a 2-node cluster on RHEL 5.3 POWER (ppc64). When
I run service cman start after configuring my cluster.conf (via
system-config-cluster), groupd appears to be segfaulting.
/var/log/groupd:
1241721765 cman: our nodeid 2 name domusB.domain.com quorum 1
1241721765 groupd segfault log follows:
Core files are generated in /. If I install the debuginfo packages and
run groupd under gdb:
(gdb) set follow-fork-mode child
(gdb) r
Starting program: /sbin/groupd
Program received signal SIGSEGV, Segmentation fault.
[Switching to process 19766]
0x0ff238b0 in semctl@@GLIBC_2.2 () from /lib/libc.so.6
(gdb) bt full
#0 0x0ff238b0 in semctl@@GLIBC_2.2 () from /lib/libc.so.6
No symbol table info available.
#1 0x0fe1314c in openais_service_connect () from /usr/lib/openais/libcpg.so.2
No symbol table info available.
#2 0x0fe13d60 in cpg_initialize () from /usr/lib/openais/libcpg.so.2
No symbol table info available.
#3 0x1000c96c in setup_cpg () at cpg.c:638
error = 1516
fd = 1241721509
#4 0x10012930 in loop () at main.c:816
rv = 0
i = 268367568
timeout = -1
workfn = (void (*)(int)) 0x80
deadfn = (void (*)(int)) 0xfff8fa64
#5 0x1001357c in main (argc=1, argv=0xfff8fa64) at main.c:1054
i = 4
My cluster.conf is as follows:
<?xml version="1.0"?>
<cluster alias="domus" config_version="4" name="domus">
<fence_daemon post_fail_delay="0" post_join_delay="3"/>
<clusternodes>
<clusternode name="domusA.domain.com" nodeid="1" votes="1">
<fence>
<method name="1">
<device blade="4" name="sysibmbc1.domain.com"/>
</method>
</fence>
</clusternode>
<clusternode name="domusB.domain.com" nodeid="2" votes="1">
<fence>
<method name="1">
<device blade="2" name="sysibmbc1.domain.com"/>
</method>
</fence>
</clusternode>
</clusternodes>
<cman expected_votes="1" two_node="1"/>
<fencedevices>
<fencedevice agent="fence_bladecenter" ipaddr="10.49.4.192" login="user" name="sysibmbc1.domain.com" passwd="pass"/>
</fencedevices>
<rm>
<failoverdomains/>
<resources>
<ip address="10.49.6.97" monitor_link="1"/>
</resources>
</rm>
</cluster>
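For what it's worth, the XML itself parses cleanly and the two-node settings look internally consistent. Here is the quick sanity-check sketch I used (Python's stdlib parser, with the relevant attributes copied from the config above; the rule that two_node="1" requires expected_votes="1" is cman's documented constraint):

```python
# Sketch: sanity-check the two-node settings from cluster.conf using the
# standard-library XML parser. The attribute values below are copied from
# the config posted above.
import xml.etree.ElementTree as ET

conf = """<?xml version="1.0"?>
<cluster alias="domus" config_version="4" name="domus">
  <clusternodes>
    <clusternode name="domusA.domain.com" nodeid="1" votes="1"/>
    <clusternode name="domusB.domain.com" nodeid="2" votes="1"/>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
</cluster>"""

root = ET.fromstring(conf)
nodes = root.findall("clusternodes/clusternode")
cman = root.find("cman")

# Two distinct nodes, each with one vote
assert len(nodes) == 2
assert {n.get("nodeid") for n in nodes} == {"1", "2"}

# cman requires expected_votes="1" whenever two_node="1"
if cman.get("two_node") == "1":
    assert cman.get("expected_votes") == "1"

print("two-node settings look consistent")
```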
Have I hit a bug, or have I just misconfigured something?
Ray