[Linux-cluster] O/P of cman_tool service

Arun Purushothaman arunkp1987 at gmail.com
Thu Feb 2 08:47:17 UTC 2012


Hi,

Output of cman_tool services:

[root at ssdgblade1 ~]# cman_tool services
type             level name       id       state
fence            0     default    00010001 none
[1 2]
dlm              1     clvmd      00020001 none
[1 2]
dlm              1     rgmanager  00030001 none
[1 2]
dlm              1     gfs        00050001 none
[1]
gfs              2     gfs        00040001 none
[1]
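
For reference, a few companion commands one might run on each node to
cross-check the membership shown above (a sketch, assuming the standard
CentOS 5 cman/rgmanager tools; note that the dlm/gfs "gfs" groups above
list only node 1):

[root at ssdgblade1 ~]# cman_tool nodes     # map node IDs 1 and 2 to hostnames
[root at ssdgblade1 ~]# cman_tool status    # quorum, votes, multicast address
[root at ssdgblade1 ~]# clustat             # rgmanager's view of members and services
[root at ssdgblade1 ~]# mount -t gfs        # is the GFS filesystem mounted on this node?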


Regards
Arun K P


On 01/02/2012, linux-cluster-request at redhat.com
<linux-cluster-request at redhat.com> wrote:
> Send Linux-cluster mailing list submissions to
> 	linux-cluster at redhat.com
>
> To subscribe or unsubscribe via the World Wide Web, visit
> 	https://www.redhat.com/mailman/listinfo/linux-cluster
> or, via email, send a message with subject or body 'help' to
> 	linux-cluster-request at redhat.com
>
> You can reach the person managing the list at
> 	linux-cluster-owner at redhat.com
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Linux-cluster digest..."
>
>
> Today's Topics:
>
>    1. Re: Nodes are getting Down while relocating service
>       (jose nuno neto)
>    2. GFS2 and VM High Availability/DRS (Wes Modes)
>    3. gfs2-utils 3.1.4 Released (Andrew Price)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 31 Jan 2012 17:25:41 -0000 (GMT)
> From: "jose nuno neto" <jose.neto at liber4e.com>
> To: "linux clustering" <linux-cluster at redhat.com>
> Subject: Re: [Linux-cluster] Nodes are getting Down while relocating
> 	service
> Message-ID: <bb5da13c5b8445847d0d479c4b9defdd.squirrel at liber4e.com>
> Content-Type: text/plain;charset=iso-8859-1
>
> Hi
> I'm not fully sure what that logging was.
>
> Anyway, to help clarify: if the cluster works OK up until you start the
> services, I'd investigate the services themselves.
>
> Can you post the output of
> cman_tool services
>
> when the cluster is running OK?
>
> Cheers
> Jose
>
>> Hello Jose
>>
>> If you look at the cluster.conf you can see he isn't using drbd.
>>
>> Like I said before:
>> ===================================================
>> [network_problem]
>> ===================================================
>> Jan 28 15:50:05 ssdgblade2 openais[10324]: [TOTEM] FAILED TO RECEIVE
>> Jan 28 15:50:05 ssdgblade2 openais[10324]: [TOTEM] entering GATHER state
>> from 6.
>> Jan 28 15:50:05 ssdgblade2 openais[10324]: [TOTEM] FAILED TO RECEIVE
>> Jan 28 15:50:05 ssdgblade2 openais[10324]: [TOTEM] entering GATHER state
>> from 6.
>> Jan 28 15:50:06 ssdgblade2 openais[10324]: [TOTEM] FAILED TO RECEIVE
>> Jan 28 15:50:06 ssdgblade2 openais[10324]: [TOTEM] entering GATHER state
>> from 6.
>> Jan 28 15:50:06 ssdgblade2 openais[10324]: [TOTEM] FAILED TO RECEIVE
>> Jan 28 15:50:06 ssdgblade2 openais[10324]: [TOTEM] entering GATHER state
>> from 6.
>> Jan 28 15:50:07 ssdgblade2 openais[10324]: [TOTEM] FAILED TO RECEIVE
>> Jan 28 15:50:07 ssdgblade2 openais[10324]: [TOTEM] entering GATHER state
>> from 6.
>> Jan 28 15:50:07 ssdgblade2 openais[10324]: [TOTEM] FAILED TO RECEIVE
>> Jan 28 15:50:07 ssdgblade2 openais[10324]: [TOTEM] entering GATHER state
>> from 6.
>> Jan 28 15:50:08 ssdgblade2 openais[10324]: [TOTEM] FAILED TO RECEIVE
>> Jan 28 15:50:08 ssdgblade2 openais[10324]: [TOTEM] entering GATHER state
>> from 6.
>> Jan 28 15:50:08 ssdgblade2 openais[10324]: [TOTEM] FAILED TO RECEIVE
>> Jan 28 15:50:08 ssdgblade2 openais[10324]: [TOTEM] entering GATHER state
>> from 6.
>> Jan 28 15:50:09 ssdgblade2 openais[10324]: [TOTEM] FAILED TO RECEIVE
>> Jan 28 15:50:09 ssdgblade2 openais[10324]: [TOTEM] entering GATHER state
>> from 6.
>> Jan 28 15:50:09 ssdgblade2 openais[10324]: [TOTEM] FAILED TO RECEIVE
>> Jan 28 15:50:09 ssdgblade2 openais[10324]: [TOTEM] entering GATHER state
>> from 6.
>> Jan 28 15:50:10 ssdgblade2 openais[10324]: [TOTEM] FAILED TO RECEIVE
>> Jan 28 15:50:10 ssdgblade2 openais[10324]: [TOTEM] entering GATHER state
>> from 6.
>> Jan 28 15:50:10 ssdgblade2 openais[10324]: [TOTEM] FAILED TO RECEIVE
>> Jan 28 15:50:10 ssdgblade2 openais[10324]: [TOTEM] entering GATHER state
>> from 6.
>> Jan 28 15:50:11 ssdgblade2 openais[10324]: [TOTEM] FAILED TO RECEIVE
>> Jan 28 15:50:11 ssdgblade2 openais[10324]: [TOTEM] entering GATHER state
>> from 6.
>> Jan 28 15:50:11 ssdgblade2 openais[10324]: [TOTEM] FAILED TO RECEIVE
>> Jan 28 15:50:11 ssdgblade2 openais[10324]: [TOTEM] entering GATHER state
>> from 6.
>> ==================================================================
>>
>> The first thing that could be useful is to stop iptables.
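>>
>> For instance, something along these lines on both nodes (a sketch only,
>> assuming the stock CentOS 5 iptables service; the multicast address is the
>> one quoted in this thread, and openais normally uses UDP ports 5404-5405):
>>
>> service iptables stop          # quick test: rule the firewall in or out
>>
>> # or, instead of disabling it entirely, allow the cluster traffic:
>> iptables -I INPUT -d 239.192.247.38 -j ACCEPT
>> iptables -I INPUT -p udp --dport 5404:5405 -j ACCEPT
>> service iptables save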
>>
>> 2012/1/31 jose nuno neto <jose.neto at liber4e.com>
>>
>>> Hello
>>>
>>> Took a quick look at the messages and see no fence reference; there's a
>>> break in token messages, recovery, a cluster.conf change, then communication
>>> lost again...
>>> It could be the service shutdown after the cluster.conf update forcing the
>>> shutdown.
>>>
>>> do you have drbd running too?
>>>
>>> Cheers
>>> Jose Neto
>>>
>>> > Hi,
>>> >
>>> > We are facing some issues while configuring a cluster on CentOS 5.5.
>>> >
>>> >
>>> > Here is the scenario where we got stuck.
>>> >
>>> > Issue:
>>> >
>>> > All nodes in the cluster turn off if the cluster services are restarted,
>>> > disabled, or enabled.
>>> >
>>> > Three services should run as one clustered service:
>>> >
>>> > 1.     Postgresql.
>>> > 2.     GFS (1TB SAN space which is mounted on /var/lib/pgsql)
>>> > 3.     Virtual IP (common IP) - 10.242.108.42
>>> >
>>> > Even when we tried adding only the Virtual IP as a cluster service, running
>>> >
>>> > #clusvcadm -r DBService -m ssdgblade2.db2   (from ssdgblade1.db1)
>>> >
>>> > could not relocate the service, and both nodes got turned off.
>>> >
>>> > Environment
>>> >
>>> > CentOS 5.5
>>> > PostgreSQL 8.3.3
>>> > Kernel version 2.6.18-194
>>> > CentOS Cluster Suite
>>> >
>>> > Hardware:
>>> >
>>> > 1.    Chassis: IBM BladeCenter E.
>>> > 2.    IBM HS22 blades (8 in total); clustering is done on blade1 and blade2.
>>> > 3.    Blade Management Module IP is 10.242.108.58.
>>> > 4.    Fence device: IBM BladeCenter (login successful via telnet and
>>> > web browser to the management module).
>>> > 5.    Cisco Catalyst 2960G switch.
>>> >
>>> > IP:
>>> >
>>> > 10.242.108.41 (ssdgblade1.db1)
>>> > 10.242.108.43 (ssdgblade2.db2)
>>> >
>>> > Virtual IP 10.242.108.42
>>> > Multicast IP 239.192.247.38
>>> >
>>> >
>>> > Diagnostic Steps followed:
>>> >
>>> > 1.     Removed PostgreSQL and GFS from the cluster service and rebooted
>>> > both servers with only the VIP service. The problem still exists; we cannot
>>> > relocate the service.
>>> > 2.    Tested fencing with
>>> >
>>> > #fence_node ssdgblade2.db2   (from db1)
>>> > #fence_node ssdgblade1.db1   (from db2)
>>> >
>>> > Each command can fence the given node, but during boot-up the rebooted node
>>> > fences the other node (a manual check of the fence device is sketched below).
>>> >
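>>> > As a manual cross-check of the fence device itself (a sketch; the login,
>>> > password and blade number here are placeholders, not values from this
>>> > thread):
>>> >
>>> > #fence_bladecenter -a 10.242.108.58 -l USERID -p PASSW0RD -n 2 -o status
>>> >
>>> > If the status query succeeds from both nodes, the fence agent itself is
>>> > likely fine and the boot-time fencing points more toward startup/quorum
>>> > behaviour than toward the agent.
>>> >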
>>> > Please find the attachment for your reference.
>>> > --
>>> >
>>> >
>>> > Thanks & Regards,
>>> >
>>> > Arun K P
>>> >
>>> > System Administrator
>>> >
>>> > HCL Infosystems Ltd.
>>> >
>>> > Kolkata
>>> >
>>> > Mob: +91- 9903361422
>>> >
>>> > www.hclinfosystems.in <http://www.hclinfosystems.in/>
>>> >
>>> > Technology that touches lives (TM)
>>> >
>>>
>>>
>>>
>>
>>
>>
>> --
>> this is my life and I live it for as long as God wills
>>
>
>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 31 Jan 2012 16:07:32 -0800
> From: Wes Modes <wmodes at ucsc.edu>
> To: linux clustering <linux-cluster at redhat.com>
> Subject: [Linux-cluster] GFS2 and VM High Availability/DRS
> Message-ID: <4F288244.7060904 at ucsc.edu>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Howdy, thanks for all your answers here.  With your help (particularly
> Digimer), I was able to set up my little two node GFS2 cluster.  I can't
> pretend yet to understand everything, but I have a blossoming awareness
> of what and why and how.
>
> The way I finally set it up for my test cluster was:
>
>  1. LUN on SAN
>  2. configured through ESXi as RDM
>  3. RDM made available to OS
>  4. parted RDM device
>  5. pvcreate/vgcreate/lvcreate to create logical volume on device
>  6. mkfs.gfs2 to create a GFS2 filesystem on the volume, backed by clvmd,
>     cman, etc. (roughly the commands sketched below)
>
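> Roughly, steps 4-6 above correspond to commands like the following (a sketch
> only; /dev/sdb, the VG/LV names, and the cluster:fsname pair are placeholders,
> and -j should match the number of nodes/journals):
>
> parted /dev/sdb                       # create one partition spanning the device
> pvcreate /dev/sdb1
> vgcreate -c y gfs2vg /dev/sdb1        # -c y marks the VG as clustered (clvmd)
> lvcreate -l 100%FREE -n gfs2lv gfs2vg
> mkfs.gfs2 -p lock_dlm -t mycluster:gfs2vol -j 2 /dev/gfs2vg/gfs2lv
>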
> It works, and that's great.  BUT the literature says VMware's vMotion/HA/DRS
> doesn't support RDM (though others say that isn't a problem).
>
> I am setting up GFS2 on CentOS running on VMware with a SAN.  We want to
> take advantage of VMware's High Availability (HA) and Distributed
> Resource Scheduler (DRS), which allow the VM cluster to restart or migrate a
> guest on another host if its current host becomes unavailable or overloaded.
> I've come across some contradictory statements regarding the compatibility of
> RDMs and HA/DRS.  So naturally, I have some questions:
>
> 1)  If my shared cluster filesystem resides on an RDM on a SAN and is
> available to all of the ESXi hosts, can I use HA/DRS or not?  If so,
> what are the limitations?  If not, why not?
>
> 2)  If I cannot use an RDM for the cluster filesystem, can I use VMFS so
> VMware can deal with it?  What are the limitations of this?
>
> 3)  Is there some other magic way, using iSCSI initiators or something that
> bypasses VMware?  Does anyone have experience with this?
>
> Wes
>
>
> ------------------------------
>
> Message: 3
> Date: Wed, 01 Feb 2012 13:13:46 +0000
> From: Andrew Price <anprice at redhat.com>
> To: cluster-devel at redhat.com, linux-cluster at redhat.com
> Subject: [Linux-cluster] gfs2-utils 3.1.4 Released
> Message-ID: <4F293A8A.9070504 at redhat.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Hi,
>
> gfs2-utils 3.1.4 has been released. This version features a new
> gfs2_lockgather script to aid diagnosis of GFS2 locking issues, more
> clean-ups and fixes based on static analysis results, and various other
> minor enhancements and bug fixes. See below for a full list of changes.
>
> The source tarball is available from:
>
>     https://fedorahosted.org/released/gfs2-utils/gfs2-utils-3.1.4.tar.gz
>
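> A typical build from the tarball might look like this (a sketch, assuming the
> usual autotools layout and that the build dependencies are installed):
>
>     wget https://fedorahosted.org/released/gfs2-utils/gfs2-utils-3.1.4.tar.gz
>     tar xzf gfs2-utils-3.1.4.tar.gz
>     cd gfs2-utils-3.1.4
>     ./configure && make
>     make install    # as root
>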
> To report bugs or issues, please use:
>
>     https://bugzilla.redhat.com/
>
> Regards,
>
> Andy Price
> Red Hat File Systems
>
>
> Changes since 3.1.3:
>
> Adam Drew (1):
>        Added gfs2_lockgather data gathering script.
>
> Andrew Price (30):
>        libgfs2: Expand out calls to die()
>        libgfs2: Push down die() into the utils and remove it
>        gfs2_edit: Remove a useless assignment
>        gfs2_edit: Check return value of compute_constants
>        gfs2_edit: Fix possible uninitialized access
>        gfs2_edit: Fix memory leak in dump_journal()
>        gfs2_edit: Fix null pointer dereference in dump_journal
>        gfs2_edit: Remove unused j_inode from find_journal_block()
>        gfs2_edit: Fix memory leak in find_journal_block
>        gfs2_edit: Check for error value from gfs2_get_bitmap
>        gfs2_edit: Fix resource leaks in display_extended()
>        gfs2_edit: Fix resource leak in print_block_details()
>        gfs2_edit: Fix null pointer derefs in display_block_type()
>        gfs2_edit: Check more error values from gfs2_get_bitmap
>        gfs2_edit: Fix another resource leak in display_extended
>        mkfs.gfs2: Fix use of uninitialized value in check_dev_content
>        gfs2_convert: Fix null pointer deref in journ_space_to_rg
>        gfs2_convert: Fix null pointer deref in conv_build_jindex
>        fsck.gfs2: Remove unsigned comparisons with zero
>        fsck.gfs2: Plug a leak in init_system_inodes()
>        libgfs2: Set errno in dirent_alloc and use dir_add consistently
>        fsck.gfs2: Plug memory leak in check_system_dir()
>        fsck.gfs2: Fix null pointer deref in check_system_dir()
>        fsck.gfs2: Plug a leak in find_block_ref()
>        fsck.gfs2: Remove unused hash.c, hash.h
>        mkfs.gfs2: Improve error messages
>        libgfscontrol: Fix resource leaks
>        fsck.gfs2: Plug a leak in peruse_system_dinode()
>        fsck.gfs2: Fix unchecked malloc in gfs2_dup_set()
>        gfs2_edit: Don't exit prematurely in display_block_type
>
> Carlos Maiolino (2):
>        i18n: Update gfs2-utils.pot file
>        Merge branch 'master' of ssh://git.fedorahosted.org/git/gfs2-utils
>
> Steven Whitehouse (13):
>        gfs2_convert: clean up question asking code
>        fsck.gfs2: Use sigaction and not signal syscall
>        fsck.gfs2: Clean up pass calling code
>        libgfs2: Add iovec to gfs2_buffer_head
>        libgfs2: Add beginnings of a metadata description
>        libgfs2: Remove struct gfs_rindex from header, etc
>        libgfs2: Use endian defined types for GFS1 on disk structures
>        edit: Fix up block type recognition
>        libgfs2: Add a few structures missed from the initial version of meta.c
>        fsck/libgfs2: Add a couple of missing header files
>        libgfs2: Add some tables of symbolic constant names
>        edit: Hook up gfs2_edit to use new metadata info from libgfs2
>        libgfs2: Add flags to metadata description
>
>
>
> ------------------------------
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
> End of Linux-cluster Digest, Vol 94, Issue 1
> ********************************************
>




More information about the Linux-cluster mailing list