From askfuhu at gmail.com  Fri Nov  1 03:26:49 2013
From: askfuhu at gmail.com (Hu Fu)
Date: Fri, 1 Nov 2013 11:26:49 +0800
Subject: [Linux-cluster] A ccsd error with sendto function
Message-ID: 

Hi,

I have a problem with my cluster. The error I've found is this:

2012-05-24T10:20:05.880675+08:00 h58 ccsd[3415]: Sendto failed: Message too long
2012-05-24T10:20:05.880813+08:00 h58 ccsd[3415]: Error while processing broadcast: Message too long

This error always appears after creating 500 linked-clone VMs. I think
it's a broadcast problem; is there any way to solve it?

Regards
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jplorier at gmail.com  Fri Nov  1 14:38:26 2013
From: jplorier at gmail.com (Juan Pablo Lorier)
Date: Fri, 01 Nov 2013 12:38:26 -0200
Subject: [Linux-cluster] Sharing a software raid
Message-ID: <5273BCE2.8010605@gmail.com>

Hi,

I have a file server from SuperMicro that has two servers plus 16 SATA
disks shared between the two servers. I have created a software RAID with
one of the servers using the 16 disks, and on top of the md device I
created an LVM volume which has a gfs2 partition.
The question is whether it's safe for the second server to use the md
device (and PV) to access the gfs2 partition, as I'm concerned about both
servers touching md and LVM metadata at the same time.

Regards,

From rpeterso at redhat.com  Fri Nov  1 14:57:12 2013
From: rpeterso at redhat.com (Bob Peterson)
Date: Fri, 1 Nov 2013 10:57:12 -0400 (EDT)
Subject: [Linux-cluster] Sharing a software raid
In-Reply-To: <5273BCE2.8010605@gmail.com>
References: <5273BCE2.8010605@gmail.com>
Message-ID: <1623855893.14753666.1383317832051.JavaMail.root@redhat.com>

----- Original Message -----
| Hi,
|
| I have a file server from SuperMicro that has two servers plus 16 SATA
| disks shared between the two servers. I have created a software RAID with
| one of the servers using the 16 disks, and on top of the md device I
| created an LVM volume which has a gfs2 partition.
| The question is whether it's safe for the second server to use the md
| device (and PV) to access the gfs2 partition, as I'm concerned about both
| servers touching md and LVM metadata at the same time.
| Regards,

Nope. GFS2 will only work properly with hardware RAID (or no RAID at all),
not with software RAID.

Bob Peterson
Red Hat File Systems

From morpheus.ibis at gmail.com  Fri Nov  1 14:58:14 2013
From: morpheus.ibis at gmail.com (Pavel Herrmann)
Date: Fri, 01 Nov 2013 15:58:14 +0100
Subject: [Linux-cluster] Sharing a software raid
In-Reply-To: <5273BCE2.8010605@gmail.com>
References: <5273BCE2.8010605@gmail.com>
Message-ID: <2110097.Jeo9KNyTGv@gesher>

Hey

On Friday 01 November 2013 12:38:26 Juan Pablo Lorier wrote:
> Hi,
>
> I have a file server from SuperMicro that has two servers plus 16 SATA
> disks shared between the two servers. I have created a software RAID with
> one of the servers using the 16 disks, and on top of the md device I
> created an LVM volume which has a gfs2 partition.
> The question is whether it's safe for the second server to use the md
> device (and PV) to access the gfs2 partition, as I'm concerned about both
> servers touching md and LVM metadata at the same time.
> Regards,

LVM metadata is safe, as long as you use clvm.

MD metadata is probably not safe for multiple access, though.

Regards
Pavel Herrmann

From jplorier at gmail.com  Fri Nov  1 15:30:42 2013
From: jplorier at gmail.com (Juan Pablo Lorier)
Date: Fri, 01 Nov 2013 13:30:42 -0200
Subject: [Linux-cluster] Sharing a software raid
In-Reply-To: <2110097.Jeo9KNyTGv@gesher>
References: <5273BCE2.8010605@gmail.com> <2110097.Jeo9KNyTGv@gesher>
Message-ID: <5273C922.5050202@gmail.com>

Hi Pavel,

Thank you very much. I was afraid of that; I'll have to change my scheme.

Regards,

On 01/11/13 12:58, Pavel Herrmann wrote:
> Hey
>
> On Friday 01 November 2013 12:38:26 Juan Pablo Lorier wrote:
>> Hi,
>>
>> I have a file server from SuperMicro that has two servers plus 16 SATA
>> disks shared between the two servers.
>> I have created a software RAID with
>> one of the servers using the 16 disks, and on top of the md device I
>> created an LVM volume which has a gfs2 partition.
>> The question is whether it's safe for the second server to use the md
>> device (and PV) to access the gfs2 partition, as I'm concerned about both
>> servers touching md and LVM metadata at the same time.
>> Regards,
> LVM metadata is safe, as long as you use clvm.
>
> MD metadata is probably not safe for multiple access, though.
>
> Regards
> Pavel Herrmann

From arnold at arnoldarts.de  Fri Nov  1 16:34:22 2013
From: arnold at arnoldarts.de (Arnold Krille)
Date: Fri, 1 Nov 2013 17:34:22 +0100
Subject: [Linux-cluster] Sharing a software raid
In-Reply-To: <5273C922.5050202@gmail.com>
References: <5273BCE2.8010605@gmail.com> <2110097.Jeo9KNyTGv@gesher>
 <5273C922.5050202@gmail.com>
Message-ID: <20131101173422.4496d098@xingu.arnoldarts.de>

On Fri, 01 Nov 2013 13:30:42 -0200
Juan Pablo Lorier wrote:
> Thank you very much. I was afraid of that; I'll have to change my
> scheme. Regards,

But with (c)lvm you don't really need RAID underneath. You can do striping
and mirroring with LVM too.

- Arnold
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 230 bytes
Desc: not available
URL: 

From jplorier at gmail.com  Fri Nov  1 17:03:47 2013
From: jplorier at gmail.com (Juan Pablo Lorier)
Date: Fri, 01 Nov 2013 15:03:47 -0200
Subject: [Linux-cluster] Sharing a software raid
Message-ID: <5273DEF3.3000806@gmail.com>

Thanks Bob,

I'll have to create a RAID via LVM using clvm, as Herrmann suggested, and
on top of that create the gfs2 partition.
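For reference, the plan discussed in this thread (a clustered volume group
with an LVM-level redundant volume and GFS2 on top) could be sketched
roughly as below. This is a sketch only: the cluster name "mycluster", the
VG/LV names, and the device names are illustrative assumptions, not taken
from the thread. Note that LVM's raid5/raid6 segment types were not usable
on clustered volume groups with clvmd at the time, which is why the sketch
follows Arnold's suggestion of striping plus mirroring (clustered mirrors
additionally require cmirrord to be running).

```shell
# Sketch: assumes a running cman/clvmd cluster named "mycluster" with
# two nodes, and that /dev/sd[b-q] are the 16 shared SATA disks.
# Run on one node; clvmd keeps the LVM metadata consistent cluster-wide.

# Initialize the shared disks as physical volumes
pvcreate /dev/sd[b-q]

# Clustered VG so both nodes coordinate metadata updates through clvmd
vgcreate --clustered y vg_shared /dev/sd[b-q]

# Mirrored + striped LV built by LVM itself (no md underneath);
# needs cmirrord for the clustered mirror log
lvcreate --mirrors 1 --stripes 8 -L 1T -n lv_data vg_shared

# GFS2 with one journal per node; the lock table is <clustername>:<fsname>
mkfs.gfs2 -p lock_dlm -t mycluster:data -j 2 /dev/vg_shared/lv_data

# Mount on both nodes (gfs2 service / fstab in a real setup)
mount -t gfs2 /dev/vg_shared/lv_data /mnt/data
```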
Regards,

From jplorier at gmail.com  Sat Nov  2 17:51:29 2013
From: jplorier at gmail.com (Juan Pablo Lorier)
Date: Sat, 02 Nov 2013 15:51:29 -0200
Subject: [Linux-cluster] Linux-cluster Digest, Vol 115, Issue 2
In-Reply-To: 
References: 
Message-ID: <52753BA1.1050306@gmail.com>

Hi Arnold,

Thanks, I wasn't aware that RAID-style logical volumes were available in
LVM; that's why I used md underneath to get that functionality. I can now
create just the LVM volume and use gfs2 on top.

Regards,

El 02/11/13 14:00, linux-cluster-request at redhat.com escribió:
> Re: [Linux-cluster] Sharing a software raid

From queszama at yahoo.in  Mon Nov  4 10:40:55 2013
From: queszama at yahoo.in (Zama Ques)
Date: Mon, 4 Nov 2013 18:40:55 +0800 (SGT)
Subject: [Linux-cluster] Adding a node back to cluster failing
Message-ID: <1383561655.40933.YahooMailNeo@web193505.mail.sg3.yahoo.com>

Hi,

We have a two-node cluster with a manual fencing configuration. One of
the nodes died because of a hardware issue, and we removed the dead node
from the cluster using the following commands:

fence_manual -n db2.example.com
fence_ack_manual -n db2.example.com

The faulty node has now been recovered, and we need to add it back to
the cluster. We are trying to add it using the luci interface, but when
adding it with "Add a node" in luci, the addition fails with the error
"that node is already a part of the cluster". The node name, however, is
not present in the cluster.conf file.

===
cat /etc/cluster/cluster.conf
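For what it's worth, on the cman/ccsd stack of that era, re-adding a node
by hand (instead of through luci) usually comes down to restoring its
clusternode entry in cluster.conf and propagating the new config version.
A rough sketch follows; the XML fragment, the nodeid/votes values, and the
fence method name are illustrative assumptions based on the node name in
the post, not known details of this cluster.

```shell
# On a surviving cluster node.
#
# 1. Edit /etc/cluster/cluster.conf: inside <clusternodes>, restore an
#    entry for the recovered node, e.g. (values are illustrative):
#
#      <clusternode name="db2.example.com" nodeid="2" votes="1">
#        <fence>
#          <method name="single">
#            <device name="manual" nodename="db2.example.com"/>
#          </method>
#        </fence>
#      </clusternode>
#
#    and increment the config_version attribute on the <cluster> tag.

# 2. Push the updated configuration to the running cluster:
ccs_tool update /etc/cluster/cluster.conf

# 3. Activate the new version (replace N with the bumped config_version):
cman_tool version -r N

# 4. On db2.example.com, with the same cluster.conf in place, rejoin:
service cman start
service clvmd start   # only if clustered LVM is in use
```

If luci still insists the node is "already a part of the cluster", it may
be comparing against its own database rather than cluster.conf, so removing
and re-importing the cluster in luci is sometimes needed as well.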