From vectorz3 at gmail.com Fri Feb 1 01:20:29 2008 From: vectorz3 at gmail.com (Vectorz Sigma) Date: Thu, 31 Jan 2008 17:20:29 -0800 Subject: [Linux-cluster] How to set node timeout when using gulm? Message-ID: <940a2e6d0801311720m695a992bq99ec592a9644bc51@mail.gmail.com> I'm aware of how to do this for CMAN but I'm running gulm. I can't find information anywhere on how to do this. Anyone know? -------------- next part -------------- An HTML attachment was scrubbed... URL: From isplist at logicore.net Fri Feb 1 01:29:49 2008 From: isplist at logicore.net (isplist at logicore.net) Date: Thu, 31 Jan 2008 19:29:49 -0600 Subject: [Linux-cluster] DLM Problem In-Reply-To: <47A2529C.1040404@io-consulting.net> Message-ID: <2008131192949.627124@leena> > I think that's the file he sent you ;) (starting with "# /etc/udev..") >Thinking about it, I think I should add that you need to edit >that file for your needs (adjust the scsi id). I didn't use udev yet; >the only place where I used scsi_id etc. was together with dm-multipath. I've not looked into this yet so I don't know what to edit or add. On one node, the drive is /dev/sda, but on some nodes, it'll be sdX because some of the machines already have SCSI drives in them. Not hard to see, just have to keep it in mind since I'm assuming each machine needs this file. I see a catch-all ( sd*[!0-9] ) but don't know where I need to put, say, /dev/sda. 
# /etc/udev/rules.d/69-xraid.rules # 2008-01-18 # Brian Kroth # Set up device nodes for the two storage arrays based upon their unique # scsi_id ACTION!="add", GOTO="xraid_end" KERNEL=="sd*[!0-9]", ENV{ID_SERIAL}=="3600039300001e6db01000000d52b0606", SYMLINK+="xraid-c1-a1" KERNEL=="sd*[!0-9]", ENV{ID_SERIAL}=="3600039300001e6db020000006a95fef7", SYMLINK+="xraid-c1-a2" KERNEL=="sd*[0-9]", ENV{ID_SERIAL}=="3600039300001e6db01000000d52b0606", SYMLINK+="xraid-c1-a1-p%n" KERNEL=="sd*[0-9]", ENV{ID_SERIAL}=="3600039300001e6db020000006a95fef7", SYMLINK+="xraid-c1-a2-p%n" LABEL="xraid_end" >johannes From td3201 at gmail.com Fri Feb 1 05:10:15 2008 From: td3201 at gmail.com (Terry) Date: Thu, 31 Jan 2008 23:10:15 -0600 Subject: [Linux-cluster] postgres cluster with RHEL5 Message-ID: <8ee061010801312110q25519afck1ba7776948cb6a44@mail.gmail.com> Good evening, I am trying to get an active-passive postgres cluster going. I have shared storage with NFS. I just can't get it going. I am using luci to configure this which, from what I have been reading, is somewhat buggy in the postgres-8 arena. My first question is: what components of postgres should live on the shared storage? Here are the relevant parts of my config: From qqlka at nask.pl Fri Feb 1 07:34:18 2008 From: qqlka at nask.pl (=?iso-8859-2?Q?Agnieszka_Kuka=B3owicz?=) Date: Fri, 1 Feb 2008 08:34:18 +0100 Subject: [Linux-cluster] Power fencing, Xen, Virtual Services, RHCS Message-ID: <047001c864a4$dbb58660$0777b5c2@gda07ak> Hi, I have a problem with a 2-node cluster running Xen virtual machines. The configuration is very simple. Node 1 - d1 runs vm_service1 and node 2 - d2 runs vm_service2, and I have configured an APC Master Switch as the fence device. Everything works well: starting, stopping and migrating virtual services between nodes. But the problem occurs when I test crashing one of the nodes by, for example, shutting down node d2. In this case node d1 discovers that node d2 failed and fences it through the APC device. 
After node d2 is up, it joins the cluster and tries to relocate vm_service2. But during that I get strange logs on node d2: Jan 31 21:18:11 d2 openais[5485]: [TOTEM] entering OPERATIONAL state. Jan 31 21:18:11 d2 openais[5485]: [CLM ] got nodejoin message 10.0.200.101 Jan 31 21:18:11 d2 openais[5485]: [CLM ] got nodejoin message 10.0.200.102 Jan 31 21:18:11 d2 openais[5485]: [CPG ] got joinlist message from node 2 Jan 31 21:18:45 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:18:46 d2 openais[5485]: [TOTEM] Retransmit List: 31 .... Jan 31 21:19:10 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:11 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:11 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:11 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:12 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:15 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:15 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:16 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:16 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:16 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:16 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:17 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:17 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:18 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:18 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:18 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:19 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:19 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:19 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:20 d2 openais[5485]: [TOTEM] entering GATHER state from 6. 
Jan 31 21:19:20 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:20 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:20 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:21 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:21 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:21 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:21 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:21 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:21 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:23 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:23 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:23 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:24 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:24 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:24 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:26 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:26 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:26 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:27 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:27 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:27 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:29 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:29 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:29 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:30 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:30 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:30 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:32 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:32 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:32 d2 openais[5485]: [TOTEM] entering GATHER state from 6. 
Jan 31 21:19:33 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:33 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:33 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:35 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:35 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:35 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:36 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:36 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:36 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:38 d2 openais[5485]: [TOTEM] Retransmit List: 31 Jan 31 21:19:38 d2 openais[5485]: [TOTEM] FAILED TO RECEIVE Jan 31 21:19:38 d2 openais[5485]: [TOTEM] entering GATHER state from 6. Jan 31 21:19:43 d2 openais[5485]: [TOTEM] entering GATHER state from 0. Jan 31 21:20:18 d2 openais[5485]: [TOTEM] The consensus timeout expired. Jan 31 21:20:18 d2 openais[5485]: [TOTEM] entering GATHER state from 3. Jan 31 21:20:52 d2 openais[5485]: [TOTEM] The consensus timeout expired. Jan 31 21:20:52 d2 openais[5485]: [TOTEM] entering GATHER state from 3. And on node d1: Jan 31 21:18:08 d1 openais[5467]: [CLM ] CLM CONFIGURATION CHANGE Jan 31 21:18:08 d1 openais[5467]: [CLM ] New Configuration: Jan 31 21:18:08 d1 openais[5467]: [CLM ] r(0) ip(10.0.200.101) Jan 31 21:18:08 d1 openais[5467]: [CLM ] Members Left: Jan 31 21:18:08 d1 openais[5467]: [CLM ] Members Joined: Jan 31 21:18:08 d1 openais[5467]: [CLM ] CLM CONFIGURATION CHANGE Jan 31 21:18:08 d1 openais[5467]: [CLM ] New Configuration: Jan 31 21:18:09 d1 openais[5467]: [CLM ] r(0) ip(10.0.200.101) Jan 31 21:18:09 d1 openais[5467]: [CLM ] r(0) ip(10.0.200.102) Jan 31 21:18:09 d1 openais[5467]: [CLM ] Members Left: Jan 31 21:18:09 d1 openais[5467]: [CLM ] Members Joined: Jan 31 21:18:09 d1 openais[5467]: [CLM ] r(0) ip(10.0.200.102) Jan 31 21:18:09 d1 openais[5467]: [SYNC ] This node is within the primary component and will provide service. 
Jan 31 21:18:09 d1 openais[5467]: [TOTEM] entering OPERATIONAL state. Jan 31 21:18:09 d1 openais[5467]: [CLM ] got nodejoin message 10.0.200.101 Jan 31 21:18:10 d1 openais[5467]: [CLM ] got nodejoin message 10.0.200.102 Jan 31 21:18:10 d1 openais[5467]: [CPG ] got joinlist message from node 2 Jan 31 21:18:15 d1 kernel: dlm: connecting to 1 Jan 31 21:18:15 d1 kernel: dlm: got connection from 1 Jan 31 21:19:47 d1 openais[5467]: [TOTEM] The token was lost in the OPERATIONAL state. Jan 31 21:19:47 d1 openais[5467]: [TOTEM] Receive multicast socket recv buffer size (288000 bytes). Jan 31 21:19:47 d1 openais[5467]: [TOTEM] Transmit multicast socket send buffer size (288000 bytes). Jan 31 21:19:47 d1 openais[5467]: [TOTEM] entering GATHER state from 2. Jan 31 21:19:52 d1 openais[5467]: [TOTEM] entering GATHER state from 0. Jan 31 21:19:52 d1 openais[5467]: [TOTEM] Creating commit token because I am the rep. Jan 31 21:19:52 d1 openais[5467]: [TOTEM] Saving state aru 30 high seq received 31 Jan 31 21:19:52 d1 openais[5467]: [TOTEM] Storing new sequence id for ring 4bc Jan 31 21:19:52 d1 openais[5467]: [TOTEM] entering COMMIT state. Jan 31 21:19:52 d1 openais[5467]: [TOTEM] entering RECOVERY state. Jan 31 21:19:52 d1 openais[5467]: [TOTEM] position [0] member 10.0.200.101: Jan 31 21:19:52 d1 kernel: dlm: closing connection to node 1 Jan 31 21:19:52 d1 openais[5467]: [TOTEM] previous ring seq 1208 rep 10.0.200.101 Jan 31 21:19:52 d1 openais[5467]: [TOTEM] aru 30 high delivered 30 received flag 0 Jan 31 21:19:52 d1 openais[5467]: [TOTEM] copying all old ring messages from 31-31. Jan 31 21:19:52 d1 openais[5467]: [TOTEM] Originated 0 messages in RECOVERY. 
Jan 31 21:19:52 d1 openais[5467]: [TOTEM] Originated for recovery: Jan 31 21:19:52 d1 fenced[5484]: d2.local.polska.pl not a cluster member after 0 sec post_fail_delay Jan 31 21:19:52 d1 openais[5467]: [TOTEM] Not Originated for recovery: 31 Jan 31 21:19:52 d1 fenced[5484]: fencing node "d2" Jan 31 21:19:52 d1 openais[5467]: [TOTEM] Sending initial ORF token Jan 31 21:19:53 d1 fenced[5484]: fence "d2" success In consequence, I cannot start the cluster because node d1 constantly fences node d2. After some research I found out that the problem might be in Xen networking. When a virtual service is started, the Xen bridges are reconfigured (am I wrong?) and therefore there is a problem with communication between the nodes. But I don't know what to change in the Xen configuration so that the cluster starts working. Cheers Agnieszka Kukalowicz From mpartio at gmail.com Fri Feb 1 07:37:23 2008 From: mpartio at gmail.com (Mikko Partio) Date: Fri, 1 Feb 2008 09:37:23 +0200 Subject: [Linux-cluster] postgres cluster with RHEL5 In-Reply-To: <8ee061010801312110q25519afck1ba7776948cb6a44@mail.gmail.com> References: <8ee061010801312110q25519afck1ba7776948cb6a44@mail.gmail.com> Message-ID: <2ca799770801312337q3ce5b981l8d09e53720617b5@mail.gmail.com> > I am trying to get an active-passive postgres cluster going. I have a > shared storage with NFS. I just can't get it going. I am using luci > to configure this which, from what I have been reading, is somewhat > buggy in the postgres-8 arena. My first question is what components > of postgres should live on the shared storage? > I have also implemented an active-passive cluster for postgresql; here's what I suggest: -Do not use nfs with postgresql. It can lead to unpredictable results when a failure with the nfs mount occurs. For more info check the postgresql mailing list archives. -I don't know how rgmanager's built-in postgresql support works, but at least using the "script" resource works just fine. 
-I am using gfs with postgresql and all but the database binaries are on the shared storage. -Be sure to test, test and test that your fencing is working! Regards Mikko -------------- next part -------------- An HTML attachment was scrubbed... URL: From cosimo at streppone.it Fri Feb 1 08:23:31 2008 From: cosimo at streppone.it (Cosimo Streppone) Date: Fri, 01 Feb 2008 09:23:31 +0100 Subject: [Linux-cluster] postgres cluster with RHEL5 In-Reply-To: <8ee061010801312110q25519afck1ba7776948cb6a44@mail.gmail.com> References: <8ee061010801312110q25519afck1ba7776948cb6a44@mail.gmail.com> Message-ID: <47A2D703.9050008@streppone.it> Terry wrote: > > I am trying to get an active-passive postgres cluster going. I have a > shared storage with NFS. I just can't get it going. I am using luci > to configure this which, from what I have been reading, is somewhat > buggy in the postgres-8 arena. My first question is what components > of postgres should live on the shared storage? I configured pg on a RHEL4 cluster, and I put only the database files and logs on the shared storage. In my case it was a 4Gb FC-attached SAN. The pg binaries and config files were on both the active and passive nodes. Note that if you also put your binaries on the shared storage, you can be sure that pg won't start if something bad happens :-) I also wouldn't recommend NFS, but I never tried that myself with pg. I've only had bad experiences with NFS shares hanging up processes for long. 
> config_file="/etc/postgresql/postgresql.conf" name="dssystem2" > postmaster_user="postgres" shutdown_wait="0"/> Also, in RHCS4 there was no "" resource, but I did that with a script resource, which is simple but kind of worked (after fixing the infamous init scripts bug :) -- Cosimo From devrim at gunduz.org Fri Feb 1 08:29:03 2008 From: devrim at gunduz.org (=?iso-8859-9?Q?Devrim_G=DCND=DCZ?=) Date: Fri, 1 Feb 2008 00:29:03 -0800 (PST) Subject: [Linux-cluster] postgres cluster with RHEL5 In-Reply-To: <47A2D703.9050008@streppone.it> References: <8ee061010801312110q25519afck1ba7776948cb6a44@mail.gmail.com> <47A2D703.9050008@streppone.it> Message-ID: Hi, On Fri, 1 Feb 2008, Cosimo Streppone wrote: > I also wouldn't recommend NFS, but I never > tried that myself with pg. Only had bad experiences > with NFS shares hanging up processes for long. See the PostgreSQL archives -- NFS is considered *very* harmful for many apps, and also for PostgreSQL. Never ever use NFS :) -- Devrim GÜNDÜZ RHCE _ ASCII ribbon campaign ( ) devrim~gunduz.org against HTML e-mail X devrim~PostgreSQL.org / \ devrim.gunduz~linux.org.tr http://www.gunduz.org From nhuczp at gmail.com Fri Feb 1 08:30:28 2008 From: nhuczp at gmail.com (chenzp) Date: Fri, 1 Feb 2008 16:30:28 +0800 Subject: [Linux-cluster] look gfs2 resource have some problem Message-ID: <6cc6db6f0802010030u69a4ff5egc587ef8a3bea6b30@mail.gmail.com> hi all: I have some questions about the GFS2 MDS and the cluster components. The questions are as follows: First question: I want to know how the cluster components manage the GFS2 MDS. Second question: how can the MDS be kept in sync? thanks! carry.chen -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From abhramica at gmail.com Fri Feb 1 09:27:39 2008 From: abhramica at gmail.com (Abhra Paul) Date: Fri, 1 Feb 2008 14:57:39 +0530 Subject: [Linux-cluster] Storage problem in Linux Cluster Message-ID: <8e3fbac10802010127s511fa106w59302a27ec12161b@mail.gmail.com> Respected Users, I have a problem with the cluster. One user of this cluster needs a huge amount of space. In this cluster one big partition (size 1TB) is mounted on /data, so I provided this space for his program execution. At 1.30 PM he had occupied 200GB of this storage (which is mounted on /data). Now when I check the space usage on /data with the command du -sh *, there is no entry for /data: the cursor just blinks without ever showing the total, and the command prompt does not come back. The same thing happens when I issue cd /. How do I solve this problem? With regards Abhra Paul From pcaulfie at redhat.com Fri Feb 1 09:32:35 2008 From: pcaulfie at redhat.com (Patrick Caulfeld) Date: Fri, 01 Feb 2008 09:32:35 +0000 Subject: [Linux-cluster] DLM Problem In-Reply-To: <200813111252.477369@leena> References: <200813111252.477369@leena> Message-ID: <47A2E733.4000401@redhat.com> isplist at logicore.net wrote: >> Can you boot a single node, without any cluster software running, then >> do the 'netstat -tap'. Then start the cluster software and do it again. 
> > compdev# netstat -tap > Active Internet connections (servers and established) > Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name > tcp 0 0 *:amanda *:* LISTEN 2371/xinetd > tcp 0 0 *:netbios-ssn *:* LISTEN 2444/smbd > tcp 0 0 *:718 *:* LISTEN 2232/rpc.statd > tcp 0 0 *:sunrpc *:* LISTEN 2212/portmap > tcp 0 0 *:10000 *:* LISTEN 2885/perl > tcp 0 0 *:microsoft-ds *:* LISTEN 2444/smbd > tcp 0 0 *:http *:* LISTEN 2424/httpd > tcp 0 0 *:ssh *:* LISTEN 2356/sshd > tcp 0 0 *:https *:* LISTEN 242 /httpd > tcp 0 0 compdev:ssh ::ffff:192.168.1.200:2685 ESTABLISHED 3134/0 OK, so that tells me that nothing else is using the DLM port. >> If you get the error then, say so. > > I'll post the log at the end of this so as not to make it so hard to read. > But, it's the same error and nothing blocking that port. > >> Then boot another node, using the same procedure. > > This will take me a little, I'll post the results shortly. > >> Post the results here along with the cluster.conf ... also make sure you >> have the same version of EVERYTHING on both nodes ... that's software >> AND cluster.conf > > All machines are exactly the same versions of everything. I've taken nodes out > and left only a few nodes in place to simplify things. > cluster.conf looks fine too. I don't need the LVM log. Sorry if I wasn't clear. I wanted the syslog extracts for the DLM startup. I also wanted to see the netstat AFTER starting the dlm on the first node (and if there were any errors), then the same things (on both nodes) when the second node was added. 
Patrick From johannes.russek at io-consulting.net Fri Feb 1 10:24:46 2008 From: johannes.russek at io-consulting.net (jr) Date: Fri, 01 Feb 2008 11:24:46 +0100 Subject: [Linux-cluster] DLM Problem In-Reply-To: <2008131192949.627124@leena> References: <2008131192949.627124@leena> Message-ID: <1201861486.29126.102.camel@admc.win-rar.local> On Thursday, 31.01.2008, at 19:29 -0600, isplist at logicore.net wrote: > I've not looked into this yet so don't know what to edit or add. > > On one node, the drive is /dev/sda, but on some nodes, it'll be sdx because > some of the machines already have SCSI drives in them. Not hard to see, just > have to keep it in mind since I'm assuming each machine needs this file. > > I see a catch all ( sd*[!0-9] ) but don't know where I need to put say > /dev/sda. Okay, I'll try to help as far as I can with my nonexistent udev rules. The catch-all is there because you *don't* know which SCSI disk it is, whether it is sda, sdb or whatever. However, the DISK does have a unique serial number (even RAID slices do) or at least a WWN. So you basically need to find out which unique number your disk has (by using /sbin/scsi_id -p 0x83 -g -u -s /block/sda), tell that to the udev system and let it create a symlink you can name whatever you like. This way, every node gets a symlink with the same name for the same disk, whatever the state of its device enumeration is. Oh, any udev gurus.. I suppose udev uses page 0x83 rather than 0x80, is that correct? 
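[The recipe above can be sketched as a small script. This is a minimal illustration only; the serial number is a placeholder copied from Brian's example rules file, not something to use verbatim -- substitute whatever scsi_id reports on your own node.]

```shell
#!/bin/sh
# Sketch: build a udev rule that gives a disk a stable name on every node.
# On a real node you would first discover the serial (per this thread):
#   /sbin/scsi_id -g -u -s /block/sda
serial="3600039300001e6db01000000d52b0606"   # placeholder serial from the example file
link="xraid-c1-a1"                           # symlink name of your choosing

# Emit a rule matching whole disks (sd*[!0-9]) regardless of which
# sdX letter the disk happens to enumerate as on this boot:
printf 'KERNEL=="sd*[!0-9]", ENV{ID_SERIAL}=="%s", SYMLINK+="%s"\n' \
    "$serial" "$link"
```

[Appending the printed line to a file such as /etc/udev/rules.d/69-xraid.rules on every node then gives each of them the same /dev/xraid-c1-a1 link, whatever its local device enumeration is.]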
regards, johannes From bpkroth at wisc.edu Fri Feb 1 13:51:35 2008 From: bpkroth at wisc.edu (Brian Kroth) Date: Fri, 01 Feb 2008 07:51:35 -0600 Subject: [Linux-cluster] DLM Problem In-Reply-To: <1201861486.29126.102.camel@admc.win-rar.local> References: <2008131192949.627124@leena> <1201861486.29126.102.camel@admc.win-rar.local> Message-ID: <20080201135135.GA4812@omnius.hslc.wisc.edu> jr : > Am Donnerstag, den 31.01.2008, 19:29 -0600 schrieb isplist at logicore.net: > > > I've not looked into this yet so don't know what to edit or add. > > > > On one node, the drive is /dev/sda, but on some nodes, it'll be sdx because > > some of the machines already have SCSI drives in them. Not hard to see, just > > have to keep it in mind since I'm assuming each machine needs this file. > > > > I see a catch all ( sd*[!0-9] ) but don't know where I need to put say > > /dev/sda. > > okay, i'll try to help as far as i can with my nonexistant udev rules. > the catchall is there because you *don't* know which scsi disk it is, > wether it is sda, sdb or whatever. > however, the DISK does have a unique serial number (even raid slices do) > or at least a WWN. > so you basically need to find out which unique number your unique disk > has (by using /sbin/scsi_id -p 0x83 -g -u -s /block/sda), tell that the > udev system and let it create a symlink you name whatever you like. this > way, every node get's a symlink with the same name for the same disk, > whatever the state of it's device enumeration is. That's all correct. > oh, any udev gurus.. i suppose udev uses page 0x83 rather then 0x80, is > that correct? I'm no guru, but I've never had to specify -p before. The rest is what I've used: scsi_id -g -u -s /block/sd? Brian -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/x-pkcs7-signature Size: 2192 bytes Desc: not available URL: From npf-mlists at eurotux.com Fri Feb 1 15:34:45 2008 From: npf-mlists at eurotux.com (Nuno Fernandes) Date: Fri, 1 Feb 2008 15:34:45 +0000 Subject: [Linux-cluster] RHEL5 CLVMD hang In-Reply-To: <200801221707.51986.npf-mlists@eurotux.com> References: <200801171731.26493.npf-mlists@eurotux.com> <4795B3D3.7030606@redhat.com> <200801221707.51986.npf-mlists@eurotux.com> Message-ID: <200802011534.45693.npf-mlists@eurotux.com> Hi, CLVM is hung again. This time, the problem started when we restarted clvmd on one node (xen1). Xen2 started to report: Feb 1 15:26:34 xen2 kernel: dlm: recover_master_copy -53 103e7 Feb 1 15:26:34 xen2 kernel: dlm: recover_master_copy -53 10264 Feb 1 15:26:34 xen2 kernel: dlm: recover_master_copy -53 10008 Feb 1 15:26:34 xen2 kernel: dlm: recover_master_copy -53 10047 Feb 1 15:26:34 xen2 kernel: dlm: recover_master_copy -53 1020b ... Attached I send the group_tool dump. All cluster nodes have dlm_recoverd blocked: .. 10191 ? D< 0:00 \_ [dlm_recoverd] .. but only xen2 is emitting all those logs. 
# cman_tool services type level name id state fence 0 default 00010008 none [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20] dlm 1 clvmd 00010001 none [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20] # clustat Member Status: Quorate Member Name ID Status ------ ---- ---- ------ xen1.dc.eurotux.pt 1 Online xen2.dc.eurotux.pt 2 Online, Local xen3.dc.eurotux.pt 3 Online xen4.dc.eurotux.pt 4 Online xen5.dc.eurotux.pt 5 Online xen6.dc.eurotux.pt 6 Online xen7.dc.eurotux.pt 7 Online xen8.dc.eurotux.pt 8 Online xen9.dc.eurotux.pt 9 Online xen10.dc.eurotux.pt 10 Online xen11.dc.eurotux.pt 11 Online xen12.dc.eurotux.pt 12 Online xen13.dc.eurotux.pt 13 Online xen14.dc.eurotux.pt 14 Online xen17.dc.eurotux.pt 15 Online xen18.dc.eurotux.pt 16 Online xen19.dc.eurotux.pt 17 Online xen20.dc.eurotux.pt 18 Online xen21.dc.eurotux.pt 19 Online xen22.dc.eurotux.pt 20 Online Any info on this? Thanks Nuno Fernandes On Tuesday 22 January 2008 17:07:51 Nuno Fernandes wrote: > On Tuesday 22 January 2008 09:13:55 Patrick Caulfeld wrote: > > Nuno Fernandes wrote: > > > On Monday 21 January 2008 15:58:38 Patrick Caulfeld wrote: > > >> echo 255 > /sys/kernel/config/dlm/cluster/log_debug > > > > > > echo 255 > /sys/kernel/config/dlm/cluster/log_debug > > > -bash: /sys/kernel/config/dlm/cluster/log_debug: Permission denied > > > > > > ls -la /sys/kernel/config/dlm/cluster/ > > > total 0 > > > drwxr-xr-x 4 root root 0 May 27 2007 . > > > drwxr-xr-x 3 root root 0 May 27 2007 .. > > > drwxr-xr-x 19 root root 0 Jan 17 16:36 comms > > > drwxr-xr-x 3 root root 0 Nov 27 14:48 spaces > > > > No debug options! you need to upgrade the kernel I'm afraid. It might > > even fix the bug ;-) > > > > Patrick > > Solved. Rebooted the whole cluster! :( > > Thanks > Nuno Fernandes > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster -------------- next part -------------- A non-text attachment was scrubbed... 
Name: group.dump.txt.gz Type: application/x-gzip Size: 13974 bytes Desc: not available URL: From lhh at redhat.com Fri Feb 1 16:06:43 2008 From: lhh at redhat.com (Lon Hohberger) Date: Fri, 01 Feb 2008 11:06:43 -0500 Subject: [Linux-cluster] How to set node timeout when using gulm? In-Reply-To: <940a2e6d0801311720m695a992bq99ec592a9644bc51@mail.gmail.com> References: <940a2e6d0801311720m695a992bq99ec592a9644bc51@mail.gmail.com> Message-ID: <1201882003.1759.105.camel@ayanami.boston.devel.redhat.com> On Thu, 2008-01-31 at 17:20 -0800, Vectorz Sigma wrote: > I'm aware of how to do this for CMAN but I'm running gulm. I can't > find information anywhere on how to do this. > > Anyone know? According to the gulm.5 man page: heartbeat_rate The rate at which the heartbeats are checked by the server in seconds. Two-thirds of this time is the rate at which the heartbeats are sent. Default is 15. allowed_misses How many consecutive heartbeats can be missed before we mark the node expired. Default is 2. Remember that (allowed_misses+1) * heartbeat_rate is the "actual" node death time. So, with the defaults, that's 45 seconds - not 30. (important!) ... ... -- Lon From lhh at redhat.com Fri Feb 1 16:09:23 2008 From: lhh at redhat.com (Lon Hohberger) Date: Fri, 01 Feb 2008 11:09:23 -0500 Subject: [Linux-cluster] postgres cluster with RHEL5 In-Reply-To: <8ee061010801312110q25519afck1ba7776948cb6a44@mail.gmail.com> References: <8ee061010801312110q25519afck1ba7776948cb6a44@mail.gmail.com> Message-ID: <1201882163.1759.109.camel@ayanami.boston.devel.redhat.com> On Thu, 2008-01-31 at 23:10 -0600, Terry wrote: > name="database" recovery="relocate"> > > > > > That's not going to work; the dependencies are backwards (assuming postgresql is trying to use the fs and ip). 
Try: -- Lon From lhh at redhat.com Fri Feb 1 16:12:59 2008 From: lhh at redhat.com (Lon Hohberger) Date: Fri, 01 Feb 2008 11:12:59 -0500 Subject: [Linux-cluster] Storage problem in Linux Cluster In-Reply-To: <8e3fbac10802010127s511fa106w59302a27ec12161b@mail.gmail.com> References: <8e3fbac10802010127s511fa106w59302a27ec12161b@mail.gmail.com> Message-ID: <1201882379.1759.111.camel@ayanami.boston.devel.redhat.com> On Fri, 2008-02-01 at 14:57 +0530, Abhra Paul wrote: > Respected Users > > I have a problem in cluster. One user of this cluster needs huge > amount of space. In this cluster one big partition(size 1TB) is > mounted on /data . So I provide this amount of space for his program > execution. At 1.30 PM he occupied 200GB of this storage(which is > mounted on /data). Now when I am going to check this /data , about > usage of space , using the command du -sh * . In o/p there is no > entry for /data and courser is only blinking without showing total o/p > as well as command prompt is not coming back. Also when I am issuing > cd / same thing is happening . How I solve this problem. Could you check: cman_tool services -- Lon From td3201 at gmail.com Fri Feb 1 16:30:06 2008 From: td3201 at gmail.com (Terry) Date: Fri, 1 Feb 2008 10:30:06 -0600 Subject: [Linux-cluster] postgres cluster with RHEL5 In-Reply-To: <1201882163.1759.109.camel@ayanami.boston.devel.redhat.com> References: <8ee061010801312110q25519afck1ba7776948cb6a44@mail.gmail.com> <1201882163.1759.109.camel@ayanami.boston.devel.redhat.com> Message-ID: <8ee061010802010830k270c3deaxcd2a5bdaa5d552c1@mail.gmail.com> On Feb 1, 2008 10:09 AM, Lon Hohberger wrote: > On Thu, 2008-01-31 at 23:10 -0600, Terry wrote: > > > > name="database" recovery="relocate"> > > > > > > > > > > > > That's not going to work; the dependencies are backwards (assuming > postgresql is trying to use the fs and ip). Try: > > name="database" recovery="relocate"> > > > > > > -- Lon I spoke incorrectly. 
The shared storage is an iSCSI volume with ext3. NFS is another cluster service. Sorry. From isplist at logicore.net Fri Feb 1 17:45:43 2008 From: isplist at logicore.net (isplist at logicore.net) Date: Fri, 1 Feb 2008 11:45:43 -0600 Subject: [Linux-cluster] DLM Problem In-Reply-To: <20080131233956.GA4485@omnius.hslc.wisc.edu> Message-ID: <200821114543.838843@leena> I'm still getting very long delays before access. Is there some way of testing this to see if it's the storage or the GFS setup? Like a test I could run with the storage connected raw then with GFS? Perhaps I just kill the partition, run some test, format it as GFS again, mount it, test? Mike On Thu, 31 Jan 2008 17:39:56 -0600, Brian Kroth wrote: > Johannes Russek : > >>> >>> i think that's the file he sent you ;) (starting with "# /etc/udev..") >>> johannes >>> > Yes. > >>> >>> -- >>> Linux-cluster mailing list >>> Linux-cluster at redhat.com >>> https://www.redhat.com/mailman/listinfo/linux-cluster >> thinking about it, i think i should add to say that you need to edit that >> file for your needs (adjust the scsi id). i didn't use udev yet, the only >> place where i scsi_id etc together with dm-multipath. >> johannes >> > True, you will need to edit it. That example is coming from a gentoo > install, so it'll probably be different for you. 
> Brian From td3201 at gmail.com Fri Feb 1 19:14:37 2008 From: td3201 at gmail.com (Terry) Date: Fri, 1 Feb 2008 13:14:37 -0600 Subject: [Linux-cluster] postgres cluster with RHEL5 In-Reply-To: <8ee061010802010830k270c3deaxcd2a5bdaa5d552c1@mail.gmail.com> References: <8ee061010801312110q25519afck1ba7776948cb6a44@mail.gmail.com> <1201882163.1759.109.camel@ayanami.boston.devel.redhat.com> <8ee061010802010830k270c3deaxcd2a5bdaa5d552c1@mail.gmail.com> Message-ID: <8ee061010802011114l11e566c4wa211c7d7da441066@mail.gmail.com> On Feb 1, 2008 10:30 AM, Terry wrote: > On Feb 1, 2008 10:09 AM, Lon Hohberger wrote: > > On Thu, 2008-01-31 at 23:10 -0600, Terry wrote: > > > > > > > name="database" recovery="relocate"> > > > > > > > > > > > > > > > > > > > That's not going to work; the dependencies are backwards (assuming > > postgresql is trying to use the fs and ip). Try: > > > > > name="database" recovery="relocate"> > > > > > > > > > > > > -- Lon > > I spoke incorrectly. The shared storage is an iSCSI volume with ext3. > NFS is another cluster service. Sorry. > Thanks for all of your replies. I think I have a catch-22 situation. Here is my (corrected) config: Here is a listing of /db00: [root at nfs00a ~]# ls /db00/ backups data pgstartup.log [root at nfs00a ~]# ls /db00/data/ base lost+found pg_hba.conf pg_log pg_subtrans pg_twophase pg_xlog postmaster.opts global pg_clog pg_ident.conf pg_multixact pg_tblspc PG_VERSION postgresql.conf You can see that my postgres config lives on this iSCSI volume, which needs to be mounted as part of the database service. When I attempt to start the database service, it acts as if it starts, but nothing is logged, it doesn't actually start, and the /db00 volume never gets mounted. What am I missing? 
I want to stay as close to standard as possible with regards to postgres and directory locations so I created a symlink: lrwxrwxrwx 1 root root 5 Feb 1 12:32 /var/lib/pgsql -> /db00 I hope this paints a better picture of what I am doing and ultimately, where I am failing. Thanks all! From isplist at logicore.net Fri Feb 1 19:54:08 2008 From: isplist at logicore.net (isplist at logicore.net) Date: Fri, 1 Feb 2008 13:54:08 -0600 Subject: [Linux-cluster] DLM Problem In-Reply-To: <20080131233956.GA4485@omnius.hslc.wisc.edu> Message-ID: <20082113548.625938@leena> Update for anyone watching :). I've eliminated GFS as being the slowdown at least. No ideas on the DLM problems but I'll get back to that after I figure out the slowdown. Seems that the storage controller has problems so I'll either change out the chassis or controllers if I can. Once I nuke the delay issues, I'll get vlm back on then GFS and see what happens. Mike On Thu, 31 Jan 2008 17:39:56 -0600, Brian Kroth wrote: > Johannes Russek : > >>> >>> i think that's the file he sent you ;) (starting with "# /etc/udev..") >>> johannes >>> > Yes. > >>> >>> -- >>> Linux-cluster mailing list >>> Linux-cluster at redhat.com >>> https://www.redhat.com/mailman/listinfo/linux-cluster >> thinking about it, i think i should add to say that you need to edit that >> file for your needs (adjust the scsi id). i didn't use udev yet, the only >> place where i scsi_id etc together with dm-multipath. >> johannes >> > True, you will need to edit it. That example is coming from a gentoo > install, so it'll probably be different for you. > > Brian From isplist at logicore.net Fri Feb 1 19:56:45 2008 From: isplist at logicore.net (isplist at logicore.net) Date: Fri, 1 Feb 2008 13:56:45 -0600 Subject: [Linux-cluster] DLM Problem In-Reply-To: <47A2E733.4000401@redhat.com> Message-ID: <200821135645.507559@leena> > I don't need the LVM log. Sorry if I wasn't clear. I wanted the syslog > extracts for the DLM startup. 
I also wanted to see the netstat AFTER > starting the dlm on the first node (and if there were any errors), then > the same things (on both nodes) when the second node was added. Ok, I'll resolve the delay issues first then get back to this DLM issue. Is it possible that DLM or something was timing out because of the storage delay? Maybe it was screwing it up, not seeing the storage or not getting proper responses? Mike From isplist at logicore.net Fri Feb 1 22:17:49 2008 From: isplist at logicore.net (isplist at logicore.net) Date: Fri, 1 Feb 2008 16:17:49 -0600 Subject: [Linux-cluster] DLM Problem In-Reply-To: <20080131233956.GA4485@omnius.hslc.wisc.edu> Message-ID: <200821161749.373315@leena> Ah, wonderful day knee deep in fibre channel. Does anyone know how I can test NFS speeds in a similar fashion to hdparm -tT? All the testing I've ever done was with hdparm so that's my reference point. Mike On Thu, 31 Jan 2008 17:39:56 -0600, Brian Kroth wrote: > Johannes Russek : > >>> >>> i think that's the file he sent you ;) (starting with "# /etc/udev..") >>> johannes >>> > Yes. > >>> >>> -- >>> Linux-cluster mailing list >>> Linux-cluster at redhat.com >>> https://www.redhat.com/mailman/listinfo/linux-cluster >> thinking about it, i think i should add to say that you need to edit that >> file for your needs (adjust the scsi id). i didn't use udev yet, the only >> place where i scsi_id etc together with dm-multipath. >> johannes >> > True, you will need to edit it. That example is coming from a gentoo > install, so it'll probably be different for you. > > Brian From johannes.russek at io-consulting.net Fri Feb 1 22:52:10 2008 From: johannes.russek at io-consulting.net (Johannes Russek) Date: Fri, 01 Feb 2008 23:52:10 +0100 Subject: [Linux-cluster] DLM Problem In-Reply-To: <200821161749.373315@leena> References: <200821161749.373315@leena> Message-ID: <47A3A29A.4010701@io-consulting.net> isplist at logicore.net schrieb: > Ah, wonderful day knee deep in fibre channel. 
> Does anyone know how I can test NFS speeds in a similar fashion to hdparm -tT? > All the testing I've ever done was with hdparm so that's my reference point. > > Mike > > Hi Mike, hdparm -tT is nice and easy but i'm afraid it's no reference point whatsoever. Out of my guts i'd say go for bonnie, both on your SAN and on the NFS shares. a search for bonnie and nfs on google to see if bonnie is fine with nfs showed this first: http://sourceforge.net/projects/nfstestmatrix/ looks nice :) go for bonnie! johannes From isplist at logicore.net Fri Feb 1 22:55:34 2008 From: isplist at logicore.net (isplist at logicore.net) Date: Fri, 1 Feb 2008 16:55:34 -0600 Subject: [Linux-cluster] DLM Problem In-Reply-To: <47A3A29A.4010701@io-consulting.net> Message-ID: <200821165534.518857@leena> I looked at it before but it looked like I'd have to spend some time learning it as I was not able to find some example commands for basic drive speed stats. Mike On Fri, 01 Feb 2008 23:52:10 +0100, Johannes Russek wrote: > isplist at logicore.net schrieb: > >> Ah, wonderful day knee deep in fibre channel. >> Does anyone know how I can test NFS speeds in a similar fashion to hdparm >> -tT? >> All the testing I've ever done was with hdparm so that's my reference >> point. >> >> Mike >> >> > Hi Mike, > hdparm -tT is nice and easy but i'm afraid it's no reference point > whatsoever. Out of my guts i'd say go for bonnie, both on your SAN and > on the NFS shares. > a search for bonnie and nfs on google to see if bonnie is fine with nfs > showed this first: http://sourceforge.net/projects/nfstestmatrix/ > looks nice :) > go for bonnie! 
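While evaluating the heavier tools, plain dd already gives an hdparm-style sequential figure and, unlike hdparm, is valid on an NFS mount. A rough sketch (TESTDIR and the sizes are assumptions — point it at the mount under test, and use a file larger than the caches involved for honest numbers):

```shell
#!/bin/sh
# Crude sequential-write throughput check; works on NFS, unlike hdparm.
TESTDIR=${TESTDIR:-/tmp}            # assumption: change to your NFS mount
F="$TESTDIR/ddtest.$$"
# conv=fdatasync makes dd flush before reporting, so the MB/s figure
# reflects the storage/network path rather than the page cache.
dd if=/dev/zero of="$F" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$F"
```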
> johannes From johannes.russek at io-consulting.net Fri Feb 1 23:03:39 2008 From: johannes.russek at io-consulting.net (Johannes Russek) Date: Sat, 02 Feb 2008 00:03:39 +0100 Subject: [Linux-cluster] DLM Problem In-Reply-To: <200821165534.518857@leena> References: <200821165534.518857@leena> Message-ID: <47A3A54B.90904@io-consulting.net> isplist at logicore.net schrieb: > I looked at it before but it looked like I'd have to spend some time learning > it as I was not able to find some example commands for basic drive speed > stats. > > Mike > I can't test it right now, but i think it should just be bonnie -s 2000 the number being any size of testfile you want to check (probably something bigger then how much cache is between you and the disk. johannes From gordan at bobich.net Fri Feb 1 23:30:12 2008 From: gordan at bobich.net (Gordan Bobic) Date: Fri, 01 Feb 2008 23:30:12 +0000 Subject: [Linux-cluster] DLM Problem In-Reply-To: <200821161749.373315@leena> References: <200821161749.373315@leena> Message-ID: <47A3AB84.8090808@bobich.net> For more meaningful results, try iozone. Gordan isplist at logicore.net wrote: > Ah, wonderful day knee deep in fibre channel. > Does anyone know how I can test NFS speeds in a similar fashion to hdparm -tT? > All the testing I've ever done was with hdparm so that's my reference point. > > Mike > > > On Thu, 31 Jan 2008 17:39:56 -0600, Brian Kroth wrote: >> Johannes Russek : >> >>>> i think that's the file he sent you ;) (starting with "# /etc/udev..") >>>> johannes >>>> >> Yes. >> >>>> -- >>>> Linux-cluster mailing list >>>> Linux-cluster at redhat.com >>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>> thinking about it, i think i should add to say that you need to edit that >>> file for your needs (adjust the scsi id). i didn't use udev yet, the only >>> place where i scsi_id etc together with dm-multipath. >>> johannes >>> >> True, you will need to edit it. 
That example is coming from a gentoo >> install, so it'll probably be different for you. >> >> Brian > > > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster From bpkroth at wisc.edu Sat Feb 2 04:36:39 2008 From: bpkroth at wisc.edu (Brian Kroth) Date: Fri, 01 Feb 2008 22:36:39 -0600 Subject: [Linux-cluster] DLM Problem In-Reply-To: <47A3AB84.8090808@bobich.net> References: <200821161749.373315@leena> <47A3AB84.8090808@bobich.net> Message-ID: <20080202043639.GA12660@odin.hslc.wisc.edu> Gordan Bobic : > For more meaningful results, try iozone. > > Gordan > yeah iozone -a is pretty easy. For anyone who's interested: some graphs from results I got a couple weeks ago while running it on 5 nodes at once: https://mywebspace.wisc.edu/bpkroth/web/fs-cluster/fs-cluster-iozone-results.xls The student I had put them together did it in MS Office, so they look nicer there, but they're still readable in ooffice. Brian -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 2192 bytes Desc: not available URL: From teemu.m2 at luukku.com Sat Feb 2 06:45:36 2008 From: teemu.m2 at luukku.com (m.. mm..) Date: Sat, 2 Feb 2008 08:45:36 +0200 (EET) Subject: [Linux-cluster] Cluster nodenames, question Message-ID: <1201934736227.teemu.m2.1600.j3lSZNC_LaD0TcWeA1d4uQ@luukku.com> Hi everyone! How about this RedHat ClusterSuite 5.1 configuration, if i have example 2 nodes test1 and test2: Network is configured like this: prodution network test1 ip=10.10.10.5 test2 ip=10.10.10.6 Private Heartbeat network test1hb ip=192.168.0.2 (machine= test1) test2hb ip=192.168.0.3 (-------= test2) What names should i use when i add cluster member to my cluster "test". Is'it test1 or sould i use test1hb name? -Reg.Teemu ................................................................... Luukku Plus paketilla p??set eroon tila- ja turvallisuusongelmista. 
Hanki Luukku Plus ja helpotat elämääsi. http://www.mtv3.fi/luukku

From shawnlhood at gmail.com Sat Feb 2 07:35:03 2008
From: shawnlhood at gmail.com (Shawn Hood)
Date: Sat, 2 Feb 2008 02:35:03 -0500
Subject: [Linux-cluster] Speeding up GFS on 1 node cluster
Message-ID:

I currently have several machines I'm about to cluster using RHCS, utilizing shared storage (XRAID w/ GFS). I currently have another existing XRAID which is mounted on one machine. I need to move all the data from the existing XRAID to the new GFS XRAID. Right now I have a one-node cluster running; this node has both XRAIDs mounted, and I am running an rsync job between them. The transfer is writing at about 25-35MB/s. Anyhow, I need to speed up this initial sync. Can I do this by changing to non-clustered locking, since only one node is running? Part of this rsync is a huge set of small files. I'm assuming I could toy with gfs_tool to improve performance as well. Any advice is appreciated.

From isplist at logicore.net Sat Feb 2 18:48:05 2008
From: isplist at logicore.net (isplist at logicore.net)
Date: Sat, 2 Feb 2008 12:48:05 -0600
Subject: [Linux-cluster] DLM Problem
In-Reply-To: <47A3C96C.7070609@io-consulting.net>
Message-ID: <20082212485.382562@leena>

Another quick question... Since I was not able to solve my FC problem, I decided it was time to fire up my BlueArc again. I have very fast NFS service now, but is it my imagination, or have I seen posts from some people talking about taking an NFS export and offering it up as a GFS volume? Also, any thoughts on speeds with NFS vs. an iSCSI client? I just want to understand what I'm getting into a little better. I'm guessing they are similar, since both would be Ethernet-speed dependent apart from various protocol overheads.
Mike From abhramica at gmail.com Mon Feb 4 06:16:16 2008 From: abhramica at gmail.com (Abhra Paul) Date: Mon, 4 Feb 2008 11:46:16 +0530 Subject: [Linux-cluster] Storage problem in Linux Cluster In-Reply-To: <1201882379.1759.111.camel@ayanami.boston.devel.redhat.com> References: <8e3fbac10802010127s511fa106w59302a27ec12161b@mail.gmail.com> <1201882379.1759.111.camel@ayanami.boston.devel.redhat.com> Message-ID: <8e3fbac10802032216i37c01e9em34e1c7b81e7bd0da@mail.gmail.com> On Feb 1, 2008 9:42 PM, Lon Hohberger wrote: > On Fri, 2008-02-01 at 14:57 +0530, Abhra Paul wrote: > > Respected Users > > > > I have a problem in cluster. One user of this cluster needs huge > > amount of space. In this cluster one big partition(size 1TB) is > > mounted on /data . So I provide this amount of space for his program > > execution. At 1.30 PM he occupied 200GB of this storage(which is > > mounted on /data). Now when I am going to check this /data , about > > usage of space , using the command du -sh * . In o/p there is no > > entry for /data and courser is only blinking without showing total o/p > > as well as command prompt is not coming back. Also when I am issuing > > cd / same thing is happening . How I solve this problem. > > Could you check: > cman_tool services > > -- Lon > > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster ------------------------------------------------------------------------------------------------------------- I have checked cman_tool services. But in my system there is no cman_tool , as well as there is no man page entry for cman_tool. Now , how I will solve my problem. Please, help. 
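One general observation on hangs like the du above: commands that block forever on a mount usually sit in uninterruptible I/O wait (state D), which points at the storage path rather than at the filesystem tools. A quick way to list such processes (a sketch, nothing cluster-specific about it):

```shell
#!/bin/sh
# List processes in uninterruptible sleep (state D) -- typically blocked
# on I/O to a hung device or an unreachable network mount.
ps -eo pid,stat,wchan:20,args | awk 'NR == 1 || $2 ~ /^D/'
```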
With regards Abhra Paul

From Alain.Moulle at bull.net Mon Feb 4 07:33:52 2008
From: Alain.Moulle at bull.net (Alain Moulle)
Date: Mon, 04 Feb 2008 08:33:52 +0100
Subject: [Linux-cluster] CS5/ Question about behavior with a corrupted Quorum disk
Message-ID: <47A6BFE0.3000700@bull.net>

Hi,

Just for information, I wonder if this behavior is normal: I have a two-node cluster with a quorum disk, and CS5 is started on both nodes with a service on each one. Quorum keeps working fine even when I break the quorum disk format (with a mkfs on the device!) so that mkqdisk -L returns nothing. The behavior is that CS5 carries on working as if nothing had happened. I wonder if this is only due to the heuristics, or if it is simply the standard behavior of CS5 with regard to the quorum disk?

Thanks
Regards
Alain Moullé

From pcaulfie at redhat.com Mon Feb 4 08:23:18 2008
From: pcaulfie at redhat.com (Patrick Caulfeld)
Date: Mon, 04 Feb 2008 08:23:18 +0000
Subject: [Linux-cluster] Cluster nodenames, question
In-Reply-To: <1201934736227.teemu.m2.1600.j3lSZNC_LaD0TcWeA1d4uQ@luukku.com>
References: <1201934736227.teemu.m2.1600.j3lSZNC_LaD0TcWeA1d4uQ@luukku.com>
Message-ID: <47A6CB76.2030601@redhat.com>

m.. mm.. wrote:
> Hi everyone!
>
> How about this RedHat ClusterSuite 5.1 configuration, if i have example 2 nodes test1 and test2:
>
> Network is configured like this:
> prodution network
> test1 ip=10.10.10.5
> test2 ip=10.10.10.6
>
> Private Heartbeat network
> test1hb ip=192.168.0.2 (machine= test1)
> test2hb ip=192.168.0.3 (-------= test2)
>
> What names should i use when i add cluster member to my cluster "test". Is'it test1 or sould i use test1hb name?

You should tell cluster suite the names of the heartbeat interfaces. test1hb etc.

Patrick

From teemu.m2 at luukku.com Mon Feb 4 11:41:45 2008
From: teemu.m2 at luukku.com (m.. mm..)
Date: Mon, 4 Feb 2008 13:41:45 +0200 (EET)
Subject: [Linux-cluster] Cluster nodenames, question
Message-ID: <1202125305640.teemu.m2.32496.rJNAwaPzmJwHsgT57E5wHQ@luukku.com>

Patrick Caulfeld wrote on 04.02.2008 at 10:23:
> m.. mm.. wrote:
> > Hi everyone!
> >
> > How about this RedHat ClusterSuite 5.1 configuration, if i have example 2
> nodes test1 and test2:
> >
> > Network is configured like this:
> > prodution network
> > test1 ip=10.10.10.5
> > test2 ip=10.10.10.6
> >
> > Private Heartbeat network
> > test1hb ip=192.168.0.2 (machine= test1)
> > test2hb ip=192.168.0.3 (-------= test2)
> >
> > What names should i use when i add cluster member to my cluster "test".
> Is'it test1 or sould i use test1hb name?
>
> You should tell cluster suite the names of the heartbeat interfaces.
> test1hb etc.
>
> Patrick
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

Hi,
But where?

From pcaulfie at redhat.com Mon Feb 4 11:59:38 2008
From: pcaulfie at redhat.com (Patrick Caulfeld)
Date: Mon, 04 Feb 2008 11:59:38 +0000
Subject: [Linux-cluster] Cluster nodenames, question
In-Reply-To: <1202125305640.teemu.m2.32496.rJNAwaPzmJwHsgT57E5wHQ@luukku.com>
References: <1202125305640.teemu.m2.32496.rJNAwaPzmJwHsgT57E5wHQ@luukku.com>
Message-ID: <47A6FE2A.1090708@redhat.com>

m.. mm.. wrote:
> Patrick Caulfeld wrote on 04.02.2008 at 10:23:
>> m.. mm.. wrote:
>>> Hi everyone!
>>> >>> How about this RedHat ClusterSuite 5.1 configuration, if i have example 2 >> nodes test1 and test2: >>> Network is configured like this: >>> prodution network >>> test1 ip=10.10.10.5 >>> test2 ip=10.10.10.6 >>> >>> Private Heartbeat network >>> test1hb ip=192.168.0.2 (machine= test1) >>> test2hb ip=192.168.0.3 (-------= test2) >>> >>> What names should i use when i add cluster member to my cluster "test". >> Is'it test1 or sould i use test1hb name? >> >> >> You should tell cluster suite the names of the heartbeat interfaces. >> test1hb etc. >> >> Patrick >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster > > > Hi, > But where? > in /etc/cluster/cluster.conf Patrick From Santosh.Panigrahi at in.unisys.com Mon Feb 4 12:17:26 2008 From: Santosh.Panigrahi at in.unisys.com (Panigrahi, Santosh Kumar) Date: Mon, 4 Feb 2008 17:47:26 +0530 Subject: [Linux-cluster] Cluster nodenames, question In-Reply-To: <1202125305640.teemu.m2.32496.rJNAwaPzmJwHsgT57E5wHQ@luukku.com> References: <1202125305640.teemu.m2.32496.rJNAwaPzmJwHsgT57E5wHQ@luukku.com> Message-ID: U have to configure this in /etc/cluster/cluster.conf file. If you are new to RHCS better to use system-config-cluster or conga to configure the Redhat cluster. Thanks, --Santosh -----Original Message----- From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of m.. mm.. Sent: Monday, February 04, 2008 5:12 PM To: linux clustering Subject: Re: [Linux-cluster] Cluster nodenames, question Patrick Caulfeld kirjoitti 04.02.2008 kello 10:23: > m.. mm.. wrote: > > Hi everyone! 
> > > > How about this RedHat ClusterSuite 5.1 configuration, if i have example 2 > nodes test1 and test2: > > > > Network is configured like this: > > prodution network > > test1 ip=10.10.10.5 > > test2 ip=10.10.10.6 > > > > Private Heartbeat network > > test1hb ip=192.168.0.2 (machine= test1) > > test2hb ip=192.168.0.3 (-------= test2) > > > > What names should i use when i add cluster member to my cluster "test". > Is'it test1 or sould i use test1hb name? > > > You should tell cluster suite the names of the heartbeat interfaces. > test1hb etc. > > Patrick > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster Hi, But where? ................................................................... Luukku Plus paketilla p??set eroon tila- ja turvallisuusongelmista. Hanki Luukku Plus ja helpotat el?m??si. http://www.mtv3.fi/luukku -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster From shawnlhood at gmail.com Mon Feb 4 16:54:16 2008 From: shawnlhood at gmail.com (Shawn Hood) Date: Mon, 4 Feb 2008 11:54:16 -0500 Subject: [Linux-cluster] performace tuning Message-ID: Hey all, My company has gone live with a GFS cluster this morning. It is a 4 node RHEL4U6 cluster, running RHCS and GFS. It mounts an Apple 4.5TB XRAID configured as RAID5, whose physical volumes are combined into one large volume group. From this volume group, five striped LVMs (striped across the two physical volumes of the XRAID) were created. Five GFS filesystems were created, one on each logical volume. Even though there are currently four nodes, there are 12 journals for each filesystem to allow for planned cluster growth. Currently, each filesystem is mounted noatime, and tunebles quota_enforce and quota_account are set to 0. I have posted the results of gfs_tool gettune /hq-san/nlp/nlp_qa below. 
We have an application which depends heavily upon a find command that lists a number of files. It looks something like this: find $delta_home -name summary -maxdepth 2 -type f Its output consists of thousands of files that exist on /hq-san/nlp/nlp_qa. This command is CRAWLING at the moment. An ext3 filesystem would output hundreds of matches a second. This GFS filesystem is currently outputting 100-200/minutes. This is crippling one of our applications. Any advice on tuning this filesystem for this kind of access would be greatly appreciated. Output from a gfs_tool df /hq-san/nlp_qa: odin / # gfs_tool df /hq-san/nlp/nlp_qa /hq-san/nlp/nlp_qa: SB lock proto = "lock_dlm" SB lock table = "boson:nlp_qa" SB ondisk format = 1309 SB multihost format = 1401 Block size = 4096 Journals = 12 Resource Groups = 3009 Mounted lock proto = "lock_dlm" Mounted lock table = "boson:nlp_qa" Mounted host data = "" Journal number = 1 Lock module flags = Local flocks = FALSE Local caching = FALSE Oopses OK = FALSE Type Total Used Free use% ------------------------------------------------------------------------ inodes 15167101 15167101 0 100% metadata 868298 750012 118286 86% data 219476789 192088469 27388320 88% Output from a df -h: /dev/mapper/hq--san-cam_development 499G 201G 298G 41% /hq-san/nlp/cam_development /dev/mapper/hq--san-nlp_qa 899G 794G 105G 89% /hq-san/nlp/nlp_qa /dev/mapper/hq--san-svn_users 1.5T 1.3T 282G 82% /hq-san/nlp/svn_users /dev/mapper/hq--san-development 499G 373G 126G 75% /hq-san/nlp/development /dev/mapper/hq--san-prod_reports 1023G 680G 343G 67% /hq-san/nlp/prod_reports odin / # gfs_tool gettune /hq-san/nlp/nlp_qa ilimit1 = 100 ilimit1_tries = 3 ilimit1_min = 1 ilimit2 = 500 ilimit2_tries = 10 ilimit2_min = 3 demote_secs = 300 incore_log_blocks = 1024 jindex_refresh_secs = 60 depend_secs = 60 scand_secs = 5 recoverd_secs = 60 logd_secs = 1 quotad_secs = 5 inoded_secs = 15 glock_purge = 0 quota_simul_sync = 64 quota_warn_period = 10 atime_quantum = 3600 
quota_quantum = 60 quota_scale = 1.0000 (1, 1) quota_enforce = 0 quota_account = 0 new_files_jdata = 0 new_files_directio = 0 max_atomic_write = 4194304 max_readahead = 262144 lockdump_size = 131072 stall_secs = 600 complain_secs = 10 reclaim_limit = 5000 entries_per_readdir = 32 prefetch_secs = 10 statfs_slots = 64 max_mhc = 10000 greedy_default = 100 greedy_quantum = 25 greedy_max = 250 rgrp_try_threshold = 100 statfs_fast = 0 seq_readahead = 0 Shawn From gordan at bobich.net Mon Feb 4 16:59:59 2008 From: gordan at bobich.net (gordan at bobich.net) Date: Mon, 4 Feb 2008 16:59:59 +0000 (GMT) Subject: [Linux-cluster] performace tuning In-Reply-To: References: Message-ID: Try mounting the FS with noatime,nodiratime,noquota. Gordan On Mon, 4 Feb 2008, Shawn Hood wrote: > Hey all, > > My company has gone live with a GFS cluster this morning. It is a 4 > node RHEL4U6 cluster, running RHCS and GFS. It mounts an Apple 4.5TB > XRAID configured as RAID5, whose physical volumes are combined into > one large volume group. From this volume group, five striped LVMs > (striped across the two physical volumes of the XRAID) were created. > Five GFS filesystems were created, one on each logical volume. Even > though there are currently four nodes, there are 12 journals for each > filesystem to allow for planned cluster growth. > > Currently, each filesystem is mounted noatime, and tunebles > quota_enforce and quota_account are set to 0. I have posted the > results of gfs_tool gettune /hq-san/nlp/nlp_qa below. We have an > application which depends heavily upon a find command that lists a > number of files. It looks something like this: > find $delta_home -name summary -maxdepth 2 -type f > > Its output consists of thousands of files that exist on > /hq-san/nlp/nlp_qa. This command is CRAWLING at the moment. An ext3 > filesystem would output hundreds of matches a second. This GFS > filesystem is currently outputting 100-200/minutes. This is crippling > one of our applications. 
Any advice on tuning this filesystem for > this kind of access would be greatly appreciated. > > Output from a gfs_tool df /hq-san/nlp_qa: > > odin / # gfs_tool df /hq-san/nlp/nlp_qa > /hq-san/nlp/nlp_qa: > SB lock proto = "lock_dlm" > SB lock table = "boson:nlp_qa" > SB ondisk format = 1309 > SB multihost format = 1401 > Block size = 4096 > Journals = 12 > Resource Groups = 3009 > Mounted lock proto = "lock_dlm" > Mounted lock table = "boson:nlp_qa" > Mounted host data = "" > Journal number = 1 > Lock module flags = > Local flocks = FALSE > Local caching = FALSE > Oopses OK = FALSE > > Type Total Used Free use% > ------------------------------------------------------------------------ > inodes 15167101 15167101 0 100% > metadata 868298 750012 118286 86% > data 219476789 192088469 27388320 88% > > > > Output from a df -h: > > /dev/mapper/hq--san-cam_development 499G 201G 298G 41% > /hq-san/nlp/cam_development > /dev/mapper/hq--san-nlp_qa 899G 794G 105G 89% /hq-san/nlp/nlp_qa > /dev/mapper/hq--san-svn_users 1.5T 1.3T 282G 82% /hq-san/nlp/svn_users > /dev/mapper/hq--san-development 499G 373G 126G 75% /hq-san/nlp/development > /dev/mapper/hq--san-prod_reports 1023G 680G 343G 67% /hq-san/nlp/prod_reports > > odin / # gfs_tool gettune /hq-san/nlp/nlp_qa > ilimit1 = 100 > ilimit1_tries = 3 > ilimit1_min = 1 > ilimit2 = 500 > ilimit2_tries = 10 > ilimit2_min = 3 > demote_secs = 300 > incore_log_blocks = 1024 > jindex_refresh_secs = 60 > depend_secs = 60 > scand_secs = 5 > recoverd_secs = 60 > logd_secs = 1 > quotad_secs = 5 > inoded_secs = 15 > glock_purge = 0 > quota_simul_sync = 64 > quota_warn_period = 10 > atime_quantum = 3600 > quota_quantum = 60 > quota_scale = 1.0000 (1, 1) > quota_enforce = 0 > quota_account = 0 > new_files_jdata = 0 > new_files_directio = 0 > max_atomic_write = 4194304 > max_readahead = 262144 > lockdump_size = 131072 > stall_secs = 600 > complain_secs = 10 > reclaim_limit = 5000 > entries_per_readdir = 32 > prefetch_secs = 10 > statfs_slots = 64 
> max_mhc = 10000 > greedy_default = 100 > greedy_quantum = 25 > greedy_max = 250 > rgrp_try_threshold = 100 > statfs_fast = 0 > seq_readahead = 0 > > > Shawn > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > From lhh at redhat.com Mon Feb 4 17:18:12 2008 From: lhh at redhat.com (Lon Hohberger) Date: Mon, 04 Feb 2008 12:18:12 -0500 Subject: [Linux-cluster] CS5/ Question about behavior with a corrupted Quorum disk In-Reply-To: <47A6BFE0.3000700@bull.net> References: <47A6BFE0.3000700@bull.net> Message-ID: <1202145492.1759.164.camel@ayanami.boston.devel.redhat.com> On Mon, 2008-02-04 at 08:33 +0100, Alain Moulle wrote: > Hi > > Just for information, I wonder if this behavior is normal : > I have a two-nodes cluster with a quorum disk, and the > CS5 is started on both nodes with a service on each one. > Quorum is working fine when I break the quorum disk format > (with a mkfs on the device !) so that mkqisk -L returns > none. It will keep *trying* to operate. > The behavior is : the CS5 is always working fine as if nothing > has happen. I wonder if it is only due to the heuristics or > if this behavior is simply the std behavior of CS5 with > regard to the quorum disk ? It /should/ throw warnings in the log for all the blocks that are corrupt (and it will probably annoy you ;) ). After 1 cycle, the blocks corresponding to active cluster nodes will have correct/current data on them, and life should continue, but reading the rest of the 16 node blocks should continue throwing warnings: [1533] warning: Error reading node ID block 3 [1533] warning: Error reading node ID block 4 [1533] warning: Error reading node ID block 5 [1533] warning: Error reading node ID block 6 [1533] warning: Error reading node ID block 7 ... [1533] warning: Error reading node ID block 16 (Granted, I used 'dd if=/dev/zero ...' 
instead mkfs) Qdiskd will not function if you restart it, however, and nodes will be unable to find the quorum disk after a reboot. The header of the quorum disk is not rewritten while qdiskd is running. You'll have to run mkqdisk to fix it - which should also work (but certainly isn't recommended!). This produced the following on the non-master node, but nothing significant on the master node: [1533] info: Node 1 shutdown [1533] debug: Making bid for master [1533] debug: Node 1 is marked master, but is dead. [1533] debug: Node 1 is marked master, but is dead. [1533] debug: Node 1 is marked master, but is dead. [1533] debug: Node 1 is UP [1533] info: Node 1 is the master Looking at the code, if a node dies between the time you clobber qdisk the quorum disk and the time qdiskd on that node writes a new block, qdiskd won't evict that node. Solution: Don't rub salt in cuts. Also, intentionally corrupting your quorum disk could result in the following: https://bugzilla.redhat.com/show_bug.cgi?id=430264 -- Lon From lhh at redhat.com Mon Feb 4 17:49:23 2008 From: lhh at redhat.com (Lon Hohberger) Date: Mon, 04 Feb 2008 12:49:23 -0500 Subject: [Linux-cluster] CS5/ Question about behavior with a corrupted Quorum disk In-Reply-To: <1202145492.1759.164.camel@ayanami.boston.devel.redhat.com> References: <47A6BFE0.3000700@bull.net> <1202145492.1759.164.camel@ayanami.boston.devel.redhat.com> Message-ID: <1202147363.1759.166.camel@ayanami.boston.devel.redhat.com> On Mon, 2008-02-04 at 12:18 -0500, Lon Hohberger wrote: > On Mon, 2008-02-04 at 08:33 +0100, Alain Moulle wrote: > > Hi > > > > Just for information, I wonder if this behavior is normal : > > I have a two-nodes cluster with a quorum disk, and the > > CS5 is started on both nodes with a service on each one. > > Quorum is working fine when I break the quorum disk format > > (with a mkfs on the device !) so that mkqisk -L returns > > none. > > It will keep *trying* to operate. 
> > > The behavior is : the CS5 is always working fine as if nothing > > has happen. I wonder if it is only due to the heuristics or > > if this behavior is simply the std behavior of CS5 with > > regard to the quorum disk ? > > It /should/ throw warnings in the log for all the blocks that are > corrupt (and it will probably annoy you ;) ). After 1 cycle, the blocks > corresponding to active cluster nodes will have correct/current data on > them, and life should continue, but reading the rest of the 16 node > blocks should continue throwing warnings: > > [1533] warning: Error reading node ID block 3 > [1533] warning: Error reading node ID block 4 > [1533] warning: Error reading node ID block 5 > [1533] warning: Error reading node ID block 6 > [1533] warning: Error reading node ID block 7 > ... > [1533] warning: Error reading node ID block 16 > > (Granted, I used 'dd if=/dev/zero ...' instead mkfs) > > Qdiskd will not function if you restart it, however, and nodes will be > unable to find the quorum disk after a reboot. The header of the quorum > disk is not rewritten while qdiskd is running. You'll have to run > mkqdisk to fix it - which should also work (but certainly isn't > recommended!). Whoops - "should also work while the cluster is running (but certainly isn't recommended)" -- Lon From abhramica at gmail.com Tue Feb 5 06:03:05 2008 From: abhramica at gmail.com (Abhra Paul) Date: Tue, 5 Feb 2008 11:33:05 +0530 Subject: [Linux-cluster] Cluster is slow Message-ID: <8e3fbac10802042203j79785a84pf62d9818e964b202@mail.gmail.com> Hi, I am using a cluster of 10 nodes. In node7,8,9,10 dirac is running for 4 days. This dirac is also required huge amount of space. Now when I am visiting individual node and issuing top ,o/p is not showing and cursor is only blinking without any message or anything. How I can solve this problem. 
With regards Abhra Paul From jparsons at redhat.com Tue Feb 5 06:13:17 2008 From: jparsons at redhat.com (jim parsons) Date: Tue, 05 Feb 2008 01:13:17 -0500 Subject: [Linux-cluster] Cluster is slow In-Reply-To: <8e3fbac10802042203j79785a84pf62d9818e964b202@mail.gmail.com> References: <8e3fbac10802042203j79785a84pf62d9818e964b202@mail.gmail.com> Message-ID: <1202191997.5859.3.camel@localhost.localdomain> On Tue, 2008-02-05 at 11:33 +0530, Abhra Paul wrote: > Hi, > > I am using a cluster of 10 nodes. In node7,8,9,10 dirac is > running for 4 days. This dirac is also required huge amount of space. > Now when I am visiting individual node and issuing top ,o/p is not > showing and cursor is only blinking without any message or anything. > How I can solve this problem. > Can you provide some ancillary info, such as what version of cluster software you are running on what hardware? Could you post your cluster.conf file? Is this a new cluster, or has this cluster been around for awhile working OK and this is a sudden problem? -J From abhramica at gmail.com Tue Feb 5 07:37:53 2008 From: abhramica at gmail.com (Abhra Paul) Date: Tue, 5 Feb 2008 13:07:53 +0530 Subject: [Linux-cluster] Cluster is slow In-Reply-To: <1202191997.5859.3.camel@localhost.localdomain> References: <8e3fbac10802042203j79785a84pf62d9818e964b202@mail.gmail.com> <1202191997.5859.3.camel@localhost.localdomain> Message-ID: <8e3fbac10802042337w726e8973v6f0f909f99a527ca@mail.gmail.com> On Feb 5, 2008 11:43 AM, jim parsons wrote: > > On Tue, 2008-02-05 at 11:33 +0530, Abhra Paul wrote: > > Hi, > > > > I am using a cluster of 10 nodes. In node7,8,9,10 dirac is > > running for 4 days. This dirac is also required huge amount of space. > > Now when I am visiting individual node and issuing top ,o/p is not > > showing and cursor is only blinking without any message or anything. > > How I can solve this problem. 
>
>
> Can you provide some ancillary info, such as what version of cluster
> software you are running on what hardware? Could you post your
> cluster.conf file? Is this a new cluster, or has this cluster been
> around for awhile working OK and this is a sudden problem?
>
> -J
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
-------------------------------------------------------------------------------------------------------------

For cluster software I am using the PBS package (pbs_server, pbs_mom,
pbs_sched), and to submit jobs I am using the Moab Access Portal job
scheduler. But I don't know the versions of this clustering software;
please tell me how I can find them. On my system there is no
cluster.conf file; I have searched with the whereis command. This
cluster is approximately 9 months old, and I am facing this problem for
the first time.

The H/W configuration is as follows:
-- Total 12 nodes, one master and the others slaves.
-- I am using quad-core Intel processors.
-- Each slave node has 12G RAM; the master has 24G.
-- InfiniBand, GHz switch.

It is a Beowulf cluster. Also tell me where I can find the cluster
configuration file.

From marcos at digitaltecnologia.info Tue Feb 5 13:39:42 2008
From: marcos at digitaltecnologia.info (Marcos Ferreira da Silva)
Date: Tue, 5 Feb 2008 08:39:42 -0500
Subject: [Linux-cluster] XEN VM Cluster
Message-ID:

I'm trying to do a two node cluster.
I want to start a virtual machine on vserver1 and relocate it to
vserver2 if node 1 stops.
The VM is installed on shared HP storage. Access is made with two
Emulex Corporation Zephyr-X LightPulse Fibre Channel host adapters.
If I start it manually with xm create it's OK on both servers.
The partition is an LVM volume at /dev/VG2/LV01. I installed the
virtual machine directly on this partition.
The names vserver1 and vserver2 are in /etc/hosts.
The cluster starts, but vserver2 joins and then "Member Left" appears.
I think that my problem is in the fence device. I don't have any
equipment. I have only the servers, storage, NICs and Emulex.

I didn't find any manual for this situation.

    +--- SWITCH ---+
    |              |
    |              |
+----------+  +----------+
| XEN      |  | XEN      |
| maq1     |  |          |
|          |  |          |
| vserver1 |  | vserver2 |
+----------+  +----------+
    |              |
    |              |
  +----------------+
  |   HP Storage   |
  +----------------+

My cluster.conf is:

------------------------------
Marcos Ferreira da Silva
Digital Tecnologia
Uberlandia-MG
(34) 9154-0150 / 3226-2534

From bfilipek at crscold.com Tue Feb 5 14:26:16 2008
From: bfilipek at crscold.com (Brad Filipek)
Date: Tue, 5 Feb 2008 08:26:16 -0600
Subject: [Linux-cluster] EXT3 file system runs slow only right after it is mounted
Message-ID: <9C01E18EF3BC2448A3B1A4812EB87D02474D@SRVEDI.upark.crscold.com>

Hello,

I have a 2 node cluster connected via fibre to a SAN. This is an
active/passive cluster and an EXT3 file system is mounted when the
cluster services start on the active node. If we try to access data on
this mount immediately after it is mounted, it takes an extremely long
time. However, after about an hour, it runs extremely fast and stays
fine (until we un-mount and re-mount it; then it is slow again). So my
question is: when you mount an ext3 partition (without any mount
options), does it do something like scan the whole partition to see
what is on the disk? This is the only thing I can think of that makes
sense. The mount is 500GB in size.

Thanks,
Brad

Confidentiality Notice: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is privileged, confidential and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient or the employee or agent responsible for delivering this message to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited.
If you have received this communication in error, please notify us immediately by email reply or by telephone and immediately delete this message and any attachments.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From isplist at logicore.net Tue Feb 5 15:38:08 2008
From: isplist at logicore.net (isplist at logicore.net)
Date: Tue, 5 Feb 2008 09:38:08 -0600
Subject: [Linux-cluster] NDMP
Message-ID: <2008259388.487694@leena>

Anyone aware of a good open source Linux-based NDMP server for an
attached tape robot such as the Qualstar TLS series?

Mike

From Alexandre.Racine at mhicc.org Tue Feb 5 16:38:36 2008
From: Alexandre.Racine at mhicc.org (Alexandre Racine)
Date: Tue, 5 Feb 2008 11:38:36 -0500
Subject: [Linux-cluster] noatime
References: <200812910934.067039@leena>
Message-ID:

(bump)

But, adding noatime to the fstab did not work in my case.

Is there another way for the noatime?

Alexandre Racine
514-461-1300 poste 3304
alexandre.racine at mhicc.org

-----Original Message-----
From: linux-cluster-bounces at redhat.com on behalf of Alexandre Racine
Sent: Tue 2008-01-29 11:37
To: linux clustering
Subject: RE: [Linux-cluster] noatime

Hi,

noatime will reduce traffic since, without it, each time you touch a
file (read or write) it will change the value of the "last access"
field.

But, just adding it to the fstab did not work in my case.

Is there another way for the noatime?

As for you, isplist, look here
http://sourceware.org/cluster/faq.html#gfs_slowaftermount

Alexandre Racine
514-461-1300 poste 3304
alexandre.racine at mhicc.org

-----Original Message-----
From: linux-cluster-bounces at redhat.com on behalf of isplist at logicore.net
Sent: Tue 2008-01-29 11:09
To: linux clustering
Subject: Re: [Linux-cluster] noatime

Were you trying this for a specific reason? I tried noatime to see if I could
get a faster initial response time from the GFS volume.
Every time I access it, there is a long delay initially, then things are fine until the next access. On Tue, 29 Jan 2008 09:26:19 -0500, Alexandre Racine wrote: > Hi all, > > If I put in my fstab file: > /dev/sdc5 /home gfs noatime > > > Will it actually mount without atime? And how can I confirm this? > > Thanks. > > > Alexandre Racine > 514-461-1300 poste 3304 > alexandre.racine at mhicc.org -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 3280 bytes Desc: not available URL: From bpkroth at wisc.edu Tue Feb 5 16:43:53 2008 From: bpkroth at wisc.edu (Brian Kroth) Date: Tue, 05 Feb 2008 10:43:53 -0600 Subject: [Linux-cluster] noatime In-Reply-To: References: <200812910934.067039@leena> Message-ID: <20080205164353.GA9669@wisc.edu> You could try something like this: mount -onoatime,remount /dev/sdX /mnt/Y I'd like to see your fstab entries. Brian Alexandre Racine : > (bump) > > But, adding noatime to the fstab did not work in my case. > > Is there another way for the noatime? > > > > Alexandre Racine > 514-461-1300 poste 3304 > alexandre.racine at mhicc.org > > > > -----Original Message----- > From: linux-cluster-bounces at redhat.com on behalf of Alexandre Racine > Sent: Tue 2008-01-29 11:37 > To: linux clustering > Subject: RE: [Linux-cluster] noatime > > Hi, > > noatime will reduce traffic since if it's not there, each time you touch a file (read, write) it will change the value of the field "last access". > > But, just adding it to the fstab did not work in my case. > > Is there another way for the noatime? 
> > > > Has for your isplist, look here http://sourceware.org/cluster/faq.html#gfs_slowaftermount > > > Alexandre Racine > 514-461-1300 poste 3304 > alexandre.racine at mhicc.org > > > > -----Original Message----- > From: linux-cluster-bounces at redhat.com on behalf of isplist at logicore.net > Sent: Tue 2008-01-29 11:09 > To: linux clustering > Subject: Re: [Linux-cluster] noatime > > Were you trying this for a specific reason? I tried noatime to see if I could > get a faster initial response time from the GFS volume. Every time I access > it, there is a long delay initially, then things are fine until the next > access. > > > On Tue, 29 Jan 2008 09:26:19 -0500, Alexandre Racine wrote: > > Hi all, > > > > If I put in my fstab file: > > /dev/sdc5 /home gfs noatime > > > > > > Will it actually mount without atime? And how can I confirm this? > > > > Thanks. > > > > > > Alexandre Racine > > 514-461-1300 poste 3304 > > alexandre.racine at mhicc.org > > > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 2192 bytes Desc: not available URL: From gordan at bobich.net Tue Feb 5 16:45:15 2008 From: gordan at bobich.net (gordan at bobich.net) Date: Tue, 5 Feb 2008 16:45:15 +0000 (GMT) Subject: [Linux-cluster] noatime In-Reply-To: References: <200812910934.067039@leena> Message-ID: IIRC, it only works on the clean mount. It won't work with mount -o remount The only other thing I can think of is that your netfs/gfs init script is broken. Gordan On Tue, 5 Feb 2008, Alexandre Racine wrote: > (bump) > > But, adding noatime to the fstab did not work in my case. > > Is there another way for the noatime? 
> > > > Alexandre Racine > 514-461-1300 poste 3304 > alexandre.racine at mhicc.org > > > > -----Original Message----- > From: linux-cluster-bounces at redhat.com on behalf of Alexandre Racine > Sent: Tue 2008-01-29 11:37 > To: linux clustering > Subject: RE: [Linux-cluster] noatime > > Hi, > > noatime will reduce traffic since if it's not there, each time you touch a file (read, write) it will change the value of the field "last access". > > But, just adding it to the fstab did not work in my case. > > Is there another way for the noatime? > > > > Has for your isplist, look here http://sourceware.org/cluster/faq.html#gfs_slowaftermount > > > Alexandre Racine > 514-461-1300 poste 3304 > alexandre.racine at mhicc.org > > > > -----Original Message----- > From: linux-cluster-bounces at redhat.com on behalf of isplist at logicore.net > Sent: Tue 2008-01-29 11:09 > To: linux clustering > Subject: Re: [Linux-cluster] noatime > > Were you trying this for a specific reason? I tried noatime to see if I could > get a faster initial response time from the GFS volume. Every time I access > it, there is a long delay initially, then things are fine until the next > access. > > > On Tue, 29 Jan 2008 09:26:19 -0500, Alexandre Racine wrote: >> Hi all, >> >> If I put in my fstab file: >> /dev/sdc5 /home gfs noatime >> >> >> Will it actually mount without atime? And how can I confirm this? >> >> Thanks. 
>>
>>
>> Alexandre Racine
>> 514-461-1300 poste 3304
>> alexandre.racine at mhicc.org
>
>
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
>

From lhh at redhat.com Tue Feb 5 19:20:49 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Tue, 05 Feb 2008 14:20:49 -0500
Subject: [Linux-cluster] EXT3 file system runs slow only right after it is mounted
In-Reply-To: <9C01E18EF3BC2448A3B1A4812EB87D02474D@SRVEDI.upark.crscold.com>
References: <9C01E18EF3BC2448A3B1A4812EB87D02474D@SRVEDI.upark.crscold.com>
Message-ID: <1202239249.21504.2.camel@ayanami.boston.devel.redhat.com>

On Tue, 2008-02-05 at 08:26 -0600, Brad Filipek wrote:
> Hello,
>
> I have a 2 node cluster connected via fibre to a SAN. This is an
> active/passive cluster and an EXT3 file system is mounted when the
> cluster services start on the active node. If we try to access data on
> this mount immediately after it is mounted, it takes an extremely long
> time. However, after about an hour, it runs extremely fast and is fine
> for good (until we un-mount and re-mount it, then it is slow again).
> So my question is, when you mount an ext3 partition (without any mount
> options), does it do something like scan the whole partition to see
> what is on the disk? This is the only thing I can think of that makes
> sense. The mount is 500GB in size.

Did you specify force_fsck or something?

-- Lon

From marcos at digitaltecnologia.info Tue Feb 5 19:42:02 2008
From: marcos at digitaltecnologia.info (Marcos Ferreira da Silva)
Date: Tue, 5 Feb 2008 14:42:02 -0500
Subject: [Linux-cluster] Sharing GFS Partition
Message-ID: <72de961c26684084aa85d3fac994ee9b@Mail61.safesecureweb.com>

How could I share a partition in a storage with two nodes running a service?
I create a partition and use mkfs mkfs.gfs -r2048 -p lock_dlm -t cluster1:gfsaluno -j 8 /dev/VGALUNO/LVAluno mount -t gfs /dev/VGALUNO/LVAluno /storage/aluno/ /sbin/mount.gfs: node not a member of the default fence domain /sbin/mount.gfs: error mounting lockproto lock_dlm My cluster.conf ------------------------------ Marcos Ferreira da Silva Digital Tecnologia Uberlandia-MG (34) 9154-0150 / 3226-2534 From lhh at redhat.com Tue Feb 5 19:54:48 2008 From: lhh at redhat.com (Lon Hohberger) Date: Tue, 05 Feb 2008 14:54:48 -0500 Subject: [Linux-cluster] Sharing GFS Partition In-Reply-To: <72de961c26684084aa85d3fac994ee9b@Mail61.safesecureweb.com> References: <72de961c26684084aa85d3fac994ee9b@Mail61.safesecureweb.com> Message-ID: <1202241288.21504.5.camel@ayanami.boston.devel.redhat.com> On Tue, 2008-02-05 at 14:42 -0500, Marcos Ferreira da Silva wrote: > How could I share a partition in a storage with tow nodes running a service? > > I create a partition and use mkfs > mkfs.gfs -r2048 -p lock_dlm -t cluster1:gfsaluno -j 8 /dev/VGALUNO/LVAluno > > mount -t gfs /dev/VGALUNO/LVAluno /storage/aluno/ > > /sbin/mount.gfs: node not a member of the default fence domain > /sbin/mount.gfs: error mounting lockproto lock_dlm > chkconfig --level 2345 fenced on (for starters). You *must* have fencing. To make two services share it: -- Lon From isplist at logicore.net Tue Feb 5 20:52:08 2008 From: isplist at logicore.net (isplist at logicore.net) Date: Tue, 5 Feb 2008 14:52:08 -0600 Subject: [Linux-cluster] DLM Problem In-Reply-To: Message-ID: <20082514528.743959@leena> Who was it that said it was as simple as; >yum install iscsi-target Whew, if only it were that simple :). So far, I'm in some sort of hell. I've tried all sorts of ways to get this to install, nothing has worked. Now I'm trying to install this on a CentOS5 machine with no luck. 
First, I tried the most basic; # yum install iscsi-target Loading "installonlyn" plugin Setting up Install Process Setting up repositories Reading repository metadata in from local files Parsing package install arguments Nothing to do # yum install iscsitarget Loading "installonlyn" plugin Setting up Install Process Setting up repositories Reading repository metadata in from local files Parsing package install arguments Nothing to do Then I downloaded several versions and have been trying various combinations. The closest I've gotten so far is the following. # yum install dkms-iscsi_trgt-0.4.13-2fc5.i386.rpm iscsitarget-0.4.13-2fc5.i386.rpm dkms-1.09-3.at.noarch.rpm Running Transaction Installing: dkms ######################### [1/3] Installing: dkms-iscsi_trgt ######################### [2/3] Creating symlink /var/dkms/iscsi_trgt/0.4.13/source -> /usr/src/iscsi_trgt-0.4.13 DKMS: Add Completed. Preparing kernel 2.6.18-53.1.6.el5 for module build: (This is not compiling a kernel, only just preparing kernel symbols) Storing current .config to be restored when complete Running Generic preparation routine make mrproper....(bad exit status: 2) Warning! Cannot find a .config file to prepare your kernel with. Try using the --config option to specify where one can be found. Your build will likely fail because of this. make prepare-all....(bad exit status: 2) make oldconfig....(bad exit status: 2) / Building module: cleaning build area.... make KERNELRELEASE=2.6.18-53.1.6.el5 -C /lib/modules/2.6.18-53.1.6.el5/build SUBDIRS=/var/dkms/iscsi_trgt/0.4.13/build modules....(bad exit status: 2) Error! Bad return status from module build for kernel: 2.6.18-53.1.6.el5 Consult the make.log in the build directory /var/dkms/iscsi_trgt/0.4.13/build/ for more information. Error! Could not locate iscsi_trgt.ko for module iscsi_trgt in the DKMS tree. You must run a dkms build for kernel 2.6.18-53.1.6.el5 first. 
Installing: iscsitarget ######################### [3/3] Installed: dkms.noarch 0:1.09-3.at dkms-iscsi_trgt.i386 0:0.4.13-2fc5 iscsitarget.i386 0:0.4.13-2fc5 Complete! From jbrassow at redhat.com Tue Feb 5 20:56:36 2008 From: jbrassow at redhat.com (Jonathan Brassow) Date: Tue, 5 Feb 2008 14:56:36 -0600 Subject: [Linux-cluster] Storage problem in Linux Cluster In-Reply-To: <8e3fbac10802032216i37c01e9em34e1c7b81e7bd0da@mail.gmail.com> References: <8e3fbac10802010127s511fa106w59302a27ec12161b@mail.gmail.com> <1201882379.1759.111.camel@ayanami.boston.devel.redhat.com> <8e3fbac10802032216i37c01e9em34e1c7b81e7bd0da@mail.gmail.com> Message-ID: <87F62DE3-628F-4C89-AF9E-BD66462F260C@redhat.com> On Feb 4, 2008, at 12:16 AM, Abhra Paul wrote: > On Feb 1, 2008 9:42 PM, Lon Hohberger wrote: >> On Fri, 2008-02-01 at 14:57 +0530, Abhra Paul wrote: >>> Respected Users >>> >>> I have a problem in cluster. One user of this cluster needs huge >>> amount of space. In this cluster one big partition(size 1TB) is >>> mounted on /data . So I provide this amount of space for his program >>> execution. At 1.30 PM he occupied 200GB of this storage(which is >>> mounted on /data). Now when I am going to check this /data , about >>> usage of space , using the command du -sh * . In o/p there is no >>> entry for /data and courser is only blinking without showing total >>> o/p >>> as well as command prompt is not coming back. Also when I am issuing >>> cd / same thing is happening . How I solve this problem. >> >> Could you check: >> cman_tool services >> >> -- Lon >> > ------------------------------------------------------------------------------------------------------------- > > I have checked cman_tool services. But in my system there is no > cman_tool , as well as there is no man page entry for cman_tool. Now , > how I will solve my problem. Please, help. > > > With regards > > Abhra Paul What kind of cluster are you using? 
If you don't have cman_tool, then you are likely not using rgmanager...
or GFS... or cluster products that I am familiar with. However, is
there anything useful in /var/log/messages? Has the machine crashed?
hung? etc

brassow

From isplist at logicore.net Tue Feb 5 21:03:18 2008
From: isplist at logicore.net (isplist at logicore.net)
Date: Tue, 5 Feb 2008 15:03:18 -0600
Subject: [Linux-cluster] DLM Problem - Final Solution
In-Reply-To: <1201861486.29126.102.camel@admc.win-rar.local>
Message-ID: <20082515318.174544@leena>

Just an update on this way too long thread for anyone who ends up reading the
whole thing. I very much appreciate everyone's help on this, some good things
have come from it, mostly, totally simplifying my longer term approach.

I think it started with Gordan's suggestion of building an aggregator, and
from there it's expanded into other things for me based on the many
suggestions from everyone on this and a couple of other threads.

I still don't know why I was having the strange GFS delays when first
connecting to or using a GFS volume. I thought it might be the storage device
itself but that was not the case after a lot more testing.

Talking about NFS got me pulling out my file servers and looking at them
again. Turns out that it became my solution, at least for now. I might use
GFS for one or two things but it's not going to be in production at this
point. I'm finding that my NFS server will be perfect for sharing and serving
up static web materials and that the aggregator suggestion ends up being the
ongoing storage solution for media and other such data.

I am going to limit my FC use to connecting storage to storage servers such
as the NFS and aggregator servers. This will de-complicate my setup
drastically and allow me to simply use a load-balanced front end while not
having to maintain a cluster of any sort. I am getting the same functionality
for the most part, for the needs I have, which are LAMP based services.
I have several Qualstar tape drives which have NDMP built in, which I will be
hanging off the FC network using bridges. For slower long-term storage, I am
trying to find a Linux solution which allows me to build an NDMP server that
I can attach tape drives to, allowing the server to act as an NDMP front end
to the tape drive/s.

From yanv at xandros.com Tue Feb 5 21:27:31 2008
From: yanv at xandros.com (Yan Vinogradov)
Date: Tue, 5 Feb 2008 16:27:31 -0500
Subject: [Linux-cluster] problem with deleting a node from a cluster
Message-ID: <16468889.20851202246851369.JavaMail.root@ottmail>

I have a cluster of 3 nodes (RHEL5u1). Two are online and the third one is
offline. I remove the offline one by propagating an updated version of
cluster.conf by executing the ccs_tool update command, then the cman version
command, and then I manually remove the cluster.conf from the offline node.
Problem is, after all these steps are performed, the clustat command on the
remaining nodes in the cluster still shows the cluster as containing 3
nodes, including the one that I have just deleted.

Thanks!
Yan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marcos at digitaltecnologia.info Tue Feb 5 21:43:28 2008
From: marcos at digitaltecnologia.info (Marcos Ferreira da Silva)
Date: Tue, 5 Feb 2008 16:43:28 -0500
Subject: [Linux-cluster] Sharing GFS Partition
Message-ID: <1615ab867c794d99bd702ccebd73f828@Mail61.safesecureweb.com>

The cluster starts but I couldn't mount. The process fenced is started.

21399 ?  Ssl  0:00 /sbin/ccsd
21421 ?  Ss   0:00 /sbin/groupd
21429 ?  Ss   0:00 /sbin/fenced
21435 ?  Ss   0:00 /sbin/dlm_controld
21441 ?  Ss   0:00 /sbin/gfs_controld
21831 ?
Ssl 0:00 clvmd -T20 [root at vserver1 ~]# clustat Member Status: Quorate Member Name ID Status ------ ---- ---- ------ vserver1.teste.br 1 Online, Local, rgmanager vserver2.teste.br 2 Offline Service Name Owner (Last) State ------- ---- ----- ------ ----- service:teste (none) stopped [root at vserver1 ~]# cman_tool services type level name id state fence 0 default 00010001 JOIN_START_WAIT [1] dlm 1 clvmd 00020001 none [1] dlm 1 rgmanager 00030001 none [1] My cluster.conf: The service "teste" don't start. [root at vserver1 cluster]# mount -t gfs /dev/VGALUNO/LVAluno /storage/aluno/ /sbin/mount.gfs: node not a member of the default fence domain /sbin/mount.gfs: error mounting lockproto lock_dlm [root at vserver1 cluster]# lsmod | grep dlm lock_dlm 56009 1 gfs2 522861 1 lock_dlm dlm 153185 13 lock_dlm configfs 62301 2 dlm ------------------------------ Marcos Ferreira da Silva Digital Tecnologia Uberlandia-MG (34) 9154-0150 / 3226-2534 -------- Mensagem Original -------- > De: Lon Hohberger > Enviado: ter?a-feira, 5 de fevereiro de 2008 17:55 > Para: marcos at digitaltecnologia.info, linux clustering > Assunto: SPAM-LOW: Re: [Linux-cluster] Sharing GFS Partition > > On Tue, 2008-02-05 at 14:42 -0500, Marcos Ferreira da Silva wrote: > > How could I share a partition in a storage with tow nodes running a service? > > > > I create a partition and use mkfs > > mkfs.gfs -r2048 -p lock_dlm -t cluster1:gfsaluno -j 8 /dev/VGALUNO/LVAluno > > > > mount -t gfs /dev/VGALUNO/LVAluno /storage/aluno/ > > > > /sbin/mount.gfs: node not a member of the default fence domain > > /sbin/mount.gfs: error mounting lockproto lock_dlm > > > > chkconfig --level 2345 fenced on > > (for starters). You *must* have fencing. 
> > To make two services share it: > > > > > > > > > > > > -- Lon From lhh at redhat.com Tue Feb 5 21:46:49 2008 From: lhh at redhat.com (Lon Hohberger) Date: Tue, 05 Feb 2008 16:46:49 -0500 Subject: [Linux-cluster] problem with deleting a node from a cluster In-Reply-To: <16468889.20851202246851369.JavaMail.root@ottmail> References: <16468889.20851202246851369.JavaMail.root@ottmail> Message-ID: <1202248009.21504.7.camel@ayanami.boston.devel.redhat.com> On Tue, 2008-02-05 at 16:27 -0500, Yan Vinogradov wrote: > I have a cluster of 3 nodes (RHEL5u1). Two are online and the third > one is offline. I remove the offline one by propogating an updated > version of cluster.conf by executing ccs_tool update command, then > cman version command, and then I manually remove the cluster.conf from > the offline node. Problem is after all these steps are performed > clustat command on the remaining nodes in the cluster still shows the > cluster as containing 3 nodes, including the one that I have just > deleted. Could be a bug. Does it say 'estranged' in the output, or not? -- Lon From raycharles_man at yahoo.com Wed Feb 6 03:55:58 2008 From: raycharles_man at yahoo.com (Ray Charles) Date: Tue, 5 Feb 2008 19:55:58 -0800 (PST) Subject: [Linux-cluster] XEN VM Cluster In-Reply-To: Message-ID: <384334.37325.qm@web32112.mail.mud.yahoo.com> Hi Marcos, Here are 2 links I found usefull when doing what you're trying. http://www.linux.com/articles/55773 http://sources.redhat.com/cluster/wiki/VMClusterCookbook I would also say be sure to give some attenttion to the multicast messages between the hosts, make sure that multicast is passing throught the switch. Seemed to be a gotcha for me, and there's docs on that part floating around. -Ray --- Marcos Ferreira da Silva wrote: > I'm trying to do a two node cluster. > I want start a virtual machine at vserver1 and > reallocate it to vserver2 if node 1 stop. > The VM is installed in a shared HP storage. 
> This access is made with two Emulex Corporation > Zephyr-X LightPulse Fibre Channel Host Adapter. > If I start it manually which xm create its ok in > both servers. > The partition is a LVM in /dev/VG2/LV01. > I install virtual machine directly in this > partition. > The names vserver1 e vserv2 are in the /etc/hosts. > The cluster start but vserver2 join and after appear > "Member Left". > > I think that my problem is in fence device. > I don't have any equipament. > I have only the servers, storage, NICs and Emulex. > > I didn't find any manual for this situation. > > > +--- SWITCH ---+ > | | > | | > +----------+ +----------+ > | XEN | | XEN | > | maq1 | | | > | | | | > | vserver1 | | vserver2 | > +----------+ +----------+ > | | > | | > +----------------+ > | HP Storage | > +----------------+ > > > > My cluster.conf is: > > > name="cluster1"> > post_fail_delay="0" post_join_delay="3"/> > key_file="/etc/cluster/fence_xvm.key"/> > > name="vserver1.teste.br" nodeid="1" votes="1"> > > > domain="teste.br" name="maq1"/> > > > > name="vserver2.teste.br" nodeid="2" votes="1"> > > > > > > name="maq1" auth="none"/> > > > > name="fodMaq1" ordered="1" restricted="1"> > name="vserver1.teste.br" priority="1"/> > name="vserver2.teste.br" priority="2"/> > > > > exclusive="0" name="maq1" path="/etc/xen" > recovery="relocate"/> > > > > > ------------------------------ > Marcos Ferreira da Silva > Digital Tecnologia > Uberlandia-MG > (34) 9154-0150 / 3226-2534 > > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > ____________________________________________________________________________________ Looking for last minute shopping deals? Find them fast with Yahoo! Search. 
http://tools.search.yahoo.com/newsearch/category.php?category=shopping From fog at t.is Wed Feb 6 07:22:32 2008 From: fog at t.is (=?iso-8859-1?Q?Finnur_=D6rn_Gu=F0mundsson_-_TM_Software?=) Date: Wed, 6 Feb 2008 07:22:32 -0000 Subject: [Linux-cluster] DLM Problem In-Reply-To: <20082514528.743959@leena> References: <20082514528.743959@leena> Message-ID: <3DDA6E3E456E144DA3BB0A62A7F7F77901CAC88B@SKYHQAMX08.klasi.is> Do you have kernel-devel installed ? Bgrds, Finnur -----Original Message----- From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of isplist at logicore.net Sent: 5. febr?ar 2008 20:52 To: linux clustering Subject: Re: [Linux-cluster] DLM Problem Who was it that said it was as simple as; >yum install iscsi-target Whew, if only it were that simple :). So far, I'm in some sort of hell. I've tried all sorts of ways to get this to install, nothing has worked. Now I'm trying to install this on a CentOS5 machine with no luck. First, I tried the most basic; # yum install iscsi-target Loading "installonlyn" plugin Setting up Install Process Setting up repositories Reading repository metadata in from local files Parsing package install arguments Nothing to do # yum install iscsitarget Loading "installonlyn" plugin Setting up Install Process Setting up repositories Reading repository metadata in from local files Parsing package install arguments Nothing to do Then I downloaded several versions and have been trying various combinations. The closest I've gotten so far is the following. # yum install dkms-iscsi_trgt-0.4.13-2fc5.i386.rpm iscsitarget-0.4.13-2fc5.i386.rpm dkms-1.09-3.at.noarch.rpm Running Transaction Installing: dkms ######################### [1/3] Installing: dkms-iscsi_trgt ######################### [2/3] Creating symlink /var/dkms/iscsi_trgt/0.4.13/source -> /usr/src/iscsi_trgt-0.4.13 DKMS: Add Completed. 
Preparing kernel 2.6.18-53.1.6.el5 for module build:
(This is not compiling a kernel, only just preparing kernel symbols)
Storing current .config to be restored when complete
Running Generic preparation routine
make mrproper....(bad exit status: 2)
Warning! Cannot find a .config file to prepare your kernel with.
Try using the --config option to specify where one can be found.
Your build will likely fail because of this.
make prepare-all....(bad exit status: 2)
make oldconfig....(bad exit status: 2)
/
Building module:
cleaning build area....
make KERNELRELEASE=2.6.18-53.1.6.el5 -C /lib/modules/2.6.18-53.1.6.el5/build
SUBDIRS=/var/dkms/iscsi_trgt/0.4.13/build modules....(bad exit status: 2)
Error! Bad return status from module build for kernel: 2.6.18-53.1.6.el5
Consult the make.log in the build directory
/var/dkms/iscsi_trgt/0.4.13/build/ for more information.
Error! Could not locate iscsi_trgt.ko for module iscsi_trgt in the DKMS tree.
You must run a dkms build for kernel 2.6.18-53.1.6.el5 first.
Installing: iscsitarget  ######################### [3/3]

Installed:
dkms.noarch 0:1.09-3.at
dkms-iscsi_trgt.i386 0:0.4.13-2fc5
iscsitarget.i386 0:0.4.13-2fc5
Complete!

--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

From marcos at digitaltecnologia.info Wed Feb 6 09:28:22 2008
From: marcos at digitaltecnologia.info (Marcos Ferreira da Silva)
Date: Wed, 6 Feb 2008 04:28:22 -0500
Subject: [Linux-cluster] XEN VM Cluster
Message-ID: <8a2786b195ac4e778af0830a177e25d7@Mail61.safesecureweb.com>

Thanks for the docs.

In the VMClusterCookbook I don't understand where the directory /guests is.
Is it a shared partition on a storage array?
How could I configure this shared partition to be accessible to both machines?

Do I have to use a "disk in a file" like the doc?
Or could I use a real partition on shared storage (a partition with no filesystem, only created by LVM)?
------------------------------ Marcos Ferreira da Silva Digital Tecnologia Uberlandia-MG (34) 9154-0150 / 3226-2534 -------- Mensagem Original -------- > De: Ray Charles > Enviado: quarta-feira, 6 de fevereiro de 2008 1:56 > Para: marcos at digitaltecnologia.info, linux clustering > Assunto: Re: [Linux-cluster] XEN VM Cluster > > Hi Marcos, > > Here are 2 links I found usefull when doing what > you're trying. > > http://www.linux.com/articles/55773 > > http://sources.redhat.com/cluster/wiki/VMClusterCookbook > > I would also say be sure to give some attenttion to > the multicast messages between the hosts, make sure > that multicast is passing throught the switch. Seemed > to be a gotcha for me, and there's docs on that part > floating around. > > -Ray > > > > --- Marcos Ferreira da Silva > wrote: > > > I'm trying to do a two node cluster. > > I want start a virtual machine at vserver1 and > > reallocate it to vserver2 if node 1 stop. > > The VM is installed in a shared HP storage. > > This access is made with two Emulex Corporation > > Zephyr-X LightPulse Fibre Channel Host Adapter. > > If I start it manually which xm create its ok in > > both servers. > > The partition is a LVM in /dev/VG2/LV01. > > I install virtual machine directly in this > > partition. > > The names vserver1 e vserv2 are in the /etc/hosts. > > The cluster start but vserver2 join and after appear > > "Member Left". > > > > I think that my problem is in fence device. > > I don't have any equipament. > > I have only the servers, storage, NICs and Emulex. > > > > I didn't find any manual for this situation. 
> >
> >
> >        +--- SWITCH ---+
> >        |              |
> >        |              |
> > +----------+    +----------+
> > |   XEN    |    |   XEN    |
> > |   maq1   |    |          |
> > |          |    |          |
> > | vserver1 |    | vserver2 |
> > +----------+    +----------+
> >        |              |
> >        |              |
> >      +----------------+
> >      |   HP Storage   |
> >      +----------------+
> >
> >
> > My cluster.conf is:
> >
> > name="cluster1">
> > post_fail_delay="0" post_join_delay="3"/>
> > key_file="/etc/cluster/fence_xvm.key"/>
> >
> > name="vserver1.teste.br" nodeid="1" votes="1">
> > domain="teste.br" name="maq1"/>
> >
> > name="vserver2.teste.br" nodeid="2" votes="1">
> >
> > name="maq1" auth="none"/>
> >
> > name="fodMaq1" ordered="1" restricted="1">
> > name="vserver1.teste.br" priority="1"/>
> > name="vserver2.teste.br" priority="2"/>
> >
> > exclusive="0" name="maq1" path="/etc/xen"
> > recovery="relocate"/>
> >
> > ------------------------------
> > Marcos Ferreira da Silva
> > Digital Tecnologia
> > Uberlandia-MG
> > (34) 9154-0150 / 3226-2534
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster at redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
>
> ____________________________________________________________________________________
> Looking for last minute shopping deals?
> Find them fast with Yahoo! Search. http://tools.search.yahoo.com/newsearch/category.php?category=shopping

From johannes.russek at io-consulting.net Wed Feb 6 10:33:02 2008
From: johannes.russek at io-consulting.net (jr)
Date: Wed, 06 Feb 2008 11:33:02 +0100
Subject: [Linux-cluster] XEN VM Cluster
In-Reply-To: <8a2786b195ac4e778af0830a177e25d7@Mail61.safesecureweb.com>
References: <8a2786b195ac4e778af0830a177e25d7@Mail61.safesecureweb.com>
Message-ID: <1202293983.32498.32.camel@admc.win-rar.local>

Hi Marcos,

Am Mittwoch, den 06.02.2008, 04:28 -0500 schrieb Marcos Ferreira da Silva:
> Thanks for the docs.
> > In the VMClusterCookbook I don't understand where is the directory /guests. > Is a shared partition in a storage? > How could I configure this shared partition to be access for both machines? it should be anything that is available on both machines and shared among them, i have a small GFS partition for that on my vm cluster, but i guess you can use NFS for that too. > > Have I use a "disk in a file" like the doc? that's nothing cluster specific but xen specific. it means that you use a file as a "blockdevice" for your guest domain. (check xen docs) > Or could I use a real partition in a shared storage (partitioon with no filesystem only created by lvm)? even though the cookbooks suggested file images on a shared filesystem, i simply used luns i carved from my SAN, which are then used as "phy" blockdevices by the guests. Because i'm using multipathing on all machines and thus have the luns in a consistent naming scheme on all machines, it works pretty well for migration and failover. enjoy, johannes From marcos at digitaltecnologia.info Wed Feb 6 11:01:09 2008 From: marcos at digitaltecnologia.info (Marcos Ferreira da Silva) Date: Wed, 6 Feb 2008 06:01:09 -0500 Subject: [Linux-cluster] XEN VM Cluster Message-ID: <1e3d27e97a62428e9c581714cb2ebb96@Mail61.safesecureweb.com> How could configure a partition to share a VM config for two machines? Could you send me your cluster.conf for I compare with I want to do? Then I need to have a shared partition to put the VMs config, that will be access by other machines, and a physical (LVM in a storage) to put the real machine. Is it correct? When I start a VM in a node 1 it will start in a physical device. If I disconnect the node 1, will the vm migrate to node 2? Will the clients connections lose? I'm use a HP Storage and a two servers with multipath with emulex fiber channel. 
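The "phy:" arrangement jr describes — each guest on its own SAN LUN, with the same multipath device name on every node — corresponds to a guest config file along these lines. This is only a sketch: every name and path below is a made-up example, not taken from this thread.

```
# /etc/xen/vserver1 -- illustrative only; adjust names and paths to your setup
name       = "vserver1"
memory     = 512
bootloader = "/usr/bin/pygrub"
# the same multipath device name must exist on every node for migration to work
disk       = [ "phy:/dev/mapper/mpath-vserver1,xvda,w" ]
vif        = [ "bridge=xenbr0" ]
```

Because the disk line refers to a device name that resolves identically on both nodes, the same config works unchanged wherever the guest starts.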
------------------------------ Marcos Ferreira da Silva Digital Tecnologia Uberlandia-MG (34) 9154-0150 / 3226-2534 -------- Mensagem Original -------- > De: jr > Enviado: quarta-feira, 6 de fevereiro de 2008 8:35 > Para: marcos at digitaltecnologia.info, linux clustering > Assunto: SPAM-LOW: Re: [Linux-cluster] XEN VM Cluster > > Hi Marcos, > > Am Mittwoch, den 06.02.2008, 04:28 -0500 schrieb Marcos Ferreira da > Silva: > > Thanks for the docs. > > > > In the VMClusterCookbook I don't understand where is the directory /guests. > > Is a shared partition in a storage? > > How could I configure this shared partition to be access for both machines? > > it should be anything that is available on both machines and shared > among them, i have a small GFS partition for that on my vm cluster, but > i guess you can use NFS for that too. > > > > > Have I use a "disk in a file" like the doc? > > that's nothing cluster specific but xen specific. it means that you use > a file as a "blockdevice" for your guest domain. (check xen docs) > > > Or could I use a real partition in a shared storage (partitioon with no filesystem only created by lvm)? > > even though the cookbooks suggested file images on a shared filesystem, > i simply used luns i carved from my SAN, which are then used as "phy" > blockdevices by the guests. Because i'm using multipathing on all > machines and thus have the luns in a consistent naming scheme on all > machines, it works pretty well for migration and failover. > enjoy, > johannes From johannes.russek at io-consulting.net Wed Feb 6 11:07:15 2008 From: johannes.russek at io-consulting.net (jr) Date: Wed, 06 Feb 2008 12:07:15 +0100 Subject: [Linux-cluster] XEN VM Cluster In-Reply-To: <1e3d27e97a62428e9c581714cb2ebb96@Mail61.safesecureweb.com> References: <1e3d27e97a62428e9c581714cb2ebb96@Mail61.safesecureweb.com> Message-ID: <1202296035.32498.37.camel@admc.win-rar.local> > How could configure a partition to share a VM config for two machines? 
> Could you send me your cluster.conf for I compare with I want to do? no need for my cluster.conf. just use a GFS partition and it will be fine. (don't forget to put it into fstab) > > Then I need to have a shared partition to put the VMs config, that will be access by other machines, and a physical (LVM in a storage) to put the real machine. > Is it correct? i don't know what you mean by "real machine", but your guests not only need the config, they will also need some storage for their system. that's where you need a storage that's connected to your nodes, wether it's luns, lvm lvs or image files, no matter. just keep in mind that if you are using image files, you need to place them on GFS so that every node in your cluster can access them the same. > > When I start a VM in a node 1 it will start in a physical device. > If I disconnect the node 1, will the vm migrate to node 2? > Will the clients connections lose? it's just failover, which means that if the cluster sees a problem with one of the nodes, the other node will take over it's services, which basically means that the vms will be started on the other node. that does mean that your clients will get disconnected. > > I'm use a HP Storage and a two servers with multipath with emulex fiber channel. should be fine. johannes From marcos at digitaltecnologia.info Wed Feb 6 11:34:45 2008 From: marcos at digitaltecnologia.info (Marcos Ferreira da Silva) Date: Wed, 6 Feb 2008 06:34:45 -0500 Subject: [Linux-cluster] XEN VM Cluster Message-ID: I have a problem when I'm trying to mount the partition. Where is my mistake? 
[root at vserver1 ~]# clustat
Member Status: Quorate

  Member Name                      ID   Status
  ------ ----                      ---- ------
  vserver1.teste.br                   1 Online, Local, rgmanager
  vserver2.teste.br                   2 Offline

  Service Name                     Owner (Last)                     State
  ------- ----                     ----- ------                     -----
  service:teste                    (none)                           stopped

[root at vserver1 ~]# cat /etc/cluster/cluster.conf
[root at vserver1 ~]# mount -t gfs /dev/VGALUNO/LVAluno /storage/aluno/
/sbin/mount.gfs: node not a member of the default fence domain
/sbin/mount.gfs: error mounting lockproto lock_dlm

[root at vserver1 ~]# lsmod | grep dlm
lock_dlm               56009  1
gfs2                  522861  1 lock_dlm
dlm                   153185  13 lock_dlm
configfs               62301  2 dlm

[root at vserver1 ~]# gfs_edit /dev/VGALUNO/LVAluno
Block #10 of 31FF000 (superblock)  (1 of 4)
00010000 01161970 00000001 00000000 00000000 [...p............]
00010010 00000064 00000000 0000051D 00000579 [...d...........y]
00010020 00000000 00001000 0000000C 00000010 [................]
00010030 00000000 00000032 00000000 00000032 [.......2.......2]
00010040 00000000 00000033 00000000 00000033 [.......3.......3]
00010050 00000000 00000036 00000000 00000036 [.......6.......6]
00010060 6C6F636B 5F646C6D 00000000 00000000 [lock_dlm........]
00010070 00000000 00000000 00000000 00000000 [................]
00010080 00000000 00000000 00000000 00000000 [................]
00010090 00000000 00000000 00000000 00000000 [................]
000100A0 636C7573 74657231 3A676673 616C756E [cluster1:gfsalun]
000100B0 6F000000 00000000 00000000 00000000 [o...............]

------------------------------
Marcos Ferreira da Silva
Digital Tecnologia
Uberlandia-MG
(34) 9154-0150 / 3226-2534

-------- Mensagem Original --------
> De: jr
> Enviado: quarta-feira, 6 de fevereiro de 2008 9:07
> Para: marcos at digitaltecnologia.info, linux clustering
> Assunto: SPAM-LOW: Re: [Linux-cluster] XEN VM Cluster
>
> > How could configure a partition to share a VM config for two machines?
> > Could you send me your cluster.conf for I compare with I want to do?
> > no need for my cluster.conf. just use a GFS partition and it will be > fine. (don't forget to put it into fstab) > > > > > Then I need to have a shared partition to put the VMs config, that will be access by other machines, and a physical (LVM in a storage) to put the real machine. > > Is it correct? > > i don't know what you mean by "real machine", but your guests not only > need the config, they will also need some storage for their system. > that's where you need a storage that's connected to your nodes, wether > it's luns, lvm lvs or image files, no matter. just keep in mind that if > you are using image files, you need to place them on GFS so that every > node in your cluster can access them the same. > > > > > When I start a VM in a node 1 it will start in a physical device. > > If I disconnect the node 1, will the vm migrate to node 2? > > Will the clients connections lose? > > it's just failover, which means that if the cluster sees a problem with > one of the nodes, the other node will take over it's services, which > basically means that the vms will be started on the other node. > that does mean that your clients will get disconnected. > > > > > I'm use a HP Storage and a two servers with multipath with emulex fiber channel. > > should be fine. > > johannes From bfilipek at crscold.com Wed Feb 6 13:38:29 2008 From: bfilipek at crscold.com (Brad Filipek) Date: Wed, 6 Feb 2008 07:38:29 -0600 Subject: [Linux-cluster] EXT3 file system runs slow only right after itis mounted References: <9C01E18EF3BC2448A3B1A4812EB87D02474D@SRVEDI.upark.crscold.com> <1202239249.21504.2.camel@ayanami.boston.devel.redhat.com> Message-ID: <9C01E18EF3BC2448A3B1A4812EB87D02474E@SRVEDI.upark.crscold.com> Lon, No I did not specify any mount options when creating the resource. Any ideas on what would help? 
Brad Filipek ________________________________ From: linux-cluster-bounces at redhat.com on behalf of Lon Hohberger Sent: Tue 2/5/2008 1:20 PM To: linux clustering Subject: Re: [Linux-cluster] EXT3 file system runs slow only right after itis mounted On Tue, 2008-02-05 at 08:26 -0600, Brad Filipek wrote: > Hello, > > I have a 2 node cluster connected via fibre to a SAN. This is an > active/passive cluster and an EXT3 file system is mounted when the > cluster services start on the active node. If we try to access data on > this mount immediately after it is mounted, it takes an extremely long > time. However, after about an hour, it runs extremely fast and is fine > for good (until we un-mount and re-mount it, then it is slow again). > So my question is, when you mount an ext3 partition (without any mount > options), does it do something like scan the whole partition to see > what is on the disk? This is the only thing I can think of that makes > sense. The mount is 500GB in size. Did you specify force_fsck or something? -- Lon -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster Confidentiality Notice: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is privileged, confidential and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient or the employee or agent responsible for delivering this message to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this communication in error, please notify us immediately by email reply or by telephone and immediately delete this message and any attachments. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: winmail.dat Type: application/ms-tnef Size: 4865 bytes Desc: not available URL: From breeves at redhat.com Wed Feb 6 13:47:42 2008 From: breeves at redhat.com (Bryn M. Reeves) Date: Wed, 06 Feb 2008 13:47:42 +0000 Subject: [Linux-cluster] EXT3 file system runs slow only right after itis mounted In-Reply-To: <9C01E18EF3BC2448A3B1A4812EB87D02474E@SRVEDI.upark.crscold.com> References: <9C01E18EF3BC2448A3B1A4812EB87D02474D@SRVEDI.upark.crscold.com> <1202239249.21504.2.camel@ayanami.boston.devel.redhat.com> <9C01E18EF3BC2448A3B1A4812EB87D02474E@SRVEDI.upark.crscold.com> Message-ID: <47A9BA7E.6020503@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Brad Filipek wrote: > Lon, > > No I did not specify any mount options when creating the resource. Any ideas on what would help? > > > Brad Filipek What's the host doing to the file system just after the mount? If it's doing lots of directory traversal/stat (e.g. find, du, ls -R /mount/path etc.) then it's going to be hammering the disks reading in piles of metadata. If the nodes have a reasonable amount of RAM this'll all end up cached, making later accesses faster. If I run a find over one of my large data file systems immediately after mounting it'll take an order of magnitude or two longer than the same operation run against the same file system after a few hours of "normal use". If you're just reading in data files (and they fit into RAM), then you might just be seeing the effects of the page cache. Regards, Bryn. 
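Bryn's page-cache explanation suggests a simple workaround: walk the file system once right after mounting, so the metadata is already cached before users touch it. A minimal sketch — the mount point is a placeholder, and this only pre-reads metadata, not file contents:

```shell
#!/bin/sh
# Pre-warm the dentry/inode cache after a mount. Traversing the tree once
# forces the metadata reads that otherwise make the first real access slow.
MNT="${MNT:-/mnt/shared}"          # placeholder mount point; override via MNT=
find "$MNT" -xdev >/dev/null 2>&1  # stat the tree; output is discarded
echo "metadata cache warmed under $MNT"
```

Something like this could be hooked in after the file-system resource comes up, accepting that the warm-up itself takes the time the first user would otherwise have paid.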
-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (GNU/Linux) Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org iD8DBQFHqbp+6YSQoMYUY94RAvnXAJ9Q/v4ZM2rhkIRDuL8LSbOec4kDqQCfb7++ F0BbA3EL2Vtr33H5BaHfeJI= =NNEc -----END PGP SIGNATURE----- From jakub.suchy at enlogit.cz Wed Feb 6 14:29:34 2008 From: jakub.suchy at enlogit.cz (Jakub Suchy) Date: Wed, 6 Feb 2008 15:29:34 +0100 Subject: [Linux-cluster] LVM Snapshots Message-ID: <20080206142934.GA24364@localhost> Hi, i am currently designing a cluster of XEN machines, having two nodes and one shared storage (SAS). We have Recovery Point Objective of 15 minutes, so I am thinking about putting XEN machine on a LVM partition and making a snapshot of this partition every 15 minutes. Do anybody know any caveats of this? I wonder how fast would this be, i am afraid that every 15 minutes is a little bit too often. XEN machine will be Sybase server. Thanks, Jakub -- Jakub Such? GSM: +420 - 777 817 949 Enlogit s.r.o, U Cukrovaru 509/4, 400 07 ?st? nad Labem tel.: +420 - 474 745 159, fax: +420 - 474 745 160 e-mail: info at enlogit.cz, web: http://www.enlogit.cz From gordan at bobich.net Wed Feb 6 14:36:24 2008 From: gordan at bobich.net (gordan at bobich.net) Date: Wed, 6 Feb 2008 14:36:24 +0000 (GMT) Subject: [Linux-cluster] LVM Snapshots In-Reply-To: <20080206142934.GA24364@localhost> References: <20080206142934.GA24364@localhost> Message-ID: On Wed, 6 Feb 2008, Jakub Suchy wrote: > i am currently designing a cluster of XEN machines, having two nodes and > one shared storage (SAS). We have Recovery Point Objective of 15 > minutes, so I am thinking about putting XEN machine on a LVM partition > and making a snapshot of this partition every 15 minutes. > Do anybody know any caveats of this? I wonder how fast would this be, > i am afraid that every 15 minutes is a little bit too often. XEN machine > will be Sybase server. The caveat is that snapshot != backup. Not even close. 
For a start, the underlying FS will be about as consistent as you'd
expect it to be if you just power cycled the machine while it's working.
A journalled FS will work around the need to fsck it, but your DB files
(and in fact, any open files) are likely to end up not being internally
consistent. You'd need to run check/repair on all your DB tables, for
example. That may or may not be an acceptable solution for you, because
it is not guaranteed that you will not lose any data.

Other than that, snapshotting frequency isn't a problem. It's just an
undo log, not a complete copy. Bear in mind, though, that the more
snapshots you have concurrently, the more times you multiply the writes,
so disk I/O performance will degrade if you are running too many
simultaneously.

Gordan

From isplist at logicore.net Wed Feb 6 15:31:58 2008
From: isplist at logicore.net (isplist at logicore.net)
Date: Wed, 6 Feb 2008 09:31:58 -0600
Subject: [Linux-cluster] iSCSI Target Driver Install Problem
In-Reply-To: <3DDA6E3E456E144DA3BB0A62A7F7F77901CAC88B@SKYHQAMX08.klasi.is>
Message-ID: <20082693158.849919@leena>

On Wed, 6 Feb 2008 07:22:32 -0000, Finnur Örn Guðmundsson - TM Software wrote:
> Do you have kernel-devel installed ?

Yes, I do. That's one of the first things I did on the machine. Is it
the fact that I'm trying to use CentOS in this case maybe? I don't like
it as much as RHEL, but I didn't have RHEL5 on the network, so I had a
CD problem while trying to install it.

---

Now I'm trying to install this on a CentOS5 machine with no luck.
First, I tried the most basic; # yum install iscsi-target Loading "installonlyn" plugin Setting up Install Process Setting up repositories Reading repository metadata in from local files Parsing package install arguments Nothing to do # yum install iscsitarget Loading "installonlyn" plugin Setting up Install Process Setting up repositories Reading repository metadata in from local files Parsing package install arguments Nothing to do Then I downloaded several versions and have been trying various combinations. The closest I've gotten so far is the following. # yum install dkms-iscsi_trgt-0.4.13-2fc5.i386.rpm iscsitarget-0.4.13-2fc5.i386.rpm dkms-1.09-3.at.noarch.rpm Running Transaction Installing: dkms ######################### [1/3] Installing: dkms-iscsi_trgt ######################### [2/3] Creating symlink /var/dkms/iscsi_trgt/0.4.13/source -> /usr/src/iscsi_trgt-0.4.13 DKMS: Add Completed. Preparing kernel 2.6.18-53.1.6.el5 for module build: (This is not compiling a kernel, only just preparing kernel symbols) Storing current .config to be restored when complete Running Generic preparation routine make mrproper....(bad exit status: 2) Warning! Cannot find a .config file to prepare your kernel with. Try using the --config option to specify where one can be found. Your build will likely fail because of this. make prepare-all....(bad exit status: 2) make oldconfig....(bad exit status: 2) / Building module: cleaning build area.... make KERNELRELEASE=2.6.18-53.1.6.el5 -C /lib/modules/2.6.18-53.1.6.el5/build SUBDIRS=/var/dkms/iscsi_trgt/0.4.13/build modules....(bad exit status: 2) Error! Bad return status from module build for kernel: 2.6.18-53.1.6.el5 Consult the make.log in the build directory /var/dkms/iscsi_trgt/0.4.13/build/ for more information. Error! Could not locate iscsi_trgt.ko for module iscsi_trgt in the DKMS tree. You must run a dkms build for kernel 2.6.18-53.1.6.el5 first. 
Installing: iscsitarget ######################### [3/3] Installed: dkms.noarch 0:1.09-3.at dkms-iscsi_trgt.i386 0:0.4.13-2fc5 iscsitarget.i386 0:0.4.13-2fc5 Complete! From breeves at redhat.com Wed Feb 6 15:48:47 2008 From: breeves at redhat.com (Bryn M. Reeves) Date: Wed, 06 Feb 2008 15:48:47 +0000 Subject: [Linux-cluster] iSCSI Target Driver Install Problem In-Reply-To: <20082693158.849919@leena> References: <20082693158.849919@leena> Message-ID: <47A9D6DF.1060302@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 isplist at logicore.net wrote: > On Wed, 6 Feb 2008 07:22:32 -0000, Finnur ?rn Gu?mundsson - TM Software wrote: >> Do you have kernel-devel installed ? > > Yes, I do. That's one of the first things I did on the machine. Is it the fact > that I'm trying to use CentOS in this case maybe? I don't like it as much as > RHEL but I didn't have RHEL5 on network so had a CD problem while trying to > install it. > > --- > > Now I'm trying to install this on a CentOS5 machine with no luck. > > First, I tried the most basic; > > # yum install iscsi-target > Loading "installonlyn" plugin > Setting up Install Process > Setting up repositories > Reading repository metadata in from local files > Parsing package install arguments > Nothing to do In RHEL land, this would be in the "Cluster Storage" repo I believe. I had a poke around in the CentOS mirrors and couldn't see the equivalent (their repo structure seems quite different; extras, CentOS+ etc.). Regards, Bryn. 
-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (GNU/Linux) Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org iD8DBQFHqdbf6YSQoMYUY94RAm3VAKCfTY0eI5HPHIWtRcWDeJTXyfz8wACgtf1q REmX6ThZAZXjVABc4Lj11k0= =uHmD -----END PGP SIGNATURE----- From isplist at logicore.net Wed Feb 6 15:54:21 2008 From: isplist at logicore.net (isplist at logicore.net) Date: Wed, 6 Feb 2008 09:54:21 -0600 Subject: [Linux-cluster] iSCSI Target Driver Install Problem In-Reply-To: <47A9D6DF.1060302@redhat.com> Message-ID: <20082695421.474368@leena> > In RHEL land, this would be in the "Cluster Storage" repo I believe. I > had a poke around in the CentOS mirrors and couldn't see the equivalent > (their repo structure seems quite different; extras, CentOS+ etc.). Yes, it wasn't in the repo, I found it using rpmfind then installed from the command line. Mike From breeves at redhat.com Wed Feb 6 16:01:06 2008 From: breeves at redhat.com (Bryn M. Reeves) Date: Wed, 06 Feb 2008 16:01:06 +0000 Subject: [Linux-cluster] iSCSI Target Driver Install Problem In-Reply-To: <20082695421.474368@leena> References: <20082695421.474368@leena> Message-ID: <47A9D9C2.4040405@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 isplist at logicore.net wrote: >> In RHEL land, this would be in the "Cluster Storage" repo I believe. I >> had a poke around in the CentOS mirrors and couldn't see the equivalent >> (their repo structure seems quite different; extras, CentOS+ etc.). > > Yes, it wasn't in the repo, I found it using rpmfind then installed from the > command line. I think the iscsi-target from RHEL's cluster storage and the iscsitarget from rpmfind are quite different beasts; if you're following instructions for the former you might find they don't apply to the package you've installed (there are a number of different iSCSI target implementation for Linux today). Regards, Bryn. 
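When a package goes by a different name downstream, a wildcard query of the enabled repos is a quick way to find the local equivalent before reaching for rpmfind. A sketch — the host prompt and search patterns are illustrative:

```
# search all enabled repos for anything with "target" in the package name
[root@centos5 ~]# yum list available '*target*'
# or search package names and summaries for a keyword
[root@centos5 ~]# yum search iscsi
```

Either command stays inside the distribution's own repos, so whatever it finds will also be covered by normal updates.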
-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (GNU/Linux) Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org iD8DBQFHqdnC6YSQoMYUY94RAu5TAKCRRv1+Ie4jrRT7MuUJXk1ejfs8tgCeMOl6 UnDuKilHAlY8mcHYpdNLWsM= =8Lt2 -----END PGP SIGNATURE----- From isplist at logicore.net Wed Feb 6 16:08:55 2008 From: isplist at logicore.net (isplist at logicore.net) Date: Wed, 6 Feb 2008 10:08:55 -0600 Subject: [Linux-cluster] iSCSI Target Driver Install Problem In-Reply-To: <47A9D9C2.4040405@redhat.com> Message-ID: <20082610855.818722@leena> > I think the iscsi-target from RHEL's cluster storage and the iscsitarget > from rpmfind are quite different beasts; if you're following > instructions for the former you might find they don't apply to the > package you've installed (there are a number of different iSCSI target > implementation for Linux today). I'll have to spend more time looking into this also then. I set up rsync so that I could have a local repo but perhaps iscsi-target is part of another repo. On RHEL, I tried yum install iscsi-target and yum install iscsitarget with 'Nothing to do' as a result. That's why I'm guessing that it's part of another repo. Mike From johannes.russek at io-consulting.net Wed Feb 6 16:11:33 2008 From: johannes.russek at io-consulting.net (jr) Date: Wed, 06 Feb 2008 17:11:33 +0100 Subject: [Linux-cluster] iSCSI Target Driver Install Problem In-Reply-To: <47A9D9C2.4040405@redhat.com> References: <20082695421.474368@leena> <47A9D9C2.4040405@redhat.com> Message-ID: <1202314293.32498.47.camel@admc.win-rar.local> actually, the centos thing is called scsi-target-utils: scsi-target-utils.x86_64 0.0-0.20070620snap.el5 base it's in base, so no extra repos needed. stay away from rpmfind if you don't want to screw your centos updatepath. regards, johannes Am Mittwoch, den 06.02.2008, 16:01 +0000 schrieb Bryn M. 
Reeves: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > isplist at logicore.net wrote: > >> In RHEL land, this would be in the "Cluster Storage" repo I believe. I > >> had a poke around in the CentOS mirrors and couldn't see the equivalent > >> (their repo structure seems quite different; extras, CentOS+ etc.). > > > > Yes, it wasn't in the repo, I found it using rpmfind then installed from the > > command line. > > I think the iscsi-target from RHEL's cluster storage and the iscsitarget > from rpmfind are quite different beasts; if you're following > instructions for the former you might find they don't apply to the > package you've installed (there are a number of different iSCSI target > implementation for Linux today). > > Regards, > Bryn. > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.7 (GNU/Linux) > Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org > > iD8DBQFHqdnC6YSQoMYUY94RAu5TAKCRRv1+Ie4jrRT7MuUJXk1ejfs8tgCeMOl6 > UnDuKilHAlY8mcHYpdNLWsM= > =8Lt2 > -----END PGP SIGNATURE----- > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster From bugsie.linux at gmail.com Wed Feb 6 16:10:54 2008 From: bugsie.linux at gmail.com (bugsie) Date: Thu, 07 Feb 2008 00:10:54 +0800 Subject: [Linux-cluster] problem with deleting a node from a cluster In-Reply-To: <1202248009.21504.7.camel@ayanami.boston.devel.redhat.com> References: <16468889.20851202246851369.JavaMail.root@ottmail> <1202248009.21504.7.camel@ayanami.boston.devel.redhat.com> Message-ID: <47A9DC0E.9010406@gmail.com> Hi Yan, Is the configuration version greater than all the previous version? Cheers, bugsie Lon Hohberger wrote: > On Tue, 2008-02-05 at 16:27 -0500, Yan Vinogradov wrote: > >> I have a cluster of 3 nodes (RHEL5u1). Two are online and the third >> one is offline. 
I remove the offline one by propogating an updated >> version of cluster.conf by executing ccs_tool update command, then >> cman version command, and then I manually remove the cluster.conf from >> the offline node. Problem is after all these steps are performed >> clustat command on the remaining nodes in the cluster still shows the >> cluster as containing 3 nodes, including the one that I have just >> deleted. >> > > Could be a bug. Does it say 'estranged' in the output, or not? > > -- Lon > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > > From isplist at logicore.net Wed Feb 6 16:20:34 2008 From: isplist at logicore.net (isplist at logicore.net) Date: Wed, 6 Feb 2008 10:20:34 -0600 Subject: [Linux-cluster] iSCSI Target Driver Install Problem In-Reply-To: <1202314293.32498.47.camel@admc.win-rar.local> Message-ID: <200826102034.316800@leena> On Wed, 06 Feb 2008 17:11:33 +0100, jr wrote: > actually, the centos thing is called scsi-target-utils: > scsi-target-utils.x86_64 0.0-0.20070620snap.el5 base Yup, installed immediately. Why do the different distro's use different names? Worse, why do sites like rpm.pbone.net (sorry, not rpmfind) show even more varieties? Is it because authors upload theirs with various names? Mike > it's in base, so no extra repos needed. > stay away from rpmfind if you don't want to screw your centos > updatepath. > regards, > johannes > > Am Mittwoch, den 06.02.2008, 16:01 +0000 schrieb Bryn M. Reeves: >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA1 >> >> isplist at logicore.net wrote: >>>> In RHEL land, this would be in the "Cluster Storage" repo I believe. I >>>> had a poke around in the CentOS mirrors and couldn't see the >>>> equivalent >>>> (their repo structure seems quite different; extras, CentOS+ etc.). >>>> >>> Yes, it wasn't in the repo, I found it using rpmfind then installed >>> from the >>> command line. 
>>> >> I think the iscsi-target from RHEL's cluster storage and the iscsitarget >> from rpmfind are quite different beasts; if you're following >> instructions for the former you might find they don't apply to the >> package you've installed (there are a number of different iSCSI target >> implementation for Linux today). >> >> Regards, >> Bryn. >> >> -----BEGIN PGP SIGNATURE----- >> Version: GnuPG v1.4.7 (GNU/Linux) >> Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org >> >> iD8DBQFHqdnC6YSQoMYUY94RAu5TAKCRRv1+Ie4jrRT7MuUJXk1ejfs8tgCeMOl6 >> UnDuKilHAlY8mcHYpdNLWsM= >> =8Lt2 >> -----END PGP SIGNATURE----- >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster >> > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster From breeves at redhat.com Wed Feb 6 16:23:10 2008 From: breeves at redhat.com (Bryn M. Reeves) Date: Wed, 06 Feb 2008 16:23:10 +0000 Subject: [Linux-cluster] iSCSI Target Driver Install Problem In-Reply-To: <1202314293.32498.47.camel@admc.win-rar.local> References: <20082695421.474368@leena> <47A9D9C2.4040405@redhat.com> <1202314293.32498.47.camel@admc.win-rar.local> Message-ID: <47A9DEEE.1060304@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 jr wrote: > actually, the centos thing is called scsi-target-utils: > > scsi-target-utils.x86_64 0.0-0.20070620snap.el5 base Heh, what do you know - it's the same in RHEL (in Cluster-Storage): http://rhn.redhat.com/errata/RHEA-2007-0713.html I was looking for the wrong package name. :) > it's in base, so no extra repos needed. > stay away from rpmfind if you don't want to screw your centos Sage words. Regards, Bryn. 
-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (GNU/Linux) Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org iD8DBQFHqd7u6YSQoMYUY94RAj5QAJwJjWEPnbul4HF8cxXZnrX6w9iA8gCgiFvg TIB62QoZIgitMeKU9ecF9ZE= =fm8T -----END PGP SIGNATURE----- From breeves at redhat.com Wed Feb 6 16:26:50 2008 From: breeves at redhat.com (Bryn M. Reeves) Date: Wed, 06 Feb 2008 16:26:50 +0000 Subject: [Linux-cluster] iSCSI Target Driver Install Problem In-Reply-To: <200826102034.316800@leena> References: <200826102034.316800@leena> Message-ID: <47A9DFCA.1010101@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 isplist at logicore.net wrote: > On Wed, 06 Feb 2008 17:11:33 +0100, jr wrote: >> actually, the centos thing is called scsi-target-utils: >> scsi-target-utils.x86_64 0.0-0.20070620snap.el5 base > > Yup, installed immediately. Why do the different distro's use different names? > Worse, why do sites like rpm.pbone.net (sorry, not rpmfind) show even more > varieties? Is it because authors upload theirs with various names? > > Mike No - there are actually several different Linux iSCSI target implementations. Ietd, Ardis, UNH, tgt, ... The scsi-target-utils package uses the stgt project code and relies on the kernel features that were merged into the mainline kernel in 2.6.20: http://stgt.berlios.de/ Regards, Bryn. 
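For anyone picking up scsi-target-utils from here: exporting a LUN with tgt comes down to a short sequence of tgtadm calls. A sketch — the target IQN, tid, and backing device below are made-up examples, not from this thread:

```
# start the target daemon, define a target, attach a backing store as LUN 1,
# and allow initiators to bind (all names below are placeholders)
[root@server ~]# service tgtd start
[root@server ~]# tgtadm --lld iscsi --op new --mode target --tid 1 \
      -T iqn.2008-02.net.example:storage.lun1
[root@server ~]# tgtadm --lld iscsi --op new --mode logicalunit --tid 1 \
      --lun 1 -b /dev/vg_export/lv_lun1
[root@server ~]# tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
```

Unlike the ietd-style iscsitarget package discussed earlier, no out-of-tree kernel module has to be built, which sidesteps the DKMS failures above.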
-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (GNU/Linux) Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org iD8DBQFHqd/K6YSQoMYUY94RAspSAJwNxHRTuYMlOs9VB7WmYOUBMtR3nACgsE63 f7InlwU4xx+ImMYdWptPn5Q= =1mUS -----END PGP SIGNATURE----- From lhh at redhat.com Wed Feb 6 17:20:41 2008 From: lhh at redhat.com (Lon Hohberger) Date: Wed, 06 Feb 2008 12:20:41 -0500 Subject: [Linux-cluster] Re: problem deleting a node from a cluster In-Reply-To: <23589450.23361202311274793.JavaMail.root@ottmail> References: <23589450.23361202311274793.JavaMail.root@ottmail> Message-ID: <1202318441.21504.57.camel@ayanami.boston.devel.redhat.com> On Wed, 2008-02-06 at 10:21 -0500, Yan Vinogradov wrote: > Could be a bug. Does it say 'estranged' in the output, or not? > > -- Lon > > Hi Lon, yes, it does refer to the third node as estranged. Does it > tell you anything? > Clustat reports "Estranged" whenever it asks cman for a node list, and the corresponding node does not show up in cluster.conf. In your case, it means that cman still has state about the node despite the configuration change in ccsd. I'm not sure cman "purges" nodes that were dead (but part of cluster.conf) that later are removed from cluster.conf. Chrissie might know. -- Lon From lhh at redhat.com Wed Feb 6 17:36:02 2008 From: lhh at redhat.com (Lon Hohberger) Date: Wed, 06 Feb 2008 12:36:02 -0500 Subject: [Linux-cluster] Sharing GFS Partition In-Reply-To: <1615ab867c794d99bd702ccebd73f828@Mail61.safesecureweb.com> References: <1615ab867c794d99bd702ccebd73f828@Mail61.safesecureweb.com> Message-ID: <1202319362.21504.67.camel@ayanami.boston.devel.redhat.com> On Tue, 2008-02-05 at 16:43 -0500, Marcos Ferreira da Silva wrote: > The cluster start but I couldn't mount. > > The process fenced is started. > 21399 ? Ssl 0:00 /sbin/ccsd > 21421 ? Ss 0:00 /sbin/groupd > 21429 ? Ss 0:00 /sbin/fenced > 21435 ? Ss 0:00 /sbin/dlm_controld > 21441 ? Ss 0:00 /sbin/gfs_controld > 21831 ? 
Ssl 0:00 clvmd -T20 > > So, what's happening is that the node you have is trying to fence the other node, but no fencing is configured, so it fails and retries (forever!). If you're running RHEL5 or from CVS/head or some other recent version of cman, you can try this manual override to make fencing complete: while ! [ -e "/var/run/cluster/fenced_override" ]; do sleep 1 done echo vserver2.teste.br > /var/run/cluster/fenced_override ... but if you are using Xen, the best thing to do is look here for information about how to set up fencing so this doesn't happen in the future: http://sources.redhat.com/cluster/wiki/VMClusterCookbook Also, the FAQ article on CMAN may be helpful for understanding what's going on: http://sources.redhat.com/cluster/wiki/FAQ/CMAN -- Lon From lhh at redhat.com Wed Feb 6 18:45:00 2008 From: lhh at redhat.com (Lon Hohberger) Date: Wed, 06 Feb 2008 13:45:00 -0500 Subject: [Linux-cluster] Re: problem deleting a node from a cluster In-Reply-To: <1202318441.21504.57.camel@ayanami.boston.devel.redhat.com> References: <23589450.23361202311274793.JavaMail.root@ottmail> <1202318441.21504.57.camel@ayanami.boston.devel.redhat.com> Message-ID: <1202323500.21504.71.camel@ayanami.boston.devel.redhat.com> On Wed, 2008-02-06 at 12:20 -0500, Lon Hohberger wrote: > On Wed, 2008-02-06 at 10:21 -0500, Yan Vinogradov wrote: > > > Could be a bug. Does it say 'estranged' in the output, or not? > > > > -- Lon > > > > Hi Lon, yes, it does refer to the third node as estranged. Does it > > tell you anything? > > > > Clustat reports "Estranged" whenever it asks cman for a node list, and > the corresponding node does not show up in cluster.conf. > > In your case, it means that cman still has state about the node despite > the configuration change in ccsd. > > I'm not sure cman "purges" nodes that were dead (but part of > cluster.conf) that later are removed from cluster.conf. > > Chrissie might know. 
To be simple, I could also just 'hide' the node in the output of clustat if the node's "offline" + "estranged"... would that work for you? ;) -- Lon From yanv at xandros.com Wed Feb 6 19:04:29 2008 From: yanv at xandros.com (Yan Vinogradov) Date: Wed, 6 Feb 2008 14:04:29 -0500 Subject: [Linux-cluster] Re: problem deleting a node from a cluster Message-ID: <1843832.25571202324669002.JavaMail.root@ottmail> ----- Original Message ----- From: Lon Hohberger Sent: Wed, 2/6/2008 1:45pm To: linux clustering Cc: Yan Vinogradov Subject: Re: [Linux-cluster] Re: problem deleting a node from a cluster On Wed, 2008-02-06 at 12:20 -0500, Lon Hohberger wrote: > On Wed, 2008-02-06 at 10:21 -0500, Yan Vinogradov wrote: > > > Could be a bug.??Does it say 'estranged' in the output, or not? > > > > -- Lon > > > > Hi Lon, yes, it does refer to the third node as estranged. Does it > > tell you anything? > > > > Clustat reports "Estranged" whenever it asks cman for a node list, and > the corresponding node does not show up in cluster.conf. > > In your case, it means that cman still has state about the node despite > the configuration change in ccsd. > > I'm not sure cman "purges" nodes that were dead (but part of > cluster.conf) that later are removed from cluster.conf. > > Chrissie might know. To be simple, I could also just 'hide' the node in the output of clustat if the node's "offline" + "estranged"... would that work for you? ;) -- Lon Sure! What do you mean by 'hiding' the offline node? Yan From marcos at digitaltecnologia.info Wed Feb 6 19:17:43 2008 From: marcos at digitaltecnologia.info (Marcos Ferreira da Silva) Date: Wed, 06 Feb 2008 17:17:43 -0200 Subject: [Linux-cluster] XEN VM Cluster In-Reply-To: <1202296035.32498.37.camel@admc.win-rar.local> References: <1e3d27e97a62428e9c581714cb2ebb96@Mail61.safesecureweb.com> <1202296035.32498.37.camel@admc.win-rar.local> Message-ID: <1202325463.3128.24.camel@matriz.digitaltecnologia.info> I solved the mount gfs filesystem. 
I forgot to run "fence_ack_manual -n vserver1.teste.br". Now I can mount the gfs partition. Now I have another problem: each node appears offline to the other node. When I use the command line to live-migrate the VM to node 2 it's OK, and from node 2 to node 1 too. This appears in the messages file: Feb 6 17:16:21 vserver1 fenced[5362]: fencing node "vserver2.uniube.br" Feb 6 17:16:21 vserver1 fenced[5362]: fence "vserver2.uniube.br" failed VSERVER1 [root at vserver1 ~]# clustat Member Status: Quorate Member Name ID Status ------ ---- ---- ------ vserver1.uniube.br 1 Online, Local, rgmanager vserver2.uniube.br 2 Offline Service Name Owner (Last) State ------- ---- ----- ------ ----- vm:admin vserver1.uniube.br started vm:aluno (none) disabled service:testeAdmin (none) stopped service:testeAluno (none) stopped VSERVER2 [root at vserver2 ~]# clustat Member Status: Quorate Member Name ID Status ------ ---- ---- ------ vserver1.uniube.br 1 Offline vserver2.uniube.br 2 Online, Local, rgmanager Service Name Owner (Last) State ------- ---- ----- ------ ----- vm:admin (none) stopped vm:aluno (none) disabled service:testeAdmin vserver2.uniube.br started service:testeAluno vserver2.uniube.br started My cluster.conf: On Wed, 2008-02-06 at 12:07 +0100, jr wrote: > > How could I configure a partition to share a VM config between two machines? > > Could you send me your cluster.conf so I can compare it with what I want to do? > > no need for my cluster.conf. just use a GFS partition and it will be > fine. (don't forget to put it into fstab) > > > > > Then I need a shared partition for the VM configs, which will be accessed by the other machines, and a physical one (LVM on a storage array) for the real machine. > > Is that correct? > > i don't know what you mean by "real machine", but your guests not only > need the config, they will also need some storage for their system. > that's where you need a storage that's connected to your nodes, whether > it's luns, lvm lvs or image files, no matter. 
just keep in mind that if > you are using image files, you need to place them on GFS so that every > node in your cluster can access them the same way. > > > > > When I start a VM on node 1 it will start on a physical device. > > If I disconnect node 1, will the VM migrate to node 2? > > Will the clients' connections be lost? > > it's just failover, which means that if the cluster sees a problem with > one of the nodes, the other node will take over its services, which > basically means that the vms will be started on the other node. > that does mean that your clients will get disconnected. > > > > > I'm using HP Storage and two servers with multipath with Emulex Fibre Channel. > > should be fine. > > johannes > -- _____________________________ Marcos Ferreira da Silva DiGital Tecnologia Uberlândia - MG (34) 9154-0150 / 3226-2534 From lhh at redhat.com Wed Feb 6 19:21:41 2008 From: lhh at redhat.com (Lon Hohberger) Date: Wed, 06 Feb 2008 14:21:41 -0500 Subject: [Linux-cluster] Re: problem deleting a node from a cluster In-Reply-To: <1843832.25571202324669002.JavaMail.root@ottmail> References: <1843832.25571202324669002.JavaMail.root@ottmail> Message-ID: <1202325701.21504.74.camel@ayanami.boston.devel.redhat.com> On Wed, 2008-02-06 at 14:04 -0500, Yan Vinogradov wrote: > To be simple, I could also just 'hide' the node in the output of clustat > if the node's "offline" + "estranged"... would that work for you? ;) > > -- Lon > > Sure! What do you mean by 'hiding' the offline node? Not displaying them in the output. That is, once you remove the node from cluster.conf and remove it from the cluster, clustat shouldn't display it, even if it knows that the node existed at one time... 
-- Lon From marcos at digitaltecnologia.info Wed Feb 6 19:21:58 2008 From: marcos at digitaltecnologia.info (Marcos Ferreira da Silva) Date: Wed, 06 Feb 2008 17:21:58 -0200 Subject: [Linux-cluster] Sharing GFS Partition - SOLVED In-Reply-To: <1202319362.21504.67.camel@ayanami.boston.devel.redhat.com> References: <1615ab867c794d99bd702ccebd73f828@Mail61.safesecureweb.com> <1202319362.21504.67.camel@ayanami.boston.devel.redhat.com> Message-ID: <1202325718.3128.28.camel@matriz.digitaltecnologia.info> Thanks for all. The problem was solved. I opened other topic with subject "XEN VM Cluster" and I had help. Em Qua, 2008-02-06 ?s 12:36 -0500, Lon Hohberger escreveu: > On Tue, 2008-02-05 at 16:43 -0500, Marcos Ferreira da Silva wrote: > > The cluster start but I couldn't mount. > > > > The process fenced is started. > > 21399 ? Ssl 0:00 /sbin/ccsd > > 21421 ? Ss 0:00 /sbin/groupd > > 21429 ? Ss 0:00 /sbin/fenced > > 21435 ? Ss 0:00 /sbin/dlm_controld > > 21441 ? Ss 0:00 /sbin/gfs_controld > > 21831 ? Ssl 0:00 clvmd -T20 > > > > > > So, what's happening is that the node you have is trying to fence the > other node, but no fencing is configured, so it fails and retries > (forever!). If you're running RHEL5 or from CVS/head or some other > recent version of cman, you can try this manual override to make fencing > complete: > > while ! [ -e "/var/run/cluster/fenced_override" ]; do > sleep 1 > done > echo vserver2.teste.br > /var/run/cluster/fenced_override > > ... 
but if you are using Xen, the best thing to do is look here for > information about how to set up fencing so this doesn't happen in the > future: > > http://sources.redhat.com/cluster/wiki/VMClusterCookbook > > Also, the FAQ article on CMAN may be helpful for understanding what's > going on: > > http://sources.redhat.com/cluster/wiki/FAQ/CMAN > > -- Lon > From yanv at xandros.com Wed Feb 6 19:25:01 2008 From: yanv at xandros.com (Yan Vinogradov) Date: Wed, 6 Feb 2008 14:25:01 -0500 Subject: [Linux-cluster] Re: problem deleting a node from a cluster In-Reply-To: <1202325701.21504.74.camel@ayanami.boston.devel.redhat.com> Message-ID: <663105.26031202325901042.JavaMail.root@ottmail> ----- Original Message ----- From: Lon Hohberger Sent: Wed, 2/6/2008 2:21pm To: Yan Vinogradov Cc: linux clustering Subject: RE: [Linux-cluster] Re: problem deleting a node from a cluster On Wed, 2008-02-06 at 14:04 -0500, Yan Vinogradov wrote: > To be simple, I could also just 'hide' the node in the output of clustat > if the node's "offline" + "estranged"... would that work for you? ;) > > -- Lon > > Sure! What do you mean by 'hiding' the offline node? Not displaying them in the output.??That is, once you remove the node from cluster.conf and remove it from the cluster, clustat shouldn't display it, even if it knows that the node existed at one time... -- Lon Lon, that would be great! Thanks, Yan From lhh at redhat.com Wed Feb 6 19:46:10 2008 From: lhh at redhat.com (Lon Hohberger) Date: Wed, 06 Feb 2008 14:46:10 -0500 Subject: [Linux-cluster] [PATCH] Don't show offline + estranged members In-Reply-To: <663105.26031202325901042.JavaMail.root@ottmail> References: <663105.26031202325901042.JavaMail.root@ottmail> Message-ID: <1202327170.21504.79.camel@ayanami.boston.devel.redhat.com> This patch prevents the 'clustat' utility from showing members which: * are not part of the configuration, and * offline/dead/etc. 
-- Lon Index: clustat.c =================================================================== RCS file: /cvs/cluster/cluster/rgmanager/src/utils/clustat.c,v retrieving revision 1.38 diff -u -r1.38 clustat.c --- clustat.c 10 Dec 2007 18:11:56 -0000 1.38 +++ clustat.c 6 Feb 2008 19:34:35 -0000 @@ -719,6 +719,10 @@ void txt_member_state(cman_node_t *node, int nodesize) { + /* If it's down and not in cluster.conf, don't show it */ + if ((node->cn_member & (FLAG_NOCFG | FLAG_UP)) == FLAG_NOCFG) + return; + printf(" %-*.*s ", nodesize, nodesize, node->cn_name); printf("%4d ", node->cn_nodeid); @@ -754,6 +758,10 @@ void xml_member_state(cman_node_t *node) { + /* If it's down and not in cluster.conf, don't show it */ + if ((node->cn_member & (FLAG_NOCFG | FLAG_UP)) == FLAG_NOCFG) + return; + printf(" \n", From Alexandre.Racine at mhicc.org Wed Feb 6 21:31:57 2008 From: Alexandre.Racine at mhicc.org (Alexandre Racine) Date: Wed, 6 Feb 2008 16:31:57 -0500 Subject: [Linux-cluster] noatime References: <200812910934.067039@leena> Message-ID: Gordan, >The only other thing I can think of is that your netfs/gfs init script is >broken. Are you talking about the /etc/init.d/gfs script? Currently, I added this script in the startup. But should it start automatitly just like a nfs mount in the fstab? Thanks. This is my GFS part in my fstab (for Brian) ------------- # # iSCSI volumes (GFS) # /dev/sdd5 /home gfs noatime -------------- Alexandre Racine 514-461-1300 poste 3304 alexandre.racine at mhicc.org -----Original Message----- From: linux-cluster-bounces at redhat.com on behalf of gordan at bobich.net Sent: Tue 2008-02-05 11:45 To: linux clustering Subject: RE: [Linux-cluster] noatime IIRC, it only works on the clean mount. It won't work with mount -o remount The only other thing I can think of is that your netfs/gfs init script is broken. Gordan On Tue, 5 Feb 2008, Alexandre Racine wrote: > (bump) > > But, adding noatime to the fstab did not work in my case. 
> > Is there another way for the noatime? > > > > Alexandre Racine > 514-461-1300 poste 3304 > alexandre.racine at mhicc.org > > > > -----Original Message----- > From: linux-cluster-bounces at redhat.com on behalf of Alexandre Racine > Sent: Tue 2008-01-29 11:37 > To: linux clustering > Subject: RE: [Linux-cluster] noatime > > Hi, > > noatime will reduce traffic since if it's not there, each time you touch a file (read, write) it will change the value of the field "last access". > > But, just adding it to the fstab did not work in my case. > > Is there another way for the noatime? > > > > Has for your isplist, look here http://sourceware.org/cluster/faq.html#gfs_slowaftermount > > > Alexandre Racine > 514-461-1300 poste 3304 > alexandre.racine at mhicc.org > > > > -----Original Message----- > From: linux-cluster-bounces at redhat.com on behalf of isplist at logicore.net > Sent: Tue 2008-01-29 11:09 > To: linux clustering > Subject: Re: [Linux-cluster] noatime > > Were you trying this for a specific reason? I tried noatime to see if I could > get a faster initial response time from the GFS volume. Every time I access > it, there is a long delay initially, then things are fine until the next > access. > > > On Tue, 29 Jan 2008 09:26:19 -0500, Alexandre Racine wrote: >> Hi all, >> >> If I put in my fstab file: >> /dev/sdc5 /home gfs noatime >> >> >> Will it actually mount without atime? And how can I confirm this? >> >> Thanks. >> >> >> Alexandre Racine >> 514-461-1300 poste 3304 >> alexandre.racine at mhicc.org > > > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > > > > -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster -------------- next part -------------- A non-text attachment was scrubbed... 
Name: winmail.dat Type: application/ms-tnef Size: 3800 bytes Desc: not available URL: From bpkroth at wisc.edu Wed Feb 6 23:26:17 2008 From: bpkroth at wisc.edu (Brian Kroth) Date: Wed, 06 Feb 2008 17:26:17 -0600 Subject: [Linux-cluster] noatime In-Reply-To: References: <200812910934.067039@leena> Message-ID: <20080206232617.GB5771@odin.hslc.wisc.edu> Alexandre Racine : > Gordan, > > >The only other thing I can think of is that your netfs/gfs init script is > >broken. > > > Are you talking about the /etc/init.d/gfs script? Currently, I added this script to the startup. But should it start automatically, just like an NFS mount in the fstab? > > Thanks. Look in the script itself and make sure that it's making use of the options you define in your fstab. I had to fix the old one on Gentoo before it worked as expected. Brian -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 2192 bytes Desc: not available URL: From prakash.c-k at hp.com Thu Feb 7 07:03:28 2008 From: prakash.c-k at hp.com (Kesavan, Prakash C) Date: Thu, 7 Feb 2008 12:33:28 +0530 Subject: [Linux-cluster] Help : CONFIGURING REDHAT CLUSTER WITH CONGA-RHEL5(active /Passive) In-Reply-To: <663105.26031202325901042.JavaMail.root@ottmail> Message-ID: <7496B97B2FDFD443B185F157079888B103D51410@bgeexc05.asiapacific.cpqcorp.net> Hi All, I am on a project in which we are supposed to configure a Red Hat cluster. The following are the client's requirements. They want to configure Red Hat Cluster Suite for their three-tier web application (Apache, JBoss, MySQL). They need to configure the servers in active/passive mode. We are planning to configure MySQL active on one machine and passive on the other, then JBoss active on one machine and passive on the other (not sure whether that is possible). We also have SAN storage. Has anybody done the same setup with two nodes earlier? If so, please share the knowledge. 
The main constraints now are 1) "How to set up the active/passive configuration in Conga" 2) "How stable is a configuration made with Conga; are there any limitations compared with the file-based method" 3) Does anybody have another way of configuring the setup, using the conventional file-based method? 4) Fencing configuration with iLO/power switch, a script for password sharing, etc. Thanks in advance Prakash C Kesavan From singh.rajeshwar at gmail.com Thu Feb 7 07:32:06 2008 From: singh.rajeshwar at gmail.com (Rajesh singh) Date: Thu, 7 Feb 2008 13:02:06 +0530 Subject: [Linux-cluster] Help : CONFIGURING REDHAT CLUSTER WITH CONGA-RHEL5(active /Passive) In-Reply-To: <7496B97B2FDFD443B185F157079888B103D51410@bgeexc05.asiapacific.cpqcorp.net> References: <663105.26031202325901042.JavaMail.root@ottmail> <7496B97B2FDFD443B185F157079888B103D51410@bgeexc05.asiapacific.cpqcorp.net> Message-ID: Hi Kesavan, Take a simple approach: a) Configure MySQL in active/passive mode b) Use the JBoss clustering feature (it is built into the product) to cluster JBoss c) If you are not comfortable with Conga, use system-config-cluster to configure the cluster. regards On Feb 7, 2008 12:33 PM, Kesavan, Prakash C wrote: > > Hi All, > > I am on a project in which we are supposed to configure a Red Hat cluster. > > The following are the client's requirements. > > They want to configure Red Hat Cluster Suite for their three-tier web > application (Apache, JBoss, MySQL). They need to configure the servers in > active/passive mode. > > We are planning to configure MySQL active on one machine and passive on > the other, then JBoss active on one machine and passive on the other > (not sure whether that is possible). > > We also have SAN storage. > > Has anybody done the same setup with two nodes earlier? If so, please > share the knowledge. 
> > The main constraint now is 1) "How to setup the active passive setup in > conga" > 2) "What is the stability of the > configuration with conga any limitation compairing with the > method using files" > 3) Any body have any other method > of configuring the setup. using conventional method using > files. > 4) Fencing configuration with > ILO.power switch. Script for password sharing..etc... > > > > Thanks in advance > Prakash C Kesavan > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prakash.c-k at hp.com Thu Feb 7 07:34:58 2008 From: prakash.c-k at hp.com (Kesavan, Prakash C) Date: Thu, 7 Feb 2008 13:04:58 +0530 Subject: [Linux-cluster] Help : CONFIGURING REDHAT CLUSTER WITHCONGA-RHEL5(active /Passive) In-Reply-To: Message-ID: <7496B97B2FDFD443B185F157079888B103D5144F@bgeexc05.asiapacific.cpqcorp.net> thanks a lot. -prakash ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Rajesh singh Sent: Thursday, February 07, 2008 1:02 PM To: linux clustering Subject: Re: [Linux-cluster] Help : CONFIGURING REDHAT CLUSTER WITHCONGA-RHEL5(active /Passive) Hi Kesavan, Take a simple approach: a) Configure MySQL and Active Passive mode b) Use JBoss clustering (it is inbuilt with product) feature to cluster JBoss c) If you are not comfortable with conga, use system-config-cluster to configure cluster. regards On Feb 7, 2008 12:33 PM, Kesavan, Prakash C wrote: Hi All, I am in a project in which we are suppose to configure redhat cluster. The following is the clients requirement. They want to configure a redhat cluster suit for there three tire web application(Apache,jboss,mysql,).They need to configure the server in active passive mode. 
We are planning to configure mysql active in one machine and passive in other machine then jboss active in one machine and passive in other machine. (not sure whether it is possible). We have a san storage also. Could any body have done the set up earlier the same with two node, request to share the knowledge. The main constraint now is 1) "How to setup the active passive setup in conga" 2) "What is the stability of the configuration with conga any limitation compairing with the method using files" 3) Any body have any other method of configuring the setup. using conventional method using files. 4) Fencing configuration with ILO.power switch. Script for password sharing..etc... Thanks in advance Prakash C Kesavan -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster -------------- next part -------------- An HTML attachment was scrubbed... URL: From pcaulfie at redhat.com Thu Feb 7 08:54:02 2008 From: pcaulfie at redhat.com (Christine Caulfield) Date: Thu, 07 Feb 2008 08:54:02 +0000 Subject: [Linux-cluster] Re: problem deleting a node from a cluster In-Reply-To: <1202323500.21504.71.camel@ayanami.boston.devel.redhat.com> References: <23589450.23361202311274793.JavaMail.root@ottmail> <1202318441.21504.57.camel@ayanami.boston.devel.redhat.com> <1202323500.21504.71.camel@ayanami.boston.devel.redhat.com> Message-ID: <47AAC72A.7060203@redhat.com> Lon Hohberger wrote: > On Wed, 2008-02-06 at 12:20 -0500, Lon Hohberger wrote: >> On Wed, 2008-02-06 at 10:21 -0500, Yan Vinogradov wrote: >> >>> Could be a bug. Does it say 'estranged' in the output, or not? >>> >>> -- Lon >>> >>> Hi Lon, yes, it does refer to the third node as estranged. Does it >>> tell you anything? >>> >> Clustat reports "Estranged" whenever it asks cman for a node list, and >> the corresponding node does not show up in cluster.conf. >> >> In your case, it means that cman still has state about the node despite >> the configuration change in ccsd. 
>> >> I'm not sure cman "purges" nodes that were dead (but part of >> cluster.conf) that later are removed from cluster.conf. >> >> Chrissie might know. That's right, cman never removes a node from its list. That's chiefly because it can't easily tell whether it might suddenly appear again, which was mainly the case in RHEL4 where cman was in the kernel and didn't have easy access to cluster.conf. Actually, in RHEL5 it could be possibly to purge a node from cman, it is something I've considered and might have a look at in the future, if people think it's worthwhile. Chrissie From asim.husanovic at gmail.com Thu Feb 7 15:14:20 2008 From: asim.husanovic at gmail.com (Asim Husanovic) Date: Thu, 7 Feb 2008 16:14:20 +0100 Subject: [Linux-cluster] Monitoring shared disk Message-ID: <3c1234d0802070714y21d88c88x1569bcb88b898238@mail.gmail.com> Hi How to identify shared disk/volumes? How to collect the cluster FS information? How to display shared disks on node/s? Which node/s? ------ How to run RH CS on SLES 10 without any SP, or on openSUSE 10.x WR Asim From bpkroth at wisc.edu Thu Feb 7 15:28:42 2008 From: bpkroth at wisc.edu (Brian Kroth) Date: Thu, 07 Feb 2008 09:28:42 -0600 Subject: [Linux-cluster] Monitoring shared disk In-Reply-To: <3c1234d0802070714y21d88c88x1569bcb88b898238@mail.gmail.com> References: <3c1234d0802070714y21d88c88x1569bcb88b898238@mail.gmail.com> Message-ID: <20080207152842.GC30561@wisc.edu> Here are some quick examples. There are almost certainly other ways to do it. Asim Husanovic : > Hi > > How to identify shared disk/volumes? scsi_id > How to collect the cluster FS information? gfs_tool > How to display shared disks on node/s? Which node/s? gfs_tool list cman_tool services You can wrap all of these inside snmp scripts/oids or use nagios passive checks if you want to monitor them remotely/automatically. Brian -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/x-pkcs7-signature Size: 2192 bytes Desc: not available URL: From gordan at bobich.net Thu Feb 7 15:32:26 2008 From: gordan at bobich.net (gordan at bobich.net) Date: Thu, 7 Feb 2008 15:32:26 +0000 (GMT) Subject: [Linux-cluster] 2-node tie-breaking Message-ID: Hi, I've got a slightly peculiar problem. 2-node cluster acting as a load balanced fail-over router. 3 NICs: public, private, cluster. Cluster NICs are connected with a cross-over cable, the other two are on switches. The cluster NIC is only used for DRBD/GFS/DLM and associated things. The failure mode that I'm trying to account for is the one of the cluster NIC failing on one machine. On the public and privace networks, both machines can still see everything (including each other). That means that a tie-breaker based on other visible things will not work. So, which machine gets fenced in the case of the cluster NIC failure (or more likely, if the x-over cable falls out)? Is there a sane, tested way to handle this condition? It would be quite embarrasing if both, otherwise fully functional nodes, decided to shut fence the other one off by shutting it down. Thanks. Gordan From asim.husanovic at gmail.com Thu Feb 7 15:40:51 2008 From: asim.husanovic at gmail.com (Asim Husanovic) Date: Thu, 7 Feb 2008 16:40:51 +0100 Subject: [Linux-cluster] Monitoring shared disk In-Reply-To: <20080207152842.GC30561@wisc.edu> References: <3c1234d0802070714y21d88c88x1569bcb88b898238@mail.gmail.com> <20080207152842.GC30561@wisc.edu> Message-ID: <3c1234d0802070740x990ed92o61be4ca385755b64@mail.gmail.com> Thanks. Asim On Feb 7, 2008 4:28 PM, Brian Kroth wrote: > Here are some quick examples. There are almost certainly other ways to do > it. > > Asim Husanovic : > > Hi > > > > How to identify shared disk/volumes? > > scsi_id > > > How to collect the cluster FS information? > > gfs_tool > > > How to display shared disks on node/s? Which node/s? 
> > gfs_tool list > cman_tool services > > > You can wrap all of these inside snmp scripts/oids or use nagios passive > checks if you want to monitor them remotely/automatically. > > Brian > From bpkroth at wisc.edu Thu Feb 7 15:44:25 2008 From: bpkroth at wisc.edu (Brian Kroth) Date: Thu, 07 Feb 2008 09:44:25 -0600 Subject: [Linux-cluster] 2-node tie-breaking In-Reply-To: References: Message-ID: <20080207154424.GE30561@wisc.edu> A simple way might be to check for that case specifically and assign a preference to the node you want to win. For instance: if $can_ping_internal_nodes \ && $can_ping_external_nodes \ && ! $can_ping_cluster_node; then if [ $HOSTNAME == node2 ]; then self_fence else steal_cluster_nodes_resources fi fi I know that you can do this via clever scoring in heartbeat [1], but I'm not sure about rgmanager. [1] linux-ha.org Brian gordan at bobich.net : > Hi, > > I've got a slightly peculiar problem. 2-node cluster acting as a load > balanced fail-over router. 3 NICs: public, private, cluster. > Cluster NICs are connected with a cross-over cable, the other two are on > switches. The cluster NIC is only used for DRBD/GFS/DLM and associated > things. > > The failure mode that I'm trying to account for is the one of the cluster > NIC failing on one machine. On the public and privace networks, both > machines can still see everything (including each other). That means that a > tie-breaker based on other visible things will not work. > > So, which machine gets fenced in the case of the cluster NIC failure (or > more likely, if the x-over cable falls out)? > > Is there a sane, tested way to handle this condition? It would be quite > embarrasing if both, otherwise fully functional nodes, decided to shut > fence the other one off by shutting it down. > > Thanks. 
> > Gordan > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 2192 bytes Desc: not available URL: From gordan at bobich.net Thu Feb 7 15:52:11 2008 From: gordan at bobich.net (gordan at bobich.net) Date: Thu, 7 Feb 2008 15:52:11 +0000 (GMT) Subject: [Linux-cluster] 2-node tie-breaking In-Reply-To: <20080207154424.GE30561@wisc.edu> References: <20080207154424.GE30561@wisc.edu> Message-ID: Thanks for that. How could this be hooked from cluster.conf? I wasn't planning to use the heartbeat stuff. Gordan On Thu, 7 Feb 2008, Brian Kroth wrote: > A simple way might be to check for that case specifically and assign a > preference to the node you want to win. For instance: > > if $can_ping_internal_nodes \ > && $can_ping_external_nodes \ > && ! $can_ping_cluster_node; then > > if [ $HOSTNAME == node2 ]; then > self_fence > else > steal_cluster_nodes_resources > fi > fi > > I know that you can do this via clever scoring in heartbeat [1], but I'm > not sure about rgmanager. > > [1] linux-ha.org > > Brian > > gordan at bobich.net : >> Hi, >> >> I've got a slightly peculiar problem. 2-node cluster acting as a load >> balanced fail-over router. 3 NICs: public, private, cluster. >> Cluster NICs are connected with a cross-over cable, the other two are on >> switches. The cluster NIC is only used for DRBD/GFS/DLM and associated >> things. >> >> The failure mode that I'm trying to account for is the one of the cluster >> NIC failing on one machine. On the public and privace networks, both >> machines can still see everything (including each other). That means that a >> tie-breaker based on other visible things will not work. >> >> So, which machine gets fenced in the case of the cluster NIC failure (or >> more likely, if the x-over cable falls out)? 
>> >> Is there a sane, tested way to handle this condition? It would be quite >> embarrassing if both, otherwise fully functional nodes, decided to >> fence the other one off by shutting it down. >> >> Thanks. >> >> Gordan >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster > From fog at t.is Thu Feb 7 17:24:51 2008 From: fog at t.is (=?iso-8859-1?Q?Finnur_=D6rn_Gu=F0mundsson_-_TM_Software?=) Date: Thu, 7 Feb 2008 17:24:51 -0000 Subject: [Linux-cluster] NFS clustering - how to use fsid= options for the exports Message-ID: <3DDA6E3E456E144DA3BB0A62A7F7F77901CACD9D@SKYHQAMX08.klasi.is> Hi, I have set up an active-standby NFS cluster. Everything is working like it should, except for one thing. In HP ServiceGuard you use /etc/exports on each node in an active-standby cluster to manage exports, and there you can use the fsid=X for each filesystem you want to export (helps with stale NFS errors after a failover). In RHCS I cannot seem to get the server to correctly use the options ... Here is my service: Would this not be the correct way? Is there another way to do this except for manually using /etc/exports and just having RHCS manage the IP address transfer between the nodes? Kær kveðja / Best Regards, Finnur Örn Guðmundsson Network Engineer - Network Operations fog at t.is TM Software Urðarhvarf 6, IS-203 Kópavogur, Iceland Tel: +354 545 3000 - fax +354 545 3610 www.tm-software.is This e-mail message and any attachments are confidential and may be privileged. TM Software e-mail disclaimer: www.tm-software.is/disclaimer -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rpeterso at redhat.com Thu Feb 7 19:19:34 2008 From: rpeterso at redhat.com (Bob Peterson) Date: Thu, 07 Feb 2008 13:19:34 -0600 Subject: [Linux-cluster] New Cluster Wiki and FAQ Message-ID: <1202411974.7223.42.camel@technetium.msp.redhat.com> Hi People, I'm happy to announce that, thanks to the ingenuity of Lon Hohberger, we now have a cluster wiki. It's still in its infancy, but since it's a wiki, it should be easier to update, and it should allow you to add / modify / improve the content as well. The cluster wiki is at: http://sources.redhat.com/cluster/wiki Lon has also migrated the Cluster FAQ to a wiki page: http://sources.redhat.com/cluster/wiki/FAQ Hopefully all the contents are there from the old FAQ. Let me know if you have problems with it. To help keep spam from invading the wiki, if you want to change any of the pages, you have to first set up an account: A simple name, password and email address. From now on I plan to keep the wiki up to date and let the old cluster FAQ fade away. Regards, Bob Peterson Red Hat GFS/GFS2 From lhh at redhat.com Thu Feb 7 19:35:37 2008 From: lhh at redhat.com (Lon H. Hohberger) Date: Thu, 07 Feb 2008 14:35:37 -0500 Subject: [Linux-cluster] 2-node tie-breaking In-Reply-To: References: Message-ID: <1202412937.2938.55.camel@localhost.localdomain> On Thu, 2008-02-07 at 15:32 +0000, gordan at bobich.net wrote: > Hi, > > I've got a slightly peculiar problem. 2-node cluster acting as a load > balanced fail-over router. 3 NICs: public, private, cluster. > Cluster NICs are connected with a cross-over cable, the other two are on > switches. The cluster NIC is only used for DRBD/GFS/DLM and associated > things. > > The failure mode that I'm trying to account for is the one of the cluster > NIC failing on one machine. On the public and private networks, both > machines can still see everything (including each other). That means that > a tie-breaker based on other visible things will not work.
> > So, which machine gets fenced in the case of the cluster NIC failure (or > more likely, if the x-over cable falls out)? ... whichever gets fenced first ;) 1. You can do a clever heuristic using qdiskd if you wanted, for example: * assign an IP on the private cluster network and make rgmanager manage it as a service (even though it doesn't do anything). Make sure to *disable* monitor_link, or rgmanager will stop the service! * make a script to check for: * ethernet link of the private interface, and * if that fails, ping the service IP address * if that fails, we are *dead*; give up and -do not- try to fence If you put the IP as part of the "most critical" service that rgmanager's running, then the operator of that service will continue running while the other node is not allowed to continue running. Because the first check is whether we have a cluster link - and the *second* check is the ping of the service. Something like this... ... ... Script might look something like: #!/bin/sh DEVICE=eth3 PINGIP=10.1.1.2 # # Ensure the device is there! # ethtool $DEVICE || exit 1 # # Check for link # ethtool $DEVICE | grep -q "Link detected.*yes" if [ $? -eq 0 ]; then exit 0 fi # # XXX Work around signal bug for now. # ping_func() { declare retries=0 declare PID ping -c3 -t2 $1 & PID=`jobs -p` while [ $retries -lt 2 ]; do sleep 1 ((retries++)) kill -n 0 $PID &> /dev/null if [ $? -eq 1 ]; then wait $PID return $? fi done kill -9 $PID return 1 } # # Ping service ip address. # ping_func $1 exit $? ------------------------------- Disadvantage is that it's hard to start the cluster w/o both nodes online without some sort of override. 2. You can do something like Brian said, too - e.g. "if I am the right host and the link isn't up, I win": #!/bin/sh DEVICE=eth3 OTHER_NODE_PUBLIC_IP="192.168.1.2" # # Ensure the device is there! # ethtool $DEVICE || exit 1 # # Check for link # ethtool $DEVICE | grep -q "Link detected.*yes" if [ $? -eq 0 ]; then exit 0 fi # # XXX Work around signal bug for now. # ping_func() { declare retries=0 declare PID ping -c3 -t2 $1 & PID=`jobs -p` while [ $retries -lt 2 ]; do sleep 1 ((retries++)) kill -n 0 $PID &> /dev/null if [ $? -eq 1 ]; then wait $PID return $? fi done kill -9 $PID return 1 } # # Ok, no link on private net # ping_func $OTHER_NODE_PUBLIC_IP if [ $? -eq 0 ]; then [ "`uname -n`" == "node1" ] exit $? fi # # Other node is down and we're not - # we win # exit 0 ------------------------------- 3. Another simple way to do it is to use a fake "fencing agent" to introduce a delay: (where /bin/sleep-10 is something like: #!/bin/sh sleep 10 exit 0 ) Reference that agent as part of -one- node's fencing, and that node will lose by default. This way, you don't have to set up qdiskd. You could do the same thing by just editing the fencing agent directly on that node, as well - in which case, you wouldn't have to edit cluster.conf at all. -- Lon From dsharp at ivytech.edu Thu Feb 7 19:38:35 2008 From: dsharp at ivytech.edu (Doug Sharp) Date: Thu, 7 Feb 2008 14:38:35 -0500 Subject: [Linux-cluster] NFS clustering - how to use fsid= options for the exports References: <3DDA6E3E456E144DA3BB0A62A7F7F77901CACD9D@SKYHQAMX08.klasi.is> Message-ID: <02CC84960E52F745860DFF29ADC62091025E5477@MSEXCHNG-02.ivytech.local> > Would this not be the correct way? Is there another way to do > this except for manually using /etc/exports and just have RHCS > manage the IP address transfer between the nodes? http://sources.redhat.com/cluster/doc/nfscookbook.pdf See the description of "Managed NFS" for a way to manage NFS completely within cluster.conf Doug -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 2781 bytes Desc: not available URL: From lhh at redhat.com Thu Feb 7 19:43:52 2008 From: lhh at redhat.com (Lon H.
Hohberger) Date: Thu, 07 Feb 2008 14:43:52 -0500 Subject: [Linux-cluster] New Cluster Wiki and FAQ In-Reply-To: <1202411974.7223.42.camel@technetium.msp.redhat.com> References: <1202411974.7223.42.camel@technetium.msp.redhat.com> Message-ID: <1202413432.2938.58.camel@localhost.localdomain> On Thu, 2008-02-07 at 13:19 -0600, Bob Peterson wrote: > Hopefully all the contents are there from the old FAQ. Let me know if > you have problems with it. To help keep spam from invading the wiki, > if you want to change any of the pages, you have to first set up an > account: A simple name, password and email address. Note - when creating your account, please use 'FirstLast' so we can track pages and such more easily :) ex - LonHohberger BobPeterson etc. -- Lon From gordan at bobich.net Thu Feb 7 20:18:09 2008 From: gordan at bobich.net (Gordan Bobic) Date: Thu, 07 Feb 2008 20:18:09 +0000 Subject: [Linux-cluster] Pseudo-Fencing for 2-node DRBD+GFS Message-ID: <47AB6781.4060201@bobich.net> Hi, Thanks for the replies on the other thread. I'm still assimilating them. Meanwhile, however, I am faced with a different problem. It turns out that the servers I've got to prototype this with only have MegaRAC 780 (a.k.a. DRAC 2) management cards in them. The problem with this is that, to the best of my ability to tell, these don't seem to provide a sanely usable interface to allow for the machine to be powered down via the command line (Windows only management utility - I can't even get it to ping, only arping, and nmap isn't coming up with anything!) So, I need a poor man's pseudo-fencing hack that will protect the shared file system by stopping the DRBD replication between the hosts (range of options available including using iptables to firewall the cluster NIC, or just downing the interface completely, as it's not used for anything else). The theoretical assumption (I don't really have a choice...) 
is that if one of the machines fails, it will fail completely and not attempt to fight over the resources that need to be failed over (in this case just IP addresses). Is there a minimum API that a fencing script has to implement to be usable via cluster.conf? If so, can anyone please point me at some documentation for it? Would this be a reasonable way to implement fencing in a 2-node case? Thanks. Gordan From gordan at bobich.net Thu Feb 7 21:02:10 2008 From: gordan at bobich.net (Gordan Bobic) Date: Thu, 07 Feb 2008 21:02:10 +0000 Subject: [Linux-cluster] 2-node tie-breaking In-Reply-To: <1202412937.2938.55.camel@localhost.localdomain> References: <1202412937.2938.55.camel@localhost.localdomain> Message-ID: <47AB71D2.2040201@bobich.net> Lon H. Hohberger wrote: [...] > 3. Another simple way to do it is to use a fake "fencing agent" to > introduce a delay: > > > > (where /bin/sleep-10 is something like: > #!/bin/sh > sleep 10 > exit 0 > ) > > Reference that agent as part of -one- node's fencing, and that node will > lose by default. This way, you don't have to set up qdiskd. You could > do the same thing by just editing the fencing agent directly on that > node, as well - in which case, you wouldn't have to edit cluster.conf at > all. Funnily enough, this is exactly what I was just thinking about. Thanks. :-) Unfortunately, things just got a bit more complicated for me, because it looks like my fencing device won't work. :-( Instead, it may have to end up being something as bodgy as ifdown-ing the cluster interface on the surviving node - in the 2-node scenario, that ought to be as good as using a managed switch based fencing device. The file systems will diverge if both nodes keep running, but neither fork will be corrupted. Thinking about this a bit more, a tie-breaking IP ping may need to be implemented on the public and private NICs. On the public side, the tie-breaker would need to be the next router along.
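[Editor's note: a tie-breaker of the shape discussed in this thread — "I can still reach the next-hop router but not my peer, so the preferred node should win" — might look like the following sketch. The decision logic is pulled into a function so it can be tested in isolation; the winner hostname and the ping-derived inputs are hypothetical placeholders, not an existing rgmanager hook.]

```shell
#!/bin/sh
# Illustrative 2-node tie-breaker: if the cluster link to the peer is
# dead but the public next-hop router still answers, the designated
# node takes the resources and the other self-fences. All inputs and
# the "winner" hostname are made-up placeholders.
pick_winner() {
    peer_ok=$1      # 1 if the peer answers on the cluster NIC
    router_ok=$2    # 1 if the public next-hop router answers
    me=$3           # this node's hostname
    winner=$4       # node designated to win ties
    if [ "$peer_ok" -eq 1 ]; then
        echo keep-running           # peer reachable: no tie to break
    elif [ "$router_ok" -eq 1 ] && [ "$me" = "$winner" ]; then
        echo steal-resources        # peer unreachable, we are preferred
    elif [ "$router_ok" -eq 1 ]; then
        echo wait-for-fence         # healthy but not the preferred node
    else
        echo self-fence             # we are the isolated node
    fi
}

# In a real script the inputs would come from pings, e.g.:
# ping -c1 -W1 "$PEER_CLUSTER_IP" >/dev/null 2>&1 && peer_ok=1 || peer_ok=0
```

The point of the structure is that only one node ever reaches `steal-resources` for a given link failure, which is the property the thread is after.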
Are there hooks for implementing such a thing as a simple ping script, or heartbeat or similar that can be used to achieve this? Thanks. Gordan From lhh at redhat.com Thu Feb 7 21:56:06 2008 From: lhh at redhat.com (Lon H. Hohberger) Date: Thu, 07 Feb 2008 16:56:06 -0500 Subject: [Linux-cluster] 2-node tie-breaking In-Reply-To: <47AB71D2.2040201@bobich.net> References: <1202412937.2938.55.camel@localhost.localdomain> <47AB71D2.2040201@bobich.net> Message-ID: <1202421366.2938.70.camel@localhost.localdomain> On Thu, 2008-02-07 at 21:02 +0000, Gordan Bobic wrote: > Thinking about this a bit more, a tie-breaking IP ping may need to be > implemented on the public and private NICs. On the public side, the > tie-breaker would need to be the next router along. Are there hooks for > implementing such a thing as a simple ping script, or heartbeat or > similar that can be used to achieve this? Hmm, the problem is that using qdisk requires... well, a shared disk (that doesn't need pesky things like network communications...). Well, here's a cheap 'out' ;) Ebay item # 250213910258 8 port WTI NPS for $70 + $15 s/h. -- Lon From harry.sutton at hp.com Thu Feb 7 22:08:22 2008 From: harry.sutton at hp.com (Sutton, Harry (MSE)) Date: Thu, 07 Feb 2008 17:08:22 -0500 Subject: [Linux-cluster] Two-node cluster unpatched B doesn't see patched A Message-ID: <47AB8156.6010205@hp.com> The most recent set of patches for RHCS, comprising: RHBA-2008:0093 dlm-kernel bug fix update RHBA-2008:0092 cman-kernel bug fix update RHBA-2008:0060 cman bug fix update RHBA-2008:0095 gnbd-kernel bug fix update RHBA-2008:0096 GFS-kernel bug fix update RHSA-2008:0055 Important: kernel security and bug fix update has resulted in a problem in my two-node (production) cluster. 
Let me explain ;-) I have a three-node test cluster where I install all patches before rolling them into my (two-node) production cluster; I know, I know, they're not the same, and that's the only difference I can see in what has happened here (a first in two years). In the three-node cluster (which, just to complicate things, only had two active nodes at the time), I rolled these patches through the two nodes without taking the whole cluster down. That is: 1. Stop all cluster services on Node A. Disable auto-start using chkconfig off . Services stop successfully, Node A leaves the cluster, Node B continues running all shared cluster services (GFS, Fibre-channel-connected shared storage, HP MSA1000). 2. Patch Node A, reboot to new kernel, re-install HP-supplied QLogic driver, edit /etc/modprobe.conf for failover settings, rebuild initrd file for QLogic drivers, reboot, re-enable auto-start of cluster services, reboot once more and the cluster re-forms. 3. Repeat Steps 1 and 2 for Node B 4. Cluster is restored to normal operation, both nodes fully patched. On my production cluster, which uses a Quorum Disk in place of the third node, I completed steps 1 and 2 on Node A, but the cluster did NOT reform. cman sends out its advertisement, and I can see that Node B receives it (by looking at the tcpdump traces), but Node B never responds. So: before I take down Node B (which is currently the only one running my production services), can someone either (a) explain why the cluster is not re-forming, or (b) assure me that by restoring both systems to the same patch level, the cluster WILL reform properly? (Which begs the question: why did my test cluster survive the patch process and my production cluster didn't? Same versions of everything......) Thanks in advance, and best regards, /Harry Sutton, RHCA Hewlett-Packard Company -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/x-pkcs7-signature Size: 6255 bytes Desc: S/MIME Cryptographic Signature URL: From gordan at bobich.net Thu Feb 7 22:19:56 2008 From: gordan at bobich.net (Gordan Bobic) Date: Thu, 07 Feb 2008 22:19:56 +0000 Subject: [Linux-cluster] 2-node tie-breaking In-Reply-To: <1202421366.2938.70.camel@localhost.localdomain> References: <1202412937.2938.55.camel@localhost.localdomain> <47AB71D2.2040201@bobich.net> <1202421366.2938.70.camel@localhost.localdomain> Message-ID: <47AB840C.8040905@bobich.net> Lon H. Hohberger wrote: > On Thu, 2008-02-07 at 21:02 +0000, Gordan Bobic wrote: > >> Thinking about this a bit more, a tie-breaking IP ping may need to be >> implemented on the public and private NICs. On the public side, the >> tie-breaker would need to be the next router along. Are there hooks for >> implementing such a thing as a simple ping script, or heartbeat or >> similar that can be used to achieve this? > > Hmm, the problem is that using qdisk requires... well, a shared disk > (that doesn't need pesky things like network communications...). Sure, I understand that. But since there are only 2 nodes, fencing by downing the local cluster interface would at least stop the file system from getting hosed. > Well, here's a cheap 'out' ;) > > Ebay item # 250213910258 > > 8 port WTI NPS for $70 + $15 s/h. Indeed, that would be tempting if it wasn't on the wrong side of the atlantic. A more local search for similar things doesn't seem to come up with anything. 
:-( Gordan From jakub.suchy at enlogit.cz Thu Feb 7 22:31:08 2008 From: jakub.suchy at enlogit.cz (Jakub Suchy) Date: Thu, 7 Feb 2008 23:31:08 +0100 Subject: [Linux-cluster] 2-node tie-breaking In-Reply-To: <47AB840C.8040905@bobich.net> References: <1202412937.2938.55.camel@localhost.localdomain> <47AB71D2.2040201@bobich.net> <1202421366.2938.70.camel@localhost.localdomain> <47AB840C.8040905@bobich.net> Message-ID: <20080207223108.GA15215@localhost> > Indeed, that would be tempting if it wasn't on the wrong side of the > atlantic. A more local search for similar things doesn't seem to come up > with anything. :-( Have you considered something like this? http://www.oracle.com/technology/pub/articles/hunter_rac10gr2.html Build Your Own Oracle RAC 10g Release 2 Cluster on Linux and FireWire. Forget those Oracle things and see the information about the low-cost shared disk. I didn't try it; I am not sure if it works, but it's interesting :-) Jakub -- Jakub Suchý GSM: +420 - 777 817 949 Enlogit s.r.o, U Cukrovaru 509/4, 400 07 Ústí nad Labem tel.: +420 - 474 745 159, fax: +420 - 474 745 160 e-mail: info at enlogit.cz, web: http://www.enlogit.cz From jakub.suchy at enlogit.cz Thu Feb 7 22:39:51 2008 From: jakub.suchy at enlogit.cz (Jakub Suchy) Date: Thu, 7 Feb 2008 23:39:51 +0100 Subject: [Linux-cluster] 2-node tie-breaking In-Reply-To: <47AB840C.8040905@bobich.net> References: <1202412937.2938.55.camel@localhost.localdomain> <47AB71D2.2040201@bobich.net> <1202421366.2938.70.camel@localhost.localdomain> <47AB840C.8040905@bobich.net> Message-ID: <20080207223951.GB15215@localhost> >> Well, here's a cheap 'out' ;) >> Ebay item # 250213910258 >> 8 port WTI NPS for $70 + $15 s/h. > > Indeed, that would be tempting if it wasn't on the wrong side of the > atlantic. A more local search for similar things doesn't seem to come up > with anything.
:-( Another "cheap" power switch may be: http://www.hw-group.com/products/ip_watchdog/index_lite_en.html Made by a Czech Republic (= right side of the atlantic :) company, allows power switching of two servers. Requires you to write a fencing script and connect power output cables from this device to power switches in the servers. Again - never tried, but it's around 3188 Czech crowns, which is approx. $187. Jakub -- Jakub Suchý From peter at widexs.nl Thu Feb 7 23:07:27 2008 From: peter at widexs.nl (Peter Bosgraaf) Date: Fri, 8 Feb 2008 00:07:27 +0100 Subject: [Linux-cluster] 2-node tie-breaking In-Reply-To: <20080207223951.GB15215@localhost> References: <1202412937.2938.55.camel@localhost.localdomain> <47AB71D2.2040201@bobich.net> <1202421366.2938.70.camel@localhost.localdomain> <47AB840C.8040905@bobich.net> <20080207223951.GB15215@localhost> Message-ID: <20080207230727.GE63223@widexs.nl> Hi, On Thu, Feb 07, 2008 at 11:39:51PM +0100, Jakub Suchy wrote: > >> Well, here's a cheap 'out' ;) > >> Ebay item # 250213910258 > >> 8 port WTI NPS for $70 + $15 s/h. > > > > Indeed, that would be tempting if it wasn't on the wrong side of the > > atlantic. A more local search for similar things doesn't seem to come up > > with anything. :-( *Don't flame me, I didn't want to spam the list with commercial stuff but thought this might help you out* If you're interested in a power switch solution I would recommend looking at the solutions from www.lubenco.nl. They have 8 port (C13) 10 amp power switches with load monitoring and SNMP configuration support for about 117 euro apiece (new). If you're willing to buy in bulk the price even drops well below 100 euro.
Cheers, Peter From gordan at bobich.net Thu Feb 7 23:24:55 2008 From: gordan at bobich.net (Gordan Bobic) Date: Thu, 07 Feb 2008 23:24:55 +0000 Subject: [Linux-cluster] 2-node tie-breaking In-Reply-To: <20080207223951.GB15215@localhost> References: <1202412937.2938.55.camel@localhost.localdomain> <47AB71D2.2040201@bobich.net> <1202421366.2938.70.camel@localhost.localdomain> <47AB840C.8040905@bobich.net> <20080207223951.GB15215@localhost> Message-ID: <47AB9347.6050601@bobich.net> Jakub Suchy wrote: >>> Well, here's a cheap 'out' ;) >>> Ebay item # 250213910258 >>> 8 port WTI NPS for $70 + $15 s/h. >> Indeed, that would be tempting if it wasn't on the wrong side of the >> atlantic. A more local search for similar things doesn't seem to come up >> with anything. :-( > > Another "cheap" power switch may be: > > http://www.hw-group.com/products/ip_watchdog/index_lite_en.html > > Made by a Czech Republic (= right side of the atlantic :) company, > allows power switching of two servers. Requires you to write a fencing > script and connect power output cables from this device to power > switches in the servers. Again - never tried, but it's around 3188 Czech > crowns, which is approx. $187. Yeah, but that's not that far off a couple of cheap UPS-es with basic serial interfaces. I could cross-connect them so that Server-A is powered off UPS-A, and is controlling UPS-B via serial/USB. Same in reverse on the other machine. That way Server-A can power off Server-B and vice versa. The upshot being that at least that way I have UPS capability. But it'd still be nice to have a software only "might work most of the time" solution. It should work at least in the cases where preservation of the GFS is the main concern, as all the surviving nodes should be able to firewall off the node they think has died. Resource fail-over might still be an issue, but at least the cluster itself would stay safe and functional. 
Gordan From doseyg at r-networks.net Fri Feb 8 05:01:57 2008 From: doseyg at r-networks.net (Glen Dosey) Date: Fri, 08 Feb 2008 00:01:57 -0500 Subject: [Linux-cluster] GFS2 loses data under kernel 2.6.24... Message-ID: <1202446917.5942.19.camel@eclipse.office.r-networks.net> I experienced this today at work on a RHEL5 system and have verified it today at home on Fedora 8. Perhaps I am doing something foolish .... I have a fully patched RHEL5 x86_64 system which works fine with the Red Hat supplied cluster stuff, except the NFS server performance is abysmal (~640Mb/s NFS). After pulling my hair trying to fix NFS I decided to just grab the latest kernel which fixed the problem (~980Mb/s NFS). But it introduced another much more serious problem, which I've duplicated on my FC8 x86_64 system at home. I already have all the cman/clvmd/openais/gfs[2]-utils packages installed through the package manager. I downloaded kernel 2.6.24 from kernel.org and did a straight `make -j4 rpm ` and installed the resulting rpm in both instances. Both systems worked fine with RHEL/Fedora kernels, but here's what happens under 2.6.24 [root at eclipse test]# dd if=/dev/zero of=test3.dd bs=512M count=1 1+0 records in 1+0 records out 536870912 bytes (537 MB) copied, 7.95285 s, 67.5 MB/s [root at eclipse test]# ll total 2101312 -rw-r--r-- 1 root root 0 2008-02-07 23:25 test2.dd -rw-r--r-- 1 root root 536870912 2008-02-07 23:42 test3.dd -rw-r--r-- 1 root root 1073741824 2008-02-07 22:54 test.dd [root at eclipse test]# cd .. 
[root at eclipse mnt]# umount /mnt/test/ [root at eclipse mnt]# mount /mnt/test/ [root at eclipse mnt]# mount | grep test /dev/mapper/disk00-test on /mnt/test type gfs2 (rw,hostdata=jid=0:id=524289:first=1) [root at eclipse mnt]# cd /mnt/test/ [root at eclipse test]# ll total 2101312 -rw-r--r-- 1 root root 0 2008-02-07 23:25 test2.dd -rw-r--r-- 1 root root 0 2008-02-07 23:42 test3.dd -rw-r--r-- 1 root root 1073741824 2008-02-07 22:54 test.dd Files that have data just go zero size after an umount and remount. I've tried a variety of file sizes and tried it with file containing data as well (not all zeros). This worked under the RHEL kernels, so is there something I'm doing wrong ? Both systems are running cman and are a quorate 2 node cluster (where the second node doesn't exist). At work it's a 1TB shared filesystem but here at home it's just a local disk, so there's nothing else with any access to it. If someone could maybe point out what I'm doing wrong I'd appreciate it, or just let me know this won't work for whatever reason. I haven't even touched on getting the GFS1 modules to build into this. Thanks, Glen From swhiteho at redhat.com Fri Feb 8 09:30:59 2008 From: swhiteho at redhat.com (Steven Whitehouse) Date: Fri, 08 Feb 2008 09:30:59 +0000 Subject: [Linux-cluster] GFS2 loses data under kernel 2.6.24... In-Reply-To: <1202446917.5942.19.camel@eclipse.office.r-networks.net> References: <1202446917.5942.19.camel@eclipse.office.r-networks.net> Message-ID: <1202463059.22038.421.camel@quoit> Hi, Does this patch fix it for you? http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=9656b2c14c6ee0806c90a6be41dec71117fc8f50 or you can just upgrade to the latest upstream Linus kernel. It was a result of the write_end function not working in exactly the same way as the older commit_write used to, Steve. 
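[Editor's note: Glen's reproduction boils down to "does a file's size survive an unmount/remount cycle", which can be distilled into a small regression check. This is a sketch: a plain scratch directory stands in for the GFS2 mount point so the skeleton runs anywhere, and on a real test mount you would point it at the filesystem and re-enable the commented umount/mount pair — the step that exposed the 2.6.24 write_end regression.]

```shell
#!/bin/sh
# Regression check distilled from the dd/umount/mount/ll transcript:
# write a file, cycle the mount, and verify the size survived. A plain
# directory is used here so the skeleton runs anywhere; on an actual
# GFS2 test box, pass the mount point and uncomment the remount pair.
check_size_survives() {
    mnt=$1
    mb=$2
    f=$mnt/remount-probe.dd
    dd if=/dev/zero of="$f" bs=1048576 count="$mb" 2>/dev/null || return 1
    sync
    # umount "$mnt" && mount "$mnt"   # remount cycle on a real filesystem
    actual=$(stat -c %s "$f")
    rm -f "$f"
    [ "$actual" -eq $((mb * 1048576)) ]
}
```

Run as e.g. `check_size_survives /mnt/test 512`; a zero exit means the data survived the cycle.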
On Fri, 2008-02-08 at 00:01 -0500, Glen Dosey wrote: > I experienced this today at work on a RHEL5 system and have verified it > today at home on Fedora 8. Perhaps I am doing something foolish .... > > I have a fully patched RHEL5 x86_64 system which works fine with the Red > Hat supplied cluster stuff, except the NFS server performance is abysmal > (~640Mb/s NFS). After pulling my hair trying to fix NFS I decided to > just grab the latest kernel which fixed the problem (~980Mb/s NFS). But > it introduced another much more serious problem, which I've duplicated > on my FC8 x86_64 system at home. > > I already have all the cman/clvmd/openais/gfs[2]-utils packages > installed through the package manager. I downloaded kernel 2.6.24 from > kernel.org and did a straight `make -j4 rpm ` and installed the > resulting rpm in both instances. Both systems worked fine with > RHEL/Fedora kernels, but here's what happens under 2.6.24 > > [root at eclipse test]# dd if=/dev/zero of=test3.dd bs=512M count=1 > 1+0 records in > 1+0 records out > 536870912 bytes (537 MB) copied, 7.95285 s, 67.5 MB/s > [root at eclipse test]# ll > total 2101312 > -rw-r--r-- 1 root root 0 2008-02-07 23:25 test2.dd > -rw-r--r-- 1 root root 536870912 2008-02-07 23:42 test3.dd > -rw-r--r-- 1 root root 1073741824 2008-02-07 22:54 test.dd > [root at eclipse test]# cd .. > [root at eclipse mnt]# umount /mnt/test/ > [root at eclipse mnt]# mount /mnt/test/ > [root at eclipse mnt]# mount | grep test > /dev/mapper/disk00-test on /mnt/test type gfs2 > (rw,hostdata=jid=0:id=524289:first=1) > [root at eclipse mnt]# cd /mnt/test/ > [root at eclipse test]# ll > total 2101312 > -rw-r--r-- 1 root root 0 2008-02-07 23:25 test2.dd > -rw-r--r-- 1 root root 0 2008-02-07 23:42 test3.dd > -rw-r--r-- 1 root root 1073741824 2008-02-07 22:54 test.dd > > Files that have data just go zero size after an umount and remount. I've > tried a variety of file sizes and tried it with file containing data as > well (not all zeros). 
This worked under the RHEL kernels, so is there > something I'm doing wrong? > > Both systems are running cman and are a quorate 2-node cluster (where > the second node doesn't exist). At work it's a 1TB shared filesystem but > here at home it's just a local disk, so there's nothing else with any > access to it. > > If someone could maybe point out what I'm doing wrong I'd appreciate it, > or just let me know this won't work for whatever reason. I haven't even > touched on getting the GFS1 modules to build into this. > > Thanks, > Glen > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster From celso at webbertek.com.br Fri Feb 8 13:18:20 2008 From: celso at webbertek.com.br (Celso K. Webber) Date: Fri, 8 Feb 2008 11:18:20 -0200 Subject: [Linux-cluster] RHCS 5.1 latest packages, 2-node cluster, doesn't come up with only 1 node Message-ID: <20080208130449.M21432@webbertek.com.br> Hello all, I'm having a situation here that might be a bug, or maybe it's some mistake on my part. * Scenario: 2-node cluster on Dell PE-2950 servers, Dell MD-3000 storage (SAS direct-attach), using IPMI-Lan as fencing devices, 2 NICs on each server (public and heartbeat networks), using Qdisk in the shared storage * Problem: if I shut down one node and keep it shut down, and then reboot the other node, although CMAN comes up after 5 minutes or so, rgmanager does not start. I remember having this same problem with RHCS 4.4, but it was solved by upgrading to 4.5. But with RHCS 4.4 CMAN didn't come up; with my setup in RHCS 5.1, CMAN comes up after giving up waiting for the other node, but rgmanager doesn't, so services do not get started. This is bad in an unattended situation.
Here are some steps and details I've collected from the machine (sorry for such a long message): * Shutdown node1 * Reboot node2 - after boot, took around 5 minutes in the "start fencing" message - reported a startup FAIL for the "cman" service after this period of time * Boot completed * Logged in: - clustat reported inquorate and quorum disk as "offline": [root at mrp02 ~]# clustat msg_open: No such file or directory Member Status: Inquorate Member Name ID Status ------ ---- ---- ------ node1 1 Offline node2 2 Online, Local /dev/sdc1 0 Offline * After a few seconds, clustat reported quorate and quorum disk as "online": [root at mrp02 ~]# clustat msg_open: No such file or directory Member Status: Quorate Member Name ID Status ------ ---- ---- ------ node1 1 Offline node2 2 Online, Local /dev/sdc1 0 Online, Quorum Disk * Logs in /var/log/messages showed that after qdiskd assumed "master role", cman reported regaining quorum: Feb 7 20:06:59 mrp02 qdiskd[5854]: Assuming master role Feb 7 20:07:00 mrp02 ccsd[5694]: Cluster is not quorate. Refusing connection. Feb 7 20:07:00 mrp02 ccsd[5694]: Error while processing connect: Connection refused Feb 7 20:07:00 mrp02 ccsd[5694]: Cluster is not quorate. Refusing connection. Feb 7 20:07:00 mrp02 ccsd[5694]: Error while processing connect: Connection refused Feb 7 20:07:00 mrp02 openais[5714]: [CMAN ] quorum regained, resuming activity Feb 7 20:07:01 mrp02 clurgmgrd[7523]: Quorum formed, starting ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -> Note that rgmanager started after quorum was regained, but seemed not to work anymore later on (please see below).
Feb 7 20:07:01 mrp02 kernel: dlm: no local IP address has been set Feb 7 20:07:01 mrp02 kernel: dlm: cannot start dlm lowcomms -107 * Noticed that in "clustat" there was an error message: -> msg_open: No such file or directory * Checked rgmanager to see if it was related: [root at mrp02 ~]# chkconfig --list rgmanager rgmanager 0:off 1:off 2:on 3:on 4:on 5:on 6:off [root at mrp02 ~]# service rgmanager status clurgmgrd dead but pid file exists * Since rgmanager did not come back by itself, restarted it manually: [root at mrp02 init.d]# service rgmanager restart Starting Cluster Service Manager: dlm: Using TCP for communications [ OK ] * This time clustat did not show the "msg_open" error anymore: [root at mrp02 init.d]# clustat Member Status: Quorate Member Name ID Status ------ ---- ---- ------ node1 1 Offline node2 2 Online, Local /dev/sdc1 0 Online, Quorum Disk * It seems to me that in case of cman regaining quorum after a lost quorum, at least in an initial "no quorum" state, rgmanager is not "woken up" * This setup had no services configured, so I repeated the test configuring a simple start/stop/status service using the "crond" service as an example; same results * Copy /etc/cluster/cluster.conf: -> Notice: I'm using Qdiskd with "always ok" heuristics, since the customer does not have an "always-on" IP tiebreaker device to use with a "ping" command as heuristics. Could someone tell me whether this is expected behaviour? Shouldn't rgmanager start up automatically in this case? Thank you all, Celso. -- *Celso Kopp Webber* celso at webbertek.com.br *Webbertek - Opensource Knowledge* (41) 8813-1919 - mobile (41) 4063-8448, extension 102 - landline -- This message was checked by the antivirus system and is believed to be free of danger. From celso at webbertek.com.br Fri Feb 8 13:25:09 2008 From: celso at webbertek.com.br (Celso K.
Webber)
Date: Fri, 8 Feb 2008 11:25:09 -0200
Subject: [Linux-cluster] RHCS 5.1 latest packages, 2-node cluster, doesn't come up with only 1 node
In-Reply-To: <20080208130449.M21432@webbertek.com.br>
References: <20080208130449.M21432@webbertek.com.br>
Message-ID: <20080208132257.M22787@webbertek.com.br>

Hello,

I forgot to add some versioning information from the Cluster packages, here
they are:

* Main cluster packages:
cman-2.0.73-1.el5_1.1.x86_64.rpm
openais-0.80.3-7.el5.x86_64.rpm
perl-Net-Telnet-3.03-5.noarch.rpm

* Admin tools packages:
Cluster_Administration-en-US-5.1.0-7.noarch.rpm
cluster-cim-0.10.0-5.el5_1.1.x86_64.rpm
cluster-snmp-0.10.0-5.el5_1.1.x86_64.rpm
luci-0.10.0-6.el5.x86_64.rpm
modcluster-0.10.0-5.el5_1.1.x86_64.rpm
rgmanager-2.0.31-1.el5.x86_64.rpm
ricci-0.10.0-6.el5.x86_64.rpm
system-config-cluster-1.0.50-1.3.noarch.rpm
tog-pegasus-2.6.1-2.el5_1.1.*.rpm
oddjob-*.rpm

Thank you,

Celso.

On Fri, 8 Feb 2008 11:18:20 -0200, Celso K. Webber wrote
> Hello all,
>
> I'm having a situation here that might be a bug, or maybe it's some mistake
> on my part.
>
> * Scenario: 2-node cluster on Dell PE-2950 servers, Dell MD-3000
> storage (SAS direct-attach), using IPMI-Lan as fencing devices, 2
> NICs on each server (public and heartbeat networks), using Qdisk in
> the shared storage
>
> * Problem: if I shutdown one node and keep it shut down, and then
> reboot the other node, although CMAN comes up after 5 minutes or so,
> rgmanager does not start.
>
> I remember having this same problem with RHCS 4.4, but it was solved
> by upgrading to 4.5. But with RHCS 4.4 CMAN didn't come up; with my
> setup in RHCS 5.1 CMAN comes up after giving up waiting for the other
> node, but rgmanager doesn't, so services do not get started. This is
> bad in an unattended situation.
>
> Here are some steps and details I've collected from the machine
> (sorry for a so long message):
>
> * Shutdown node1
>
> * Reboot node2
> - after boot, took around 5 minutes in the "start fencing" message
> - reported a startup FAIL for the "cman" service after this period
> of time
>
> * Boot completed
>
> * Logged in:
> - clustat reported inquorate and quorum disk as "offline":
> [root at mrp02 ~]# clustat
> msg_open: No such file or directory
> Member Status: Inquorate
>
> Member Name                  ID   Status
> ------ ----                  ---- ------
> node1                        1    Offline
> node2                        2    Online, Local
> /dev/sdc1                    0    Offline
>
> * After a few seconds, clustat reported quorate and quorum disk as "online":
> [root at mrp02 ~]# clustat
> msg_open: No such file or directory
> Member Status: Quorate
>
> Member Name                  ID   Status
> ------ ----                  ---- ------
> node1                        1    Offline
> node2                        2    Online, Local
> /dev/sdc1                    0    Online, Quorum Disk
>
> * Logs in /var/log/messages showed that after qdiskd assumed "master
> role", cman reported regaining quorum:
> Feb 7 20:06:59 mrp02 qdiskd[5854]: Assuming master role
> Feb 7 20:07:00 mrp02 ccsd[5694]: Cluster is not quorate. Refusing connection.
> Feb 7 20:07:00 mrp02 ccsd[5694]: Error while processing connect: Connection refused
> Feb 7 20:07:00 mrp02 ccsd[5694]: Cluster is not quorate. Refusing connection.
> Feb 7 20:07:00 mrp02 ccsd[5694]: Error while processing connect: Connection refused
> Feb 7 20:07:00 mrp02 openais[5714]: [CMAN ] quorum regained, resuming activity
> Feb 7 20:07:01 mrp02 clurgmgrd[7523]: Quorum formed, starting
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> -> Note that rgmanager started after quorum was regained, but
> seemed not to work anymore later on (please see below).
> Feb 7 20:07:01 mrp02 kernel: dlm: no local IP address has been set
> Feb 7 20:07:01 mrp02 kernel: dlm: cannot start dlm lowcomms -107
>
> * Noticed that there was an error message in "clustat":
> -> msg_open: No such file or directory
>
> * Checked rgmanager to see if it was related:
> [root at mrp02 ~]# chkconfig --list rgmanager
> rgmanager 0:off 1:off 2:on 3:on 4:on 5:on 6:off
> [root at mrp02 ~]# service rgmanager status
> clurgmgrd dead but pid file exists
>
> * Since rgmanager did not come back by itself, restarted it manually:
> [root at mrp02 init.d]# service rgmanager restart
> Starting Cluster Service Manager: dlm: Using TCP for communications
> [ OK ]
>
> * This time clustat did not show the "msg_open" error anymore:
> [root at mrp02 init.d]# clustat
> Member Status: Quorate
>
> Member Name                  ID   Status
> ------ ----                  ---- ------
> node1                        1    Offline
> node2                        2    Online, Local
> /dev/sdc1                    0    Online, Quorum Disk
>
> * It seems to me that in case of cman regaining quorum after a lost
> quorum, at least in an initial "no quorum" state, rgmanager is not
> "woken up"
>
> * This setup had no services configured, so I repeated the test
> configuring a simple start/stop/status service using the "crond"
> service as an example, same results
>
> * Copy of /etc/cluster/cluster.conf:
> -> Notice: I'm using Qdiskd with "always ok" heuristics, since the
> customer does not have an "always-on" IP tiebreaker device to use with
> a "ping" command as heuristics.
>
> [The attached cluster.conf was mangled by the list archiver, which
> stripped the XML element names. The recoverable attributes were:
> cluster config_version="4" name="clu_mrp"; a quorum-disk entry
> label="clu_mrp" min_score="1" tko="30" votes="1" with a heuristic
> interval="2" program="/bin/true" score="1"; and two IPMI fence
> devices, login="root" name="node1-ipmi" / name="node2-ipmi",
> passwd="xxx".]
>
> Could someone tell me whether this is expected behaviour? Shouldn't
> rgmanager start up automatically in this case?
>
> Thank you all,
>
> Celso.
>
> --
> *Celso Kopp Webber*
>
> celso at webbertek.com.br
>
> *Webbertek - Opensource Knowledge*
> (41) 8813-1919 - celular
> (41) 4063-8448, ramal 102 - fixo
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

--
*Celso Kopp Webber*

celso at webbertek.com.br

*Webbertek - Opensource Knowledge*
(41) 8813-1919 - celular
(41) 4063-8448, ramal 102 - fixo

From lhh at redhat.com Fri Feb 8 15:12:57 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Fri, 08 Feb 2008 10:12:57 -0500
Subject: [Linux-cluster] 2-node tie-breaking
In-Reply-To: <20080207230727.GE63223@widexs.nl>
References: <1202412937.2938.55.camel@localhost.localdomain> <47AB71D2.2040201@bobich.net> <1202421366.2938.70.camel@localhost.localdomain> <47AB840C.8040905@bobich.net> <20080207223951.GB15215@localhost> <20080207230727.GE63223@widexs.nl>
Message-ID: <1202483577.21504.96.camel@ayanami.boston.devel.redhat.com>

On Fri, 2008-02-08 at 00:07 +0100, Peter Bosgraaf wrote:
> Hi,
>
> On Thu, Feb 07, 2008 at 11:39:51PM +0100, Jakub Suchy wrote:
> > >> Well, here's a cheap 'out' ;)
> > >> Ebay item # 250213910258
> > >> 8 port WTI NPS for $70 + $15 s/h.
> > >
> > > Indeed, that would be tempting if it wasn't on the wrong side of the
> > > atlantic. A more local search for similar things doesn't seem to come up
> > > with anything. :-(
>
> *Don't flame me, I didn't want to spam the list with commercial stuff but
> thought this might help you out*
>
> If you're interested in a power switch solution I would recommend looking
> at the solutions from www.lubenco.nl
>
> They have 8 port (C13) 10 amp power switches with load monitoring and
> snmp configuration support for about 117 euro a piece.
> (new) If you're willing to buy in bulk the price even drops well below
> 100 euro.

Guys, this is good stuff - the problem is we need to make an agent that
can talk to them before we can really solve Gordan's problem. Does
anyone want to volunteer? I'll write up a fence agent API "howto"
today ...

-- Lon

From johannes.russek at io-consulting.net Fri Feb 8 15:15:51 2008
From: johannes.russek at io-consulting.net (jr)
Date: Fri, 08 Feb 2008 16:15:51 +0100
Subject: [Linux-cluster] 2-node tie-breaking
In-Reply-To: <1202483577.21504.96.camel@ayanami.boston.devel.redhat.com>
References: <1202412937.2938.55.camel@localhost.localdomain> <47AB71D2.2040201@bobich.net> <1202421366.2938.70.camel@localhost.localdomain> <47AB840C.8040905@bobich.net> <20080207223951.GB15215@localhost> <20080207230727.GE63223@widexs.nl> <1202483577.21504.96.camel@ayanami.boston.devel.redhat.com>
Message-ID: <1202483751.3388.30.camel@admc.win-rar.local>

> Guys, this is good stuff - the problem is we need to make an agent that
> can talk to them before we can really solve Gordan's problem. Does
> anyone want to volunteer? I'll write up a fence agent API "howto"
> today ...
>
> -- Lon

Well, if there were any kind of documentation on their web page that told
how to control it, it'd be simple, but there just isn't (or I couldn't
find any).
johannes

>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

From gordan at bobich.net Fri Feb 8 15:32:27 2008
From: gordan at bobich.net (gordan at bobich.net)
Date: Fri, 8 Feb 2008 15:32:27 +0000 (GMT)
Subject: [Linux-cluster] 2-node tie-breaking
In-Reply-To: <1202483577.21504.96.camel@ayanami.boston.devel.redhat.com>
References: <1202412937.2938.55.camel@localhost.localdomain> <47AB71D2.2040201@bobich.net> <1202421366.2938.70.camel@localhost.localdomain> <47AB840C.8040905@bobich.net> <20080207223951.GB15215@localhost> <20080207230727.GE63223@widexs.nl> <1202483577.21504.96.camel@ayanami.boston.devel.redhat.com>
Message-ID:

On Fri, 8 Feb 2008, Lon Hohberger wrote:

>>>>> Well, here's a cheap 'out' ;)
>>>>> Ebay item # 250213910258
>>>>> 8 port WTI NPS for $70 + $15 s/h.
>>>>
>>>> Indeed, that would be tempting if it wasn't on the wrong side of the
>>>> atlantic. A more local search for similar things doesn't seem to come up
>>>> with anything. :-(
>>
>> *Don't flame me, i didn't want to spam the list with commerial stuff but
>> thought this might help you out*
>>
>> If your interested in a powerswitch solution i would recommend you to look
>> at the solutions from www.lubenco.nl
>>
>> They have 8 port (C13) 10 amp powerswitches with load monitoring and
>> snmp configuration support for about 117 euro a pice. (new) If your
>> willing to buy in bulk the price even drops well below 100 euro.
>
> Guys, this is good stuff - the problem is we need to make an agent that
> can talk to them before we can really solve Gordan's problem. Does
> anyone want to volunteer? I'll write up a fence agent API "howto"
> today ...

Thanks for all this, guys, much appreciated. :-)

I just had another thought. Most motherboards these days are ATX, which
means ATX type "short-pins-to-power-on" power switches.
That means that as a _REALLY_ cheap solution I could just get something
like a small relay switch and wire it into the serial port. When a pin on
RS232 goes high (e.g. DTR), it activates the switch. I think it would be
pretty reliable, and the total cost of components would be pennies. The
fence agent would also be a total of about 10 lines of code, too. :)

Can anyone think of why this wouldn't work? OK, it involves a bit of DIY,
but it would work for a 2-node cluster. With more nodes, pulsing the power
switch repeatedly might bring the node back up, which would be less than
ideal if it's _really_ nastily intermittent. But since this isn't really
scalable to clusters with more than 3 nodes (2 serial ports per node), it
probably doesn't matter.

Gordan

From peter at widexs.nl Fri Feb 8 15:42:17 2008
From: peter at widexs.nl (Peter Bosgraaf)
Date: Fri, 8 Feb 2008 16:42:17 +0100
Subject: [Linux-cluster] 2-node tie-breaking
In-Reply-To: <1202483751.3388.30.camel@admc.win-rar.local>
References: <1202412937.2938.55.camel@localhost.localdomain> <47AB71D2.2040201@bobich.net> <1202421366.2938.70.camel@localhost.localdomain> <47AB840C.8040905@bobich.net> <20080207223951.GB15215@localhost> <20080207230727.GE63223@widexs.nl> <1202483577.21504.96.camel@ayanami.boston.devel.redhat.com> <1202483751.3388.30.camel@admc.win-rar.local>
Message-ID: <20080208154217.GA68659@widexs.nl>

Hi,

On Fri, Feb 08, 2008 at 04:15:51PM +0100, jr wrote:
> well, if there would be any kind of documentation on their webpage that
> tells anything about how to control it, it'd be simple.. but there just
> isn't (or i couldn't find).
> johannes

They're a small company, which explains the rather sober website. I could
mail you the MIB if you're interested. Also I'm sure they would be willing
to send you a sample to test-drive for a while; I know we got one when we
asked for it.
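[For illustration, the DTR-relay agent Gordan sketches above might look
roughly like this. Everything concrete here is an assumption, not code from
the thread: the device path, the 5-second hold, the relay wiring (DTR high
shorts the ATX power-switch pins), and the key=value-on-stdin argument
convention used by fenced's agents.]

```python
#!/usr/bin/env python
# Hypothetical sketch of a serial-relay fence agent -- not a shipped agent.
# Assumes a relay wired across the victim's ATX power-switch pins, driven
# by the DTR line of a local serial port.
import fcntl
import os
import struct
import termios
import time


def parse_fence_args(lines):
    """Parse the key=value pairs that fenced writes to an agent's stdin."""
    opts = {}
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, value = line.split("=", 1)
            opts[key.strip()] = value.strip()
    return opts


def pulse_dtr(device, hold_seconds=5.0):
    """Raise DTR on the serial port, hold it, then drop it again.

    With the relay across the ATX power pins, holding the "button" for a
    few seconds forces the victim node off.
    """
    fd = os.open(device, os.O_RDWR | os.O_NOCTTY | os.O_NONBLOCK)
    try:
        dtr = struct.pack("I", termios.TIOCM_DTR)
        fcntl.ioctl(fd, termios.TIOCMBIS, dtr)  # set DTR   -> relay closes
        time.sleep(hold_seconds)
        fcntl.ioctl(fd, termios.TIOCMBIC, dtr)  # clear DTR -> relay opens
    finally:
        os.close(fd)


def main(stream):
    opts = parse_fence_args(stream)
    action = opts.get("action", "reboot")
    if action not in ("off", "reboot"):
        return 1  # a plain relay cannot implement "on"
    pulse_dtr(opts.get("port", "/dev/ttyS0"))
    return 0

# fenced would invoke it roughly as:
#   echo -e "port=/dev/ttyS0\naction=off" | ./fence_relay
# (main() is deliberately not called at import time)
```

The parsing half is ordinary; the hardware half is the whole risk, since a
stuck relay or a rebooting agent host would pulse the power button again,
exactly the failure mode Gordan mentions for larger clusters.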
Cheers,

--
Peter Bosgraaf // Hosting Architect // WideXS
PBG666-RIPE // T: 023-5698070 // F: 023-5698099
PGP KeyID: 0xB436C25B (hkp://pgp.mit.edu)
Fingerprint: 49D1 E1EE 3142 31F2 BDF0 CBBF 9675 8267 B436 C25B

From jdozarchuk at babcock.com Fri Feb 8 17:47:26 2008
From: jdozarchuk at babcock.com (Ozarchuk, John D)
Date: Fri, 8 Feb 2008 12:47:26 -0500
Subject: [Linux-cluster] NFS clustering - how to use fsid= options for the exports
References: <3DDA6E3E456E144DA3BB0A62A7F7F77901CACD9D@SKYHQAMX08.klasi.is>
Message-ID: <89FEB3FDB047394C998E7A0077715A670DCB66@barbpo4.bwes.net>

I have a similar setup, but I just set a Service to manage the IP address
and filesystems. It's not that difficult to manage two /etc/exports files...

________________________________

From: linux-cluster-bounces at redhat.com on behalf of Finnur Örn Guðmundsson - TM Software
Sent: Thu 2/7/2008 12:24 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] NFS clustering - how to use fsid= options for the exports

Hi,

I have set up an active-standby NFS cluster. Everything is working like it
should, except for one thing. In HP ServiceGuard you use /etc/exports on
each node in an active-standby cluster to manage exports, and there you can
use fsid=X for each filesystem you want to export (it helps with stale NFS
errors after a failover). In RHCS I cannot seem to get the server to
correctly use the options ... Here is my service:

Would this not be the correct way? Is there another way to do this except
for manually using /etc/exports and just having RHCS manage the IP address
transfer between the nodes?

Kær kveðja / Best Regards,
Finnur Örn Guðmundsson
Network Engineer - Network Operations
fog at t.is
TM Software
Urðarhvarf 6, IS-203 Kópavogur, Iceland
Tel: +354 545 3000 - fax +354 545 3610
www.tm-software.is

This e-mail message and any attachments are confidential and may be privileged.
TM Software e-mail disclaimer: www.tm-software.is/disclaimer

-----------------------------------------
This message is intended only for the individual or entity to which it is
addressed and contains information that is proprietary to The Babcock &
Wilcox Company and/or its affiliates, or may be otherwise confidential. If
the reader of this message is not the intended recipient, or the employee
agent responsible for delivering the message to the intended recipient, you
are hereby notified that any dissemination, distribution or copying of this
communication is strictly prohibited. If you have received this
communication in error, please notify the sender immediately by return
e-mail and delete this message from your computer. Thank you.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: winmail.dat
Type: application/ms-tnef
Size: 7097 bytes
Desc: not available
URL:

From fog at t.is Fri Feb 8 17:56:58 2008
From: fog at t.is (Finnur Örn Guðmundsson - TM Software)
Date: Fri, 8 Feb 2008 17:56:58 -0000
Subject: [Linux-cluster] NFS clustering - how to use fsid= options for the exports
In-Reply-To: <89FEB3FDB047394C998E7A0077715A670DCB66@barbpo4.bwes.net>
References: <3DDA6E3E456E144DA3BB0A62A7F7F77901CACD9D@SKYHQAMX08.klasi.is> <89FEB3FDB047394C998E7A0077715A670DCB66@barbpo4.bwes.net>
Message-ID: <3DDA6E3E456E144DA3BB0A62A7F7F77901CACFDA@SKYHQAMX08.klasi.is>

Exactly, that is what I ended up with!

Thanks though,
Finnur

From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Ozarchuk, John D
Sent: 8. febrúar 2008 17:47
To: linux clustering
Subject: RE: [Linux-cluster] NFS clustering - how to use fsid= options for the exports

I have a similar setup, but I just set a Service to manage the IP address
and filesystems. It's not that difficult to manage two /etc/exports files...
________________________________

From: linux-cluster-bounces at redhat.com on behalf of Finnur Örn Guðmundsson - TM Software
Sent: Thu 2/7/2008 12:24 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] NFS clustering - how to use fsid= options for the exports

Hi,

I have set up an active-standby NFS cluster. Everything is working like it
should, except for one thing. In HP ServiceGuard you use /etc/exports on
each node in an active-standby cluster to manage exports, and there you can
use fsid=X for each filesystem you want to export (it helps with stale NFS
errors after a failover). In RHCS I cannot seem to get the server to
correctly use the options ... Here is my service:

Would this not be the correct way? Is there another way to do this except
for manually using /etc/exports and just having RHCS manage the IP address
transfer between the nodes?

Kær kveðja / Best Regards,
Finnur Örn Guðmundsson
Network Engineer - Network Operations
fog at t.is
TM Software
Urðarhvarf 6, IS-203 Kópavogur, Iceland
Tel: +354 545 3000 - fax +354 545 3610
www.tm-software.is

This e-mail message and any attachments are confidential and may be privileged.
TM Software e-mail disclaimer: www.tm-software.is/disclaimer

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lhh at redhat.com Fri Feb 8 21:15:36 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Fri, 08 Feb 2008 16:15:36 -0500
Subject: [Linux-cluster] RHCS 5.1 latest packages, 2-node cluster, doesn't come up with only 1 node
In-Reply-To: <20080208130449.M21432@webbertek.com.br>
References: <20080208130449.M21432@webbertek.com.br>
Message-ID: <1202505336.6443.87.camel@ayanami.boston.devel.redhat.com>

On Fri, 2008-02-08 at 11:18 -0200, Celso K. Webber wrote:
> Feb 7 20:07:01 mrp02 kernel: dlm: no local IP address has been set
> Feb 7 20:07:01 mrp02 kernel: dlm: cannot start dlm lowcomms -107

This is why rgmanager didn't work (and possibly even exited). Does
'uname -n' match what's in cluster.conf?
-- Lon

From lhh at redhat.com Fri Feb 8 21:19:27 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Fri, 08 Feb 2008 16:19:27 -0500
Subject: [Linux-cluster] RHCS 5.1 latest packages, 2-node cluster, doesn't come up with only 1 node
In-Reply-To: <1202505336.6443.87.camel@ayanami.boston.devel.redhat.com>
References: <20080208130449.M21432@webbertek.com.br> <1202505336.6443.87.camel@ayanami.boston.devel.redhat.com>
Message-ID: <1202505567.6443.91.camel@ayanami.boston.devel.redhat.com>

On Fri, 2008-02-08 at 16:15 -0500, Lon Hohberger wrote:
> On Fri, 2008-02-08 at 11:18 -0200, Celso K. Webber wrote:
> > Feb 7 20:07:01 mrp02 kernel: dlm: no local IP address has been set
> > Feb 7 20:07:01 mrp02 kernel: dlm: cannot start dlm lowcomms -107
>
> This is why rgmanager didn't work (and possibly even exited). Does
> 'uname -n' match what's in cluster.conf?

Hmm - strange that trying to start it again would cause dlm to suddenly
work. I wonder why that would be.

In 4.4, we had a bug in cman where it wasn't passing transitions of the
quorum disk down to the internal CMAN service manager, but that's fixed.
rgmanager literally just sits there waiting for quorum before trying to
acquire the DLM lockspace.

So, it looks like if we stay inquorate for some time, dlm_controld or
groupd doesn't correctly set the local node name.

-- Lon

From lhh at redhat.com Fri Feb 8 21:25:49 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Fri, 08 Feb 2008 16:25:49 -0500
Subject: [Linux-cluster] Two-node cluster unpatched B doesn't see patched A
In-Reply-To: <47AB8156.6010205@hp.com>
References: <47AB8156.6010205@hp.com>
Message-ID: <1202505949.6443.97.camel@ayanami.boston.devel.redhat.com>

On Thu, 2008-02-07 at 17:08 -0500, Sutton, Harry (MSE) wrote:
> On my production cluster, which uses a Quorum Disk in place of the third
> node, I completed steps 1 and 2 on Node A, but the cluster did NOT
> reform.
> cman sends out its advertisement, and I can see that Node B receives it
> (by looking at the tcpdump traces), but Node B never responds.

That's strange; I hardly think it's related to qdiskd.

Maybe check 'cman_tool version' on both and ensure that the cluster.conf
version matches the output of 'cman_tool version' - that's a common way to
prevent the cluster from forming on RHEL4.

If they're out of sync:
* get the right cluster.conf in place & bump the config version #
* run 'cman_tool version -r <new version number>'
* copy cluster.conf to the other node

Note: Restoring the nodes to the same package versions will have utterly
no effect if the config version is out of sync!

-- Lon

From celso at webbertek.com.br Fri Feb 8 21:33:33 2008
From: celso at webbertek.com.br (Celso K. Webber)
Date: Fri, 8 Feb 2008 19:33:33 -0200
Subject: [Linux-cluster] RHCS 5.1 latest packages, 2-node cluster, doesn't come up with only 1 node
In-Reply-To: <1202505336.6443.87.camel@ayanami.boston.devel.redhat.com>
References: <20080208130449.M21432@webbertek.com.br> <1202505336.6443.87.camel@ayanami.boston.devel.redhat.com>
Message-ID: <20080208211828.M67336@webbertek.com.br>

Hi Lon,

On Fri, 08 Feb 2008 16:15:36 -0500, Lon Hohberger wrote
> On Fri, 2008-02-08 at 11:18 -0200, Celso K. Webber wrote:
> > Feb 7 20:07:01 mrp02 kernel: dlm: no local IP address has been set
> > Feb 7 20:07:01 mrp02 kernel: dlm: cannot start dlm lowcomms -107
>
> This is why rgmanager didn't work (and possibly even exited). Does
> 'uname -n' match what's in cluster.conf?

No, it does not! I didn't know it should match; I've been configuring RHCS
clusters since version 2.1 and this never bothered me, sorry!!!
Well, I usually do the following in /etc/hosts:
-> assume network 192.168.1.0/24 is for public access
-> assume network 10.0.0.0/8 is for heartbeat

192.168.1.1 realservername1.domainname realservername1
192.168.1.2 realservername2.domainname realservername2

10.0.0.1 node1.localdomain node1
10.0.0.2 node2.localdomain node2

192.168.1.3 servicename1.domainname servicename1
192.168.1.4 servicename2.domainname servicename2
... and so on for other virtual IPs for services ...

Then I configure in cluster.conf the names associated with the private
addresses/interfaces, so that I'm sure the heartbeat traffic is going
through the correct interfaces.

For obvious reasons, "uname -n" returns the public hostnames, such as
realservername1.domainname.

I noticed that for some time there has been a question in the FAQ
explaining how to "bind" the heartbeat traffic to a specific
interface/address. But I was happy with my solution, especially because
the answer to that question suggested touching the init script, and I
don't like to alter standard system files, especially init scripts. At
least in RHCS v4, I didn't find a better way to "bind" the heartbeat
traffic to a specific interface. I didn't experiment with this on RHCS v5,
I just went on with my previous method.

This is common practice for me; for instance, Oracle Database respects an
environment variable called ORACLE_HOSTNAME, so that you can "instruct"
the several utilities to consider that name instead of the real server's
name. This is very useful in a cluster environment.

Please tell me:
* is it really wrong to set the node names in cluster.conf to a name
different from that reported by "uname -n"?
* if it is "ugly" or considered wrong, what is the best way to instruct
CMAN which interface to use for heartbeat?
* does this solution work both for RHCS v4 and v5?
* would it be better to have only one interface for public and heartbeat
traffic, maybe channel-bonding dual NICs?
* is there any other significant difference between RHCS v4 and v5 I
should be aware of?

As always, thank you very very much for your support!

Regards,

Celso.

--
*Celso Kopp Webber*

celso at webbertek.com.br

*Webbertek - Opensource Knowledge*
(41) 8813-1919 - celular
(41) 4063-8448, ramal 102 - fixo

From harry.sutton at hp.com Fri Feb 8 22:00:08 2008
From: harry.sutton at hp.com (Sutton, Harry (MSE))
Date: Fri, 08 Feb 2008 17:00:08 -0500
Subject: [Linux-cluster] Two-node cluster unpatched B doesn't see patched A
In-Reply-To: <1202505949.6443.97.camel@ayanami.boston.devel.redhat.com>
References: <47AB8156.6010205@hp.com> <1202505949.6443.97.camel@ayanami.boston.devel.redhat.com>
Message-ID: <47ACD0E8.4050603@hp.com>

Lon Hohberger wrote:
> Maybe check 'cman_tool version' on both and ensure that the cluster.conf
> version matches the output of 'cman_tool version' - that's a common way
> to prevent the cluster from forming on RHEL4.

No joy there, Lon - the values match.

/Harry

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 6255 bytes
Desc: S/MIME Cryptographic Signature
URL:

From lhh at redhat.com Fri Feb 8 22:16:49 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Fri, 08 Feb 2008 17:16:49 -0500
Subject: [Linux-cluster] RHCS 5.1 latest packages, 2-node cluster, doesn't come up with only 1 node
In-Reply-To: <20080208211828.M67336@webbertek.com.br>
References: <20080208130449.M21432@webbertek.com.br> <1202505336.6443.87.camel@ayanami.boston.devel.redhat.com> <20080208211828.M67336@webbertek.com.br>
Message-ID: <1202509009.6443.140.camel@ayanami.boston.devel.redhat.com>

On Fri, 2008-02-08 at 19:33 -0200, Celso K. Webber wrote:
> Hi Lon,
>
> On Fri, 08 Feb 2008 16:15:36 -0500, Lon Hohberger wrote
> > On Fri, 2008-02-08 at 11:18 -0200, Celso K.
Webber wrote:
> > > Feb 7 20:07:01 mrp02 kernel: dlm: no local IP address has been set
> > > Feb 7 20:07:01 mrp02 kernel: dlm: cannot start dlm lowcomms -107
> >
> > This is why rgmanager didn't work (and possibly even exited). Does
> > 'uname -n' match what's in cluster.conf?
>
> No, it does not! I didn't know it should match, I'm configuring RHCS Clusters
> since version 2.1 and this never bothered me, sorry!!!
>
> Well, I usually do the following in /etc/hosts:
> -> assume network 192.168.1.0/24 is for public access
> -> assume network 10.0.0.0/8 is for heartbeat
>
> 192.168.1.1 realservername1.domainname realservername1
> 192.168.1.2 realservername2.domainname realservername2
>
> 10.0.0.1 node1.localdomain node1
> 10.0.0.2 node2.localdomain node2
>
> 192.168.1.3 servicename1.domainname servicename1
> 192.168.1.4 servicename2.domainname servicename2
> ... and so on for other virtual IPs for services ...
>
> Then I configure in cluster.conf the names associated with the private
> addresses/interfaces, so that I'm sure that heartbeat traffic is going
> through the correct interfaces.
>
> For obvious reasons, "uname -n" returns the public hostnames, such as
> realservername1.domainname.
>
> I noticed that from some time there is a question in the FAQ explaining how
> to "bind" the heartbeat traffic to a specific interface/address. But I was
> happy with my solution, specially because the answer to that question
> suggested touching the init script, and I don't like to alter standard system
> files, specially init scripts. At least in RHCS v4, I didn't find a better
> way to "bind" the heartbeat traffic to a specific interface. I didn't
> experiment about this with RHCS v5, I just went on with my previous method.
>
> For me this is common practice, for instance, Oracle Database respects an
> environment variable called ORACLE_HOSTNAME, so that you can "instruct" the
> several utilities to consider that name instead of the real server's name.
> This is very useful in a Cluster environment.
>
> Please tell me:
> * is it really wrong to set the node names in cluster.conf to a name
> different from that reported by "uname -n"?
> * if it is "ugly" or considered wrong, what is the best way to instruct
> CMAN which interface to use for heartbeat?

I think it's mostly fixed in RHEL5. We have updated the CMAN init script
for RHEL5 to allow /etc/sysconfig/cluster to have
"NODENAME=preferred_host_name". It will go out with the next update, but
here it is in CVS:

*massive url*
http://sources.redhat.com/cgi-bin/cvsweb.cgi/~checkout~/cluster/cman/init.d/Attic/cman?rev=1.26.2.6&content-type=text/plain&cvsroot=cluster&hideattic=0&only_with_tag=RHEL5

tinyurl: http://tinyurl.com/2fg6nd

It still could be a bug. The dlm being unable to determine the local
hostname is definitely why rgmanager died (it needs the DLM!). Updating
the script / trying to force CMAN to use a specific node name is just one
way to eliminate a possible cause (and it might fix it, too ;) ).

> * does this solution work both for RHCS v4 and v5?

The RHEL5 script is not backwards compatible, but cman_tool join -n is.

> * would it be better to have only one interface for public and heartbeat
> traffic, maybe channel bonding dual NICs?

Better is certainly a matter of perception in this case. I would expect
you'd want to get your current configuration working before altering your
network topology. Also, it's not like your configuration is particularly
strange...
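[A sketch of the /etc/sysconfig/cluster file the updated RHEL5 cman init
script reads, per Lon's note above. The node name value is illustrative,
borrowed from Celso's /etc/hosts example, not something confirmed in this
thread.]

```shell
# /etc/sysconfig/cluster -- read by the updated RHEL5 cman init script
# Make cman join with the heartbeat-network name instead of `uname -n`,
# so cluster.conf node names can stay on the private interface.
NODENAME=node1.localdomain
```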
We do what we can, but please keep in mind that a public mailing list
isn't a very good support forum compared to (for example):

https://www.redhat.com/apps/support/

-- Lon

From marcos at digitaltecnologia.info Sat Feb 9 08:26:22 2008
From: marcos at digitaltecnologia.info (Marcos Ferreira da Silva)
Date: Sat, 9 Feb 2008 03:26:22 -0500
Subject: [Linux-cluster] Fenced failed
Message-ID:

I have two nodes, and many messages appear in the log:

Feb 9 06:22:39 vserver1 ccsd[18304]: process_get: Invalid connection descriptor received.
Feb 9 06:22:39 vserver1 ccsd[18304]: Error while processing get: Invalid request descriptor
Feb 9 06:22:39 vserver1 fenced[18334]: fence "vserver2.teste.br" failed

My cluster.conf

My /etc/hosts:
192.168.200.200 vserver1.teste.br vserver1
192.168.200.201 vserver2.teste.br vserver2

------------------------------
Marcos Ferreira da Silva
Digital Tecnologia
Uberlandia-MG
(34) 9154-0150 / 3226-2534

From marcos at digitaltecnologia.info Sat Feb 9 12:16:46 2008
From: marcos at digitaltecnologia.info (Marcos Ferreira da Silva)
Date: Sat, 9 Feb 2008 07:16:46 -0500
Subject: [Linux-cluster] Node Offline
Message-ID: <4407800e1b774121bb5e2e7209c9f966@Mail61.safesecureweb.com>

I start my cluster, but the nodes don't see each other.

[root at vserv3 ~]# clustat
Member Status: Quorate

  Member Name                  ID   Status
  ------ ----                  ---- ------
  vserv3.uniube.br             1    Online, Local, rgmanager
  vserv4.uniube.br             2    Offline

  Service Name         Owner (Last)         State
  ------- ----         ----- ------         -----
  service:gfsweb       (none)               stopped

[root at vserv4 ~]# clustat
Member Status: Quorate

  Member Name                  ID   Status
  ------ ----                  ---- ------
  vserv3.uniube.br             1    Offline
  vserv4.uniube.br             2    Online, Local, rgmanager

  Service Name         Owner (Last)         State
  ------- ----         ----- ------         -----
  service:gfsweb       (none)               stopped

The firewall was stopped. The network is ok.
------------------------------
Marcos Ferreira da Silva
Digital Tecnologia
Uberlandia-MG
(34) 9154-0150 / 3226-2534

From NiCE2Kn0w at web.de Sat Feb 9 13:51:42 2008
From: NiCE2Kn0w at web.de (Markus Neis)
Date: Sat, 09 Feb 2008 14:51:42 +0100
Subject: [Linux-cluster] gfs locking on samba
Message-ID: <1939546029@web.de>

Hi there,

I found out that when I use Samba on GFS it's very slow unless I use
"locking = no". That makes any transfer really fast, but in some
situations I really need locking. Is there any other possibility or any
parameters I can use instead?

regards

From damjan.stulic at edwardjones.com Sun Feb 10 03:36:30 2008
From: damjan.stulic at edwardjones.com (Stulic,Damjan)
Date: Sat, 9 Feb 2008 21:36:30 -0600
Subject: [Linux-cluster] watchdog: rebooting
In-Reply-To: <1939546029@web.de>
References: <1939546029@web.de>
Message-ID:

Having occasional problems with the cluster watchdog rebooting servers.
Any ideas on why this might be happening, or how to troubleshoot this
problem?
Any help is greatly appreciated, Damjan Stulic IS Security Identity Management Edward Jones Feb 9 21:10:58 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com clurgmgrd[29771]: Stopping service dns-dhcp-services Feb 9 21:10:58 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com clurgmgrd: [29771]: Executing /etc/init.d/named stop Feb 9 21:10:58 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com named: succeeded Feb 9 21:10:58 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com clurgmgrd: [29771]: Executing /etc/init.d/dhcpd stop Feb 9 21:10:58 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com dhcpd: dhcpd shutdown succeeded Feb 9 21:10:58 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com clurgmgrd: [29771]: Removing IPv4 address 172.17.163.80 from bond0 Feb 9 21:11:08 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com clurgmgrd[29771]: Service dns-dhcp-services is stopped Feb 9 21:11:08 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com syslog-ng[3063]: Changing permissions on special file /dev/console Feb 9 21:11:08 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com kernel: clurgmgrd[12018]: segfault at 0000000000000050 rip 000000000040399e rsp 0000000041442010 error4 Feb 9 21:11:12 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com clurgmgrd[29770]: Watchdog: Daemon died, rebooting... If you are not the intended recipient of this message (including attachments), or if you have received this message in error, immediately notify us and delete it and any attachments. If you no longer wish to receive e-mail from Edward Jones, please send this request to messages at edwardjones.com. You must include the e-mail address that you wish not to receive e-mail communications. 
For important additional information related to this e-mail, visit www.edwardjones.com/US_email_disclosure From jpalmae at gmail.com Sun Feb 10 23:58:20 2008 From: jpalmae at gmail.com (Jorge Palma) Date: Sun, 10 Feb 2008 20:58:20 -0300 Subject: [Linux-cluster] watchdog: rebootig In-Reply-To: References: <1939546029@web.de> Message-ID: <5b65f1b10802101558y694a0520p7c1ee62557e429d5@mail.gmail.com> On Feb 10, 2008 12:36 AM, Stulic,Damjan wrote: > Having occasional problems with cluster watchdog rebooting servers. > Any ideas on why this might be helping, or how to troubleshoot this > problem? > > Any help is greatly appreciated, > Damjan Stulic > IS Security Identity Management > Edward Jones > > > Feb 9 21:10:58 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com > clurgmgrd[29771]: Stopping service dns-dhcp-services > Feb 9 21:10:58 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com clurgmgrd: > [29771]: Executing /etc/init.d/named stop > Feb 9 21:10:58 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com named: > succeeded > Feb 9 21:10:58 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com clurgmgrd: > [29771]: Executing /etc/init.d/dhcpd stop > Feb 9 21:10:58 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com dhcpd: dhcpd > shutdown succeeded > Feb 9 21:10:58 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com clurgmgrd: > [29771]: Removing IPv4 address 172.17.163.80 from bond0 > Feb 9 21:11:08 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com > clurgmgrd[29771]: Service dns-dhcp-services is stopped > Feb 9 21:11:08 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com > syslog-ng[3063]: Changing permissions on special file /dev/console > Feb 9 21:11:08 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com kernel: > clurgmgrd[12018]: segfault at 0000000000000050 rip 000000000040399e rsp > 0000000041442010 error4 > Feb 9 21:11:12 s_sys at nlpgbrdns-2.edj.ad.edwardjones.com > clurgmgrd[29770]: Watchdog: Daemon died, rebooting... 
> > If you are not the intended recipient of this message (including attachments), or if you have received this message in error, immediately notify us and delete it and any attachments. If you no longer wish to receive e-mail from Edward Jones, please send this request to messages at edwardjones.com. You must include the e-mail address that you wish not to receive e-mail communications. For important additional information related to this e-mail, visit www.edwardjones.com/US_email_disclosure > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > Are patches a day? -- Jorge Palma Escobar Ingeniero de Sistemas Red Hat Linux Certified Engineer Certificate N? 804005089418233 From jpalmae at gmail.com Mon Feb 11 00:00:33 2008 From: jpalmae at gmail.com (Jorge Palma) Date: Sun, 10 Feb 2008 21:00:33 -0300 Subject: [Linux-cluster] Fenced failed In-Reply-To: References: Message-ID: <5b65f1b10802101600o45b11b23t565f8434efcd34f5@mail.gmail.com> On Feb 9, 2008 5:26 AM, Marcos Ferreira da Silva wrote: > I have two nodes and in the log appear many messages: > > Feb 9 06:22:39 vserver1 ccsd[18304]: process_get: Invalid connection descriptor received. 
> Feb 9 06:22:39 vserver1 ccsd[18304]: Error while processing get: Invalid request descriptor
> Feb 9 06:22:39 vserver1 fenced[18334]: fence "vserver2.teste.br" failed
>
> My cluster.conf
>
> [cluster.conf contents stripped by the list archive]
>
> My /etc/hosts:
>
> 192.168.200.200 vserver1.teste.br vserver1
> 192.168.200.201 vserver2.teste.br vserver2
>
> ------------------------------
> Marcos Ferreira da Silva
> Digital Tecnologia
> Uberlandia-MG
> (34) 9154-0150 / 3226-2534
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>

Attach the fencing agent fence_xvm to the nodes and eliminate fence_manual.

Greetings

--
Jorge Palma Escobar
Ingeniero de Sistemas
Red Hat Linux Certified Engineer
Certificate N° 804005089418233

From qqlka at nask.pl Mon Feb 11 08:55:34 2008
From: qqlka at nask.pl (=?iso-8859-2?Q?Agnieszka_Kuka=B3owicz?=)
Date: Mon, 11 Feb 2008 09:55:34 +0100
Subject: [Linux-cluster] Fence_xvmd/fence_xvm problem
Message-ID: <0ba401c86c8b$dd4b9070$0777b5c2@gda07ak>

Hi,

I was trying to configure Xen guests as virtual services under Cluster Suite. My configuration is simple:

Node one "d1" runs xen guest as virtual service "vm_service1", and node two "d2" runs virtual service "vm_service2". The /etc/cluster/cluster.conf file is below:

On guests "vm_service1" and "vm_service2" I have configured a second cluster.

The problem is that the fence_xvmd/fence_xvm mechanism doesn't work, probably due to a misconfiguration of multicast.

Physical nodes "d1" and "d2" and xen guests "vm_service1" and "vm_service2" have two ethernet interfaces: private - 10.0.200.x (eth0) and public (eth1).
On the physical nodes, the "fence_xvmd" daemon listens by default on the eth1 interface:

[root at d2 ~]# netstat -g
IPv6/IPv4 Group Memberships
Interface RefCnt Group
--------------- ------ ---------------------
lo 1 ALL-SYSTEMS.MCAST.NET
eth0 1 225.0.0.1
eth0 1 ALL-SYSTEMS.MCAST.NET
eth1 1 225.0.0.12
eth1 1 ALL-SYSTEMS.MCAST.NET
virbr0 1 ALL-SYSTEMS.MCAST.NET
lo 1 ff02::1
..

Next, when I run a test from xen guest "vm_service1" to fence guest "vm_service2", I get:

[root at d11 cluster]# /sbin/fence_xvm -H d12 -ddddd
Debugging threshold is now 5
-- args @ 0xbf8aea70 --
args->addr = 225.0.0.12
args->domain = d12
args->key_file = /etc/cluster/fence_xvm.key
args->op = 2
args->hash = 2
args->auth = 2
args->port = 1229
args->family = 2
args->timeout = 30
args->retr_time = 20
args->flags = 0
args->debug = 5
-- end args --
Reading in key file /etc/cluster/fence_xvm.key into 0xbf8ada1c (4096 max size)
Actual key length = 4096 bytes
Opening /dev/urandom
Sending to 225.0.0.12 via 127.0.0.1
Opening /dev/urandom
Sending to 225.0.0.12 via X.X.X.X
Opening /dev/urandom
Sending to 225.0.0.12 via 10.0.200.124
Waiting for connection from XVM host daemon.
..
Waiting for connection from XVM host daemon.
Timed out waiting for response

On the node "d2" where "vm_service2" is running I get:

[root at d2 ~]# /sbin/fence_xvmd -fddd
Debugging threshold is now 3
-- args @ 0xbfc54e3c --
args->addr = 225.0.0.12
args->domain = (null)
args->key_file = /etc/cluster/fence_xvm.key
args->op = 2
args->hash = 2
args->auth = 2
args->port = 1229
args->family = 2
args->timeout = 30
args->retr_time = 20
args->flags = 1
args->debug = 3
-- end args --
Reading in key file /etc/cluster/fence_xvm.key into 0xbfc53e3c (4096 max size)
Actual key length = 4096 bytes
Opened ckpt vm_states
My Node ID = 1
Domain        UUID                                 Owner State
------        ----                                 ----- -----
Domain-0      00000000-0000-0000-0000-000000000000 00001 00001
vm_service2   2dd8193f-e4d4-f41c-a4af-f5b30d19fe00 00001 00001
Storing vm_service2
Domain        UUID                                 Owner State
------        ----                                 ----- -----
Domain-0      00000000-0000-0000-0000-000000000000 00001 00001
vm_service2   2dd8193f-e4d4-f41c-a4af-f5b30d19fe00 00001 00001
Storing vm_service2
Request to fence: d12.
Evaluating Domain: d12   Last Owner/State Unknown
Domain        UUID                                 Owner State
------        ----                                 ----- -----
Domain-0      00000000-0000-0000-0000-000000000000 00001 00001
vm_service2   2dd8193f-e4d4-f41c-a4af-f5b30d19fe00 00001 00001
Storing vm_service2
Request to fence: d12
Evaluating Domain: d12   Last Owner/State Unknown

So it looks like fence_xvmd and fence_xvm cannot communicate with each other. But "fence_xvm" on "vm_service1" sends multicast packets through all interfaces and node "d2" can receive them.
Tcpdump on node "d2" shows that node "d2" receives the packets:

[root at d2 ~]# tcpdump -i peth0 -n host 225.0.0.12
listening on peth0, link-type EN10MB (Ethernet), capture size 96 bytes
17:50:47.972477 IP 10.0.200.124.filenet-pch > 225.0.0.12.novell-zfs: UDP, length 176
17:50:49.960841 IP 10.0.200.124.filenet-pch > 225.0.0.12.novell-zfs: UDP, length 176
17:50:51.977425 IP 10.0.200.124.filenet-pch > 225.0.0.12.novell-zfs: UDP, length 176

[root at d2 ~]# tcpdump -i peth1 -n host 225.0.0.12
listening on peth1, link-type EN10MB (Ethernet), capture size 96 bytes
17:51:26.168132 IP X.X.X.X.filenet-pch > 225.0.0.12.novell-zfs: UDP, length 176
17:51:28.184802 IP X.X.X.X.filenet-pch > 225.0.0.12.novell-zfs: UDP, length 176
17:51:30.196875 IP X.X.X.X.filenet-pch > 225.0.0.12.novell-zfs: UDP, length 176

But I can't see node "d2" send anything back to xen guest "vm_service1", so "fence_xvm" times out. What am I doing wrong?

Cheers

Agnieszka Kukałowicz
NASK, Polska.pl
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From asim.husanovic at gmail.com Mon Feb 11 09:55:45 2008 From: asim.husanovic at gmail.com (Asim Husanovic) Date: Mon, 11 Feb 2008 10:55:45 +0100 Subject: [Linux-cluster] Monitoring shared disk In-Reply-To: <3c1234d0802070740x990ed92o61be4ca385755b64@mail.gmail.com> References: <3c1234d0802070714y21d88c88x1569bcb88b898238@mail.gmail.com> <20080207152842.GC30561@wisc.edu> <3c1234d0802070740x990ed92o61be4ca385755b64@mail.gmail.com> Message-ID: <3c1234d0802110155s70f64508wefbafc0fc0af908e@mail.gmail.com> Hi, again I have three different configuration (im testing :) RH CS ) Note: Hardwer is virtual (VMware server 1.0.4 on windows xp host) Linux is CentOS 4.6 First: ====> LVM partition with GFS <==== in wmx file: #SHARED Disk 1 scsi1.present = "TRUE" scsi1.virtualDev = "lsilogic" scsi1.sharedBus = "virtual" scsi1:0.present = "TRUE" scsi1:0.fileName = "D:\VmWare\clusterdisks\SharedDisk1.vmdk" scsi1:0.mode = "independent-persistent" scsi1:0.redo = "" scsi1:0.deviceType = "disk" #SHARED Disk 2 scsi2.present = "TRUE" scsi2.virtualDev = "lsilogic" scsi2.sharedBus = "virtual" scsi2:0.present = "TRUE" scsi2:0.fileName = "D:\VmWare\clusterdisks\SharedDisk2.vmdk" scsi2:0.mode = "independent-persistent" scsi2:0.redo = "" scsi2:0.deviceType = "disk" #SHARED BUS settings disk.locking = "FALSE" diskLib.dataCacheMaxReadAheadSize = "0" diskLib.dataCacheMinReadAheadSize = "0" diskLib.dataCacheMaxSize = "0" diskLib.dataCachePageSize = "4096" diskLib.maxUnsyncedWrites = "0" [root at ankebut]# cat /etc/hosts # Do not remove the following line, or various programs # that require network functionality will fail. 
10.81.4.98 ankebut.domain.com ankebut 172.16.40.10 node1.localdomain node1 10.81.4.104 yildiz.domain.com yildiz 172.16.40.11 node2.localdomain node2 127.0.0.1 localhost.localdomain localhost Installing this packages from http://mirror.centos.org/centos/4/csgfs/CentOS-csgfs.repo: rgmanager-1.9.72-1 system-config-cluster-1.0.51-2.0 ccs-1.0.11-1 magma-1.0.8-1 magma-plugins-1.0.12-0 cman-1.0.17-0 cman-kernel-2.6.9-53.5 gulm-1.0.10-0 dlm-1.0.7-1 dlm-kernel-2.6.9-52.2 fence-1.32.50-2 iddev-2.0.0-4 perl-Net-Telnet-3.03-3 GFS-6.1.15-1 GFS-kernel-2.6.9-75.9 GFS-kernheaders-2.6.9-75.9 gnbd-1.0.9-1 gnbd-kernel-2.6.9-10.29 gnbd-kernheaders-2.6.9-10.29 lvm2-cluster-2.02.27-2.el4.centos.1 [root at ankebut ~]# vgdisplay -v /dev/VG_SHARE Using volume group(s) on command line Finding volume group "VG_SHARE" --- Volume group --- VG Name VG_SHARE System ID Format lvm2 Metadata Areas 2 Metadata Sequence No 3 VG Access read/write VG Status resizable MAX LV 0 Cur LV 1 Open LV 0 Max PV 0 Cur PV 2 Act PV 2 VG Size 15.99 GB PE Size 4.00 MB Total PE 4094 Alloc PE / Size 1002 / 3.91 GB Free PE / Size 3092 / 12.08 GB VG UUID C6D0HQ-AeIW-4k8a-Tzl5-ayGx-Tv14-DwkTe7 --- Logical volume --- LV Name /dev/VG_SHARE/LV_SHARE VG Name VG_SHARE LV UUID 7zA1Wf-bn5J-XmD4-azfK-CM3F-y90F-XviVNn LV Write Access read/write LV Status available # open 0 LV Size 3.91 GB Current LE 1002 Segments 1 Allocation inherit Read ahead sectors 0 Block device 253:2 --- Physical volumes --- PV Name /dev/sda1 PV UUID qn9dKj-CIHQ-hyxr-JBdi-rsna-cKkL-SY2OM0 PV Status allocatable Total PE / Free PE 2047 / 1045 PV Name /dev/sdb1 PV UUID Kr4yil-DAks-i7jB-HU7M-CLjW-BK2e-9oO91W PV Status allocatable Total PE / Free PE 2047 / 2047 [root at ankebut ~]# gfs_mkfs -j 8 -t CentOS_RHCS:sharedd -p lock_dlm /dev/VG_SHARE/LV_SHARE [root at ankebut]# cat /etc/cluster/cluster.conf [root at ankebut]# mount -t gfs /dev/VG_SHARE/LV_SHARE /share/ [root at ankebut]# cat /etc/fstab /dev/VG_SHARE/LV_SHARE /share gfs _netdev 0 0 TEST SHARING [root at 
ankebut]# echo 'Date from ankebut is:' `date` >> /share/date [root at ankebut]# cat /share/date Date from ankebut is: Fri Feb 1 12:40:17 CET 2008 [root at yildiz ~]# echo 'Date from yildiz is:' `date` >> /share/date [root at yildiz ~]# cat /share/date Date from ankebut is: Fri Feb 1 12:40:17 CET 2008 Date from yildiz is: Fri Feb 1 13:40:39 CET 2008 ====> LVM partition with GFS <==== in vmw file #SHARED Disk 1 scsi1.present = "TRUE" scsi1.virtualDev = "lsilogic" scsi1.sharedBus = "virtual" scsi1:0.present = "TRUE" scsi1:0.fileName = "D:\VmWare\clusterdisks\QuorumDisk.vmdk" scsi1:0.mode = "independent-persistent" scsi1:0.redo = "" scsi1:0.deviceType = "disk" #SHARED BUS settings disk.locking = "FALSE" diskLib.dataCacheMaxReadAheadSize = "0" diskLib.dataCacheMinReadAheadSize = "0" diskLib.dataCacheMaxSize = "0" diskLib.dataCachePageSize = "4096" diskLib.maxUnsyncedWrites = "0" [root at ankebut]# cat /etc/hosts # Do not remove the following line, or various programs # that require network functionality will fail. 
10.81.4.98 ankebut.domain.com ankebut 172.16.40.10 node1.localdomain node1 10.81.4.104 yildiz.domain.com yildiz 172.16.40.11 node2.localdomain node2 127.0.0.1 localhost.localdomain localhost Installing this packages from http://mirror.centos.org/centos/4/csgfs/CentOS-csgfs.repo: rgmanager-1.9.72-1 system-config-cluster-1.0.51-2.0 ccs-1.0.11-1 magma-1.0.8-1 magma-plugins-1.0.12-0 cman-1.0.17-0 cman-kernel-2.6.9-53.5 gulm-1.0.10-0 dlm-1.0.7-1 dlm-kernel-2.6.9-52.2 fence-1.32.50-2 iddev-2.0.0-4 perl-Net-Telnet-3.03-3 [root at ankebut]# parted /dev/sda (parted) p Disk geometry for /dev/sda: 0.000-8192.000 megabytes Disk label type: msdos Minor Start End Type Filesystem Flags 1 0.031 7.844 primary 2 7.844 8189.384 primary [root at ankebut]# mkqdisk -c /dev/sda1 -l QuorumLabel [root at ankebut]# mkqdisk -L mkqdisk v0.5.1 /dev/sda1: Magic: eb7a62c2 Label: QuorumLabel Created: Mon Feb 4 09:58:58 2008 Host: ankebut.domain.com [root at ankebut]# cat /etc/cluster/cluster.conf TEST QUORUM DISK [root at ankebut]# cman_tool nodes Node Votes Exp Sts Name 0 3 0 M /dev/sda1 1 1 1 M node2.localdomain 2 1 1 M node1.localdomain [root at ankebut]# cman_tool status Protocol version: 5.0.1 Config version: 1 Cluster name: CentOS_RHCS Cluster ID: 42649 Cluster Member: Yes Membership state: Cluster-Member Nodes: 2 Expected_votes: 1 Total_votes: 5 Quorum: 1 Active subsystems: 2 Node name: node1.localdomain Node ID: 2 [root at ankebut]# cat /tmp/quorum-status Time Stamp: Mon Feb 4 10:29:08 2008 Node ID: 2 Score: 1/1 (Minimum required = 1) Current state: Master Initializing Set: { } Visible Set: { 1 2 } Master Node ID: 2 Quorate Set: { 1 2 } ====> RAW partitions for Oracle <==== in vmx file: #SHARED Disk 1 scsi1.present = "TRUE" scsi1.virtualDev = "lsilogic" scsi1.sharedBus = "virtual" scsi1:0.present = "TRUE" scsi1:0.fileName = "D:\VmWare\clusterdisks\SharedDisk.vmdk" scsi1:0.mode = "independent-persistent" scsi1:0.redo = "" scsi1:0.deviceType = "disk" #SHARED BUS settings disk.locking = 
"FALSE" diskLib.dataCacheMaxReadAheadSize = "0" diskLib.dataCacheMinReadAheadSize = "0" diskLib.dataCacheMaxSize = "0" diskLib.dataCachePageSize = "4096" diskLib.maxUnsyncedWrites = "0" [root at ankebut]# cat /etc/hosts # Do not remove the following line, or various programs # that require network functionality will fail. 10.81.4.98 ankebut.domain.com ankebut 172.16.40.10 node1.localdomain node1 10.81.4.104 yildiz.domain.com yildiz 172.16.40.11 node2.localdomain node2 127.0.0.1 localhost.localdomain localhost Installing this packages from http://mirror.centos.org/centos/4/csgfs/CentOS-csgfs.repo: rgmanager-1.9.72-1 system-config-cluster-1.0.51-2.0 ccs-1.0.11-1 magma-1.0.8-1 magma-plugins-1.0.12-0 cman-1.0.17-0 cman-kernel-2.6.9-53.5 gulm-1.0.10-0 dlm-1.0.7-1 dlm-kernel-2.6.9-52.2 fence-1.32.50-2 iddev-2.0.0-4 perl-Net-Telnet-3.03-3 [root at ankebut ~]# fdisk /dev/sda Command (m for help): p Disk /dev/sda: 8589 MB, 8589934592 bytes 255 heads, 63 sectors/track, 1044 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 1 522 4192933+ 83 Linux /dev/sda2 523 1044 4192965 83 Linux [root at ankebut]# cat /etc/sysconfig/rawdevices # This file and interface are deprecated. # Applications needing raw device access should open regular # block devices with O_DIRECT. 
# raw device bindings # format: # # example: /dev/raw/raw1 /dev/sda1 # /dev/raw/raw2 8 5 /dev/raw/raw1 /dev/sda1 /dev/raw/raw2 /dev/sda2 [root at ankebut]# service rawdevices restart Assigning devices: /dev/raw/raw1 --> /dev/sda1 /dev/raw/raw1: bound to major 8, minor 1 /dev/raw/raw2 --> /dev/sda2 /dev/raw/raw2: bound to major 8, minor 2 Done [root at ankebut]# raw -qa /dev/raw/raw1: bound to major 8, minor 1 /dev/raw/raw2: bound to major 8, minor 2 [root at ankebut /]# mkfs.ext3 -j -b 4096 /dev/sda1 [root at ankebut /]# mkfs.ext3 -j -b 4096 /dev/sda2 [root at ankebut]# cat /etc/cluster/cluster.conf :(, I dont know how to create cluster.conf for raw configuration in my case TEST RAW DISK I dont know how to test RAW configuration Please, check my configurations and please tell me where i make mistake, if i make :) Thanks. With Regards Asim On Feb 7, 2008 4:40 PM, Asim Husanovic wrote: > Thanks. > Asim > > > On Feb 7, 2008 4:28 PM, Brian Kroth wrote: > > Here are some quick examples. There are almost certainly other ways to do > > it. > > > > Asim Husanovic : > > > Hi > > > > > > How to identify shared disk/volumes? > > > > scsi_id > > > > > How to collect the cluster FS information? > > > > gfs_tool > > > > > How to display shared disks on node/s? Which node/s? > > > > gfs_tool list > > cman_tool services > > > > > > You can wrap all of these inside snmp scripts/oids or use nagios passive > > checks if you want to monitor them remotely/automatically. 
> > > > Brian > > > From lhh at redhat.com Mon Feb 11 14:24:42 2008 From: lhh at redhat.com (Lon Hohberger) Date: Mon, 11 Feb 2008 09:24:42 -0500 Subject: [Linux-cluster] Fence_xvmd/fence_xvm problem In-Reply-To: <0ba401c86c8b$dd4b9070$0777b5c2@gda07ak> References: <0ba401c86c8b$dd4b9070$0777b5c2@gda07ak> Message-ID: <1202739882.6443.142.camel@ayanami.boston.devel.redhat.com> On Mon, 2008-02-11 at 09:55 +0100, Agnieszka Kuka?owicz wrote: > Hi, > > > > I was trying to configure Xen guests as virtual services under Cluster > Suite. My configuration is simple: > > > > Node one "d1" runs xen guest as virtual service "vm_service1", and > node one "d2" runs virtual service "vm_service2". There's currently a limitation - you need to have your VM names match the filename in /etc/xen, which needs to match 'name = XXXXX' in /etc/xen/ -- Lon > From lhh at redhat.com Mon Feb 11 14:25:20 2008 From: lhh at redhat.com (Lon Hohberger) Date: Mon, 11 Feb 2008 09:25:20 -0500 Subject: [Linux-cluster] Node Offline In-Reply-To: <4407800e1b774121bb5e2e7209c9f966@Mail61.safesecureweb.com> References: <4407800e1b774121bb5e2e7209c9f966@Mail61.safesecureweb.com> Message-ID: <1202739920.6443.144.camel@ayanami.boston.devel.redhat.com> On Sat, 2008-02-09 at 07:16 -0500, Marcos Ferreira da Silva wrote: > I start my cluster but the nodes don't see each other. > > [root at vserv3 ~]# clustat > Member Status: Quorate > > Member Name ID Status > ------ ---- ---- ------ > vserv3.uniube.br 1 Online, Local, rgmanager > vserv4.uniube.br 2 Offline > > Service Name Owner (Last) State > ------- ---- ----- ------ ----- > service:gfsweb (none) stopped Fencing configured? Any 'totem' messages? 
-- Lon From lhh at redhat.com Mon Feb 11 14:57:25 2008 From: lhh at redhat.com (Lon Hohberger) Date: Mon, 11 Feb 2008 09:57:25 -0500 Subject: [Linux-cluster] watchdog: rebootig In-Reply-To: References: <1939546029@web.de> Message-ID: <1202741846.6443.148.camel@ayanami.boston.devel.redhat.com> On Sat, 2008-02-09 at 21:36 -0600, Stulic,Damjan wrote: > Having occasional problems with cluster watchdog rebooting servers. > Any ideas on why this might be helping, or how to troubleshoot this > problem? Hi Damjan, Could you tell me what version of rgmanager you're running? -- Lon From bernard.chew at muvee.com Mon Feb 11 15:01:58 2008 From: bernard.chew at muvee.com (Bernard Chew) Date: Mon, 11 Feb 2008 23:01:58 +0800 Subject: [Linux-cluster] Fence_xvmd/fence_xvm problem In-Reply-To: <0ba401c86c8b$dd4b9070$0777b5c2@gda07ak> References: <0ba401c86c8b$dd4b9070$0777b5c2@gda07ak> Message-ID: <229C73600EB0E54DA818AB599482BCE9022379DA@shadowfax.sg.muvee.net> > From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Agnieszka > Kukalowicz > Sent: Monday, February 11, 2008 4:56 PM > To: linux-cluster at redhat.com > Subject: [Linux-cluster] Fence_xvmd/fence_xvm problem > > Hi, > > I was trying to configure Xen guests as virtual services under Cluster Suite. My configuration is simple: > > Node one "d1" runs xen guest as virtual service "vm_service1", and node one "d2" runs virtual service > "vm_service2". > > The /etc/cluster/cluster.conf file is below: > > > > > > > > > > > > > > > > > > > > > > > > > > > passwd="apc"/> > > > > > > > > > > > path="/virts/service1" recovery="relocate"/> > path="/virts/service2" recovery="relocate"/> > > > > > > On guests "vm_service1" and "vm_service2" I have configured the second cluster. > > > > > > > > > > > > > > > > > > > > > > > > > ... > > > > The problem is that the fence_xvmd/fence_xvm mechanism doesn't work due to propably misconfiguration of > multicast. 
> > Physical nodes "d1" and "d2" and xen guests "vm_service1" and "vm_service2" have two ethernet interfaces: > private- 10.0.200.x (eth0) and public (eth1). > > On physical nodes, "fence_xvmd" deamon listens defaults on eth1 interface: > [root at d2 ~]# netstat -g > IPv6/IPv4 Group Memberships > Interface RefCnt Group > --------------- ------ --------------------- > lo 1 ALL-SYSTEMS.MCAST.NET > eth0 1 225.0.0.1 > eth0 1 ALL-SYSTEMS.MCAST.NET > eth1 1 225.0.0.12 > eth1 1 ALL-SYSTEMS.MCAST.NET > virbr0 1 ALL-SYSTEMS.MCAST.NET > lo 1 ff02::1 > .... > > Next when I make on xen guest "vm_service1" a test to fence guest "vm_service2" I get: > > [root at d11 cluster]# /sbin/fence_xvm -H d12 -ddddd > Debugging threshold is now 5 > -- args @ 0xbf8aea70 -- > args->addr = 225.0.0.12 > args->domain = d12 > args->key_file = /etc/cluster/fence_xvm.key > args->op = 2 > args->hash = 2 > args->auth = 2 > args->port = 1229 > args->family = 2 > args->timeout = 30 > args->retr_time = 20 > args->flags = 0 > args->debug = 5 > -- end args -- > Reading in key file /etc/cluster/fence_xvm.key into 0xbf8ada1c (4096 max size) > Actual key length = 4096 bytesOpening /dev/urandom > Sending to 225.0.0.12 via 127.0.0.1 > Opening /dev/urandom > Sending to 225.0.0.12 via X.X.X.X > Opening /dev/urandom > Sending to 225.0.0.12 via 10.0.200.124 > Waiting for connection from XVM host daemon. > .... > Waiting for connection from XVM host daemon. 
> Timed out waiting for response > > On the node "d2" where "vm_service2" is running I get: > > [root at d2 ~]# /sbin/fence_xvmd -fddd > Debugging threshold is now 3 > -- args @ 0xbfc54e3c -- > args->addr = 225.0.0.12 > args->domain = (null) > args->key_file = /etc/cluster/fence_xvm.key > args->op = 2 > args->hash = 2 > args->auth = 2 > args->port = 1229 > args->family = 2 > args->timeout = 30 > args->retr_time = 20 > args->flags = 1 > args->debug = 3 > -- end args -- > Reading in key file /etc/cluster/fence_xvm.key into 0xbfc53e3c (4096 max size) > Actual key length = 4096 bytesOpened ckpt vm_states > My Node ID = 1 > Domain UUID Owner State > ------ ---- ----- ----- > Domain-0 00000000-0000-0000-0000-000000000000 00001 00001 > vm_service2 2dd8193f-e4d4-f41c-a4af-f5b30d19fe00 00001 00001 > Storing vm_service2 > Domain UUID Owner State > ------ ---- ----- ----- > Domain-0 00000000-0000-0000-0000-000000000000 00001 00001 > vm_service2 2dd8193f-e4d4-f41c-a4af-f5b30d19fe00 00001 00001 > Storing vm_service2 > Request to fence: d12. > Evaluating Domain: d12 Last Owner/State Unknown > Domain UUID Owner State > ------ ---- ----- ----- > Domain-0 00000000-0000-0000-0000-000000000000 00001 00001 > vm_service2 2dd8193f-e4d4-f41c-a4af-f5b30d19fe00 00001 00001 > Storing vm_service2 > Request to fence: d12 > Evaluating Domain: d12 Last Owner/State Unknown > > So it looks like the fence_xvmd and fence_xvm cannot communicate earch other. > But "fence_xvm" on "vm_service1" sends multicast packets through all interfaces and node "d2" can receive them. 
> Tcpdump on node "d2" says that the node "d2" receives the packages: > > [root at d2 ~]# tcpdump -i peth0 -n host 225.0.0.12 > listening on peth0, link-type EN10MB (Ethernet), capture size 96 bytes > 17:50:47.972477 IP 10.0.200.124.filenet-pch > 225.0.0.12.novell-zfs: UDP, length 176 > 17:50:49.960841 IP 10.0.200.124.filenet-pch > 225.0.0.12.novell-zfs: UDP, length 176 > 17:50:51.977425 IP 10.0.200.124.filenet-pch > 225.0.0.12.novell-zfs: UDP, length 176 > > [root at d2 ~]# tcpdump -i peth1 -n host 225.0.0.12 > listening on peth1, link-type EN10MB (Ethernet), capture size 96 bytes > 17:51:26.168132 IP X.X.X.X.filenet-pch > 225.0.0.12.novell-zfs: UDP, length 176 > 17:51:28.184802 IP X.X.X.X.filenet-pch > 225.0.0.12.novell-zfs: UDP, length 176 > 17:51:30.196875 IP X.X.X.X.filenet-pch > 225.0.0.12.novell-zfs: UDP, length 176 > > But I can't see the "node2" sends anything to xen guest "vm_service1". So "fence_xvm" gets timeout. > What can I do wrong? > > Cheers > > Agnieszka Kuka?owicz > NASK, Polska.pl Hi, Can you show the results of "netstat -nr" as well? Regards, Bernard Chew From lhh at redhat.com Mon Feb 11 15:06:37 2008 From: lhh at redhat.com (Lon Hohberger) Date: Mon, 11 Feb 2008 10:06:37 -0500 Subject: [Linux-cluster] Fence_xvmd/fence_xvm problem In-Reply-To: <0ba401c86c8b$dd4b9070$0777b5c2@gda07ak> References: <0ba401c86c8b$dd4b9070$0777b5c2@gda07ak> Message-ID: <1202742397.6443.158.camel@ayanami.boston.devel.redhat.com> On Mon, 2008-02-11 at 09:55 +0100, Agnieszka Kuka?owicz wrote: > > > > name="virtual_fence"/> > > > My error; my previous email was in the case your VMs were restarting constantly. For fencing, the virtual machines need to know their virtual machine names according to domain0. The 'domain=' should match this, not the virtual machine's hostname. vm_service2 2dd8193f-e4d4-f41c-a4af-f5b30d19fe00 00001 00001 ^^^^^^^^^^^ Request to fence: d12. 
^^^ So, if vm_service2 == d12, you need to change cluster.conf: -- Lon

From marcos at digitaltecnologia.info Mon Feb 11 16:21:24 2008
From: marcos at digitaltecnologia.info (Marcos Ferreira da Silva)
Date: Mon, 11 Feb 2008 14:21:24 -0200
Subject: [Linux-cluster] Node Offline
In-Reply-To: <1202739920.6443.144.camel@ayanami.boston.devel.redhat.com>
References: <4407800e1b774121bb5e2e7209c9f966@Mail61.safesecureweb.com> <1202739920.6443.144.camel@ayanami.boston.devel.redhat.com>
Message-ID: <1202746884.3078.12.camel@matriz.digitaltecnologia.info>

I changed the services to vm resources. I want the vm admin to start on vserver1 and intranetteste on vserver2.

[root at vserver1 ~]# clustat
Member Status: Quorate

Member Name          ID   Status
------ ----          ---- ------
vserver1.uniube.br   1    Online, Local, rgmanager
vserver2.uniube.br   2    Offline

Service Name         Owner (Last)   State
------- ----         ----- ------   -----
vm:admin             (none)         disabled
vm:intranetteste     (none)         disabled

[root at vserver2 ~]# clustat
Member Status: Quorate

Member Name          ID   Status
------ ----          ---- ------
vserver1.uniube.br   1    Offline
vserver2.uniube.br   2    Online, Local, rgmanager

Service Name         Owner (Last)   State
------- ----         ----- ------   -----
vm:admin             (none)         disabled
vm:intranetteste     (none)         disabled

My cluster.conf

vserver1 starts OK, but when I start the vserver2 node I have a problem.
In my log messages: Feb 11 14:18:43 vserver2 openais[6643]: [CLM ] CLM CONFIGURATION CHANGE Feb 11 14:18:43 vserver2 openais[6643]: [CLM ] New Configuration: Feb 11 14:18:43 vserver2 openais[6643]: [CLM ] r(0) ip(192.168.200.201) Feb 11 14:18:43 vserver2 openais[6643]: [CLM ] Members Left: Feb 11 14:18:43 vserver2 openais[6643]: [CLM ] Members Joined: Feb 11 14:18:43 vserver2 openais[6643]: [CLM ] CLM CONFIGURATION CHANGE Feb 11 14:18:43 vserver2 openais[6643]: [CLM ] New Configuration: Feb 11 14:18:43 vserver2 openais[6643]: [CLM ] r(0) ip(192.168.200.200) Feb 11 14:18:43 vserver2 openais[6643]: [CLM ] r(0) ip(192.168.200.201) Feb 11 14:18:43 vserver2 openais[6643]: [CLM ] Members Left: Feb 11 14:18:44 vserver2 openais[6643]: [CLM ] Members Joined: Feb 11 14:18:44 vserver2 openais[6643]: [CLM ] r(0) ip(192.168.200.200) Feb 11 14:18:44 vserver2 openais[6643]: [SYNC ] This node is within the primary component and will provide service. Feb 11 14:18:44 vserver2 openais[6643]: [TOTEM] entering OPERATIONAL state. Feb 11 14:18:44 vserver2 openais[6643]: [TOTEM] Retransmit List: 1 after : Feb 11 14:19:24 vserver2 openais[6643]: [CLM ] CLM CONFIGURATION CHANGE Feb 11 14:19:24 vserver2 openais[6643]: [CLM ] New Configuration: Feb 11 14:19:24 vserver2 openais[6643]: [CLM ] r(0) ip(192.168.200.201) Feb 11 14:19:24 vserver2 openais[6643]: [CLM ] Members Left: Feb 11 14:19:24 vserver2 openais[6643]: [CLM ] r(0) ip(192.168.200.200) Feb 11 14:19:24 vserver2 openais[6643]: [CLM ] Members Joined: Feb 11 14:19:24 vserver2 openais[6643]: [CLM ] CLM CONFIGURATION CHANGE Feb 11 14:19:24 vserver2 openais[6643]: [CLM ] New Configuration: Feb 11 14:19:24 vserver2 openais[6643]: [CLM ] r(0) ip(192.168.200.201) Feb 11 14:19:24 vserver2 openais[6643]: [CLM ] Members Left: Feb 11 14:19:24 vserver2 openais[6643]: [CLM ] Members Joined: Feb 11 14:19:24 vserver2 openais[6643]: [SYNC ] This node is within the primary component and will provide service. 
Feb 11 14:19:24 vserver2 openais[6643]: [TOTEM] entering OPERATIONAL state. and the cluster of node2 crash: Feb 11 14:20:48 vserver2 openais[6643]: [CLM ] CLM CONFIGURATION CHANGE Feb 11 14:20:48 vserver2 openais[6643]: [CLM ] New Configuration: Feb 11 14:20:48 vserver2 openais[6643]: [CLM ] r(0) ip(192.168.200.201) Feb 11 14:20:48 vserver2 openais[6643]: [CLM ] Members Left: Feb 11 14:20:48 vserver2 openais[6643]: [CLM ] Members Joined: Feb 11 14:20:48 vserver2 openais[6643]: [CLM ] CLM CONFIGURATION CHANGE Feb 11 14:20:48 vserver2 openais[6643]: [CLM ] New Configuration: Feb 11 14:20:48 vserver2 openais[6643]: [CLM ] r(0) ip(192.168.200.200) Feb 11 14:20:48 vserver2 openais[6643]: [CLM ] r(0) ip(192.168.200.201) Feb 11 14:20:49 vserver2 openais[6643]: [CLM ] Members Left: Feb 11 14:20:49 vserver2 openais[6643]: [CLM ] Members Joined: Feb 11 14:20:49 vserver2 openais[6643]: [CLM ] r(0) ip(192.168.200.200) Feb 11 14:20:49 vserver2 openais[6643]: [SYNC ] This node is within the primary component and will provide service. Feb 11 14:20:49 vserver2 openais[6643]: [TOTEM] entering OPERATIONAL state. Feb 11 14:20:49 vserver2 openais[6643]: [MAIN ] Killing node vserver1.uniube.br because it has rejoined the cluster without cman_tool join Feb 11 14:20:49 vserver2 openais[6643]: [CMAN ] cman killed by node 1 because we rejoined the cluster without a full restart Feb 11 14:20:49 vserver2 gfs_controld[6679]: cluster is down, exiting Feb 11 14:20:49 vserver2 dlm_controld[6673]: groupd is down, exiting Feb 11 14:20:49 vserver2 fenced[6667]: cluster is down, exiting Feb 11 14:20:49 vserver2 kernel: dlm: closing connection to node 2 Feb 11 14:21:15 vserver2 ccsd[6634]: Unable to connect to cluster infrastructure after 30 seconds. Feb 11 14:21:45 vserver2 ccsd[6634]: Unable to connect to cluster infrastructure after 60 seconds. Feb 11 14:22:15 vserver2 ccsd[6634]: Unable to connect to cluster infrastructure after 90 seconds. 
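The decisive line in a log like the one above is openais's "Killing node ... because it has rejoined the cluster without cman_tool join" — it means cman was restarted on one node without the whole cluster stack being stopped and started cleanly. Asim asked earlier in the thread about monitoring; a trivial check for that signature might look like this sketch (the sample log lines below are copied from this post; on a real node you would grep /var/log/messages instead):

```shell
# Sketch: count openais "rejoined without cman_tool join" kill events in a
# syslog extract. The heredoc holds sample lines copied from the post above;
# in practice, point the grep at /var/log/messages.
cat > /tmp/openais-sample.log <<'EOF'
Feb 11 14:20:49 vserver2 openais[6643]: [MAIN ] Killing node vserver1.uniube.br because it has rejoined the cluster without cman_tool join
Feb 11 14:20:49 vserver2 gfs_controld[6679]: cluster is down, exiting
EOF
grep -c 'rejoined the cluster without cman_tool join' /tmp/openais-sample.log
# prints: 1
```

Any non-zero count means a node was ejected and its cluster stack (ccsd, cman, fenced, rgmanager) needs a full restart before it can rejoin.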
--
_____________________________
Marcos Ferreira da Silva
DiGital Tecnologia
Uberlândia - MG
(34) 9154-0150 / 3226-2534

On Mon, 2008-02-11 at 09:25 -0500, Lon Hohberger wrote:
> On Sat, 2008-02-09 at 07:16 -0500, Marcos Ferreira da Silva wrote:
> > I start my cluster but the nodes don't see each other.
> >
> > [root at vserv3 ~]# clustat
> > Member Status: Quorate
> >
> > Member Name                  ID   Status
> > ------ ----                  ---- ------
> > vserv3.teste.br              1    Online, Local, rgmanager
> > vserv4.teste.br              2    Offline
> >
> > Service Name            Owner (Last)     State
> > ------- ----            ----- ------     -----
> > service:gfsweb          (none)           stopped
>
> Fencing configured?
>
> Any 'totem' messages?
>
> -- Lon

From ccaulfie at redhat.com Mon Feb 11 16:47:50 2008
From: ccaulfie at redhat.com (Christine Caulfield)
Date: Mon, 11 Feb 2008 16:47:50 +0000
Subject: [Linux-cluster] Two-node cluster unpatched B doesn't see patched A
In-Reply-To: <47AB8156.6010205@hp.com>
References: <47AB8156.6010205@hp.com>
Message-ID: <47B07C36.5070202@redhat.com>

Sutton, Harry (MSE) wrote:
> The most recent set of patches for RHCS, comprising:
>
> RHBA-2008:0093 dlm-kernel bug fix update
> RHBA-2008:0092 cman-kernel bug fix update
> RHBA-2008:0060 cman bug fix update
> RHBA-2008:0095 gnbd-kernel bug fix update
> RHBA-2008:0096 GFS-kernel bug fix update
> RHSA-2008:0055 Important: kernel security and bug fix update
>
> has resulted in a problem in my two-node (production) cluster. Let me
> explain ;-)
>
> I have a three-node test cluster where I install all patches before
> rolling them into my (two-node) production cluster; I know, I know,
> they're not the same, and that's the only difference I can see in what
> has happened here (a first in two years). In the three-node cluster
> (which, just to complicate things, only had two active nodes at the
> time), I rolled these patches through the two nodes without taking the
> whole cluster down. That is:
>
> 1. Stop all cluster services on Node A.
> Disable auto-start using chkconfig off . Services stop successfully, Node A
> leaves the cluster, Node B continues running all shared cluster services
> (GFS, Fibre-channel-connected shared storage, HP MSA1000).
> 2. Patch Node A, reboot to new kernel, re-install HP-supplied QLogic
> driver, edit /etc/modprobe.conf for failover settings, rebuild initrd
> file for QLogic drivers, reboot, re-enable auto-start of cluster
> services, reboot once more and the cluster re-forms.
> 3. Repeat Steps 1 and 2 for Node B.
> 4. Cluster is restored to normal operation, both nodes fully patched.
>
> On my production cluster, which uses a Quorum Disk in place of the third
> node, I completed steps 1 and 2 on Node A, but the cluster did NOT
> re-form. cman sends out its advertisement, and I can see that Node B
> receives it (by looking at the tcpdump traces), but Node B never responds.
>
> So: before I take down Node B (which is currently the only one running
> my production services), can someone either (a) explain why the cluster
> is not re-forming, or (b) assure me that by restoring both systems to
> the same patch level, the cluster WILL re-form properly? (Which begs the
> question: why did my test cluster survive the patch process and my
> production cluster didn't? Same versions of everything......)
>
> Thanks in advance, and best regards,

I'm pretty certain that even simply rebooting node B will let the cluster re-form. I've heard of this problem before but never got to the bottom of it because it seems to be quite rare. It is almost certainly some state in node B that is preventing it replying to node A's join requests - I suspect it's a bug to do with protocol ACK numbers but can't be sure.

Before you do it, would you be so kind as to send me the tcpdumps of the (non-)conversation, including the HELLO messages from node B? It might help in tracking it down.
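[Editorial aside, not part of the original thread.] The rolling-update procedure described above can be sketched as a per-node script. This is a hedged sketch, not Harry's actual commands: the service list follows the usual RHEL4 cluster stack start/stop order, and the errata and initrd invocations (up2date, the NEWKERNEL path) are placeholders to adjust for your own nodes. With DRY_RUN=1 (the default) it only prints what it would do:

```shell
#!/bin/sh
# Dry-run sketch of a rolling cluster-node update (steps 1-2 above).
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "WOULD: $*" || "$@"; }

patch_node() {
    # Step 1: stop cluster services (rgmanager first, ccsd last) and
    # keep them from starting at boot
    for svc in rgmanager gfs clvmd fenced cman ccsd; do
        run service "$svc" stop
        run chkconfig "$svc" off
    done
    # Step 2: apply the errata, rebuild the initrd for the HBA driver, reboot
    run up2date -u
    run mkinitrd -f /boot/initrd-NEWKERNEL.img NEWKERNEL
    run reboot
    # After the post-patch reboot: re-enable services and rejoin the cluster
    for svc in ccsd cman fenced clvmd gfs rgmanager; do
        run chkconfig "$svc" on
    done
    run reboot
}
# usage: patch_node        # then repeat on the next node (step 3)
```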
Thanks,

--
Chrissie

From marcos at digitaltecnologia.info Mon Feb 11 17:02:01 2008
From: marcos at digitaltecnologia.info (Marcos Ferreira da Silva)
Date: Mon, 11 Feb 2008 15:02:01 -0200
Subject: [Linux-cluster] ccsd error
Message-ID: <1202749321.3078.15.camel@matriz.digitaltecnologia.info>

Many of these errors keep appearing in my log:

Feb 11 15:01:21 vserver1 ccsd[2424]: process_get: Invalid connection descriptor received.
Feb 11 15:01:21 vserver1 ccsd[2424]: Error while processing get: Invalid request descriptor
Feb 11 15:01:21 vserver1 fenced[2453]: fence "vserver2.teste.br" failed
Feb 11 15:01:26 vserver1 fenced[2453]: fencing node "vserver2.teste.br"

My cluster.conf:

--
_____________________________
Marcos Ferreira da Silva
DiGital Tecnologia
Uberlândia - MG
(34) 9154-0150 / 3226-2534

From damjan.stulic at edwardjones.com Mon Feb 11 17:18:07 2008
From: damjan.stulic at edwardjones.com (Stulic,Damjan)
Date: Mon, 11 Feb 2008 11:18:07 -0600
Subject: [Linux-cluster] watchdog: rebootig
In-Reply-To: <1202741846.6443.148.camel@ayanami.boston.devel.redhat.com>
References: <1939546029@web.de> <1202741846.6443.148.camel@ayanami.boston.devel.redhat.com>
Message-ID:

If you are not the intended recipient of this message (including attachments), or if you have received this message in error, immediately notify us and delete it and any attachments. If you no longer wish to receive e-mail from Edward Jones, please send this request to messages at edwardjones.com. You must include the e-mail address that you wish not to receive e-mail communications.
For important additional information related to this e-mail, visit www.edwardjones.com/US_email_disclosure

-----Original Message-----
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Lon Hohberger
Sent: Monday, February 11, 2008 08:57
To: linux clustering
Subject: Re: [Linux-cluster] watchdog: rebootig

> On Sat, 2008-02-09 at 21:36 -0600, Stulic,Damjan wrote:
>> Having occasional problems with cluster watchdog rebooting servers.
>> Any ideas on why this might be happening, or how to troubleshoot this
>> problem?
>
> Hi Damjan,
>
> Could you tell me what version of rgmanager you're running?
>
> -- Lon

rgmanager-1.9.54-1.x86_64
Name        : rgmanager        Relocations: (not relocatable)
Version     : 1.9.54           Vendor: Red Hat, Inc.
Release     : 1                Build Date: Fri 06 Oct 2006 11:43:10 AM CDT
Red Hat Enterprise Linux ES release 4

We went through the update process about a month ago; all the newest packages were downloaded and installed. We had this problem before: the cluster would occasionally panic during clusvcadm -R. But this time it was different; there were no clusvcadm commands being issued (or any other cluster commands).

Thank you,
Damjan

From doseyg at r-networks.net Mon Feb 11 21:08:26 2008
From: doseyg at r-networks.net (Glen Dosey)
Date: Mon, 11 Feb 2008 16:08:26 -0500 (EST)
Subject: [Linux-cluster] GFS1 NFS problems under 2.6.25-rc1 (was: GFS2 loses data under kernel 2.6.24...)
In-Reply-To: <1202463059.22038.421.camel@quoit>
References: <1202446917.5942.19.camel@eclipse.office.r-networks.net> <1202463059.22038.421.camel@quoit>
Message-ID: <38964.155.82.73.253.1202764106.squirrel@www.r-networks.net>

Yes, it fixes that problem, thank you. So I grabbed the latest code from kernel.org, 2.6.25-rc1, and tested it successfully. The problem now is that I cannot seem to get the GFS1 code working properly. I grabbed the latest HEAD from CVS and modified sys.c to work with the new kobject calls.
Or at least I think I have; perhaps that is what is wrong. I have attached the diff. The gfs module builds successfully against the 2.6.25-rc1 kernel and loads. Locally on the server everything appears to work, but on the GFS filesystem NFS-exported from the server, clients can create files but sometimes fail to store any data. For example:

[root at nccws00 gfs]# echo test > newfile
[root at nccws00 gfs]# cat newfile
test
[root at nccws00 gfs]# dd if=/dev/zero of=test1.dd bs=8k count=2
dd: closing output file `test1.dd': Invalid argument
[root at nccws00 gfs]# cp /tmp/patch-2.6.24.2.bz2 .
cp: closing `./patch-2.6.24.2.bz2': Invalid argument
[root at nccws00 gfs]# ll
total 3152084
-rw-r--r-- 1 root root          5 Feb 11 16:00 newfile
-rw-r--r-- 1 root root          0 Feb 11 16:01 patch-2.6.24.2.bz2
-rw-rw-rw- 1 root root          0 Feb 11 16:01 test1.dd
-rw-r--r-- 1 root root 1073741824 Feb  7 17:16 test2.dd
-rw-r--r-- 1 root root 1073741824 Feb  7 17:19 test3.dd
-rw-r--r-- 1 root root 1073741824 Feb  7 19:38 test4.dd
[root at nccws00 gfs]#

Am I missing some other significant changes which need patches as well? I only need GFS1 to work over NFS currently.

Thanks,
Glen Dosey

> Hi,
>
> Does this patch fix it for you?
>
> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=9656b2c14c6ee0806c90a6be41dec71117fc8f50
>
> or you can just upgrade to the latest upstream Linus kernel. It was a
> result of the write_end function not working in exactly the same way as
> the older commit_write used to.
>
> Steve.
>
> On Fri, 2008-02-08 at 00:01 -0500, Glen Dosey wrote:
>> I experienced this today at work on a RHEL5 system and have verified it
>> today at home on Fedora 8. Perhaps I am doing something foolish ....
>>
>> I have a fully patched RHEL5 x86_64 system which works fine with the Red
>> Hat supplied cluster stuff, except the NFS server performance is abysmal
>> (~640Mb/s NFS).
After pulling my hair trying to fix NFS I decided to >> just grab the latest kernel which fixed the problem (~980Mb/s NFS). But >> it introduced another much more serious problem, which I've duplicated >> on my FC8 x86_64 system at home. >> >> I already have all the cman/clvmd/openais/gfs[2]-utils packages >> installed through the package manager. I downloaded kernel 2.6.24 from >> kernel.org and did a straight `make -j4 rpm ` and installed the >> resulting rpm in both instances. Both systems worked fine with >> RHEL/Fedora kernels, but here's what happens under 2.6.24 >> >> [root at eclipse test]# dd if=/dev/zero of=test3.dd bs=512M count=1 >> 1+0 records in >> 1+0 records out >> 536870912 bytes (537 MB) copied, 7.95285 s, 67.5 MB/s >> [root at eclipse test]# ll >> total 2101312 >> -rw-r--r-- 1 root root 0 2008-02-07 23:25 test2.dd >> -rw-r--r-- 1 root root 536870912 2008-02-07 23:42 test3.dd >> -rw-r--r-- 1 root root 1073741824 2008-02-07 22:54 test.dd >> [root at eclipse test]# cd .. >> [root at eclipse mnt]# umount /mnt/test/ >> [root at eclipse mnt]# mount /mnt/test/ >> [root at eclipse mnt]# mount | grep test >> /dev/mapper/disk00-test on /mnt/test type gfs2 >> (rw,hostdata=jid=0:id=524289:first=1) >> [root at eclipse mnt]# cd /mnt/test/ >> [root at eclipse test]# ll >> total 2101312 >> -rw-r--r-- 1 root root 0 2008-02-07 23:25 test2.dd >> -rw-r--r-- 1 root root 0 2008-02-07 23:42 test3.dd >> -rw-r--r-- 1 root root 1073741824 2008-02-07 22:54 test.dd >> >> Files that have data just go zero size after an umount and remount. I've >> tried a variety of file sizes and tried it with file containing data as >> well (not all zeros). This worked under the RHEL kernels, so is there >> something I'm doing wrong ? >> >> Both systems are running cman and are a quorate 2 node cluster (where >> the second node doesn't exist). At work it's a 1TB shared filesystem but >> here at home it's just a local disk, so there's nothing else with any >> access to it. 
>> >> If someone could maybe point out what I'm doing wrong I'd appreciate it, >> or just let me know this won't work for whatever reason. I haven't even >> touched on getting the GFS1 modules to build into this. >> >> Thanks, >> Glen >> >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -------------- next part -------------- A non-text attachment was scrubbed... Name: GFS_kobject.patch Type: application/octet-stream Size: 1396 bytes Desc: not available URL: From lhh at redhat.com Mon Feb 11 21:29:24 2008 From: lhh at redhat.com (Lon Hohberger) Date: Mon, 11 Feb 2008 16:29:24 -0500 Subject: [Linux-cluster] watchdog: rebootig In-Reply-To: References: <1939546029@web.de> <1202741846.6443.148.camel@ayanami.boston.devel.redhat.com> Message-ID: <1202765364.20118.6.camel@ayanami.boston.devel.redhat.com> On Mon, 2008-02-11 at 11:18 -0600, Stulic,Damjan wrote: > rgmanager-1.9.54-1.x86_64 > Name : rgmanager Relocations: (not > relocatable) > Version : 1.9.54 Vendor: Red Hat, Inc. > Release : 1 Build Date: Fri 06 Oct 2006 > 11:43:10 AM CDT Ah, it's probably the rg_thread.c assertion failure; here's the patch: https://bugzilla.redhat.com/attachment.cgi?id=139908&action=edit Apparently we never opened the bug (for public viewing), but it was fixed in the 4.5 updates (note... recent ccs package would be needed too). -- Lon From qqlka at nask.pl Tue Feb 12 11:30:26 2008 From: qqlka at nask.pl (=?iso-8859-2?Q?Agnieszka_Kuka=B3owicz?=) Date: Tue, 12 Feb 2008 12:30:26 +0100 Subject: [Linux-cluster] Fence_xvmd/fence_xvm problem In-Reply-To: <1202742397.6443.158.camel@ayanami.boston.devel.redhat.com> Message-ID: <0c9501c86d6a$aab79ca0$0777b5c2@gda07ak> > For fencing, the virtual machines need to know their virtual machine > names according to domain0. 
The 'domain=' should match this, not the > virtual machine's hostname. > > vm_service2 2dd8193f-e4d4-f41c-a4af-f5b30d19fe00 00001 00001 > ^^^^^^^^^^^ > Request to fence: d12. > ^^^ > > So, if vm_service2 == d12, you need to change cluster.conf: I've changed cluster.conf and now everything works. It was my mistake. Thanks a lot. Cheers Agnieszka Kukalowicz From asim.husanovic at gmail.com Tue Feb 12 11:49:12 2008 From: asim.husanovic at gmail.com (Asim Husanovic) Date: Tue, 12 Feb 2008 12:49:12 +0100 Subject: [Linux-cluster] Monitoring shared disk In-Reply-To: <3c1234d0802110155s70f64508wefbafc0fc0af908e@mail.gmail.com> References: <3c1234d0802070714y21d88c88x1569bcb88b898238@mail.gmail.com> <20080207152842.GC30561@wisc.edu> <3c1234d0802070740x990ed92o61be4ca385755b64@mail.gmail.com> <3c1234d0802110155s70f64508wefbafc0fc0af908e@mail.gmail.com> Message-ID: <3c1234d0802120349j25874194td27ed0782b2d8dbf@mail.gmail.com> Please, check Please help Asim On Feb 11, 2008 10:55 AM, Asim Husanovic wrote: > Hi, again > > I have three different configuration (im testing :) RH CS ) > > Note: > Hardwer is virtual (VMware server 1.0.4 on windows xp host) > Linux is CentOS 4.6 > > First: > ====> LVM partition with GFS <==== > in wmx file: > #SHARED Disk 1 > scsi1.present = "TRUE" > scsi1.virtualDev = "lsilogic" > scsi1.sharedBus = "virtual" > scsi1:0.present = "TRUE" > scsi1:0.fileName = "D:\VmWare\clusterdisks\SharedDisk1.vmdk" > scsi1:0.mode = "independent-persistent" > scsi1:0.redo = "" > scsi1:0.deviceType = "disk" > > #SHARED Disk 2 > scsi2.present = "TRUE" > scsi2.virtualDev = "lsilogic" > scsi2.sharedBus = "virtual" > scsi2:0.present = "TRUE" > scsi2:0.fileName = "D:\VmWare\clusterdisks\SharedDisk2.vmdk" > scsi2:0.mode = "independent-persistent" > scsi2:0.redo = "" > scsi2:0.deviceType = "disk" > > #SHARED BUS settings > disk.locking = "FALSE" > diskLib.dataCacheMaxReadAheadSize = "0" > diskLib.dataCacheMinReadAheadSize = "0" > diskLib.dataCacheMaxSize = "0" > 
diskLib.dataCachePageSize = "4096" > diskLib.maxUnsyncedWrites = "0" > > [root at ankebut]# cat /etc/hosts > # Do not remove the following line, or various programs > # that require network functionality will fail. > > 10.81.4.98 ankebut.domain.com ankebut > 172.16.40.10 node1.localdomain node1 > 10.81.4.104 yildiz.domain.com yildiz > 172.16.40.11 node2.localdomain node2 > 127.0.0.1 localhost.localdomain localhost > > Installing this packages from > http://mirror.centos.org/centos/4/csgfs/CentOS-csgfs.repo: > rgmanager-1.9.72-1 > system-config-cluster-1.0.51-2.0 > ccs-1.0.11-1 > magma-1.0.8-1 > magma-plugins-1.0.12-0 > cman-1.0.17-0 > cman-kernel-2.6.9-53.5 > gulm-1.0.10-0 > dlm-1.0.7-1 > dlm-kernel-2.6.9-52.2 > fence-1.32.50-2 > iddev-2.0.0-4 > perl-Net-Telnet-3.03-3 > GFS-6.1.15-1 > GFS-kernel-2.6.9-75.9 > GFS-kernheaders-2.6.9-75.9 > gnbd-1.0.9-1 > gnbd-kernel-2.6.9-10.29 > gnbd-kernheaders-2.6.9-10.29 > lvm2-cluster-2.02.27-2.el4.centos.1 > > [root at ankebut ~]# vgdisplay -v /dev/VG_SHARE > Using volume group(s) on command line > Finding volume group "VG_SHARE" > --- Volume group --- > VG Name VG_SHARE > System ID > Format lvm2 > Metadata Areas 2 > Metadata Sequence No 3 > VG Access read/write > VG Status resizable > MAX LV 0 > Cur LV 1 > Open LV 0 > Max PV 0 > Cur PV 2 > Act PV 2 > VG Size 15.99 GB > PE Size 4.00 MB > Total PE 4094 > Alloc PE / Size 1002 / 3.91 GB > Free PE / Size 3092 / 12.08 GB > VG UUID C6D0HQ-AeIW-4k8a-Tzl5-ayGx-Tv14-DwkTe7 > > --- Logical volume --- > LV Name /dev/VG_SHARE/LV_SHARE > VG Name VG_SHARE > LV UUID 7zA1Wf-bn5J-XmD4-azfK-CM3F-y90F-XviVNn > LV Write Access read/write > LV Status available > # open 0 > LV Size 3.91 GB > Current LE 1002 > Segments 1 > Allocation inherit > Read ahead sectors 0 > Block device 253:2 > > --- Physical volumes --- > PV Name /dev/sda1 > PV UUID qn9dKj-CIHQ-hyxr-JBdi-rsna-cKkL-SY2OM0 > PV Status allocatable > Total PE / Free PE 2047 / 1045 > > PV Name /dev/sdb1 > PV UUID 
Kr4yil-DAks-i7jB-HU7M-CLjW-BK2e-9oO91W > PV Status allocatable > Total PE / Free PE 2047 / 2047 > > [root at ankebut ~]# gfs_mkfs -j 8 -t CentOS_RHCS:sharedd -p lock_dlm > /dev/VG_SHARE/LV_SHARE > > [root at ankebut]# cat /etc/cluster/cluster.conf > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > [root at ankebut]# mount -t gfs /dev/VG_SHARE/LV_SHARE /share/ > > [root at ankebut]# cat /etc/fstab > /dev/VG_SHARE/LV_SHARE /share gfs _netdev 0 0 > > TEST SHARING > [root at ankebut]# echo 'Date from ankebut is:' `date` >> /share/date > [root at ankebut]# cat /share/date > Date from ankebut is: Fri Feb 1 12:40:17 CET 2008 > > [root at yildiz ~]# echo 'Date from yildiz is:' `date` >> /share/date > [root at yildiz ~]# cat /share/date > Date from ankebut is: Fri Feb 1 12:40:17 CET 2008 > Date from yildiz is: Fri Feb 1 13:40:39 CET 2008 > > > ====> LVM partition with GFS <==== > in vmw file > #SHARED Disk 1 > scsi1.present = "TRUE" > scsi1.virtualDev = "lsilogic" > scsi1.sharedBus = "virtual" > scsi1:0.present = "TRUE" > scsi1:0.fileName = "D:\VmWare\clusterdisks\QuorumDisk.vmdk" > scsi1:0.mode = "independent-persistent" > scsi1:0.redo = "" > scsi1:0.deviceType = "disk" > > #SHARED BUS settings > disk.locking = "FALSE" > diskLib.dataCacheMaxReadAheadSize = "0" > diskLib.dataCacheMinReadAheadSize = "0" > diskLib.dataCacheMaxSize = "0" > diskLib.dataCachePageSize = "4096" > diskLib.maxUnsyncedWrites = "0" > > [root at ankebut]# cat /etc/hosts > # Do not remove the following line, or various programs > # that require network functionality will fail. 
> > 10.81.4.98 ankebut.domain.com ankebut > 172.16.40.10 node1.localdomain node1 > 10.81.4.104 yildiz.domain.com yildiz > 172.16.40.11 node2.localdomain node2 > 127.0.0.1 localhost.localdomain localhost > > Installing this packages from > http://mirror.centos.org/centos/4/csgfs/CentOS-csgfs.repo: > rgmanager-1.9.72-1 > system-config-cluster-1.0.51-2.0 > ccs-1.0.11-1 > magma-1.0.8-1 > magma-plugins-1.0.12-0 > cman-1.0.17-0 > cman-kernel-2.6.9-53.5 > gulm-1.0.10-0 > dlm-1.0.7-1 > dlm-kernel-2.6.9-52.2 > fence-1.32.50-2 > iddev-2.0.0-4 > perl-Net-Telnet-3.03-3 > > [root at ankebut]# parted /dev/sda > (parted) p > Disk geometry for /dev/sda: 0.000-8192.000 megabytes > Disk label type: msdos > Minor Start End Type Filesystem Flags > 1 0.031 7.844 primary > 2 7.844 8189.384 primary > > [root at ankebut]# mkqdisk -c /dev/sda1 -l QuorumLabel > > [root at ankebut]# mkqdisk -L > mkqdisk v0.5.1 > /dev/sda1: > Magic: eb7a62c2 > Label: QuorumLabel > Created: Mon Feb 4 09:58:58 2008 > Host: ankebut.domain.com > > [root at ankebut]# cat /etc/cluster/cluster.conf > > > > > > > > > > > > > > > > > > > > label="QuorumLabel" status_file="/tmp/quorum-status"> > > > > > > > > > > > > > > TEST QUORUM DISK > [root at ankebut]# cman_tool nodes > Node Votes Exp Sts Name > 0 3 0 M /dev/sda1 > 1 1 1 M node2.localdomain > 2 1 1 M node1.localdomain > > [root at ankebut]# cman_tool status > Protocol version: 5.0.1 > Config version: 1 > Cluster name: CentOS_RHCS > Cluster ID: 42649 > Cluster Member: Yes > Membership state: Cluster-Member > Nodes: 2 > Expected_votes: 1 > Total_votes: 5 > Quorum: 1 > Active subsystems: 2 > Node name: node1.localdomain > Node ID: 2 > > [root at ankebut]# cat /tmp/quorum-status > Time Stamp: Mon Feb 4 10:29:08 2008 > Node ID: 2 > Score: 1/1 (Minimum required = 1) > Current state: Master > Initializing Set: { } > Visible Set: { 1 2 } > Master Node ID: 2 > Quorate Set: { 1 2 } > > > ====> RAW partitions for Oracle <==== > in vmx file: > #SHARED Disk 1 > scsi1.present 
= "TRUE" > scsi1.virtualDev = "lsilogic" > scsi1.sharedBus = "virtual" > scsi1:0.present = "TRUE" > scsi1:0.fileName = "D:\VmWare\clusterdisks\SharedDisk.vmdk" > scsi1:0.mode = "independent-persistent" > scsi1:0.redo = "" > scsi1:0.deviceType = "disk" > > #SHARED BUS settings > disk.locking = "FALSE" > diskLib.dataCacheMaxReadAheadSize = "0" > diskLib.dataCacheMinReadAheadSize = "0" > diskLib.dataCacheMaxSize = "0" > diskLib.dataCachePageSize = "4096" > diskLib.maxUnsyncedWrites = "0" > > [root at ankebut]# cat /etc/hosts > # Do not remove the following line, or various programs > # that require network functionality will fail. > > 10.81.4.98 ankebut.domain.com ankebut > 172.16.40.10 node1.localdomain node1 > 10.81.4.104 yildiz.domain.com yildiz > 172.16.40.11 node2.localdomain node2 > 127.0.0.1 localhost.localdomain localhost > > Installing this packages from > http://mirror.centos.org/centos/4/csgfs/CentOS-csgfs.repo: > rgmanager-1.9.72-1 > system-config-cluster-1.0.51-2.0 > ccs-1.0.11-1 > magma-1.0.8-1 > magma-plugins-1.0.12-0 > cman-1.0.17-0 > cman-kernel-2.6.9-53.5 > gulm-1.0.10-0 > dlm-1.0.7-1 > dlm-kernel-2.6.9-52.2 > fence-1.32.50-2 > iddev-2.0.0-4 > perl-Net-Telnet-3.03-3 > > [root at ankebut ~]# fdisk /dev/sda > Command (m for help): p > > Disk /dev/sda: 8589 MB, 8589934592 bytes > 255 heads, 63 sectors/track, 1044 cylinders > Units = cylinders of 16065 * 512 = 8225280 bytes > > Device Boot Start End Blocks Id System > /dev/sda1 1 522 4192933+ 83 Linux > /dev/sda2 523 1044 4192965 83 Linux > > [root at ankebut]# cat /etc/sysconfig/rawdevices > # This file and interface are deprecated. > # Applications needing raw device access should open regular > # block devices with O_DIRECT. 
> # raw device bindings > # format: > # > # example: /dev/raw/raw1 /dev/sda1 > # /dev/raw/raw2 8 5 > /dev/raw/raw1 /dev/sda1 > /dev/raw/raw2 /dev/sda2 > > [root at ankebut]# service rawdevices restart > Assigning devices: > /dev/raw/raw1 --> /dev/sda1 > /dev/raw/raw1: bound to major 8, minor 1 > /dev/raw/raw2 --> /dev/sda2 > /dev/raw/raw2: bound to major 8, minor 2 > Done > > [root at ankebut]# raw -qa > /dev/raw/raw1: bound to major 8, minor 1 > /dev/raw/raw2: bound to major 8, minor 2 > > > [root at ankebut /]# mkfs.ext3 -j -b 4096 /dev/sda1 > [root at ankebut /]# mkfs.ext3 -j -b 4096 /dev/sda2 > > [root at ankebut]# cat /etc/cluster/cluster.conf > :(, I dont know how to create cluster.conf for raw configuration in my case > > TEST RAW DISK > I dont know how to test RAW configuration > > > Please, check my configurations and please tell me where i make > mistake, if i make :) > Thanks. > > With Regards > Asim > > > On Feb 7, 2008 4:40 PM, Asim Husanovic wrote: > > Thanks. > > Asim > > > > > > On Feb 7, 2008 4:28 PM, Brian Kroth wrote: > > > Here are some quick examples. There are almost certainly other ways to do > > > it. > > > > > > Asim Husanovic : > > > > Hi > > > > > > > > How to identify shared disk/volumes? > > > > > > scsi_id > > > > > > > How to collect the cluster FS information? > > > > > > gfs_tool > > > > > > > How to display shared disks on node/s? Which node/s? > > > > > > gfs_tool list > > > cman_tool services > > > > > > > > > You can wrap all of these inside snmp scripts/oids or use nagios passive > > > checks if you want to monitor them remotely/automatically. 
> > > Brian
> > >

From lhh at redhat.com Tue Feb 12 14:50:32 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Tue, 12 Feb 2008 09:50:32 -0500
Subject: [Linux-cluster] 2-node tie-breaking
In-Reply-To:
References: <1202412937.2938.55.camel@localhost.localdomain> <47AB71D2.2040201@bobich.net> <1202421366.2938.70.camel@localhost.localdomain> <47AB840C.8040905@bobich.net> <20080207223951.GB15215@localhost> <20080207230727.GE63223@widexs.nl> <1202483577.21504.96.camel@ayanami.boston.devel.redhat.com>
Message-ID: <1202827832.2970.3.camel@localhost.localdomain>

On Fri, 2008-02-08 at 15:32 +0000, gordan at bobich.net wrote:
> I just had another thought. Most motherboards these days are ATX, which
> means ATX type "short-pins-to-power-on" power switches. That means that as
> a _REALLY_ cheap solution I could just get something like a small relay
> switch and wire it into the serial port. When a pin on RS232 goes high
> (e.g. DTR), it activates the switch. I think it would be pretty reliable,
> and the total cost of components would be pennies. The fence agent would
> also be a total of about 10 lines of code, too. :)

It'd have to be a 'press-and-hold' sort of thing. E.g. either activate or deactivate DTR for 5+ seconds. ;)

"Reboot"
- Power-off: +dtr, sleep 5, -dtr
- Power-on: +dtr, -dtr

This reminds me of the 'clapper' agent someone suggested awhile ago.
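[Editorial aside, not part of the original thread.] The ~10-line fence agent discussed above can be sketched as below. DTR_CMD is a placeholder hook for whatever actually raises and drops DTR on the serial port (e.g. a tiny helper doing TIOCMBIS/TIOCMBIC modem-line ioctls on /dev/ttyS0 - not provided here); it defaults to echo so the press-and-hold sequence can be inspected without any hardware:

```shell
#!/bin/sh
# Sketch of a serial/relay "power button" fence agent.
# DTR_CMD on / DTR_CMD off stand in for raising/dropping DTR.
DTR_CMD=${DTR_CMD:-echo dtr}
SLEEP=${SLEEP:-sleep}

power_off() {          # hold the "button" >4s to force an ATX power-off
    $DTR_CMD on
    $SLEEP 5
    $DTR_CMD off
}

power_on() {           # short press to power the board back on
    $DTR_CMD on
    $SLEEP 1
    $DTR_CMD off
}

reboot_node() {        # "reboot" = power off, let the PSU settle, power on
    power_off
    $SLEEP 2
    power_on
}
```

A fence agent built this way inherits the usual caveat of the scheme above: it only confirms that it toggled the line, not that the node actually lost power.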
-- Lon From lhh at redhat.com Tue Feb 12 15:59:03 2008 From: lhh at redhat.com (Lon Hohberger) Date: Tue, 12 Feb 2008 10:59:03 -0500 Subject: [Linux-cluster] watchdog: rebootig In-Reply-To: <1202765364.20118.6.camel@ayanami.boston.devel.redhat.com> References: <1939546029@web.de> <1202741846.6443.148.camel@ayanami.boston.devel.redhat.com> <1202765364.20118.6.camel@ayanami.boston.devel.redhat.com> Message-ID: <1202831943.2970.5.camel@localhost.localdomain> On Mon, 2008-02-11 at 16:29 -0500, Lon Hohberger wrote: > On Mon, 2008-02-11 at 11:18 -0600, Stulic,Damjan wrote: > > > rgmanager-1.9.54-1.x86_64 > > Name : rgmanager Relocations: (not > > relocatable) > > Version : 1.9.54 Vendor: Red Hat, Inc. > > Release : 1 Build Date: Fri 06 Oct 2006 > > 11:43:10 AM CDT > > Ah, it's probably the rg_thread.c assertion failure; here's the patch: > > https://bugzilla.redhat.com/attachment.cgi?id=139908&action=edit > > Apparently we never opened the bug (for public viewing), but it was > fixed in the 4.5 updates (note... recent ccs package would be needed > too). Ah - here it is: https://bugzilla.redhat.com/show_bug.cgi?id=250085 -- Lon From rpeterso at redhat.com Tue Feb 12 17:30:50 2008 From: rpeterso at redhat.com (Bob Peterson) Date: Tue, 12 Feb 2008 11:30:50 -0600 Subject: [Linux-cluster] GFS1 NFS problems under 2.6.25-rc1 (was: GFS2 loses data under kernel 2.6.24...) In-Reply-To: <38964.155.82.73.253.1202764106.squirrel@www.r-networks.net> References: <1202446917.5942.19.camel@eclipse.office.r-networks.net> <1202463059.22038.421.camel@quoit> <38964.155.82.73.253.1202764106.squirrel@www.r-networks.net> Message-ID: <1202837450.2701.6.camel@technetium.msp.redhat.com> On Mon, 2008-02-11 at 16:08 -0500, Glen Dosey wrote: > The gfs module builds successfully against the 2.6.25-rc1 kernel and > loads. Locally on the server everything appears to work, but on the GFs > filesystem NFS exported from the server, clients can create files but they > sometimes fail to store any data. 
> For example:

Hi Glen,

I'll try to verify that the HEAD branch contains all of the recent fixes found in the RHEL5 branch. I'll also try to recreate the problem when I can. In the meantime, can you open up a bugzilla record for this problem and assign it to me? Thanks.

Regards,

Bob Peterson
Red Hat GFS

From wferi at niif.hu Tue Feb 12 18:13:10 2008
From: wferi at niif.hu (Wagner Ferenc)
Date: Tue, 12 Feb 2008 19:13:10 +0100
Subject: [Linux-cluster] no version for "gfs2_unmount_lockproto"
Message-ID: <878x1q59hl.fsf@szonett.ki.iif.hu>

Hi,

I've compiled cluster-2.01.00 against Linux 2.6.23.16. On modprobe gfs I got the following two kernel messages:

gfs: no version for "gfs2_unmount_lockproto" found: kernel tainted.
GFS 2.01.00 (built Feb 12 2008 14:42:50) installed

Strange. I went on, and the mount command froze (Ctrl-C can't kill it, though it isn't in D state: it's consuming 5% of CPU) with these messages:

Trying to join cluster "lock_dlm", "pilot:test"
Joined cluster. Now mounting FS...
GFS: fsid=pilot:test.4294967295: can't mount journal #4294967295
GFS: fsid=pilot:test.4294967295: there are only 6 journals (0 - 5)

Right before issuing the mount command, cman_tool services reported:

type    level name     id       state
fence   0     default  00010001 none
[1 3]
dlm     1     clvmd    00020001 none
[1 3]

On node 3, the same command reports:

type    level name     id       state
fence   0     default  00010001 none
[1 3]
dlm     1     clvmd    00020001 none
[1 3]
dlm     1     test     00020002 none
[1 3]
gfs     2     test     00010003 none
[3]

This node can indeed use the filesystem. It's running Linux 2.6.23.8 with the same cluster suite. Does anybody have an idea what can be wrong?
--
Thanks,
Feri.
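[Editorial aside, not part of the original thread.] The impossible journal number in the mount failure above, 4294967295, is simply (uint32_t)-1, i.e. 2^32 - 1. That suggests the mount never had a journal id assigned (the id was still -1, printed as unsigned) before the error path ran - which fits a lock-module problem like the missing-symbol one, rather than a filesystem that really claims four billion journals. A one-liner to see the wraparound (assumes the usual 64-bit shell arithmetic):

```shell
#!/bin/sh
# 2^32 - 1, the unsigned-32-bit representation of -1
echo $(( (1 << 32) - 1 ))
```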
From rpeterso at redhat.com Tue Feb 12 18:34:34 2008 From: rpeterso at redhat.com (Bob Peterson) Date: Tue, 12 Feb 2008 12:34:34 -0600 Subject: [Linux-cluster] no version for "gfs2_unmount_lockproto" In-Reply-To: <878x1q59hl.fsf@szonett.ki.iif.hu> References: <878x1q59hl.fsf@szonett.ki.iif.hu> Message-ID: <1202841274.2701.15.camel@technetium.msp.redhat.com> On Tue, 2008-02-12 at 19:13 +0100, Wagner Ferenc wrote: > Hi, > > I've compiled cluster-2.01.00 against Linux 2.6.23.16. On modprobe > gfs I got the following two kernel messages: > > gfs: no version for "gfs2_unmount_lockproto" found: kernel tainted. > GFS 2.01.00 (built Feb 12 2008 14:42:50) installed Hi Wagner, The HEAD / RHEL5 / (similar) versions of GFS use part of gfs2's locking infrastructure. For RHEL5, we did a patch to export those symbols from GFS2. The patch looks like the one I have below. So the GFS2 module has to be loaded for GFS to work and that's just so GFS and GFS2 share the same locking protocol. (i.e. people can mount both gfs and gfs2 at the same time and go through the same dlm). The exporting of those symbols did not get pushed into upstream GFS2 because it's only needed for GFS(1), which itself isn't part of the upstream kernel. If you add these symbol exports to GFS2 it should allow GFS to mount properly. 
Regards,

Bob Peterson
Red Hat GFS
--
--- a/fs/gfs2/locking.c	2008-02-11 11:10:57.000000000 -0600
+++ b/fs/gfs2/locking.c	2008-02-08 14:10:36.000000000 -0600
@@ -181,4 +181,6 @@ void gfs2_withdraw_lockproto(struct lm_l
 
 EXPORT_SYMBOL_GPL(gfs2_register_lockproto);
 EXPORT_SYMBOL_GPL(gfs2_unregister_lockproto);
-
+EXPORT_SYMBOL_GPL(gfs2_withdraw_lockproto);
+EXPORT_SYMBOL_GPL(gfs2_mount_lockproto);
+EXPORT_SYMBOL_GPL(gfs2_unmount_lockproto);

From bpkroth at wisc.edu Tue Feb 12 18:46:26 2008
From: bpkroth at wisc.edu (Brian Kroth)
Date: Tue, 12 Feb 2008 12:46:26 -0600
Subject: [Linux-cluster] no version for "gfs2_unmount_lockproto"
In-Reply-To: <1202841274.2701.15.camel@technetium.msp.redhat.com>
References: <878x1q59hl.fsf@szonett.ki.iif.hu> <1202841274.2701.15.camel@technetium.msp.redhat.com>
Message-ID: <20080212184626.GH10361@wisc.edu>

Bob Peterson :
> Hi Wagner,
>
> The HEAD / RHEL5 / (similar) versions of GFS use part of gfs2's
> locking infrastructure. For RHEL5, we did a patch to export
> those symbols from GFS2. The patch looks like the one I have
> below. So the GFS2 module has to be loaded for GFS to work
> and that's just so GFS and GFS2 share the same locking protocol.
> (i.e. people can mount both gfs and gfs2 at the same time and
> go through the same dlm). The exporting of those symbols did
> not get pushed into upstream GFS2 because it's only needed for
> GFS(1), which itself isn't part of the upstream kernel.
>
> If you add these symbol exports to GFS2 it should allow GFS
> to mount properly.
>
> Regards,
>
> Bob Peterson
> Red Hat GFS
> --
> --- a/fs/gfs2/locking.c	2008-02-11 11:10:57.000000000 -0600
> +++ b/fs/gfs2/locking.c	2008-02-08 14:10:36.000000000 -0600
> @@ -181,4 +181,6 @@ void gfs2_withdraw_lockproto(struct lm_l
>
>  EXPORT_SYMBOL_GPL(gfs2_register_lockproto);
>  EXPORT_SYMBOL_GPL(gfs2_unregister_lockproto);
> -
> +EXPORT_SYMBOL_GPL(gfs2_withdraw_lockproto);
> +EXPORT_SYMBOL_GPL(gfs2_mount_lockproto);
> +EXPORT_SYMBOL_GPL(gfs2_unmount_lockproto);

I'm curious. Are there plans to include this patch in the mainline? Any reasons why or why not?

Thanks,
Brian

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 2192 bytes
Desc: not available
URL:

From alaack at ustrap.com Tue Feb 12 18:53:53 2008
From: alaack at ustrap.com (Andrea Laack)
Date: Tue, 12 Feb 2008 12:53:53 -0600
Subject: [Linux-cluster] cluster crash problem
Message-ID: <200802121835.m1CIZBCb015988@gate.ustrap.com>

We are running RHEL 3.0 with version 1.0.3 of the Red Hat Cluster Suite. We are utilizing a Promise Vtrak 15200 for shared storage and an Adaptec ASA-7211C iSCSI initiator. We are having problems with any process that uses high I/O across the iSCSI link. Last night a DBA attempted to create an Oracle instance on the shared storage. The cluster crashed and failed over to the backup node. Nothing in the logs, with the log level set at 6. The only indication I have that something happened is from the graphs of the disk I/O (HotSanic). This shows 69.42 petabytes (yes, it shows petabytes). We are using a watchdog timer. This has happened before when copying *very* large amounts of data that includes *very* large files. Many small files do not cause the cluster to crash. Has anyone seen this type of problem? Any help will be sincerely appreciated. Adaptec will only talk to me if I pay them $199/phone call.
Thanks,
Andrea

Andrea Laack
Network Administrator
Universal Strap
W209N17500 Industrial Drive
Jackson, WI 53037
262-677-3641 Ext 5220

From rpeterso at redhat.com  Tue Feb 12 19:06:30 2008
From: rpeterso at redhat.com (Bob Peterson)
Date: Tue, 12 Feb 2008 13:06:30 -0600
Subject: [Linux-cluster] no version for "gfs2_unmount_lockproto"
In-Reply-To: <20080212184626.GH10361@wisc.edu>
References: <878x1q59hl.fsf@szonett.ki.iif.hu>
	<1202841274.2701.15.camel@technetium.msp.redhat.com>
	<20080212184626.GH10361@wisc.edu>
Message-ID: <1202843190.2701.29.camel@technetium.msp.redhat.com>

On Tue, 2008-02-12 at 12:46 -0600, Brian Kroth wrote:
> I'm curious.  Are there plans to include this patch in the mainline?  Any
> reasons why or why not?
>
> Thanks,
> Brian

Hi Brian,

That's a gfs2 patch, and gfs2 "mainline" is the upstream git tree from
kernel.org.  We've never tried to add that patch to the upstream kernel
because we figured it would be rejected by the upstream community.
Those people likely wouldn't accept the suggestion that it should be
added "just in case someone wants to run out-of-tree gfs on top of
gfs2."  After all, the patch is really only useful if gfs (vers. 1) is
required, but gfs is not part of that upstream kernel.

From what I've heard, the Fedora community doesn't really accept
anything along those lines anymore either.  So far they've let us get
away with it, but I've heard that it's unacceptable for us going
forward (Fedora 9 or 10?  I'm not sure where the cutoff is).  So lately
we've been discussing whether/how to separate out the gfs1 and gfs2
infrastructure so that there isn't that crossover.  That issue is still
work-in-progress.
Regards,

Bob Peterson
Red Hat GFS

From swhiteho at redhat.com  Tue Feb 12 19:52:05 2008
From: swhiteho at redhat.com (Steven Whitehouse)
Date: Tue, 12 Feb 2008 19:52:05 +0000
Subject: [Linux-cluster] no version for "gfs2_unmount_lockproto"
In-Reply-To: <1202843190.2701.29.camel@technetium.msp.redhat.com>
References: <878x1q59hl.fsf@szonett.ki.iif.hu>
	<1202841274.2701.15.camel@technetium.msp.redhat.com>
	<20080212184626.GH10361@wisc.edu>
	<1202843190.2701.29.camel@technetium.msp.redhat.com>
Message-ID: <1202845925.3740.15.camel@localhost.localdomain>

Hi,

On Tue, 2008-02-12 at 13:06 -0600, Bob Peterson wrote:
> On Tue, 2008-02-12 at 12:46 -0600, Brian Kroth wrote:
> > I'm curious.  Are there plans to include this patch in the mainline?  Any
> > reasons why or why not?
> >
> > Thanks,
> > Brian
>
> Hi Brian,
>
> That's a gfs2 patch, and gfs2 "mainline" is the upstream git tree from
> kernel.org.  We've never tried to add that patch to the upstream kernel
> because we figured it would be rejected by the upstream community.
> Those people likely wouldn't accept the suggestion that it should be
> added "just in case someone wants to run out-of-tree gfs on top of
> gfs2."  After all, the patch is really only useful if gfs (vers. 1) is
> required, but gfs is not part of that upstream kernel.
>
> From what I've heard, the Fedora community doesn't really accept
> anything along those lines anymore either.  So far they've let us get
> away with it, but I've heard that it's unacceptable for us going
> forward (Fedora 9 or 10?  I'm not sure where the cutoff is).  So lately
> we've been discussing whether/how to separate out the gfs1 and gfs2
> infrastructure so that there isn't that crossover.  That issue is still
> work-in-progress.
>
> Regards,
>
> Bob Peterson
> Red Hat GFS
>

Just to confirm some of the details, F-8 is the last one which will have
our exports in it.
The reason that we haven't pushed the export patch upstream is that in the future for GFS2 we intend to get rid of that particular interface, so we didn't want to export it in case someone else started using it and thus prevented or slowed that process. Also exports for use by out-of-tree code tend to be frowned upon. The plan is that the interface between the filesystem and the lock manager will be the DLM's interface, so we will still allow pluggable lock managers, but just at a slightly different level in the code. So at some stage we'll need to duplicate the lock modules for GFS anyway, and about now seems as good a time as any to start that process, Steve. From swhiteho at redhat.com Tue Feb 12 20:02:19 2008 From: swhiteho at redhat.com (Steven Whitehouse) Date: Tue, 12 Feb 2008 20:02:19 +0000 Subject: [Linux-cluster] 2-node tie-breaking In-Reply-To: <1202827832.2970.3.camel@localhost.localdomain> References: <1202412937.2938.55.camel@localhost.localdomain> <47AB71D2.2040201@bobich.net> <1202421366.2938.70.camel@localhost.localdomain> <47AB840C.8040905@bobich.net> <20080207223951.GB15215@localhost> <20080207230727.GE63223@widexs.nl> <1202483577.21504.96.camel@ayanami.boston.devel.redhat.com> <1202827832.2970.3.camel@localhost.localdomain> Message-ID: <1202846539.3740.24.camel@localhost.localdomain> Hi, On Tue, 2008-02-12 at 09:50 -0500, Lon Hohberger wrote: > On Fri, 2008-02-08 at 15:32 +0000, gordan at bobich.net wrote: > > > I just had another thought. Most motherboards these days are ATX, which > > means ATX type "short-pins-to-power-on" power switches. That means that as > > a _REALLY_ cheap solution I could just get something like a small relay > > switch and wire it into the serial port. When a pin on RS232 goes high > > (e.g. DTR), it activates the switch. I think it would be pretty reliable, > > and the total cost of components would be pennies. The fence agent would > > also be a total of about 10 lines of code, too. 
> > :)
>
> It'd have to be a 'press-and-hold' sort of thing.  E.g. either activate
> or deactivate DTR for 5+ seconds. ;)
>
I did something similar in the early days of Sistina.  I used an (ISA
bus!) lab card with 12 relays on it, one of which was connected across
the reset switch of my test machine.  Together with a serial console,
that gave me all the control that I generally needed when working
remotely.

It's relatively unlikely that an RS-232 port would supply enough current
to drive a relay directly, but you might find a suitable alternative in
an opto-isolator, and that would also have the advantage of not being
inductive, and thus it won't potentially put a surge onto your
power-rails in case your decoupling isn't 100%.  It's also cheaper.

Steve.

From jakub.suchy at enlogit.cz  Tue Feb 12 20:36:43 2008
From: jakub.suchy at enlogit.cz (Jakub Suchy)
Date: Tue, 12 Feb 2008 21:36:43 +0100
Subject: [Linux-cluster] 2-node tie-breaking
In-Reply-To: <1202846539.3740.24.camel@localhost.localdomain>
References: <1202412937.2938.55.camel@localhost.localdomain>
	<47AB71D2.2040201@bobich.net>
	<1202421366.2938.70.camel@localhost.localdomain>
	<47AB840C.8040905@bobich.net>
	<20080207223951.GB15215@localhost>
	<20080207230727.GE63223@widexs.nl>
	<1202483577.21504.96.camel@ayanami.boston.devel.redhat.com>
	<1202827832.2970.3.camel@localhost.localdomain>
	<1202846539.3740.24.camel@localhost.localdomain>
Message-ID: <20080212203643.GA29087@localhost>

> It's relatively unlikely that an RS-232 port would supply enough current
> to drive a relay directly, but you might find a suitable alternative in
> an opto-isolator, and that would also have the advantage of not being
> inductive, and thus it won't potentially put a surge onto your
> power-rails in case your decoupling isn't 100%.  It's also cheaper.

Czech Republic is a Wi-Fi giant.  As Wi-Fi access points have been
really unreliable, many Wi-Fi amateurs have developed watchdogs of
their own.
This is one of them:
http://www.simandl.cz/stranky/elektro/resetator/resetator.htm

In Czech only, but it may be OK for an experienced developer.  I can
provide a rough translation if needed.

Jakub

--
Jakub Suchý
GSM: +420 - 777 817 949
Enlogit s.r.o, U Cukrovaru 509/4, 400 07 Ústí nad Labem
tel.: +420 - 474 745 159, fax: +420 - 474 745 160
e-mail: info at enlogit.cz, web: http://www.enlogit.cz

From christopher.barry at qlogic.com  Tue Feb 12 20:51:00 2008
From: christopher.barry at qlogic.com (Christopher Barry)
Date: Tue, 12 Feb 2008 14:51:00 -0600
Subject: [Linux-cluster] Here is my cluster node shutdown script
Message-ID: 

Hi All,

Just throwing this out there in case anyone has a use for it.  It works
well for me, and I know that until I wrote it, shutdowns were painful.
(they were fences, really ;)

Improvements and/or comments welcome.  A small way of giving back ;)

Cheers,
-C
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: shutdown_node.bz2
Type: application/x-bzip
Size: 1244 bytes
Desc: shutdown_node.bz2
URL: 

From wferi at niif.hu  Tue Feb 12 21:29:45 2008
From: wferi at niif.hu (Ferenc Wagner)
Date: Tue, 12 Feb 2008 22:29:45 +0100
Subject: [Linux-cluster] no version for "gfs2_unmount_lockproto"
In-Reply-To: <1202841274.2701.15.camel@technetium.msp.redhat.com> (Bob
	Peterson's message of "Tue, 12 Feb 2008 12:34:34 -0600")
References: <878x1q59hl.fsf@szonett.ki.iif.hu>
	<1202841274.2701.15.camel@technetium.msp.redhat.com>
Message-ID: <87skzxq2wm.fsf@szonett.ki.iif.hu>

Bob Peterson writes:

> On Tue, 2008-02-12 at 19:13 +0100, Wagner Ferenc wrote:
>
>> I've compiled cluster-2.01.00 against Linux 2.6.23.16.  On modprobe
>> gfs I got the following two kernel messages:
>>
>> gfs: no version for "gfs2_unmount_lockproto" found: kernel tainted.
>> GFS 2.01.00 (built Feb 12 2008 14:42:50) installed
>
> The HEAD / RHEL5 / (similar) versions of GFS use part of gfs2's
> locking infrastructure.  For RHEL5, we did a patch to export
> those symbols from GFS2.  The patch looks like the one I have
> below. [...]
>
> --- a/fs/gfs2/locking.c	2008-02-11 11:10:57.000000000 -0600
> +++ b/fs/gfs2/locking.c	2008-02-08 14:10:36.000000000 -0600
> @@ -181,4 +181,6 @@ void gfs2_withdraw_lockproto(struct lm_l
>
>  EXPORT_SYMBOL_GPL(gfs2_register_lockproto);
>  EXPORT_SYMBOL_GPL(gfs2_unregister_lockproto);
> -
> +EXPORT_SYMBOL_GPL(gfs2_withdraw_lockproto);
> +EXPORT_SYMBOL_GPL(gfs2_mount_lockproto);
> +EXPORT_SYMBOL_GPL(gfs2_unmount_lockproto);

Actually, I also patched my kernel tree like this.  In cases when I
forgot it, I wasn't even allowed to load the gfs module into the
kernel.  In this case the "tainted" warning was related to a slight
vermagic mismatch, and after recompiling everything properly, it went
away.

But the issue remained: the mount command just sits there, consuming
some CPU, and by now I've got the following console output (with my
notes in the brackets):

[modprobe gfs]
GFS 2.01.00 (built Feb 12 2008 22:07:48) installed
[starting the cluster infrastructure]
dlm: Using TCP for communications
dlm: connecting to 3
dlm: got connection from 3
[mount /mnt]
Trying to join cluster "lock_dlm", "pilot:test"
Joined cluster. Now mounting FS...
GFS: fsid=pilot:test.4294967295: can't mount journal #4294967295
GFS: fsid=pilot:test.4294967295: there are only 6 journals (0 - 5)
[a couple of minutes passed here]
GFS: fsid=pilot:test.4294967295: Unmount seems to be stalled. Dumping lock state...
Glock (2, 25)
  gl_flags =
  gl_count = 2
  gl_state = 0
  req_gh = no
  req_bh = no
  lvb_count = 0
  object = yes
  new_le = no
  incore_le = no
  reclaim = no
  aspace = 0
  ail_bufs = no
  Inode:
    num = 25/25
    type = 1
    i_count = 1
    i_flags =
    vnode = no
Glock (5, 25)
  gl_flags =
  gl_count = 2
  gl_state = 3
  req_gh = no
  req_bh = no
  lvb_count = 0
  object = yes
  new_le = no
  incore_le = no
  reclaim = no
  aspace = no
  ail_bufs = no
  Holder
    owner = -1
    gh_state = 3
    gh_flags = 5 7
    error = 0
    gh_iflags = 1 6 7

Now, mount is still stalled, and still consumes 6% of CPU.
--
Regards,
Feri.

From l_x2828 at yahoo.com  Tue Feb 12 21:32:32 2008
From: l_x2828 at yahoo.com (BJ)
Date: Tue, 12 Feb 2008 13:32:32 -0800 (PST)
Subject: [Linux-cluster] Help - how to check if a node is still healthy
	after upgrade GFS on it?
Message-ID: <681346.60850.qm@web55508.mail.re4.yahoo.com>

Hi,

Could someone please tell me the safe/best way to check if a cluster
node is still healthy after we upgrade GFS on it?  I guess it's probably
not very safe to call it healthy just by verifying that the node can
rejoin the cluster after being upgraded and then rebooted.

A bit more information on the upgrade: we upgraded one of the three
nodes in a cluster from GFS-6.0.2-25 to GFS-6.0.2.36-1.

Regards,
BJ

--
____________________________________________________________________________________
Never miss a thing.  Make Yahoo your home page.
http://www.yahoo.com/r/hs

From randy.brown at noaa.gov  Tue Feb 12 22:03:03 2008
From: randy.brown at noaa.gov (Randy Brown)
Date: Tue, 12 Feb 2008 17:03:03 -0500
Subject: [Linux-cluster] One node rebooting quite often
Message-ID: <47B21797.90005@noaa.gov>

I am attaching the relevant log entries from a node that is rebooting
about twice per day on average.  I can't seem to pinpoint why, and I was
hoping someone could identify something I'm overlooking.  I will gladly
provide more information as needed.
I'm running a two-node NFS cluster on CentOS 5 with the following
packages:

rgmanager-2.0.31-1.el5.centos
gfs2-utils-0.1.38-1.el5
gfs-utils-0.1.12-1.el5
system-config-lvm-1.0.22-1.0.el5
cman-2.0.73-1.el5_1.1
lvm2-cluster-2.02.26-1.el5
kmod-gfs-0.1.16-6.2.6.18_8.1.15.el5
lvm2-2.02.26-3.el5
kmod-gfs-0.1.19-7.el5_1.1
kernel-2.6.18-53.1.6.el5

Thanks!

Randy
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cluster.log
Type: text/x-log
Size: 190498 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cluster.conf
Type: text/xml
Size: 5045 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: randy.brown.vcf
Type: text/x-vcard
Size: 313 bytes
Desc: not available
URL: 

From rpeterso at redhat.com  Tue Feb 12 22:43:22 2008
From: rpeterso at redhat.com (Bob Peterson)
Date: Tue, 12 Feb 2008 16:43:22 -0600
Subject: [Linux-cluster] no version for "gfs2_unmount_lockproto"
In-Reply-To: <87skzxq2wm.fsf@szonett.ki.iif.hu>
References: <878x1q59hl.fsf@szonett.ki.iif.hu>
	<1202841274.2701.15.camel@technetium.msp.redhat.com>
	<87skzxq2wm.fsf@szonett.ki.iif.hu>
Message-ID: <1202856202.2701.43.camel@technetium.msp.redhat.com>

On Tue, 2008-02-12 at 22:29 +0100, Ferenc Wagner wrote:
> Actually, I also patched my kernel tree like this.  In cases when I
> forgot it, I wasn't even allowed to load the gfs module into the
> kernel.  In this case the "tainted" warning was related to a slight
> vermagic mismatch, and after recompiling everything properly, it went
> away.
>
> But the issue remained: the mount command just sits there, consuming
> some CPU, and by now I've got the following console output (with my
> notes in the brackets):
>
> Now, mount is still stalled, and still consumes 6% of CPU.

Hi Ferenc,

This gfs hang on failed mounts is documented in bugzilla bug #425421,
and I've already got a patch for it.
Since this bug was reported internally to Red Hat, I don't know if this
bug record is viewable by the public.  (I don't have control over the
permission bits and how they default, so don't shoot the messenger.) :7)

The fix has not been shipped yet due to code freeze, but I'll attach
the patch that fixes it below.

Regards,

Bob Peterson
Red Hat GFS
--
Index: ops_fstype.c
===================================================================
RCS file: /cvs/cluster/cluster/gfs-kernel/src/gfs/ops_fstype.c,v
retrieving revision 1.28.2.5
diff -w -u -p -p -u -r1.28.2.5 ops_fstype.c
--- ops_fstype.c	19 Jun 2007 21:06:10 -0000	1.28.2.5
+++ ops_fstype.c	1 Feb 2008 20:16:38 -0000
@@ -388,6 +388,7 @@ out:
 	return error;
 
 fail_dput:
+	gfs_inode_put(sdp->sd_linode);
 	if (sb->s_root) {
 		dput(sb->s_root);
 		sb->s_root = NULL;

From nhuczp at gmail.com  Wed Feb 13 05:51:43 2008
From: nhuczp at gmail.com (chenzp)
Date: Wed, 13 Feb 2008 13:51:43 +0800
Subject: [Linux-cluster] RHCS 5.1 latest packages, 2-node cluster,
	doesn't come up with only 1 node
In-Reply-To: <20080208132257.M22787@webbertek.com.br>
References: <20080208130449.M21432@webbertek.com.br>
	<20080208132257.M22787@webbertek.com.br>
Message-ID: <6cc6db6f0802122151j23720c86jc70c834420371bf3@mail.gmail.com>

2008/2/8, Celso K. Webber :
>
> Hello,
>
> I forgot to add some versioning information from the Cluster packages,
> here they are:
>
> * Main cluster packages:
> cman-2.0.73-1.el5_1.1.x86_64.rpm
> openais-0.80.3-7.el5.x86_64.rpm
> perl-Net-Telnet-3.03-5.noarch.rpm
>
> * Admin tools packages:
> Cluster_Administration-en-US-5.1.0-7.noarch.rpm
> cluster-cim-0.10.0-5.el5_1.1.x86_64.rpm
> cluster-snmp-0.10.0-5.el5_1.1.x86_64.rpm
> luci-0.10.0-6.el5.x86_64.rpm
> modcluster-0.10.0-5.el5_1.1.x86_64.rpm
> rgmanager-2.0.31-1.el5.x86_64.rpm
> ricci-0.10.0-6.el5.x86_64.rpm

Please use luci and ricci to build the cluster!
RHEL-5-manual: http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/en-US/RHEL510/Cluster_Administration/s1-start-luci-ricci-conga-CA.html Conga User Manual: http://sources.redhat.com/cluster/conga/doc/user_manual.html system-config-cluster-1.0.50-1.3.noarch.rpm > tog-pegasus-2.6.1-2.el5_1.1.*.rpm > oddjob-*.rpm > > Thank you, > > Celso. > > On Fri, 8 Feb 2008 11:18:20 -0200, Celso K. Webber wrote > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wferi at niif.hu Wed Feb 13 08:23:00 2008 From: wferi at niif.hu (Ferenc Wagner) Date: Wed, 13 Feb 2008 09:23:00 +0100 Subject: [Linux-cluster] no version for "gfs2_unmount_lockproto" In-Reply-To: <1202856202.2701.43.camel@technetium.msp.redhat.com> (Bob Peterson's message of "Tue, 12 Feb 2008 16:43:22 -0600") References: <878x1q59hl.fsf@szonett.ki.iif.hu> <1202841274.2701.15.camel@technetium.msp.redhat.com> <87skzxq2wm.fsf@szonett.ki.iif.hu> <1202856202.2701.43.camel@technetium.msp.redhat.com> Message-ID: <87bq6lp8nv.fsf@szonett.ki.iif.hu> Bob Peterson writes: > On Tue, 2008-02-12 at 22:29 +0100, Ferenc Wagner wrote: >> Actually, I also patched my kernel tree like this. In cases when I >> forgot it, I wasn't even allowed to load the gfs module into the >> kernel. In this case the "tainted" warning was related to a slight >> vermagic mismatch, and after recompiling everything properly, it went >> away. >> >> But the issue remained: the mount command just sits there, consuming >> some CPU, and by now I've got the following console output (with my >> notes in the brackets): > >> Now, mount is still stalled, and still consumes 6% of CPU. > > This gfs hang on failed mounts is documented in bugzilla bug #425421, > and I've already got a patch for it. Since this bug was reported > internally to Red Hat, I don't know if this bug record is viewable by > the public. (I don't have control over the permission bits and how > they default, so don't shoot the messenger.) 
> :7)

:) I certainly wouldn't have, especially as you provided the patch.
But no worries, the bugzilla entry was perfectly accessible to me.

> The fix has not been shipped yet due to code freeze, but I'll attach
> the patch that fixes it below.

Thanks!  This patch indeed fixed the hang.  But of course not the mount:

Trying to join cluster "lock_dlm", "pilot:test"
Joined cluster. Now mounting FS...
GFS: fsid=pilot:test.4294967295: can't mount journal #4294967295
GFS: fsid=pilot:test.4294967295: there are only 6 journals (0 - 5)

A stab in the dark:

# gfs_tool jindex /dev/mapper/gfs-test
gfs_tool: /dev/mapper/gfs-test is not a GFS file/filesystem

Scary.  What may be the problem?  The other node is using this
volume... it can even unmount/remount it.  Though in dmesg it says:

GFS: fsid=pilot:test.0: jid=0: Trying to acquire journal lock...
GFS: fsid=pilot:test.0: jid=0: Looking at journal...
GFS: fsid=pilot:test.0: jid=0: Done
GFS: fsid=pilot:test.0: jid=1: Trying to acquire journal lock...
GFS: fsid=pilot:test.0: jid=1: Looking at journal...
GFS: fsid=pilot:test.0: jid=1: Done
GFS: fsid=pilot:test.0: jid=2: Trying to acquire journal lock...
GFS: fsid=pilot:test.0: jid=2: Looking at journal...
GFS: fsid=pilot:test.0: jid=2: Done
GFS: fsid=pilot:test.0: jid=3: Trying to acquire journal lock...
GFS: fsid=pilot:test.0: jid=3: Looking at journal...
GFS: fsid=pilot:test.0: jid=3: Done
GFS: fsid=pilot:test.0: jid=4: Trying to acquire journal lock...
GFS: fsid=pilot:test.0: jid=4: Looking at journal...
GFS: fsid=pilot:test.0: jid=4: Done
GFS: fsid=pilot:test.0: jid=5: Trying to acquire journal lock...
GFS: fsid=pilot:test.0: jid=5: Looking at journal...
GFS: fsid=pilot:test.0: jid=5: Done

Maybe it's locking all journals?  Why?
--
Thanks,
Feri.
From fs at debian.org  Tue Feb 12 14:09:59 2008
From: fs at debian.org (Frederik Schueler)
Date: Tue, 12 Feb 2008 15:09:59 +0100
Subject: [Linux-cluster] nfsd looping with 100% CPU
Message-ID: <20080212140948.GE9620@mail.lowpingbastards.de>

Hello,

I am experiencing some serious trouble with an NFS server running
2.6.24: after a couple of minutes, one kernel nfsd process loops with
100% CPU, and the load rises continuously.  After some more minutes,
there are two, then three nfsds looping, and the load continues to rise
slowly.

top - 18:10:28 up 45 min,  1 user,  load average: 7.11, 6.27, 3.69
Tasks:  79 total,   5 running,  74 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us, 50.0%sy,  0.0%ni, 49.8%id,  0.2%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   2074496k total,   151112k used,  1923384k free,     7848k buffers
Swap:  1951856k total,        0k used,  1951856k free,    73748k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 5589 root      24   4     0    0    0 R  100  0.0  11:11.88 nfsd
    1 root      20   0  1940  644  548 S    0  0.0   0:01.44 init
    2 root      15  -5     0    0    0 S    0  0.0   0:00.00 kthreadd
    3 root      RT  -5     0    0    0 S    0  0.0   0:00.02 migration/0
    4 root      15  -5     0    0    0 S    0  0.0   0:00.02 ksoftirqd/0
    5 root      RT  -5     0    0    0 S    0  0.0   0:00.00 watchdog/0
    6 root      RT  -5     0    0    0 S    0  0.0   0:00.02 migration/1
    7 root      15  -5     0    0    0 S    0  0.0   0:00.00 ksoftirqd/1

This is a 32-bit P4 box exporting two 1TB GFS shares to ~80 clients, and
3-4 of those clients write continuously at ~300-1500Kbit/s onto one of
the shares.  With 2.6.24, I get at most ~250Kbit/s, and the writing
clients see increasing load values.

This is a 2-node GFS cluster; the second node is a fallback running the
old version from which I am trying to upgrade.

I have attached the kernel config and the sysrq-t log, if that may be of
any help.
Best regards
Frederik Schüler

--
ENOSIG

# # Automatically generated make config: don't edit # Linux kernel version: 2.6.24 # Thu Feb 7 16:59:30 2008 # # CONFIG_64BIT is not set CONFIG_X86_32=y # CONFIG_X86_64 is not set CONFIG_X86=y CONFIG_GENERIC_TIME=y CONFIG_GENERIC_CMOS_UPDATE=y CONFIG_CLOCKSOURCE_WATCHDOG=y CONFIG_GENERIC_CLOCKEVENTS=y CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y CONFIG_LOCKDEP_SUPPORT=y CONFIG_STACKTRACE_SUPPORT=y CONFIG_SEMAPHORE_SLEEPERS=y CONFIG_MMU=y CONFIG_ZONE_DMA=y CONFIG_QUICKLIST=y CONFIG_GENERIC_ISA_DMA=y CONFIG_GENERIC_IOMAP=y CONFIG_GENERIC_BUG=y CONFIG_GENERIC_HWEIGHT=y CONFIG_ARCH_MAY_HAVE_PC_FDC=y CONFIG_DMI=y # CONFIG_RWSEM_GENERIC_SPINLOCK is not set CONFIG_RWSEM_XCHGADD_ALGORITHM=y # CONFIG_ARCH_HAS_ILOG2_U32 is not set # CONFIG_ARCH_HAS_ILOG2_U64 is not set CONFIG_GENERIC_CALIBRATE_DELAY=y # CONFIG_GENERIC_TIME_VSYSCALL is not set CONFIG_ARCH_SUPPORTS_OPROFILE=y # CONFIG_ZONE_DMA32 is not set CONFIG_ARCH_POPULATES_NODE_MAP=y # CONFIG_AUDIT_ARCH is not set CONFIG_GENERIC_HARDIRQS=y CONFIG_GENERIC_IRQ_PROBE=y CONFIG_GENERIC_PENDING_IRQ=y CONFIG_X86_SMP=y CONFIG_X86_HT=y CONFIG_X86_BIOS_REBOOT=y CONFIG_X86_TRAMPOLINE=y CONFIG_KTIME_SCALAR=y CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config" # # General setup # CONFIG_EXPERIMENTAL=y CONFIG_LOCK_KERNEL=y CONFIG_INIT_ENV_ARG_LIMIT=32 CONFIG_LOCALVERSION="" # CONFIG_LOCALVERSION_AUTO is not set CONFIG_SWAP=y CONFIG_SYSVIPC=y CONFIG_SYSVIPC_SYSCTL=y CONFIG_POSIX_MQUEUE=y CONFIG_BSD_PROCESS_ACCT=y CONFIG_BSD_PROCESS_ACCT_V3=y CONFIG_TASKSTATS=y CONFIG_TASK_DELAY_ACCT=y CONFIG_TASK_XACCT=y CONFIG_TASK_IO_ACCOUNTING=y # CONFIG_USER_NS is not set # CONFIG_PID_NS is not set CONFIG_AUDIT=y CONFIG_AUDITSYSCALL=y CONFIG_AUDIT_TREE=y # CONFIG_IKCONFIG is not set CONFIG_LOG_BUF_SHIFT=20 CONFIG_CGROUPS=y # CONFIG_CGROUP_DEBUG is not set CONFIG_CGROUP_NS=y CONFIG_CPUSETS=y CONFIG_FAIR_GROUP_SCHED=y CONFIG_FAIR_USER_SCHED=y # CONFIG_FAIR_CGROUP_SCHED is not set CONFIG_CGROUP_CPUACCT=y
CONFIG_SYSFS_DEPRECATED=y CONFIG_PROC_PID_CPUSET=y CONFIG_RELAY=y CONFIG_BLK_DEV_INITRD=y CONFIG_INITRAMFS_SOURCE="" CONFIG_CC_OPTIMIZE_FOR_SIZE=y CONFIG_SYSCTL=y # CONFIG_EMBEDDED is not set CONFIG_UID16=y CONFIG_SYSCTL_SYSCALL=y CONFIG_KALLSYMS=y # CONFIG_KALLSYMS_ALL is not set # CONFIG_KALLSYMS_EXTRA_PASS is not set CONFIG_HOTPLUG=y CONFIG_PRINTK=y CONFIG_BUG=y CONFIG_ELF_CORE=y CONFIG_BASE_FULL=y CONFIG_FUTEX=y CONFIG_ANON_INODES=y CONFIG_EPOLL=y CONFIG_SIGNALFD=y CONFIG_EVENTFD=y CONFIG_SHMEM=y CONFIG_VM_EVENT_COUNTERS=y CONFIG_SLAB=y # CONFIG_SLUB is not set # CONFIG_SLOB is not set CONFIG_SLABINFO=y CONFIG_RT_MUTEXES=y # CONFIG_TINY_SHMEM is not set CONFIG_BASE_SMALL=0 CONFIG_MODULES=y CONFIG_MODULE_UNLOAD=y CONFIG_MODULE_FORCE_UNLOAD=y CONFIG_MODVERSIONS=y # CONFIG_MODULE_SRCVERSION_ALL is not set CONFIG_KMOD=y CONFIG_STOP_MACHINE=y CONFIG_BLOCK=y CONFIG_LBD=y CONFIG_BLK_DEV_IO_TRACE=y CONFIG_LSF=y # CONFIG_BLK_DEV_BSG is not set # # IO Schedulers # CONFIG_IOSCHED_NOOP=y CONFIG_IOSCHED_AS=y CONFIG_IOSCHED_DEADLINE=y CONFIG_IOSCHED_CFQ=y # CONFIG_DEFAULT_AS is not set # CONFIG_DEFAULT_DEADLINE is not set CONFIG_DEFAULT_CFQ=y # CONFIG_DEFAULT_NOOP is not set CONFIG_DEFAULT_IOSCHED="cfq" CONFIG_PREEMPT_NOTIFIERS=y # # Processor type and features # CONFIG_TICK_ONESHOT=y # CONFIG_NO_HZ is not set CONFIG_HIGH_RES_TIMERS=y CONFIG_GENERIC_CLOCKEVENTS_BUILD=y CONFIG_SMP=y CONFIG_X86_PC=y # CONFIG_X86_ELAN is not set # CONFIG_X86_VOYAGER is not set # CONFIG_X86_NUMAQ is not set # CONFIG_X86_SUMMIT is not set # CONFIG_X86_BIGSMP is not set # CONFIG_X86_VISWS is not set # CONFIG_X86_GENERICARCH is not set # CONFIG_X86_ES7000 is not set # CONFIG_X86_VSMP is not set CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y CONFIG_PARAVIRT=y CONFIG_PARAVIRT_GUEST=y # CONFIG_XEN is not set CONFIG_VMI=y CONFIG_LGUEST_GUEST=y # CONFIG_M386 is not set # CONFIG_M486 is not set # CONFIG_M586 is not set # CONFIG_M586TSC is not set # CONFIG_M586MMX is not set CONFIG_M686=y # CONFIG_MPENTIUMII is 
not set # CONFIG_MPENTIUMIII is not set # CONFIG_MPENTIUMM is not set # CONFIG_MPENTIUM4 is not set # CONFIG_MK6 is not set # CONFIG_MK7 is not set # CONFIG_MK8 is not set # CONFIG_MCRUSOE is not set # CONFIG_MEFFICEON is not set # CONFIG_MWINCHIPC6 is not set # CONFIG_MWINCHIP2 is not set # CONFIG_MWINCHIP3D is not set # CONFIG_MGEODEGX1 is not set # CONFIG_MGEODE_LX is not set # CONFIG_MCYRIXIII is not set # CONFIG_MVIAC3_2 is not set # CONFIG_MVIAC7 is not set # CONFIG_MPSC is not set # CONFIG_MCORE2 is not set # CONFIG_GENERIC_CPU is not set # CONFIG_X86_GENERIC is not set CONFIG_X86_CMPXCHG=y CONFIG_X86_L1_CACHE_SHIFT=5 CONFIG_X86_XADD=y CONFIG_X86_PPRO_FENCE=y CONFIG_X86_WP_WORKS_OK=y CONFIG_X86_INVLPG=y CONFIG_X86_BSWAP=y CONFIG_X86_POPAD_OK=y CONFIG_X86_GOOD_APIC=y CONFIG_X86_USE_PPRO_CHECKSUM=y CONFIG_X86_TSC=y CONFIG_X86_CMOV=y CONFIG_X86_MINIMUM_CPU_FAMILY=4 CONFIG_HPET_TIMER=y CONFIG_NR_CPUS=8 CONFIG_SCHED_SMT=y CONFIG_SCHED_MC=y CONFIG_PREEMPT_NONE=y # CONFIG_PREEMPT_VOLUNTARY is not set # CONFIG_PREEMPT is not set # CONFIG_PREEMPT_BKL is not set CONFIG_X86_LOCAL_APIC=y CONFIG_X86_IO_APIC=y CONFIG_X86_MCE=y CONFIG_X86_MCE_NONFATAL=m CONFIG_X86_MCE_P4THERMAL=y CONFIG_VM86=y CONFIG_TOSHIBA=m CONFIG_I8K=m # CONFIG_X86_REBOOTFIXUPS is not set CONFIG_MICROCODE=m CONFIG_MICROCODE_OLD_INTERFACE=y CONFIG_X86_MSR=m CONFIG_X86_CPUID=m # CONFIG_NOHIGHMEM is not set CONFIG_HIGHMEM4G=y # CONFIG_HIGHMEM64G is not set CONFIG_PAGE_OFFSET=0xC0000000 CONFIG_HIGHMEM=y CONFIG_ARCH_FLATMEM_ENABLE=y CONFIG_ARCH_SPARSEMEM_ENABLE=y CONFIG_ARCH_SELECT_MEMORY_MODEL=y CONFIG_SELECT_MEMORY_MODEL=y CONFIG_FLATMEM_MANUAL=y # CONFIG_DISCONTIGMEM_MANUAL is not set # CONFIG_SPARSEMEM_MANUAL is not set CONFIG_FLATMEM=y CONFIG_FLAT_NODE_MEM_MAP=y CONFIG_SPARSEMEM_STATIC=y # CONFIG_SPARSEMEM_VMEMMAP_ENABLE is not set CONFIG_SPLIT_PTLOCK_CPUS=4 # CONFIG_RESOURCES_64BIT is not set CONFIG_ZONE_DMA_FLAG=1 CONFIG_BOUNCE=y CONFIG_NR_QUICK=1 CONFIG_VIRT_TO_BUS=y # CONFIG_HIGHPTE is not set # 
CONFIG_MATH_EMULATION is not set CONFIG_MTRR=y CONFIG_EFI=y # CONFIG_IRQBALANCE is not set CONFIG_BOOT_IOREMAP=y # CONFIG_SECCOMP is not set # CONFIG_HZ_100 is not set CONFIG_HZ_250=y # CONFIG_HZ_300 is not set # CONFIG_HZ_1000 is not set CONFIG_HZ=250 CONFIG_KEXEC=y # CONFIG_CRASH_DUMP is not set CONFIG_PHYSICAL_START=0x100000 # CONFIG_RELOCATABLE is not set CONFIG_PHYSICAL_ALIGN=0x100000 CONFIG_HOTPLUG_CPU=y CONFIG_COMPAT_VDSO=y CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y # # Power management options # CONFIG_PM=y CONFIG_PM_LEGACY=y # CONFIG_PM_DEBUG is not set CONFIG_PM_SLEEP_SMP=y CONFIG_PM_SLEEP=y CONFIG_SUSPEND_SMP_POSSIBLE=y CONFIG_SUSPEND=y CONFIG_HIBERNATION_SMP_POSSIBLE=y CONFIG_HIBERNATION=y CONFIG_PM_STD_PARTITION="" CONFIG_ACPI=y CONFIG_ACPI_SLEEP=y CONFIG_ACPI_PROCFS=y # CONFIG_ACPI_PROCFS_POWER is not set CONFIG_ACPI_SYSFS_POWER=y CONFIG_ACPI_PROC_EVENT=y CONFIG_ACPI_AC=m CONFIG_ACPI_BATTERY=m CONFIG_ACPI_BUTTON=m CONFIG_ACPI_VIDEO=m CONFIG_ACPI_FAN=m CONFIG_ACPI_DOCK=m CONFIG_ACPI_BAY=m CONFIG_ACPI_PROCESSOR=m CONFIG_ACPI_HOTPLUG_CPU=y CONFIG_ACPI_THERMAL=m CONFIG_ACPI_ASUS=m CONFIG_ACPI_TOSHIBA=m CONFIG_ACPI_BLACKLIST_YEAR=0 # CONFIG_ACPI_DEBUG is not set CONFIG_ACPI_EC=y CONFIG_ACPI_POWER=y CONFIG_ACPI_SYSTEM=y CONFIG_X86_PM_TIMER=y CONFIG_ACPI_CONTAINER=m CONFIG_ACPI_SBS=m CONFIG_APM=m # CONFIG_APM_IGNORE_USER_SUSPEND is not set # CONFIG_APM_DO_ENABLE is not set # CONFIG_APM_CPU_IDLE is not set # CONFIG_APM_DISPLAY_BLANK is not set # CONFIG_APM_ALLOW_INTS is not set # CONFIG_APM_REAL_MODE_POWER_OFF is not set # # CPU Frequency scaling # CONFIG_CPU_FREQ=y CONFIG_CPU_FREQ_TABLE=m # CONFIG_CPU_FREQ_DEBUG is not set CONFIG_CPU_FREQ_STAT=m # CONFIG_CPU_FREQ_STAT_DETAILS is not set CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y # CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set # CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set # CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set CONFIG_CPU_FREQ_GOV_PERFORMANCE=y CONFIG_CPU_FREQ_GOV_POWERSAVE=m 
CONFIG_CPU_FREQ_GOV_USERSPACE=m CONFIG_CPU_FREQ_GOV_ONDEMAND=m CONFIG_CPU_FREQ_GOV_CONSERVATIVE=m # # CPUFreq processor drivers # CONFIG_X86_ACPI_CPUFREQ=m CONFIG_X86_POWERNOW_K6=m CONFIG_X86_POWERNOW_K7=m CONFIG_X86_POWERNOW_K7_ACPI=y CONFIG_X86_POWERNOW_K8=m CONFIG_X86_POWERNOW_K8_ACPI=y CONFIG_X86_GX_SUSPMOD=m CONFIG_X86_SPEEDSTEP_CENTRINO=m CONFIG_X86_SPEEDSTEP_CENTRINO_TABLE=y CONFIG_X86_SPEEDSTEP_ICH=m CONFIG_X86_SPEEDSTEP_SMI=m CONFIG_X86_P4_CLOCKMOD=m CONFIG_X86_CPUFREQ_NFORCE2=m CONFIG_X86_LONGRUN=m CONFIG_X86_LONGHAUL=m CONFIG_X86_E_POWERSAVER=m # # shared options # # CONFIG_X86_ACPI_CPUFREQ_PROC_INTF is not set CONFIG_X86_SPEEDSTEP_LIB=m CONFIG_X86_SPEEDSTEP_RELAXED_CAP_CHECK=y CONFIG_CPU_IDLE=y CONFIG_CPU_IDLE_GOV_LADDER=y # # Bus options (PCI etc.) # CONFIG_PCI=y # CONFIG_PCI_GOBIOS is not set # CONFIG_PCI_GOMMCONFIG is not set # CONFIG_PCI_GODIRECT is not set CONFIG_PCI_GOANY=y CONFIG_PCI_BIOS=y CONFIG_PCI_DIRECT=y CONFIG_PCI_MMCONFIG=y CONFIG_PCI_DOMAINS=y CONFIG_PCIEPORTBUS=y CONFIG_HOTPLUG_PCI_PCIE=m CONFIG_PCIEAER=y CONFIG_ARCH_SUPPORTS_MSI=y CONFIG_PCI_MSI=y CONFIG_PCI_LEGACY=y # CONFIG_PCI_DEBUG is not set CONFIG_HT_IRQ=y CONFIG_ISA_DMA_API=y CONFIG_ISA=y # CONFIG_EISA is not set # CONFIG_MCA is not set CONFIG_SCx200=m CONFIG_SCx200HR_TIMER=m CONFIG_K8_NB=y CONFIG_PCCARD=m # CONFIG_PCMCIA_DEBUG is not set CONFIG_PCMCIA=m CONFIG_PCMCIA_LOAD_CIS=y CONFIG_PCMCIA_IOCTL=y CONFIG_CARDBUS=y # # PC-card bridges # CONFIG_YENTA=m CONFIG_YENTA_O2=y CONFIG_YENTA_RICOH=y CONFIG_YENTA_TI=y CONFIG_YENTA_ENE_TUNE=y CONFIG_YENTA_TOSHIBA=y CONFIG_PD6729=m CONFIG_I82092=m CONFIG_I82365=m CONFIG_TCIC=m CONFIG_PCMCIA_PROBE=y CONFIG_PCCARD_NONSTATIC=m CONFIG_HOTPLUG_PCI=m CONFIG_HOTPLUG_PCI_FAKE=m CONFIG_HOTPLUG_PCI_COMPAQ=m # CONFIG_HOTPLUG_PCI_COMPAQ_NVRAM is not set CONFIG_HOTPLUG_PCI_IBM=m CONFIG_HOTPLUG_PCI_ACPI=m CONFIG_HOTPLUG_PCI_ACPI_IBM=m CONFIG_HOTPLUG_PCI_CPCI=y CONFIG_HOTPLUG_PCI_CPCI_ZT5550=m CONFIG_HOTPLUG_PCI_CPCI_GENERIC=m CONFIG_HOTPLUG_PCI_SHPC=m # 
# Executable file formats / Emulations # CONFIG_BINFMT_ELF=y CONFIG_BINFMT_AOUT=m CONFIG_BINFMT_MISC=m # # Networking # CONFIG_NET=y # # Networking options # CONFIG_PACKET=y CONFIG_PACKET_MMAP=y CONFIG_UNIX=y CONFIG_XFRM=y CONFIG_XFRM_USER=m # CONFIG_XFRM_SUB_POLICY is not set # CONFIG_XFRM_MIGRATE is not set CONFIG_NET_KEY=m # CONFIG_NET_KEY_MIGRATE is not set CONFIG_INET=y CONFIG_IP_MULTICAST=y CONFIG_IP_ADVANCED_ROUTER=y CONFIG_ASK_IP_FIB_HASH=y # CONFIG_IP_FIB_TRIE is not set CONFIG_IP_FIB_HASH=y CONFIG_IP_MULTIPLE_TABLES=y CONFIG_IP_ROUTE_MULTIPATH=y CONFIG_IP_ROUTE_VERBOSE=y # CONFIG_IP_PNP is not set CONFIG_NET_IPIP=m CONFIG_NET_IPGRE=m CONFIG_NET_IPGRE_BROADCAST=y CONFIG_IP_MROUTE=y CONFIG_IP_PIMSM_V1=y CONFIG_IP_PIMSM_V2=y # CONFIG_ARPD is not set CONFIG_SYN_COOKIES=y CONFIG_INET_AH=m CONFIG_INET_ESP=m CONFIG_INET_IPCOMP=m CONFIG_INET_XFRM_TUNNEL=m CONFIG_INET_TUNNEL=m CONFIG_INET_XFRM_MODE_TRANSPORT=m CONFIG_INET_XFRM_MODE_TUNNEL=m CONFIG_INET_XFRM_MODE_BEET=m CONFIG_INET_LRO=m CONFIG_INET_DIAG=m CONFIG_INET_TCP_DIAG=m CONFIG_TCP_CONG_ADVANCED=y CONFIG_TCP_CONG_BIC=y CONFIG_TCP_CONG_CUBIC=m CONFIG_TCP_CONG_WESTWOOD=m CONFIG_TCP_CONG_HTCP=m CONFIG_TCP_CONG_HSTCP=m CONFIG_TCP_CONG_HYBLA=m CONFIG_TCP_CONG_VEGAS=m CONFIG_TCP_CONG_SCALABLE=m CONFIG_TCP_CONG_LP=m CONFIG_TCP_CONG_VENO=m CONFIG_TCP_CONG_YEAH=m CONFIG_TCP_CONG_ILLINOIS=m CONFIG_DEFAULT_BIC=y # CONFIG_DEFAULT_CUBIC is not set # CONFIG_DEFAULT_HTCP is not set # CONFIG_DEFAULT_VEGAS is not set # CONFIG_DEFAULT_WESTWOOD is not set # CONFIG_DEFAULT_RENO is not set CONFIG_DEFAULT_TCP_CONG="bic" # CONFIG_TCP_MD5SIG is not set CONFIG_IP_VS=m # CONFIG_IP_VS_DEBUG is not set CONFIG_IP_VS_TAB_BITS=12 # # IPVS transport protocol load balancing support # CONFIG_IP_VS_PROTO_TCP=y CONFIG_IP_VS_PROTO_UDP=y CONFIG_IP_VS_PROTO_ESP=y CONFIG_IP_VS_PROTO_AH=y # # IPVS scheduler # CONFIG_IP_VS_RR=m CONFIG_IP_VS_WRR=m CONFIG_IP_VS_LC=m CONFIG_IP_VS_WLC=m CONFIG_IP_VS_LBLC=m CONFIG_IP_VS_LBLCR=m CONFIG_IP_VS_DH=m 
CONFIG_IP_VS_SH=m CONFIG_IP_VS_SED=m CONFIG_IP_VS_NQ=m # # IPVS application helper # CONFIG_IP_VS_FTP=m CONFIG_IPV6=m CONFIG_IPV6_PRIVACY=y # CONFIG_IPV6_ROUTER_PREF is not set # CONFIG_IPV6_OPTIMISTIC_DAD is not set CONFIG_INET6_AH=m CONFIG_INET6_ESP=m CONFIG_INET6_IPCOMP=m # CONFIG_IPV6_MIP6 is not set CONFIG_INET6_XFRM_TUNNEL=m CONFIG_INET6_TUNNEL=m CONFIG_INET6_XFRM_MODE_TRANSPORT=m CONFIG_INET6_XFRM_MODE_TUNNEL=m CONFIG_INET6_XFRM_MODE_BEET=m CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION=m CONFIG_IPV6_SIT=m CONFIG_IPV6_TUNNEL=m CONFIG_IPV6_MULTIPLE_TABLES=y CONFIG_IPV6_SUBTREES=y # CONFIG_NETLABEL is not set CONFIG_NETWORK_SECMARK=y CONFIG_NETFILTER=y # CONFIG_NETFILTER_DEBUG is not set CONFIG_BRIDGE_NETFILTER=y # # Core Netfilter Configuration # CONFIG_NETFILTER_NETLINK=m CONFIG_NETFILTER_NETLINK_QUEUE=m CONFIG_NETFILTER_NETLINK_LOG=m CONFIG_NF_CONNTRACK_ENABLED=m CONFIG_NF_CONNTRACK=m CONFIG_NF_CT_ACCT=y CONFIG_NF_CONNTRACK_MARK=y CONFIG_NF_CONNTRACK_SECMARK=y CONFIG_NF_CONNTRACK_EVENTS=y CONFIG_NF_CT_PROTO_GRE=m CONFIG_NF_CT_PROTO_SCTP=m CONFIG_NF_CT_PROTO_UDPLITE=m CONFIG_NF_CONNTRACK_AMANDA=m CONFIG_NF_CONNTRACK_FTP=m CONFIG_NF_CONNTRACK_H323=m CONFIG_NF_CONNTRACK_IRC=m CONFIG_NF_CONNTRACK_NETBIOS_NS=m CONFIG_NF_CONNTRACK_PPTP=m CONFIG_NF_CONNTRACK_SANE=m CONFIG_NF_CONNTRACK_SIP=m CONFIG_NF_CONNTRACK_TFTP=m CONFIG_NF_CT_NETLINK=m CONFIG_NETFILTER_XTABLES=m CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m CONFIG_NETFILTER_XT_TARGET_CONNMARK=m CONFIG_NETFILTER_XT_TARGET_DSCP=m CONFIG_NETFILTER_XT_TARGET_MARK=m CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m CONFIG_NETFILTER_XT_TARGET_NFLOG=m CONFIG_NETFILTER_XT_TARGET_NOTRACK=m CONFIG_NETFILTER_XT_TARGET_TRACE=m CONFIG_NETFILTER_XT_TARGET_SECMARK=m CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m CONFIG_NETFILTER_XT_TARGET_TCPMSS=m CONFIG_NETFILTER_XT_MATCH_COMMENT=m CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m CONFIG_NETFILTER_XT_MATCH_CONNMARK=m CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m 
CONFIG_NETFILTER_XT_MATCH_DCCP=m CONFIG_NETFILTER_XT_MATCH_DSCP=m CONFIG_NETFILTER_XT_MATCH_ESP=m CONFIG_NETFILTER_XT_MATCH_HELPER=m CONFIG_NETFILTER_XT_MATCH_LENGTH=m CONFIG_NETFILTER_XT_MATCH_LIMIT=m CONFIG_NETFILTER_XT_MATCH_MAC=m CONFIG_NETFILTER_XT_MATCH_MARK=m CONFIG_NETFILTER_XT_MATCH_POLICY=m CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m CONFIG_NETFILTER_XT_MATCH_QUOTA=m CONFIG_NETFILTER_XT_MATCH_REALM=m CONFIG_NETFILTER_XT_MATCH_SCTP=m CONFIG_NETFILTER_XT_MATCH_STATE=m CONFIG_NETFILTER_XT_MATCH_STATISTIC=m CONFIG_NETFILTER_XT_MATCH_STRING=m CONFIG_NETFILTER_XT_MATCH_TCPMSS=m CONFIG_NETFILTER_XT_MATCH_TIME=m CONFIG_NETFILTER_XT_MATCH_U32=m CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m # # IP: Netfilter Configuration # CONFIG_NF_CONNTRACK_IPV4=m CONFIG_NF_CONNTRACK_PROC_COMPAT=y CONFIG_IP_NF_QUEUE=m CONFIG_IP_NF_IPTABLES=m CONFIG_IP_NF_MATCH_IPRANGE=m CONFIG_IP_NF_MATCH_TOS=m CONFIG_IP_NF_MATCH_RECENT=m CONFIG_IP_NF_MATCH_ECN=m CONFIG_IP_NF_MATCH_AH=m CONFIG_IP_NF_MATCH_TTL=m CONFIG_IP_NF_MATCH_OWNER=m CONFIG_IP_NF_MATCH_ADDRTYPE=m CONFIG_IP_NF_FILTER=m CONFIG_IP_NF_TARGET_REJECT=m CONFIG_IP_NF_TARGET_LOG=m CONFIG_IP_NF_TARGET_ULOG=m CONFIG_NF_NAT=m CONFIG_NF_NAT_NEEDED=y CONFIG_IP_NF_TARGET_MASQUERADE=m CONFIG_IP_NF_TARGET_REDIRECT=m CONFIG_IP_NF_TARGET_NETMAP=m CONFIG_IP_NF_TARGET_SAME=m CONFIG_NF_NAT_SNMP_BASIC=m CONFIG_NF_NAT_PROTO_GRE=m CONFIG_NF_NAT_FTP=m CONFIG_NF_NAT_IRC=m CONFIG_NF_NAT_TFTP=m CONFIG_NF_NAT_AMANDA=m CONFIG_NF_NAT_PPTP=m CONFIG_NF_NAT_H323=m CONFIG_NF_NAT_SIP=m CONFIG_IP_NF_MANGLE=m CONFIG_IP_NF_TARGET_TOS=m CONFIG_IP_NF_TARGET_ECN=m CONFIG_IP_NF_TARGET_TTL=m CONFIG_IP_NF_TARGET_CLUSTERIP=m CONFIG_IP_NF_RAW=m CONFIG_IP_NF_ARPTABLES=m CONFIG_IP_NF_ARPFILTER=m CONFIG_IP_NF_ARP_MANGLE=m # # IPv6: Netfilter Configuration (EXPERIMENTAL) # CONFIG_NF_CONNTRACK_IPV6=m CONFIG_IP6_NF_QUEUE=m CONFIG_IP6_NF_IPTABLES=m CONFIG_IP6_NF_MATCH_RT=m CONFIG_IP6_NF_MATCH_OPTS=m 
CONFIG_IP6_NF_MATCH_FRAG=m CONFIG_IP6_NF_MATCH_HL=m CONFIG_IP6_NF_MATCH_OWNER=m CONFIG_IP6_NF_MATCH_IPV6HEADER=m CONFIG_IP6_NF_MATCH_AH=m CONFIG_IP6_NF_MATCH_MH=m CONFIG_IP6_NF_MATCH_EUI64=m CONFIG_IP6_NF_FILTER=m CONFIG_IP6_NF_TARGET_LOG=m CONFIG_IP6_NF_TARGET_REJECT=m CONFIG_IP6_NF_MANGLE=m CONFIG_IP6_NF_TARGET_HL=m CONFIG_IP6_NF_RAW=m # # DECnet: Netfilter Configuration # CONFIG_DECNET_NF_GRABULATOR=m # # Bridge: Netfilter Configuration # CONFIG_BRIDGE_NF_EBTABLES=m CONFIG_BRIDGE_EBT_BROUTE=m CONFIG_BRIDGE_EBT_T_FILTER=m CONFIG_BRIDGE_EBT_T_NAT=m CONFIG_BRIDGE_EBT_802_3=m CONFIG_BRIDGE_EBT_AMONG=m CONFIG_BRIDGE_EBT_ARP=m CONFIG_BRIDGE_EBT_IP=m CONFIG_BRIDGE_EBT_LIMIT=m CONFIG_BRIDGE_EBT_MARK=m CONFIG_BRIDGE_EBT_PKTTYPE=m CONFIG_BRIDGE_EBT_STP=m CONFIG_BRIDGE_EBT_VLAN=m CONFIG_BRIDGE_EBT_ARPREPLY=m CONFIG_BRIDGE_EBT_DNAT=m CONFIG_BRIDGE_EBT_MARK_T=m CONFIG_BRIDGE_EBT_REDIRECT=m CONFIG_BRIDGE_EBT_SNAT=m CONFIG_BRIDGE_EBT_LOG=m CONFIG_BRIDGE_EBT_ULOG=m CONFIG_IP_DCCP=m CONFIG_INET_DCCP_DIAG=m CONFIG_IP_DCCP_ACKVEC=y # # DCCP CCIDs Configuration (EXPERIMENTAL) # CONFIG_IP_DCCP_CCID2=m # CONFIG_IP_DCCP_CCID2_DEBUG is not set CONFIG_IP_DCCP_CCID3=m CONFIG_IP_DCCP_TFRC_LIB=m # CONFIG_IP_DCCP_CCID3_DEBUG is not set CONFIG_IP_DCCP_CCID3_RTO=100 # # DCCP Kernel Hacking # # CONFIG_IP_DCCP_DEBUG is not set CONFIG_IP_SCTP=m # CONFIG_SCTP_DBG_MSG is not set # CONFIG_SCTP_DBG_OBJCNT is not set # CONFIG_SCTP_HMAC_NONE is not set # CONFIG_SCTP_HMAC_SHA1 is not set CONFIG_SCTP_HMAC_MD5=y CONFIG_TIPC=m CONFIG_TIPC_ADVANCED=y CONFIG_TIPC_ZONES=3 CONFIG_TIPC_CLUSTERS=1 CONFIG_TIPC_NODES=255 CONFIG_TIPC_SLAVE_NODES=0 CONFIG_TIPC_PORTS=8191 CONFIG_TIPC_LOG=0 # CONFIG_TIPC_DEBUG is not set CONFIG_ATM=y CONFIG_ATM_CLIP=y # CONFIG_ATM_CLIP_NO_ICMP is not set CONFIG_ATM_LANE=m CONFIG_ATM_MPOA=m CONFIG_ATM_BR2684=m # CONFIG_ATM_BR2684_IPFILTER is not set CONFIG_BRIDGE=m CONFIG_VLAN_8021Q=m CONFIG_DECNET=m # CONFIG_DECNET_ROUTER is not set CONFIG_LLC=y CONFIG_LLC2=m CONFIG_IPX=m
# CONFIG_IPX_INTERN is not set CONFIG_ATALK=m CONFIG_DEV_APPLETALK=m CONFIG_LTPC=m CONFIG_IPDDP=m CONFIG_IPDDP_ENCAP=y CONFIG_IPDDP_DECAP=y CONFIG_X25=m CONFIG_LAPB=m CONFIG_ECONET=m CONFIG_ECONET_AUNUDP=y CONFIG_ECONET_NATIVE=y CONFIG_WAN_ROUTER=m CONFIG_NET_SCHED=y # # Queueing/Scheduling # CONFIG_NET_SCH_CBQ=m CONFIG_NET_SCH_HTB=m CONFIG_NET_SCH_HFSC=m CONFIG_NET_SCH_ATM=m CONFIG_NET_SCH_PRIO=m CONFIG_NET_SCH_RR=m CONFIG_NET_SCH_RED=m CONFIG_NET_SCH_SFQ=m CONFIG_NET_SCH_TEQL=m CONFIG_NET_SCH_TBF=m CONFIG_NET_SCH_GRED=m CONFIG_NET_SCH_DSMARK=m CONFIG_NET_SCH_NETEM=m CONFIG_NET_SCH_INGRESS=m # # Classification # CONFIG_NET_CLS=y CONFIG_NET_CLS_BASIC=m CONFIG_NET_CLS_TCINDEX=m CONFIG_NET_CLS_ROUTE4=m CONFIG_NET_CLS_ROUTE=y CONFIG_NET_CLS_FW=m CONFIG_NET_CLS_U32=m CONFIG_CLS_U32_PERF=y CONFIG_CLS_U32_MARK=y CONFIG_NET_CLS_RSVP=m CONFIG_NET_CLS_RSVP6=m CONFIG_NET_EMATCH=y CONFIG_NET_EMATCH_STACK=32 CONFIG_NET_EMATCH_CMP=m CONFIG_NET_EMATCH_NBYTE=m CONFIG_NET_EMATCH_U32=m CONFIG_NET_EMATCH_META=m CONFIG_NET_EMATCH_TEXT=m CONFIG_NET_CLS_ACT=y CONFIG_NET_ACT_POLICE=m CONFIG_NET_ACT_GACT=m CONFIG_GACT_PROB=y CONFIG_NET_ACT_MIRRED=m CONFIG_NET_ACT_IPT=m CONFIG_NET_ACT_NAT=m CONFIG_NET_ACT_PEDIT=m CONFIG_NET_ACT_SIMP=m # CONFIG_NET_CLS_POLICE is not set CONFIG_NET_CLS_IND=y CONFIG_NET_SCH_FIFO=y # # Network testing # CONFIG_NET_PKTGEN=m CONFIG_HAMRADIO=y # # Packet Radio protocols # CONFIG_AX25=m # CONFIG_AX25_DAMA_SLAVE is not set CONFIG_NETROM=m CONFIG_ROSE=m # # AX.25 network device drivers # CONFIG_MKISS=m CONFIG_6PACK=m CONFIG_BPQETHER=m CONFIG_SCC=m # CONFIG_SCC_DELAY is not set # CONFIG_SCC_TRXECHO is not set CONFIG_BAYCOM_SER_FDX=m CONFIG_BAYCOM_SER_HDX=m CONFIG_BAYCOM_PAR=m CONFIG_BAYCOM_EPP=m CONFIG_IRDA=m # # IrDA protocols # CONFIG_IRLAN=m CONFIG_IRNET=m CONFIG_IRCOMM=m # CONFIG_IRDA_ULTRA is not set # # IrDA options # CONFIG_IRDA_CACHE_LAST_LSAP=y CONFIG_IRDA_FAST_RR=y CONFIG_IRDA_DEBUG=y # # Infrared-port device drivers # # # SIR device drivers #
CONFIG_IRTTY_SIR=m # # Dongle support # CONFIG_DONGLE=y CONFIG_ESI_DONGLE=m CONFIG_ACTISYS_DONGLE=m CONFIG_TEKRAM_DONGLE=m CONFIG_TOIM3232_DONGLE=m CONFIG_LITELINK_DONGLE=m CONFIG_MA600_DONGLE=m CONFIG_GIRBIL_DONGLE=m CONFIG_MCP2120_DONGLE=m CONFIG_OLD_BELKIN_DONGLE=m CONFIG_ACT200L_DONGLE=m CONFIG_KINGSUN_DONGLE=m CONFIG_KSDAZZLE_DONGLE=m CONFIG_KS959_DONGLE=m # # Old SIR device drivers # # # Old Serial dongle support # # # FIR device drivers # CONFIG_USB_IRDA=m CONFIG_SIGMATEL_FIR=m CONFIG_NSC_FIR=m CONFIG_WINBOND_FIR=m CONFIG_TOSHIBA_FIR=m CONFIG_SMC_IRCC_FIR=m CONFIG_ALI_FIR=m CONFIG_VLSI_FIR=m CONFIG_VIA_FIR=m CONFIG_MCS_FIR=m CONFIG_BT=m CONFIG_BT_L2CAP=m CONFIG_BT_SCO=m CONFIG_BT_RFCOMM=m CONFIG_BT_RFCOMM_TTY=y CONFIG_BT_BNEP=m CONFIG_BT_BNEP_MC_FILTER=y CONFIG_BT_BNEP_PROTO_FILTER=y CONFIG_BT_CMTP=m CONFIG_BT_HIDP=m # # Bluetooth device drivers # CONFIG_BT_HCIUSB=m CONFIG_BT_HCIUSB_SCO=y CONFIG_BT_HCIBTSDIO=m CONFIG_BT_HCIUART=m CONFIG_BT_HCIUART_H4=y CONFIG_BT_HCIUART_BCSP=y CONFIG_BT_HCIUART_LL=y CONFIG_BT_HCIBCM203X=m CONFIG_BT_HCIBPA10X=m CONFIG_BT_HCIBFUSB=m CONFIG_BT_HCIDTL1=m CONFIG_BT_HCIBT3C=m CONFIG_BT_HCIBLUECARD=m CONFIG_BT_HCIBTUART=m CONFIG_BT_HCIVHCI=m CONFIG_AF_RXRPC=m # CONFIG_AF_RXRPC_DEBUG is not set CONFIG_RXKAD=m CONFIG_FIB_RULES=y # # Wireless # CONFIG_CFG80211=m CONFIG_NL80211=y CONFIG_WIRELESS_EXT=y CONFIG_MAC80211=m CONFIG_MAC80211_RCSIMPLE=y CONFIG_MAC80211_LEDS=y # CONFIG_MAC80211_DEBUGFS is not set # CONFIG_MAC80211_DEBUG is not set CONFIG_IEEE80211=m # CONFIG_IEEE80211_DEBUG is not set CONFIG_IEEE80211_CRYPT_WEP=m CONFIG_IEEE80211_CRYPT_CCMP=m CONFIG_IEEE80211_CRYPT_TKIP=m CONFIG_IEEE80211_SOFTMAC=m # CONFIG_IEEE80211_SOFTMAC_DEBUG is not set CONFIG_RFKILL=m CONFIG_RFKILL_INPUT=m CONFIG_RFKILL_LEDS=y CONFIG_NET_9P=m CONFIG_NET_9P_FD=m CONFIG_NET_9P_VIRTIO=m # CONFIG_NET_9P_DEBUG is not set # # Device Drivers # # # Generic Driver Options # CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" CONFIG_STANDALONE=y 
CONFIG_PREVENT_FIRMWARE_BUILD=y CONFIG_FW_LOADER=m # CONFIG_DEBUG_DRIVER is not set # CONFIG_DEBUG_DEVRES is not set # CONFIG_SYS_HYPERVISOR is not set CONFIG_CONNECTOR=m CONFIG_MTD=m # CONFIG_MTD_DEBUG is not set CONFIG_MTD_CONCAT=m CONFIG_MTD_PARTITIONS=y CONFIG_MTD_REDBOOT_PARTS=m CONFIG_MTD_REDBOOT_DIRECTORY_BLOCK=-1 # CONFIG_MTD_REDBOOT_PARTS_UNALLOCATED is not set # CONFIG_MTD_REDBOOT_PARTS_READONLY is not set # # User Modules And Translation Layers # CONFIG_MTD_CHAR=m CONFIG_MTD_BLKDEVS=m CONFIG_MTD_BLOCK=m CONFIG_MTD_BLOCK_RO=m CONFIG_FTL=m CONFIG_NFTL=m CONFIG_NFTL_RW=y CONFIG_INFTL=m CONFIG_RFD_FTL=m CONFIG_SSFDC=m CONFIG_MTD_OOPS=m # # RAM/ROM/Flash chip drivers # CONFIG_MTD_CFI=m CONFIG_MTD_JEDECPROBE=m CONFIG_MTD_GEN_PROBE=m # CONFIG_MTD_CFI_ADV_OPTIONS is not set CONFIG_MTD_MAP_BANK_WIDTH_1=y CONFIG_MTD_MAP_BANK_WIDTH_2=y CONFIG_MTD_MAP_BANK_WIDTH_4=y # CONFIG_MTD_MAP_BANK_WIDTH_8 is not set # CONFIG_MTD_MAP_BANK_WIDTH_16 is not set # CONFIG_MTD_MAP_BANK_WIDTH_32 is not set CONFIG_MTD_CFI_I1=y CONFIG_MTD_CFI_I2=y # CONFIG_MTD_CFI_I4 is not set # CONFIG_MTD_CFI_I8 is not set CONFIG_MTD_CFI_INTELEXT=m CONFIG_MTD_CFI_AMDSTD=m CONFIG_MTD_CFI_STAA=m CONFIG_MTD_CFI_UTIL=m CONFIG_MTD_RAM=m CONFIG_MTD_ROM=m CONFIG_MTD_ABSENT=m # # Mapping drivers for chip access # CONFIG_MTD_COMPLEX_MAPPINGS=y CONFIG_MTD_PHYSMAP=m CONFIG_MTD_PHYSMAP_START=0x8000000 CONFIG_MTD_PHYSMAP_LEN=0x4000000 CONFIG_MTD_PHYSMAP_BANKWIDTH=2 CONFIG_MTD_PNC2000=m CONFIG_MTD_SC520CDP=m CONFIG_MTD_NETSC520=m CONFIG_MTD_TS5500=m CONFIG_MTD_SBC_GXX=m CONFIG_MTD_SCx200_DOCFLASH=m # CONFIG_MTD_AMD76XROM is not set # CONFIG_MTD_ICHXROM is not set # CONFIG_MTD_ESB2ROM is not set # CONFIG_MTD_CK804XROM is not set # CONFIG_MTD_SCB2_FLASH is not set CONFIG_MTD_NETtel=m CONFIG_MTD_DILNETPC=m CONFIG_MTD_DILNETPC_BOOTSIZE=0x80000 # CONFIG_MTD_L440GX is not set CONFIG_MTD_PCI=m CONFIG_MTD_INTEL_VR_NOR=m CONFIG_MTD_PLATRAM=m # # Self-contained MTD device drivers # CONFIG_MTD_PMC551=m
# CONFIG_MTD_PMC551_BUGFIX is not set # CONFIG_MTD_PMC551_DEBUG is not set CONFIG_MTD_DATAFLASH=m CONFIG_MTD_M25P80=m CONFIG_MTD_SLRAM=m CONFIG_MTD_PHRAM=m CONFIG_MTD_MTDRAM=m CONFIG_MTDRAM_TOTAL_SIZE=4096 CONFIG_MTDRAM_ERASE_SIZE=128 CONFIG_MTD_BLOCK2MTD=m # # Disk-On-Chip Device Drivers # CONFIG_MTD_DOC2000=m CONFIG_MTD_DOC2001=m CONFIG_MTD_DOC2001PLUS=m CONFIG_MTD_DOCPROBE=m CONFIG_MTD_DOCECC=m # CONFIG_MTD_DOCPROBE_ADVANCED is not set CONFIG_MTD_DOCPROBE_ADDRESS=0 CONFIG_MTD_NAND=m # CONFIG_MTD_NAND_VERIFY_WRITE is not set # CONFIG_MTD_NAND_ECC_SMC is not set # CONFIG_MTD_NAND_MUSEUM_IDS is not set CONFIG_MTD_NAND_IDS=m CONFIG_MTD_NAND_DISKONCHIP=m # CONFIG_MTD_NAND_DISKONCHIP_PROBE_ADVANCED is not set CONFIG_MTD_NAND_DISKONCHIP_PROBE_ADDRESS=0 # CONFIG_MTD_NAND_DISKONCHIP_BBTWRITE is not set CONFIG_MTD_NAND_CAFE=m CONFIG_MTD_NAND_CS553X=m # CONFIG_MTD_NAND_NANDSIM is not set CONFIG_MTD_NAND_PLATFORM=m CONFIG_MTD_ALAUDA=m CONFIG_MTD_ONENAND=m CONFIG_MTD_ONENAND_VERIFY_WRITE=y # CONFIG_MTD_ONENAND_OTP is not set CONFIG_MTD_ONENAND_2X_PROGRAM=y CONFIG_MTD_ONENAND_SIM=m # # UBI - Unsorted block images # CONFIG_MTD_UBI=m CONFIG_MTD_UBI_WL_THRESHOLD=4096 CONFIG_MTD_UBI_BEB_RESERVE=1 # CONFIG_MTD_UBI_GLUEBI is not set # # UBI debugging options # # CONFIG_MTD_UBI_DEBUG is not set CONFIG_PARPORT=m CONFIG_PARPORT_PC=m CONFIG_PARPORT_SERIAL=m CONFIG_PARPORT_PC_FIFO=y # CONFIG_PARPORT_PC_SUPERIO is not set CONFIG_PARPORT_PC_PCMCIA=m # CONFIG_PARPORT_GSC is not set CONFIG_PARPORT_AX88796=m CONFIG_PARPORT_1284=y CONFIG_PARPORT_NOT_PC=y CONFIG_PNP=y # CONFIG_PNP_DEBUG is not set # # Protocols # CONFIG_ISAPNP=y CONFIG_PNPBIOS=y CONFIG_PNPBIOS_PROC_FS=y CONFIG_PNPACPI=y CONFIG_BLK_DEV=y CONFIG_BLK_DEV_FD=m CONFIG_BLK_DEV_XD=m CONFIG_PARIDE=m # # Parallel IDE high-level drivers # CONFIG_PARIDE_PD=m CONFIG_PARIDE_PCD=m CONFIG_PARIDE_PF=m CONFIG_PARIDE_PT=m CONFIG_PARIDE_PG=m # # Parallel IDE protocol modules # CONFIG_PARIDE_ATEN=m CONFIG_PARIDE_BPCK=m CONFIG_PARIDE_BPCK6=m
CONFIG_PARIDE_COMM=m CONFIG_PARIDE_DSTR=m CONFIG_PARIDE_FIT2=m CONFIG_PARIDE_FIT3=m CONFIG_PARIDE_EPAT=m # CONFIG_PARIDE_EPATC8 is not set CONFIG_PARIDE_EPIA=m CONFIG_PARIDE_FRIQ=m CONFIG_PARIDE_FRPW=m CONFIG_PARIDE_KBIC=m CONFIG_PARIDE_KTTI=m CONFIG_PARIDE_ON20=m CONFIG_PARIDE_ON26=m CONFIG_BLK_CPQ_DA=m CONFIG_BLK_CPQ_CISS_DA=m CONFIG_CISS_SCSI_TAPE=y CONFIG_BLK_DEV_DAC960=m CONFIG_BLK_DEV_UMEM=m # CONFIG_BLK_DEV_COW_COMMON is not set CONFIG_BLK_DEV_LOOP=m CONFIG_BLK_DEV_CRYPTOLOOP=m CONFIG_BLK_DEV_NBD=m CONFIG_BLK_DEV_SX8=m # CONFIG_BLK_DEV_UB is not set CONFIG_BLK_DEV_RAM=y CONFIG_BLK_DEV_RAM_COUNT=16 CONFIG_BLK_DEV_RAM_SIZE=8192 CONFIG_BLK_DEV_RAM_BLOCKSIZE=1024 CONFIG_CDROM_PKTCDVD=m CONFIG_CDROM_PKTCDVD_BUFFERS=8 # CONFIG_CDROM_PKTCDVD_WCACHE is not set CONFIG_ATA_OVER_ETH=m CONFIG_VIRTIO_BLK=m CONFIG_MISC_DEVICES=y CONFIG_IBM_ASM=m CONFIG_PHANTOM=m CONFIG_EEPROM_93CX6=m CONFIG_SGI_IOC4=m CONFIG_TIFM_CORE=m CONFIG_TIFM_7XX1=m CONFIG_ASUS_LAPTOP=m CONFIG_FUJITSU_LAPTOP=m CONFIG_MSI_LAPTOP=m CONFIG_SONY_LAPTOP=m # CONFIG_SONYPI_COMPAT is not set CONFIG_THINKPAD_ACPI=m # CONFIG_THINKPAD_ACPI_DEBUG is not set CONFIG_THINKPAD_ACPI_BAY=y CONFIG_IDE=m CONFIG_BLK_DEV_IDE=m # # Please see Documentation/ide.txt for help/info on IDE drives # # CONFIG_BLK_DEV_IDE_SATA is not set # CONFIG_BLK_DEV_HD_IDE is not set CONFIG_BLK_DEV_IDEDISK=m # CONFIG_IDEDISK_MULTI_MODE is not set CONFIG_BLK_DEV_IDECS=m CONFIG_BLK_DEV_DELKIN=m CONFIG_BLK_DEV_IDECD=m CONFIG_BLK_DEV_IDETAPE=m CONFIG_BLK_DEV_IDEFLOPPY=m # CONFIG_BLK_DEV_IDESCSI is not set # CONFIG_BLK_DEV_IDEACPI is not set # CONFIG_IDE_TASK_IOCTL is not set CONFIG_IDE_PROC_FS=y # # IDE chipset support/bugfixes # CONFIG_IDE_GENERIC=m # CONFIG_BLK_DEV_PLATFORM is not set CONFIG_BLK_DEV_CMD640=y # CONFIG_BLK_DEV_CMD640_ENHANCED is not set CONFIG_BLK_DEV_IDEPNP=y # # PCI IDE chipsets support # CONFIG_BLK_DEV_IDEPCI=y CONFIG_IDEPCI_SHARE_IRQ=y # CONFIG_IDEPCI_PCIBUS_ORDER is not set # CONFIG_BLK_DEV_OFFBOARD is not set 
CONFIG_BLK_DEV_GENERIC=m CONFIG_BLK_DEV_OPTI621=m CONFIG_BLK_DEV_RZ1000=m CONFIG_BLK_DEV_IDEDMA_PCI=y CONFIG_BLK_DEV_AEC62XX=m CONFIG_BLK_DEV_ALI15X3=m # CONFIG_WDC_ALI15X3 is not set CONFIG_BLK_DEV_AMD74XX=m CONFIG_BLK_DEV_ATIIXP=m CONFIG_BLK_DEV_CMD64X=m CONFIG_BLK_DEV_TRIFLEX=m CONFIG_BLK_DEV_CY82C693=m CONFIG_BLK_DEV_CS5520=m CONFIG_BLK_DEV_CS5530=m CONFIG_BLK_DEV_CS5535=m CONFIG_BLK_DEV_HPT34X=m # CONFIG_HPT34X_AUTODMA is not set CONFIG_BLK_DEV_HPT366=m CONFIG_BLK_DEV_JMICRON=m CONFIG_BLK_DEV_SC1200=m CONFIG_BLK_DEV_PIIX=m CONFIG_BLK_DEV_IT8213=m CONFIG_BLK_DEV_IT821X=m CONFIG_BLK_DEV_NS87415=m CONFIG_BLK_DEV_PDC202XX_OLD=m CONFIG_PDC202XX_BURST=y CONFIG_BLK_DEV_PDC202XX_NEW=m CONFIG_BLK_DEV_SVWKS=m CONFIG_BLK_DEV_SIIMAGE=m CONFIG_BLK_DEV_SIS5513=m CONFIG_BLK_DEV_SLC90E66=m CONFIG_BLK_DEV_TRM290=m CONFIG_BLK_DEV_VIA82CXXX=m CONFIG_BLK_DEV_TC86C001=m # CONFIG_IDE_ARM is not set # # Other IDE chipsets support # # # Note: most of these also require special kernel boot parameters # # CONFIG_BLK_DEV_4DRIVES is not set # CONFIG_BLK_DEV_ALI14XX is not set # CONFIG_BLK_DEV_DTC2278 is not set # CONFIG_BLK_DEV_HT6560B is not set # CONFIG_BLK_DEV_QD65XX is not set # CONFIG_BLK_DEV_UMC8672 is not set CONFIG_BLK_DEV_IDEDMA=y CONFIG_IDE_ARCH_OBSOLETE_INIT=y # CONFIG_BLK_DEV_HD is not set # # SCSI device support # CONFIG_RAID_ATTRS=m CONFIG_SCSI=m CONFIG_SCSI_DMA=y CONFIG_SCSI_TGT=m CONFIG_SCSI_NETLINK=y CONFIG_SCSI_PROC_FS=y # # SCSI support type (disk, tape, CD-ROM) # CONFIG_BLK_DEV_SD=m CONFIG_CHR_DEV_ST=m CONFIG_CHR_DEV_OSST=m CONFIG_BLK_DEV_SR=m CONFIG_BLK_DEV_SR_VENDOR=y CONFIG_CHR_DEV_SG=m CONFIG_CHR_DEV_SCH=m
# # Some SCSI devices (e.g. CD jukebox) support multiple LUNs # CONFIG_SCSI_MULTI_LUN=y CONFIG_SCSI_CONSTANTS=y CONFIG_SCSI_LOGGING=y CONFIG_SCSI_SCAN_ASYNC=y CONFIG_SCSI_WAIT_SCAN=m # # SCSI Transports # CONFIG_SCSI_SPI_ATTRS=m CONFIG_SCSI_FC_ATTRS=m CONFIG_SCSI_FC_TGT_ATTRS=y CONFIG_SCSI_ISCSI_ATTRS=m CONFIG_SCSI_SAS_ATTRS=m CONFIG_SCSI_SAS_LIBSAS=m CONFIG_SCSI_SAS_ATA=y # CONFIG_SCSI_SAS_LIBSAS_DEBUG is not set CONFIG_SCSI_SRP_ATTRS=m CONFIG_SCSI_SRP_TGT_ATTRS=y CONFIG_SCSI_LOWLEVEL=y CONFIG_ISCSI_TCP=m CONFIG_BLK_DEV_3W_XXXX_RAID=m CONFIG_SCSI_3W_9XXX=m CONFIG_SCSI_7000FASST=m CONFIG_SCSI_ACARD=m CONFIG_SCSI_AHA152X=m CONFIG_SCSI_AHA1542=m CONFIG_SCSI_AACRAID=m CONFIG_SCSI_AIC7XXX=m CONFIG_AIC7XXX_CMDS_PER_DEVICE=8 CONFIG_AIC7XXX_RESET_DELAY_MS=15000 CONFIG_AIC7XXX_DEBUG_ENABLE=y CONFIG_AIC7XXX_DEBUG_MASK=0 CONFIG_AIC7XXX_REG_PRETTY_PRINT=y CONFIG_SCSI_AIC7XXX_OLD=m CONFIG_SCSI_AIC79XX=m CONFIG_AIC79XX_CMDS_PER_DEVICE=32 CONFIG_AIC79XX_RESET_DELAY_MS=15000 CONFIG_AIC79XX_DEBUG_ENABLE=y CONFIG_AIC79XX_DEBUG_MASK=0 CONFIG_AIC79XX_REG_PRETTY_PRINT=y CONFIG_SCSI_AIC94XX=m # CONFIG_AIC94XX_DEBUG is not set CONFIG_SCSI_DPT_I2O=m CONFIG_SCSI_ADVANSYS=m CONFIG_SCSI_IN2000=m CONFIG_SCSI_ARCMSR=m # CONFIG_SCSI_ARCMSR_AER is not set CONFIG_MEGARAID_NEWGEN=y CONFIG_MEGARAID_MM=m CONFIG_MEGARAID_MAILBOX=m CONFIG_MEGARAID_LEGACY=m CONFIG_MEGARAID_SAS=m CONFIG_SCSI_HPTIOP=m CONFIG_SCSI_BUSLOGIC=m # CONFIG_SCSI_OMIT_FLASHPOINT is not set CONFIG_SCSI_DMX3191D=m CONFIG_SCSI_DTC3280=m CONFIG_SCSI_EATA=m CONFIG_SCSI_EATA_TAGGED_QUEUE=y CONFIG_SCSI_EATA_LINKED_COMMANDS=y CONFIG_SCSI_EATA_MAX_TAGS=16 CONFIG_SCSI_FUTURE_DOMAIN=m CONFIG_SCSI_GDTH=m CONFIG_SCSI_GENERIC_NCR5380=m CONFIG_SCSI_GENERIC_NCR5380_MMIO=m CONFIG_SCSI_GENERIC_NCR53C400=y CONFIG_SCSI_IPS=m CONFIG_SCSI_INITIO=m # CONFIG_SCSI_INIA100 is not set CONFIG_SCSI_PPA=m CONFIG_SCSI_IMM=m # CONFIG_SCSI_IZIP_EPP16 is not set # CONFIG_SCSI_IZIP_SLOW_CTR is not set CONFIG_SCSI_NCR53C406A=m CONFIG_SCSI_STEX=m CONFIG_SCSI_SYM53C8XX_2=m
CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1 CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16 CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64 CONFIG_SCSI_SYM53C8XX_MMIO=y CONFIG_SCSI_IPR=m # CONFIG_SCSI_IPR_TRACE is not set # CONFIG_SCSI_IPR_DUMP is not set CONFIG_SCSI_PAS16=m CONFIG_SCSI_PSI240I=m CONFIG_SCSI_QLOGIC_FAS=m CONFIG_SCSI_QLOGIC_1280=m CONFIG_SCSI_QLA_FC=m CONFIG_SCSI_QLA_ISCSI=m CONFIG_SCSI_LPFC=m # CONFIG_SCSI_SEAGATE is not set CONFIG_SCSI_SYM53C416=m CONFIG_SCSI_DC395x=m CONFIG_SCSI_DC390T=m CONFIG_SCSI_T128=m CONFIG_SCSI_U14_34F=m CONFIG_SCSI_U14_34F_TAGGED_QUEUE=y CONFIG_SCSI_U14_34F_LINKED_COMMANDS=y CONFIG_SCSI_U14_34F_MAX_TAGS=8 CONFIG_SCSI_ULTRASTOR=m CONFIG_SCSI_NSP32=m CONFIG_SCSI_DEBUG=m CONFIG_SCSI_SRP=m CONFIG_SCSI_LOWLEVEL_PCMCIA=y CONFIG_PCMCIA_AHA152X=m CONFIG_PCMCIA_FDOMAIN=m CONFIG_PCMCIA_NINJA_SCSI=m CONFIG_PCMCIA_QLOGIC=m CONFIG_PCMCIA_SYM53C500=m CONFIG_ATA=m # CONFIG_ATA_NONSTANDARD is not set CONFIG_ATA_ACPI=y CONFIG_SATA_AHCI=m CONFIG_SATA_SVW=m CONFIG_ATA_PIIX=m CONFIG_SATA_MV=m CONFIG_SATA_NV=m CONFIG_PDC_ADMA=m CONFIG_SATA_QSTOR=m CONFIG_SATA_PROMISE=m CONFIG_SATA_SX4=m CONFIG_SATA_SIL=m CONFIG_SATA_SIL24=m CONFIG_SATA_SIS=m CONFIG_SATA_ULI=m CONFIG_SATA_VIA=m CONFIG_SATA_VITESSE=m CONFIG_SATA_INIC162X=m # CONFIG_PATA_ACPI is not set # CONFIG_PATA_ALI is not set # CONFIG_PATA_AMD is not set CONFIG_PATA_ARTOP=m # CONFIG_PATA_ATIIXP is not set # CONFIG_PATA_CMD640_PCI is not set # CONFIG_PATA_CMD64X is not set # CONFIG_PATA_CS5520 is not set # CONFIG_PATA_CS5530 is not set # CONFIG_PATA_CS5535 is not set # CONFIG_PATA_CS5536 is not set # CONFIG_PATA_CYPRESS is not set # CONFIG_PATA_EFAR is not set CONFIG_ATA_GENERIC=m # CONFIG_PATA_HPT366 is not set # CONFIG_PATA_HPT37X is not set # CONFIG_PATA_HPT3X2N is not set # CONFIG_PATA_HPT3X3 is not set # CONFIG_PATA_ISAPNP is not set # CONFIG_PATA_IT821X is not set # CONFIG_PATA_IT8213 is not set # CONFIG_PATA_JMICRON is not set # CONFIG_PATA_LEGACY is not set # CONFIG_PATA_TRIFLEX is not set 
CONFIG_PATA_MARVELL=m # CONFIG_PATA_MPIIX is not set # CONFIG_PATA_OLDPIIX is not set # CONFIG_PATA_NETCELL is not set # CONFIG_PATA_NS87410 is not set # CONFIG_PATA_NS87415 is not set # CONFIG_PATA_OPTI is not set # CONFIG_PATA_OPTIDMA is not set # CONFIG_PATA_PCMCIA is not set # CONFIG_PATA_PDC_OLD is not set # CONFIG_PATA_QDI is not set # CONFIG_PATA_RADISYS is not set # CONFIG_PATA_RZ1000 is not set # CONFIG_PATA_SC1200 is not set # CONFIG_PATA_SERVERWORKS is not set # CONFIG_PATA_PDC2027X is not set # CONFIG_PATA_SIL680 is not set CONFIG_PATA_SIS=m # CONFIG_PATA_VIA is not set # CONFIG_PATA_WINBOND is not set # CONFIG_PATA_WINBOND_VLB is not set CONFIG_MD=y CONFIG_BLK_DEV_MD=m CONFIG_MD_LINEAR=m CONFIG_MD_RAID0=m CONFIG_MD_RAID1=m CONFIG_MD_RAID10=m CONFIG_MD_RAID456=m CONFIG_MD_RAID5_RESHAPE=y CONFIG_MD_MULTIPATH=m CONFIG_MD_FAULTY=m CONFIG_BLK_DEV_DM=m # CONFIG_DM_DEBUG is not set CONFIG_DM_CRYPT=m CONFIG_DM_SNAPSHOT=m CONFIG_DM_MIRROR=m CONFIG_DM_ZERO=m CONFIG_DM_MULTIPATH=m CONFIG_DM_MULTIPATH_EMC=m CONFIG_DM_MULTIPATH_RDAC=m CONFIG_DM_MULTIPATH_HP=m CONFIG_DM_DELAY=m CONFIG_DM_UEVENT=y CONFIG_FUSION=y CONFIG_FUSION_SPI=m CONFIG_FUSION_FC=m CONFIG_FUSION_SAS=m CONFIG_FUSION_MAX_SGE=40 CONFIG_FUSION_CTL=m CONFIG_FUSION_LAN=m # CONFIG_FUSION_LOGGING is not set # # IEEE 1394 (FireWire) support # CONFIG_FIREWIRE=m CONFIG_FIREWIRE_OHCI=m CONFIG_FIREWIRE_SBP2=m # CONFIG_IEEE1394 is not set CONFIG_I2O=m CONFIG_I2O_LCT_NOTIFY_ON_CHANGES=y CONFIG_I2O_EXT_ADAPTEC=y CONFIG_I2O_CONFIG=m CONFIG_I2O_CONFIG_OLD_IOCTL=y CONFIG_I2O_BUS=m CONFIG_I2O_BLOCK=m CONFIG_I2O_SCSI=m CONFIG_I2O_PROC=m # CONFIG_MACINTOSH_DRIVERS is not set CONFIG_NETDEVICES=y CONFIG_NETDEVICES_MULTIQUEUE=y CONFIG_IFB=m CONFIG_DUMMY=m CONFIG_BONDING=m # CONFIG_MACVLAN is not set CONFIG_EQUALIZER=m CONFIG_TUN=m CONFIG_VETH=m CONFIG_NET_SB1000=m CONFIG_ARCNET=m CONFIG_ARCNET_1201=m CONFIG_ARCNET_1051=m CONFIG_ARCNET_RAW=m CONFIG_ARCNET_CAP=m CONFIG_ARCNET_COM90xx=m CONFIG_ARCNET_COM90xxIO=m 
CONFIG_ARCNET_RIM_I=m CONFIG_ARCNET_COM20020=m CONFIG_ARCNET_COM20020_ISA=m CONFIG_ARCNET_COM20020_PCI=m CONFIG_PHYLIB=m # # MII PHY device drivers # CONFIG_MARVELL_PHY=m CONFIG_DAVICOM_PHY=m CONFIG_QSEMI_PHY=m CONFIG_LXT_PHY=m CONFIG_CICADA_PHY=m CONFIG_VITESSE_PHY=m CONFIG_SMSC_PHY=m CONFIG_BROADCOM_PHY=m CONFIG_ICPLUS_PHY=m CONFIG_FIXED_PHY=m CONFIG_FIXED_MII_10_FDX=y CONFIG_FIXED_MII_100_FDX=y CONFIG_FIXED_MII_1000_FDX=y CONFIG_FIXED_MII_AMNT=1 CONFIG_MDIO_BITBANG=m CONFIG_NET_ETHERNET=y CONFIG_MII=m CONFIG_HAPPYMEAL=m CONFIG_SUNGEM=m CONFIG_CASSINI=m CONFIG_NET_VENDOR_3COM=y CONFIG_EL1=m CONFIG_EL2=m CONFIG_ELPLUS=m CONFIG_EL16=m CONFIG_EL3=m CONFIG_3C515=m CONFIG_VORTEX=m CONFIG_TYPHOON=m CONFIG_LANCE=m CONFIG_NET_VENDOR_SMC=y CONFIG_WD80x3=m CONFIG_ULTRA=m CONFIG_SMC9194=m CONFIG_NET_VENDOR_RACAL=y CONFIG_NI52=m CONFIG_NI65=m CONFIG_NET_TULIP=y CONFIG_DE2104X=m CONFIG_TULIP=m # CONFIG_TULIP_MWI is not set # CONFIG_TULIP_MMIO is not set CONFIG_TULIP_NAPI=y CONFIG_TULIP_NAPI_HW_MITIGATION=y CONFIG_DE4X5=m CONFIG_WINBOND_840=m CONFIG_DM9102=m CONFIG_ULI526X=m CONFIG_PCMCIA_XIRCOM=m CONFIG_AT1700=m CONFIG_DEPCA=m CONFIG_HP100=m CONFIG_NET_ISA=y CONFIG_E2100=m CONFIG_EWRK3=m CONFIG_EEXPRESS=m CONFIG_EEXPRESS_PRO=m CONFIG_HPLAN_PLUS=m CONFIG_HPLAN=m CONFIG_LP486E=m CONFIG_ETH16I=m CONFIG_NE2000=m CONFIG_ZNET=m CONFIG_SEEQ8005=m # CONFIG_IBM_NEW_EMAC_ZMII is not set # CONFIG_IBM_NEW_EMAC_RGMII is not set # CONFIG_IBM_NEW_EMAC_TAH is not set # CONFIG_IBM_NEW_EMAC_EMAC4 is not set CONFIG_NET_PCI=y CONFIG_PCNET32=m CONFIG_PCNET32_NAPI=y CONFIG_AMD8111_ETH=m CONFIG_AMD8111E_NAPI=y CONFIG_ADAPTEC_STARFIRE=m CONFIG_ADAPTEC_STARFIRE_NAPI=y CONFIG_AC3200=m CONFIG_APRICOT=m CONFIG_B44=m CONFIG_B44_PCI_AUTOSELECT=y CONFIG_B44_PCICORE_AUTOSELECT=y CONFIG_B44_PCI=y CONFIG_FORCEDETH=m # CONFIG_FORCEDETH_NAPI is not set CONFIG_CS89x0=m CONFIG_EEPRO100=m CONFIG_E100=m CONFIG_FEALNX=m CONFIG_NATSEMI=m CONFIG_NE2K_PCI=m CONFIG_8139CP=m CONFIG_8139TOO=m CONFIG_8139TOO_PIO=y 
CONFIG_8139TOO_TUNE_TWISTER=y CONFIG_8139TOO_8129=y # CONFIG_8139_OLD_RX_RESET is not set CONFIG_SIS900=m CONFIG_EPIC100=m CONFIG_SUNDANCE=m # CONFIG_SUNDANCE_MMIO is not set CONFIG_TLAN=m CONFIG_VIA_RHINE=m # CONFIG_VIA_RHINE_MMIO is not set CONFIG_VIA_RHINE_NAPI=y CONFIG_SC92031=m # CONFIG_NET_POCKET is not set CONFIG_NETDEV_1000=y CONFIG_DL2K=m CONFIG_E1000=m CONFIG_E1000_NAPI=y # CONFIG_E1000_DISABLE_PACKET_SPLIT is not set CONFIG_E1000E=m CONFIG_IP1000=m CONFIG_NS83820=m CONFIG_HAMACHI=m CONFIG_YELLOWFIN=m CONFIG_R8169=m CONFIG_R8169_NAPI=y CONFIG_R8169_VLAN=y CONFIG_SIS190=m CONFIG_SKGE=m # CONFIG_SKGE_DEBUG is not set CONFIG_SKY2=m # CONFIG_SKY2_DEBUG is not set # CONFIG_SK98LIN is not set CONFIG_VIA_VELOCITY=m CONFIG_TIGON3=m CONFIG_QLA3XXX=m CONFIG_ATL1=m CONFIG_NETDEV_10000=y CONFIG_CHELSIO_T1=m CONFIG_CHELSIO_T1_1G=y CONFIG_CHELSIO_T1_NAPI=y CONFIG_CHELSIO_T3=m CONFIG_IXGBE=m CONFIG_IXGB=m CONFIG_IXGB_NAPI=y CONFIG_S2IO=m CONFIG_S2IO_NAPI=y CONFIG_MYRI10GE=m CONFIG_NETXEN_NIC=m CONFIG_NIU=m CONFIG_MLX4_CORE=m CONFIG_MLX4_DEBUG=y CONFIG_TEHUTI=m CONFIG_TR=y CONFIG_IBMTR=m CONFIG_IBMOL=m CONFIG_IBMLS=m CONFIG_TMS380TR=m CONFIG_TMSPCI=m CONFIG_SKISA=m CONFIG_PROTEON=m CONFIG_ABYSS=m # # Wireless LAN # CONFIG_WLAN_PRE80211=y CONFIG_STRIP=m CONFIG_ARLAN=m CONFIG_WAVELAN=m CONFIG_PCMCIA_WAVELAN=m CONFIG_PCMCIA_NETWAVE=m CONFIG_WLAN_80211=y CONFIG_PCMCIA_RAYCS=m CONFIG_IPW2100=m CONFIG_IPW2100_MONITOR=y # CONFIG_IPW2100_DEBUG is not set CONFIG_IPW2200=m CONFIG_IPW2200_MONITOR=y CONFIG_IPW2200_RADIOTAP=y CONFIG_IPW2200_PROMISCUOUS=y CONFIG_IPW2200_QOS=y # CONFIG_IPW2200_DEBUG is not set CONFIG_LIBERTAS=m CONFIG_LIBERTAS_USB=m CONFIG_LIBERTAS_CS=m CONFIG_LIBERTAS_SDIO=m # CONFIG_LIBERTAS_DEBUG is not set CONFIG_AIRO=m CONFIG_HERMES=m CONFIG_PLX_HERMES=m CONFIG_TMD_HERMES=m CONFIG_NORTEL_HERMES=m CONFIG_PCI_HERMES=m CONFIG_PCMCIA_HERMES=m CONFIG_PCMCIA_SPECTRUM=m CONFIG_ATMEL=m CONFIG_PCI_ATMEL=m CONFIG_PCMCIA_ATMEL=m CONFIG_USB_ATMEL=m CONFIG_AIRO_CS=m 
CONFIG_PCMCIA_WL3501=m CONFIG_PRISM54=m CONFIG_USB_ZD1201=m CONFIG_RTL8187=m CONFIG_ADM8211=m CONFIG_P54_COMMON=m CONFIG_P54_USB=m CONFIG_P54_PCI=m CONFIG_ATH5K=m CONFIG_IWLWIFI=y # CONFIG_IWLWIFI_DEBUG is not set CONFIG_IWLWIFI_SENSITIVITY=y CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT=y CONFIG_IWLWIFI_QOS=y CONFIG_IWL4965=m CONFIG_IWL3945=m CONFIG_HOSTAP=m CONFIG_HOSTAP_FIRMWARE=y # CONFIG_HOSTAP_FIRMWARE_NVRAM is not set CONFIG_HOSTAP_PLX=m CONFIG_HOSTAP_PCI=m CONFIG_HOSTAP_CS=m # CONFIG_BCM43XX is not set CONFIG_B43=m CONFIG_B43_PCI_AUTOSELECT=y CONFIG_B43_PCICORE_AUTOSELECT=y CONFIG_B43_PCMCIA=y CONFIG_B43_LEDS=y CONFIG_B43_RFKILL=y # CONFIG_B43_DEBUG is not set CONFIG_B43_DMA=y CONFIG_B43_PIO=y CONFIG_B43_DMA_AND_PIO_MODE=y # CONFIG_B43_DMA_MODE is not set # CONFIG_B43_PIO_MODE is not set CONFIG_B43LEGACY=m CONFIG_B43LEGACY_PCI_AUTOSELECT=y CONFIG_B43LEGACY_PCICORE_AUTOSELECT=y CONFIG_B43LEGACY_DEBUG=y CONFIG_B43LEGACY_DMA=y CONFIG_B43LEGACY_PIO=y CONFIG_B43LEGACY_DMA_AND_PIO_MODE=y # CONFIG_B43LEGACY_DMA_MODE is not set # CONFIG_B43LEGACY_PIO_MODE is not set CONFIG_ZD1211RW=m # CONFIG_ZD1211RW_DEBUG is not set CONFIG_RT2X00=m CONFIG_RT2X00_LIB=m CONFIG_RT2X00_LIB_PCI=m CONFIG_RT2X00_LIB_USB=m CONFIG_RT2X00_LIB_FIRMWARE=y CONFIG_RT2X00_LIB_RFKILL=y CONFIG_RT2400PCI=m CONFIG_RT2400PCI_RFKILL=y CONFIG_RT2500PCI=m CONFIG_RT2500PCI_RFKILL=y CONFIG_RT61PCI=m CONFIG_RT61PCI_RFKILL=y CONFIG_RT2500USB=m CONFIG_RT73USB=m # CONFIG_RT2X00_DEBUG is not set # # USB Network Adapters # CONFIG_USB_CATC=m CONFIG_USB_KAWETH=m CONFIG_USB_PEGASUS=m CONFIG_USB_RTL8150=m CONFIG_USB_USBNET=m CONFIG_USB_NET_AX8817X=m CONFIG_USB_NET_CDCETHER=m CONFIG_USB_NET_DM9601=m CONFIG_USB_NET_GL620A=m CONFIG_USB_NET_NET1080=m CONFIG_USB_NET_PLUSB=m CONFIG_USB_NET_MCS7830=m CONFIG_USB_NET_RNDIS_HOST=m CONFIG_USB_NET_CDC_SUBSET=m CONFIG_USB_ALI_M5632=y CONFIG_USB_AN2720=y CONFIG_USB_BELKIN=y CONFIG_USB_ARMLINUX=y CONFIG_USB_EPSON2888=y CONFIG_USB_KC2190=y CONFIG_USB_NET_ZAURUS=m CONFIG_NET_PCMCIA=y 
CONFIG_PCMCIA_3C589=m CONFIG_PCMCIA_3C574=m CONFIG_PCMCIA_FMVJ18X=m CONFIG_PCMCIA_PCNET=m CONFIG_PCMCIA_NMCLAN=m CONFIG_PCMCIA_SMC91C92=m CONFIG_PCMCIA_XIRC2PS=m CONFIG_PCMCIA_AXNET=m CONFIG_ARCNET_COM20020_CS=m CONFIG_PCMCIA_IBMTR=m CONFIG_WAN=y CONFIG_HOSTESS_SV11=m CONFIG_COSA=m CONFIG_LANMEDIA=m CONFIG_SEALEVEL_4021=m CONFIG_HDLC=m CONFIG_HDLC_RAW=m CONFIG_HDLC_RAW_ETH=m CONFIG_HDLC_CISCO=m CONFIG_HDLC_FR=m CONFIG_HDLC_PPP=m CONFIG_HDLC_X25=m CONFIG_PCI200SYN=m CONFIG_WANXL=m CONFIG_PC300=m CONFIG_PC300_MLPPP=y # # Cyclades-PC300 MLPPP support is disabled. # # # Refer to the file README.mlppp, provided by PC300 package. # # CONFIG_PC300TOO is not set CONFIG_N2=m CONFIG_C101=m CONFIG_FARSYNC=m CONFIG_DSCC4=m CONFIG_DSCC4_PCISYNC=y CONFIG_DSCC4_PCI_RST=y CONFIG_DLCI=m CONFIG_DLCI_MAX=8 CONFIG_SDLA=m CONFIG_WAN_ROUTER_DRIVERS=m CONFIG_CYCLADES_SYNC=m CONFIG_CYCLOMX_X25=y CONFIG_LAPBETHER=m CONFIG_X25_ASY=m CONFIG_SBNI=m # CONFIG_SBNI_MULTILINE is not set CONFIG_ATM_DRIVERS=y CONFIG_ATM_DUMMY=m CONFIG_ATM_TCP=m CONFIG_ATM_LANAI=m CONFIG_ATM_ENI=m # CONFIG_ATM_ENI_DEBUG is not set # CONFIG_ATM_ENI_TUNE_BURST is not set CONFIG_ATM_FIRESTREAM=m CONFIG_ATM_ZATM=m # CONFIG_ATM_ZATM_DEBUG is not set CONFIG_ATM_NICSTAR=m # CONFIG_ATM_NICSTAR_USE_SUNI is not set # CONFIG_ATM_NICSTAR_USE_IDT77105 is not set CONFIG_ATM_IDT77252=m # CONFIG_ATM_IDT77252_DEBUG is not set # CONFIG_ATM_IDT77252_RCV_ALL is not set CONFIG_ATM_IDT77252_USE_SUNI=y CONFIG_ATM_HORIZON=m # CONFIG_ATM_HORIZON_DEBUG is not set CONFIG_ATM_IA=m # CONFIG_ATM_IA_DEBUG is not set CONFIG_ATM_FORE200E_MAYBE=m CONFIG_ATM_HE=m CONFIG_ATM_HE_USE_SUNI=y CONFIG_FDDI=y CONFIG_DEFXX=m # CONFIG_DEFXX_MMIO is not set CONFIG_SKFP=m CONFIG_HIPPI=y CONFIG_ROADRUNNER=m # CONFIG_ROADRUNNER_LARGE_RINGS is not set CONFIG_PLIP=m CONFIG_PPP=m CONFIG_PPP_MULTILINK=y CONFIG_PPP_FILTER=y CONFIG_PPP_ASYNC=m CONFIG_PPP_SYNC_TTY=m CONFIG_PPP_DEFLATE=m CONFIG_PPP_BSDCOMP=m CONFIG_PPP_MPPE=m CONFIG_PPPOE=m CONFIG_PPPOATM=m 
CONFIG_PPPOL2TP=m
CONFIG_SLIP=m
CONFIG_SLIP_COMPRESSED=y
CONFIG_SLHC=m
CONFIG_SLIP_SMART=y
CONFIG_SLIP_MODE_SLIP6=y
CONFIG_NET_FC=y
CONFIG_SHAPER=m
CONFIG_NETCONSOLE=m
CONFIG_NETCONSOLE_DYNAMIC=y
CONFIG_NETPOLL=y
# CONFIG_NETPOLL_TRAP is not set
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_VIRTIO_NET=m
CONFIG_ISDN=m
CONFIG_ISDN_I4L=m
CONFIG_ISDN_PPP=y
CONFIG_ISDN_PPP_VJ=y
CONFIG_ISDN_MPP=y
CONFIG_IPPP_FILTER=y
CONFIG_ISDN_PPP_BSDCOMP=m
CONFIG_ISDN_AUDIO=y
CONFIG_ISDN_TTY_FAX=y
CONFIG_ISDN_X25=y

#
# ISDN feature submodules
#
# CONFIG_ISDN_DIVERSION is not set

#
# ISDN4Linux hardware drivers
#

#
# Passive cards
#
CONFIG_ISDN_DRV_HISAX=m

#
# D-channel protocol features
#
CONFIG_HISAX_EURO=y
CONFIG_DE_AOC=y
# CONFIG_HISAX_NO_SENDCOMPLETE is not set
# CONFIG_HISAX_NO_LLC is not set
# CONFIG_HISAX_NO_KEYPAD is not set
CONFIG_HISAX_1TR6=y
CONFIG_HISAX_NI1=y
CONFIG_HISAX_MAX_CARDS=8

#
# HiSax supported cards
#
CONFIG_HISAX_16_0=y
CONFIG_HISAX_16_3=y
CONFIG_HISAX_TELESPCI=y
CONFIG_HISAX_S0BOX=y
CONFIG_HISAX_AVM_A1=y
CONFIG_HISAX_FRITZPCI=y
CONFIG_HISAX_AVM_A1_PCMCIA=y
CONFIG_HISAX_ELSA=y
CONFIG_HISAX_IX1MICROR2=y
CONFIG_HISAX_DIEHLDIVA=y
CONFIG_HISAX_ASUSCOM=y
CONFIG_HISAX_TELEINT=y
CONFIG_HISAX_HFCS=y
CONFIG_HISAX_SEDLBAUER=y
CONFIG_HISAX_SPORTSTER=y
CONFIG_HISAX_MIC=y
CONFIG_HISAX_NETJET=y
CONFIG_HISAX_NETJET_U=y
CONFIG_HISAX_NICCY=y
CONFIG_HISAX_ISURF=y
CONFIG_HISAX_HSTSAPHIR=y
CONFIG_HISAX_BKM_A4T=y
CONFIG_HISAX_SCT_QUADRO=y
CONFIG_HISAX_GAZEL=y
CONFIG_HISAX_HFC_PCI=y
CONFIG_HISAX_W6692=y
CONFIG_HISAX_HFC_SX=y
CONFIG_HISAX_ENTERNOW_PCI=y
# CONFIG_HISAX_DEBUG is not set

#
# HiSax PCMCIA card service modules
#
CONFIG_HISAX_SEDLBAUER_CS=m
CONFIG_HISAX_ELSA_CS=m
CONFIG_HISAX_AVM_A1_CS=m
CONFIG_HISAX_TELES_CS=m

#
# HiSax sub driver modules
#
CONFIG_HISAX_ST5481=m
CONFIG_HISAX_HFCUSB=m
CONFIG_HISAX_HFC4S8S=m
CONFIG_HISAX_FRITZ_PCIPNP=m
CONFIG_HISAX_HDLC=y

#
# Active cards
#
CONFIG_ISDN_DRV_ICN=m
CONFIG_ISDN_DRV_PCBIT=m
CONFIG_ISDN_DRV_SC=m
CONFIG_ISDN_DRV_ACT2000=m
CONFIG_ISDN_DRV_GIGASET=m
CONFIG_GIGASET_BASE=m
CONFIG_GIGASET_M105=m
CONFIG_GIGASET_M101=m
# CONFIG_GIGASET_DEBUG is not set
# CONFIG_GIGASET_UNDOCREQ is not set
CONFIG_ISDN_CAPI=m
CONFIG_ISDN_DRV_AVMB1_VERBOSE_REASON=y
CONFIG_CAPI_TRACE=y
CONFIG_ISDN_CAPI_MIDDLEWARE=y
CONFIG_ISDN_CAPI_CAPI20=m
CONFIG_ISDN_CAPI_CAPIFS_BOOL=y
CONFIG_ISDN_CAPI_CAPIFS=m
CONFIG_ISDN_CAPI_CAPIDRV=m

#
# CAPI hardware drivers
#
CONFIG_CAPI_AVM=y
CONFIG_ISDN_DRV_AVMB1_B1ISA=m
CONFIG_ISDN_DRV_AVMB1_B1PCI=m
CONFIG_ISDN_DRV_AVMB1_B1PCIV4=y
CONFIG_ISDN_DRV_AVMB1_T1ISA=m
CONFIG_ISDN_DRV_AVMB1_B1PCMCIA=m
CONFIG_ISDN_DRV_AVMB1_AVM_CS=m
CONFIG_ISDN_DRV_AVMB1_T1PCI=m
CONFIG_ISDN_DRV_AVMB1_C4=m
CONFIG_CAPI_EICON=y
CONFIG_ISDN_DIVAS=m
CONFIG_ISDN_DIVAS_BRIPCI=y
CONFIG_ISDN_DIVAS_PRIPCI=y
CONFIG_ISDN_DIVAS_DIVACAPI=m
CONFIG_ISDN_DIVAS_USERIDI=m
CONFIG_ISDN_DIVAS_MAINT=m
CONFIG_PHONE=m
CONFIG_PHONE_IXJ=m
CONFIG_PHONE_IXJ_PCMCIA=m

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_FF_MEMLESS=m
CONFIG_INPUT_POLLDEV=m

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
CONFIG_INPUT_MOUSEDEV_PSAUX=y
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
CONFIG_INPUT_JOYDEV=m
CONFIG_INPUT_EVDEV=m
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
CONFIG_KEYBOARD_ATKBD=y
CONFIG_KEYBOARD_SUNKBD=m
CONFIG_KEYBOARD_LKKBD=m
CONFIG_KEYBOARD_XTKBD=m
CONFIG_KEYBOARD_NEWTON=m
CONFIG_KEYBOARD_STOWAWAY=m
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=m
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
CONFIG_MOUSE_SERIAL=m
CONFIG_MOUSE_APPLETOUCH=m
CONFIG_MOUSE_INPORT=m
# CONFIG_MOUSE_ATIXL is not set
CONFIG_MOUSE_LOGIBM=m
CONFIG_MOUSE_PC110PAD=m
CONFIG_MOUSE_VSXXXAA=m
CONFIG_INPUT_JOYSTICK=y
CONFIG_JOYSTICK_ANALOG=m
CONFIG_JOYSTICK_A3D=m
CONFIG_JOYSTICK_ADI=m
CONFIG_JOYSTICK_COBRA=m
CONFIG_JOYSTICK_GF2K=m
CONFIG_JOYSTICK_GRIP=m
CONFIG_JOYSTICK_GRIP_MP=m
CONFIG_JOYSTICK_GUILLEMOT=m
CONFIG_JOYSTICK_INTERACT=m
CONFIG_JOYSTICK_SIDEWINDER=m
CONFIG_JOYSTICK_TMDC=m
CONFIG_JOYSTICK_IFORCE=m
CONFIG_JOYSTICK_IFORCE_USB=y
CONFIG_JOYSTICK_IFORCE_232=y
CONFIG_JOYSTICK_WARRIOR=m
CONFIG_JOYSTICK_MAGELLAN=m
CONFIG_JOYSTICK_SPACEORB=m
CONFIG_JOYSTICK_SPACEBALL=m
CONFIG_JOYSTICK_STINGER=m
CONFIG_JOYSTICK_TWIDJOY=m
CONFIG_JOYSTICK_DB9=m
CONFIG_JOYSTICK_GAMECON=m
CONFIG_JOYSTICK_TURBOGRAFX=m
CONFIG_JOYSTICK_JOYDUMP=m
CONFIG_JOYSTICK_XPAD=m
CONFIG_JOYSTICK_XPAD_FF=y
CONFIG_JOYSTICK_XPAD_LEDS=y
CONFIG_INPUT_TABLET=y
CONFIG_TABLET_USB_ACECAD=m
CONFIG_TABLET_USB_AIPTEK=m
CONFIG_TABLET_USB_GTCO=m
CONFIG_TABLET_USB_KBTAB=m
CONFIG_TABLET_USB_WACOM=m
CONFIG_INPUT_TOUCHSCREEN=y
CONFIG_TOUCHSCREEN_ADS7846=m
CONFIG_TOUCHSCREEN_FUJITSU=m
CONFIG_TOUCHSCREEN_GUNZE=m
CONFIG_TOUCHSCREEN_ELO=m
CONFIG_TOUCHSCREEN_MTOUCH=m
CONFIG_TOUCHSCREEN_MK712=m
CONFIG_TOUCHSCREEN_PENMOUNT=m
CONFIG_TOUCHSCREEN_TOUCHRIGHT=m
CONFIG_TOUCHSCREEN_TOUCHWIN=m
CONFIG_TOUCHSCREEN_UCB1400=m
CONFIG_TOUCHSCREEN_USB_COMPOSITE=m
CONFIG_TOUCHSCREEN_USB_EGALAX=y
CONFIG_TOUCHSCREEN_USB_PANJIT=y
CONFIG_TOUCHSCREEN_USB_3M=y
CONFIG_TOUCHSCREEN_USB_ITM=y
CONFIG_TOUCHSCREEN_USB_ETURBO=y
CONFIG_TOUCHSCREEN_USB_GUNZE=y
CONFIG_TOUCHSCREEN_USB_DMC_TSC10=y
CONFIG_TOUCHSCREEN_USB_IRTOUCH=y
CONFIG_TOUCHSCREEN_USB_IDEALTEK=y
CONFIG_TOUCHSCREEN_USB_GENERAL_TOUCH=y
CONFIG_TOUCHSCREEN_USB_GOTOP=y
CONFIG_INPUT_MISC=y
CONFIG_INPUT_PCSPKR=m
CONFIG_INPUT_WISTRON_BTNS=m
CONFIG_INPUT_ATLAS_BTNS=m
CONFIG_INPUT_ATI_REMOTE=m
CONFIG_INPUT_ATI_REMOTE2=m
CONFIG_INPUT_KEYSPAN_REMOTE=m
CONFIG_INPUT_POWERMATE=m
CONFIG_INPUT_YEALINK=m
CONFIG_INPUT_UINPUT=m

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=m
CONFIG_SERIO_CT82C710=m
CONFIG_SERIO_PARKBD=m
CONFIG_SERIO_PCIPS2=m
CONFIG_SERIO_LIBPS2=y
CONFIG_SERIO_RAW=m
CONFIG_GAMEPORT=m
CONFIG_GAMEPORT_NS558=m
CONFIG_GAMEPORT_L4=m
CONFIG_GAMEPORT_EMU10K1=m
CONFIG_GAMEPORT_FM801=m

#
# Character devices
#
CONFIG_VT=y
CONFIG_VT_CONSOLE=y
CONFIG_HW_CONSOLE=y
# CONFIG_VT_HW_CONSOLE_BINDING is not set
CONFIG_SERIAL_NONSTANDARD=y
CONFIG_ROCKETPORT=m
CONFIG_CYCLADES=m
# CONFIG_CYZ_INTR is not set
# CONFIG_DIGIEPCA is not set
# CONFIG_ESPSERIAL is not set
# CONFIG_MOXA_INTELLIO is not set
CONFIG_MOXA_SMARTIO=m
CONFIG_MOXA_SMARTIO_NEW=m
# CONFIG_ISI is not set
CONFIG_SYNCLINK=m
CONFIG_SYNCLINKMP=m
CONFIG_SYNCLINK_GT=m
CONFIG_N_HDLC=m
# CONFIG_SPECIALIX is not set
CONFIG_SX=m
# CONFIG_RIO is not set
CONFIG_STALDRV=y

#
# Serial drivers
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_PNP=y
CONFIG_SERIAL_8250_CS=m
CONFIG_SERIAL_8250_NR_UARTS=32
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_FOURPORT=m
CONFIG_SERIAL_8250_ACCENT=m
CONFIG_SERIAL_8250_BOCA=m
CONFIG_SERIAL_8250_EXAR_ST16C554=m
CONFIG_SERIAL_8250_HUB6=m
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
CONFIG_SERIAL_8250_RSA=y

#
# Non-8250 serial port support
#
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_SERIAL_JSM=m
CONFIG_UNIX98_PTYS=y
# CONFIG_LEGACY_PTYS is not set
CONFIG_PRINTER=m
# CONFIG_LP_CONSOLE is not set
CONFIG_PPDEV=m
CONFIG_HVC_DRIVER=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_IPMI_HANDLER=m
# CONFIG_IPMI_PANIC_EVENT is not set
CONFIG_IPMI_DEVICE_INTERFACE=m
CONFIG_IPMI_SI=m
CONFIG_IPMI_WATCHDOG=m
CONFIG_IPMI_POWEROFF=m
CONFIG_HW_RANDOM=y
CONFIG_HW_RANDOM_INTEL=m
CONFIG_HW_RANDOM_AMD=m
CONFIG_HW_RANDOM_GEODE=m
CONFIG_HW_RANDOM_VIA=m
CONFIG_NVRAM=m
CONFIG_RTC=m
CONFIG_GEN_RTC=m
CONFIG_GEN_RTC_X=y
CONFIG_DTLK=m
CONFIG_R3964=m
CONFIG_APPLICOM=m
CONFIG_SONYPI=m

#
# PCMCIA character devices
#
CONFIG_SYNCLINK_CS=m
CONFIG_CARDMAN_4000=m
CONFIG_CARDMAN_4040=m
CONFIG_MWAVE=m
CONFIG_SCx200_GPIO=m
CONFIG_PC8736x_GPIO=m
CONFIG_NSC_GPIO=m
CONFIG_CS5535_GPIO=m
CONFIG_RAW_DRIVER=m
CONFIG_MAX_RAW_DEVS=256
CONFIG_HPET=y
# CONFIG_HPET_RTC_IRQ is not set
CONFIG_HPET_MMAP=y
CONFIG_HANGCHECK_TIMER=m
CONFIG_TCG_TPM=m
CONFIG_TCG_TIS=m
CONFIG_TCG_NSC=m
CONFIG_TCG_ATMEL=m
CONFIG_TCG_INFINEON=m
CONFIG_TELCLOCK=m
CONFIG_DEVPORT=y
CONFIG_I2C=m
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_CHARDEV=m

#
# I2C Algorithms
#
CONFIG_I2C_ALGOBIT=m
CONFIG_I2C_ALGOPCF=m
CONFIG_I2C_ALGOPCA=m

#
# I2C Hardware Bus support
#
CONFIG_I2C_ALI1535=m
CONFIG_I2C_ALI1563=m
CONFIG_I2C_ALI15X3=m
CONFIG_I2C_AMD756=m
CONFIG_I2C_AMD756_S4882=m
CONFIG_I2C_AMD8111=m
CONFIG_I2C_I801=m
CONFIG_I2C_I810=m
CONFIG_I2C_PIIX4=m
CONFIG_I2C_NFORCE2=m
CONFIG_I2C_OCORES=m
CONFIG_I2C_PARPORT=m
CONFIG_I2C_PARPORT_LIGHT=m
CONFIG_I2C_PROSAVAGE=m
CONFIG_I2C_SAVAGE4=m
CONFIG_I2C_SIMTEC=m
CONFIG_SCx200_I2C=m
CONFIG_SCx200_I2C_SCL=12
CONFIG_SCx200_I2C_SDA=13
CONFIG_SCx200_ACB=m
CONFIG_I2C_SIS5595=m
CONFIG_I2C_SIS630=m
CONFIG_I2C_SIS96X=m
CONFIG_I2C_TAOS_EVM=m
CONFIG_I2C_STUB=m
CONFIG_I2C_TINY_USB=m
CONFIG_I2C_VIA=m
CONFIG_I2C_VIAPRO=m
CONFIG_I2C_VOODOO3=m
CONFIG_I2C_PCA_ISA=m

#
# Miscellaneous I2C Chip support
#
CONFIG_SENSORS_DS1337=m
CONFIG_SENSORS_DS1374=m
CONFIG_DS1682=m
CONFIG_SENSORS_EEPROM=m
CONFIG_SENSORS_PCF8574=m
CONFIG_SENSORS_PCA9539=m
CONFIG_SENSORS_PCF8591=m
CONFIG_SENSORS_MAX6875=m
CONFIG_SENSORS_TSL2550=m
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# CONFIG_I2C_DEBUG_CHIP is not set

#
# SPI support
#
CONFIG_SPI=y
# CONFIG_SPI_DEBUG is not set
CONFIG_SPI_MASTER=y

#
# SPI Master Controller Drivers
#
CONFIG_SPI_BITBANG=m
CONFIG_SPI_BUTTERFLY=m
CONFIG_SPI_LM70_LLP=m

#
# SPI Protocol Masters
#
CONFIG_SPI_AT25=m
# CONFIG_SPI_SPIDEV is not set
CONFIG_SPI_TLE62X0=m
CONFIG_W1=m
CONFIG_W1_CON=y

#
# 1-wire Bus Masters
#
CONFIG_W1_MASTER_MATROX=m
CONFIG_W1_MASTER_DS2490=m
CONFIG_W1_MASTER_DS2482=m

#
# 1-wire Slaves
#
CONFIG_W1_SLAVE_THERM=m
CONFIG_W1_SLAVE_SMEM=m
CONFIG_W1_SLAVE_DS2433=m
# CONFIG_W1_SLAVE_DS2433_CRC is not set
CONFIG_W1_SLAVE_DS2760=m
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
CONFIG_PDA_POWER=m
CONFIG_BATTERY_DS2760=m
CONFIG_HWMON=y
CONFIG_HWMON_VID=m
CONFIG_SENSORS_ABITUGURU=m
CONFIG_SENSORS_ABITUGURU3=m
CONFIG_SENSORS_AD7418=m
CONFIG_SENSORS_ADM1021=m
CONFIG_SENSORS_ADM1025=m
CONFIG_SENSORS_ADM1026=m
CONFIG_SENSORS_ADM1029=m
CONFIG_SENSORS_ADM1031=m
CONFIG_SENSORS_ADM9240=m
CONFIG_SENSORS_ADT7470=m
CONFIG_SENSORS_K8TEMP=m
CONFIG_SENSORS_ASB100=m
CONFIG_SENSORS_ATXP1=m
CONFIG_SENSORS_DS1621=m
CONFIG_SENSORS_I5K_AMB=m
CONFIG_SENSORS_F71805F=m
CONFIG_SENSORS_F71882FG=m
CONFIG_SENSORS_F75375S=m
CONFIG_SENSORS_FSCHER=m
CONFIG_SENSORS_FSCPOS=m
CONFIG_SENSORS_FSCHMD=m
CONFIG_SENSORS_GL518SM=m
CONFIG_SENSORS_GL520SM=m
CONFIG_SENSORS_CORETEMP=m
CONFIG_SENSORS_IBMPEX=m
CONFIG_SENSORS_IT87=m
CONFIG_SENSORS_LM63=m
CONFIG_SENSORS_LM70=m
CONFIG_SENSORS_LM75=m
CONFIG_SENSORS_LM77=m
CONFIG_SENSORS_LM78=m
CONFIG_SENSORS_LM80=m
CONFIG_SENSORS_LM83=m
CONFIG_SENSORS_LM85=m
CONFIG_SENSORS_LM87=m
CONFIG_SENSORS_LM90=m
CONFIG_SENSORS_LM92=m
CONFIG_SENSORS_LM93=m
CONFIG_SENSORS_MAX1619=m
CONFIG_SENSORS_MAX6650=m
CONFIG_SENSORS_PC87360=m
CONFIG_SENSORS_PC87427=m
CONFIG_SENSORS_SIS5595=m
CONFIG_SENSORS_DME1737=m
CONFIG_SENSORS_SMSC47M1=m
CONFIG_SENSORS_SMSC47M192=m
CONFIG_SENSORS_SMSC47B397=m
CONFIG_SENSORS_THMC50=m
CONFIG_SENSORS_VIA686A=m
CONFIG_SENSORS_VT1211=m
CONFIG_SENSORS_VT8231=m
CONFIG_SENSORS_W83781D=m
CONFIG_SENSORS_W83791D=m
CONFIG_SENSORS_W83792D=m
CONFIG_SENSORS_W83793=m
CONFIG_SENSORS_W83L785TS=m
CONFIG_SENSORS_W83627HF=m
CONFIG_SENSORS_W83627EHF=m
CONFIG_SENSORS_HDAPS=m
CONFIG_SENSORS_APPLESMC=m
# CONFIG_HWMON_DEBUG_CHIP is not set
CONFIG_WATCHDOG=y
# CONFIG_WATCHDOG_NOWAYOUT is not set

#
# Watchdog Device Drivers
#
CONFIG_SOFT_WATCHDOG=m
CONFIG_ACQUIRE_WDT=m
CONFIG_ADVANTECH_WDT=m
CONFIG_ALIM1535_WDT=m
CONFIG_ALIM7101_WDT=m
CONFIG_SC520_WDT=m
CONFIG_EUROTECH_WDT=m
CONFIG_IB700_WDT=m
CONFIG_IBMASR=m
CONFIG_WAFER_WDT=m
CONFIG_I6300ESB_WDT=m
CONFIG_ITCO_WDT=m
# CONFIG_ITCO_VENDOR_SUPPORT is not set
CONFIG_IT8712F_WDT=m
CONFIG_SC1200_WDT=m
CONFIG_SCx200_WDT=m
CONFIG_PC87413_WDT=m
CONFIG_60XX_WDT=m
CONFIG_SBC8360_WDT=m
CONFIG_SBC7240_WDT=m
CONFIG_CPU5_WDT=m
CONFIG_SMSC37B787_WDT=m
CONFIG_W83627HF_WDT=m
CONFIG_W83697HF_WDT=m
CONFIG_W83877F_WDT=m
CONFIG_W83977F_WDT=m
CONFIG_MACHZ_WDT=m
CONFIG_SBC_EPX_C3_WATCHDOG=m

#
# ISA-based Watchdog Cards
#
CONFIG_PCWATCHDOG=m
CONFIG_MIXCOMWD=m
CONFIG_WDT=m
CONFIG_WDT_501=y

#
# PCI-based Watchdog Cards
#
CONFIG_PCIPCWATCHDOG=m
CONFIG_WDTPCI=m
CONFIG_WDT_501_PCI=y

#
# USB-based Watchdog Cards
#
CONFIG_USBPCWATCHDOG=m

#
# Sonics Silicon Backplane
#
CONFIG_SSB_POSSIBLE=y
CONFIG_SSB=m
CONFIG_SSB_PCIHOST_POSSIBLE=y
CONFIG_SSB_PCIHOST=y
CONFIG_SSB_PCMCIAHOST_POSSIBLE=y
CONFIG_SSB_PCMCIAHOST=y
# CONFIG_SSB_DEBUG is not set
CONFIG_SSB_DRIVER_PCICORE_POSSIBLE=y
CONFIG_SSB_DRIVER_PCICORE=y

#
# Multifunction device drivers
#
CONFIG_MFD_SM501=m

#
# Multimedia devices
#
CONFIG_VIDEO_DEV=m
CONFIG_VIDEO_V4L1=y
CONFIG_VIDEO_V4L1_COMPAT=y
CONFIG_VIDEO_V4L2=y
CONFIG_VIDEO_CAPTURE_DRIVERS=y
# CONFIG_VIDEO_ADV_DEBUG is not set
CONFIG_VIDEO_HELPER_CHIPS_AUTO=y
CONFIG_VIDEO_TVAUDIO=m
CONFIG_VIDEO_TDA7432=m
CONFIG_VIDEO_TDA9840=m
CONFIG_VIDEO_TDA9875=m
CONFIG_VIDEO_TEA6415C=m
CONFIG_VIDEO_TEA6420=m
CONFIG_VIDEO_MSP3400=m
CONFIG_VIDEO_CS53L32A=m
CONFIG_VIDEO_WM8775=m
CONFIG_VIDEO_WM8739=m
CONFIG_VIDEO_VP27SMPX=m
CONFIG_VIDEO_BT819=m
CONFIG_VIDEO_BT856=m
CONFIG_VIDEO_KS0127=m
CONFIG_VIDEO_OV7670=m
CONFIG_VIDEO_SAA7110=m
CONFIG_VIDEO_SAA7111=m
CONFIG_VIDEO_SAA7114=m
CONFIG_VIDEO_SAA711X=m
CONFIG_VIDEO_TVP5150=m
CONFIG_VIDEO_VPX3220=m
CONFIG_VIDEO_CX25840=m
CONFIG_VIDEO_CX2341X=m
CONFIG_VIDEO_SAA7127=m
CONFIG_VIDEO_SAA7185=m
CONFIG_VIDEO_ADV7170=m
CONFIG_VIDEO_ADV7175=m
CONFIG_VIDEO_UPD64031A=m
CONFIG_VIDEO_UPD64083=m
CONFIG_VIDEO_VIVI=m
CONFIG_VIDEO_BT848=m
CONFIG_VIDEO_BT848_DVB=y
CONFIG_VIDEO_SAA6588=m
CONFIG_VIDEO_PMS=m
CONFIG_VIDEO_BWQCAM=m
CONFIG_VIDEO_CQCAM=m
CONFIG_VIDEO_W9966=m
CONFIG_VIDEO_CPIA=m
CONFIG_VIDEO_CPIA_PP=m
CONFIG_VIDEO_CPIA_USB=m
CONFIG_VIDEO_CPIA2=m
CONFIG_VIDEO_SAA5246A=m
CONFIG_VIDEO_SAA5249=m
CONFIG_TUNER_3036=m
CONFIG_VIDEO_STRADIS=m
CONFIG_VIDEO_ZORAN_ZR36060=m
CONFIG_VIDEO_ZORAN=m
CONFIG_VIDEO_ZORAN_BUZ=m
CONFIG_VIDEO_ZORAN_DC10=m
CONFIG_VIDEO_ZORAN_DC30=m
CONFIG_VIDEO_ZORAN_LML33=m
CONFIG_VIDEO_ZORAN_LML33R10=m
CONFIG_VIDEO_ZORAN_AVS6EYES=m
CONFIG_VIDEO_MEYE=m
CONFIG_VIDEO_SAA7134=m
CONFIG_VIDEO_SAA7134_ALSA=m
CONFIG_VIDEO_SAA7134_OSS=m
CONFIG_VIDEO_SAA7134_DVB=m
CONFIG_VIDEO_MXB=m
CONFIG_VIDEO_DPC=m
CONFIG_VIDEO_HEXIUM_ORION=m
CONFIG_VIDEO_HEXIUM_GEMINI=m
CONFIG_VIDEO_CX88=m
CONFIG_VIDEO_CX88_ALSA=m
CONFIG_VIDEO_CX88_BLACKBIRD=m
CONFIG_VIDEO_CX88_DVB=m
CONFIG_VIDEO_CX88_VP3054=m
CONFIG_VIDEO_CX23885=m
CONFIG_VIDEO_IVTV=m
CONFIG_VIDEO_FB_IVTV=m
CONFIG_VIDEO_CAFE_CCIC=m
CONFIG_V4L_USB_DRIVERS=y
CONFIG_VIDEO_PVRUSB2=m
CONFIG_VIDEO_PVRUSB2_29XXX=y
CONFIG_VIDEO_PVRUSB2_24XXX=y
CONFIG_VIDEO_PVRUSB2_SYSFS=y
# CONFIG_VIDEO_PVRUSB2_DEBUGIFC is not set
CONFIG_VIDEO_EM28XX=m
CONFIG_VIDEO_USBVISION=m
CONFIG_VIDEO_USBVIDEO=m
CONFIG_USB_VICAM=m
CONFIG_USB_IBMCAM=m
CONFIG_USB_KONICAWC=m
CONFIG_USB_QUICKCAM_MESSENGER=m
CONFIG_USB_ET61X251=m
CONFIG_VIDEO_OVCAMCHIP=m
# CONFIG_USB_W9968CF is not set
CONFIG_USB_OV511=m
CONFIG_USB_SE401=m
CONFIG_USB_SN9C102=m
CONFIG_USB_STV680=m
CONFIG_USB_ZC0301=m
CONFIG_USB_PWC=m
# CONFIG_USB_PWC_DEBUG is not set
CONFIG_USB_ZR364XX=m
CONFIG_RADIO_ADAPTERS=y
CONFIG_RADIO_CADET=m
CONFIG_RADIO_RTRACK=m
CONFIG_RADIO_RTRACK2=m
CONFIG_RADIO_AZTECH=m
CONFIG_RADIO_GEMTEK=m
CONFIG_RADIO_GEMTEK_PCI=m
CONFIG_RADIO_MAXIRADIO=m
CONFIG_RADIO_MAESTRO=m
CONFIG_RADIO_SF16FMI=m
CONFIG_RADIO_SF16FMR2=m
CONFIG_RADIO_TERRATEC=m
CONFIG_RADIO_TRUST=m
CONFIG_RADIO_TYPHOON=m
CONFIG_RADIO_TYPHOON_PROC_FS=y
CONFIG_RADIO_ZOLTRIX=m
CONFIG_USB_DSBR=m
CONFIG_DVB_CORE=m
CONFIG_DVB_CORE_ATTACH=y
CONFIG_DVB_CAPTURE_DRIVERS=y

#
# Supported SAA7146 based PCI Adapters
#
CONFIG_DVB_AV7110=m
CONFIG_DVB_AV7110_OSD=y
CONFIG_DVB_BUDGET=m
CONFIG_DVB_BUDGET_CI=m
CONFIG_DVB_BUDGET_AV=m
CONFIG_DVB_BUDGET_PATCH=m

#
# Supported USB Adapters
#
CONFIG_DVB_USB=m
# CONFIG_DVB_USB_DEBUG is not set
CONFIG_DVB_USB_A800=m
CONFIG_DVB_USB_DIBUSB_MB=m
CONFIG_DVB_USB_DIBUSB_MB_FAULTY=y
CONFIG_DVB_USB_DIBUSB_MC=m
CONFIG_DVB_USB_DIB0700=m
CONFIG_DVB_USB_UMT_010=m
CONFIG_DVB_USB_CXUSB=m
CONFIG_DVB_USB_M920X=m
CONFIG_DVB_USB_GL861=m
CONFIG_DVB_USB_AU6610=m
CONFIG_DVB_USB_DIGITV=m
CONFIG_DVB_USB_VP7045=m
CONFIG_DVB_USB_VP702X=m
CONFIG_DVB_USB_GP8PSK=m
CONFIG_DVB_USB_NOVA_T_USB2=m
CONFIG_DVB_USB_TTUSB2=m
CONFIG_DVB_USB_DTT200U=m
CONFIG_DVB_USB_OPERA1=m
CONFIG_DVB_USB_AF9005=m
CONFIG_DVB_USB_AF9005_REMOTE=m
CONFIG_DVB_TTUSB_DEC=m
CONFIG_DVB_CINERGYT2=m
# CONFIG_DVB_CINERGYT2_TUNING is not set

#
# Supported FlexCopII (B2C2) Adapters
#
CONFIG_DVB_B2C2_FLEXCOP=m
CONFIG_DVB_B2C2_FLEXCOP_PCI=m
CONFIG_DVB_B2C2_FLEXCOP_USB=m
# CONFIG_DVB_B2C2_FLEXCOP_DEBUG is not set

#
# Supported BT878 Adapters
#
CONFIG_DVB_BT8XX=m

#
# Supported Pluto2 Adapters
#
CONFIG_DVB_PLUTO2=m

#
# Supported DVB Frontends
#

#
# Customise DVB Frontends
#
# CONFIG_DVB_FE_CUSTOMISE is not set

#
# DVB-S (satellite) frontends
#
CONFIG_DVB_STV0299=m
CONFIG_DVB_CX24110=m
CONFIG_DVB_CX24123=m
CONFIG_DVB_TDA8083=m
CONFIG_DVB_MT312=m
CONFIG_DVB_VES1X93=m
CONFIG_DVB_S5H1420=m
CONFIG_DVB_TDA10086=m

#
# DVB-T (terrestrial) frontends
#
CONFIG_DVB_SP8870=m
CONFIG_DVB_SP887X=m
CONFIG_DVB_CX22700=m
CONFIG_DVB_CX22702=m
CONFIG_DVB_L64781=m
CONFIG_DVB_TDA1004X=m
CONFIG_DVB_NXT6000=m
CONFIG_DVB_MT352=m
CONFIG_DVB_ZL10353=m
CONFIG_DVB_DIB3000MB=m
CONFIG_DVB_DIB3000MC=m
CONFIG_DVB_DIB7000M=m
CONFIG_DVB_DIB7000P=m

#
# DVB-C (cable) frontends
#
CONFIG_DVB_VES1820=m
CONFIG_DVB_TDA10021=m
CONFIG_DVB_TDA10023=m
CONFIG_DVB_STV0297=m

#
# ATSC (North American/Korean Terrestrial/Cable DTV) frontends
#
CONFIG_DVB_NXT200X=m
CONFIG_DVB_OR51211=m
CONFIG_DVB_OR51132=m
CONFIG_DVB_BCM3510=m
CONFIG_DVB_LGDT330X=m
CONFIG_DVB_S5H1409=m

#
# Tuners/PLL support
#
CONFIG_DVB_PLL=m
CONFIG_DVB_TDA826X=m
CONFIG_DVB_TDA827X=m
CONFIG_DVB_TUNER_QT1010=m
CONFIG_DVB_TUNER_MT2060=m
CONFIG_DVB_TUNER_MT2266=m
CONFIG_DVB_TUNER_MT2131=m
CONFIG_DVB_TUNER_DIB0070=m

#
# Miscellaneous devices
#
CONFIG_DVB_LNBP21=m
CONFIG_DVB_ISL6421=m
CONFIG_DVB_TUA6100=m
CONFIG_VIDEO_SAA7146=m
CONFIG_VIDEO_SAA7146_VV=m
CONFIG_VIDEO_TUNER=m
# CONFIG_VIDEO_TUNER_CUSTOMIZE is not set
CONFIG_TUNER_MT20XX=m
CONFIG_TUNER_TDA8290=m
CONFIG_TUNER_TEA5761=m
CONFIG_TUNER_TEA5767=m
CONFIG_TUNER_SIMPLE=m
CONFIG_VIDEOBUF_GEN=m
CONFIG_VIDEOBUF_DMA_SG=m
CONFIG_VIDEOBUF_VMALLOC=m
CONFIG_VIDEOBUF_DVB=m
CONFIG_VIDEO_BTCX=m
CONFIG_VIDEO_IR_I2C=m
CONFIG_VIDEO_IR=m
CONFIG_VIDEO_TVEEPROM=m
CONFIG_DAB=y
CONFIG_USB_DABUSB=m

#
# Graphics support
#
CONFIG_AGP=m
CONFIG_AGP_ALI=m
CONFIG_AGP_ATI=m
CONFIG_AGP_AMD=m
CONFIG_AGP_AMD64=m
CONFIG_AGP_INTEL=m
CONFIG_AGP_NVIDIA=m
CONFIG_AGP_SIS=m
CONFIG_AGP_SWORKS=m
CONFIG_AGP_VIA=m
CONFIG_AGP_EFFICEON=m
CONFIG_DRM=m
CONFIG_DRM_TDFX=m
CONFIG_DRM_R128=m
CONFIG_DRM_RADEON=m
CONFIG_DRM_I810=m
CONFIG_DRM_I830=m
CONFIG_DRM_I915=m
CONFIG_DRM_MGA=m
CONFIG_DRM_SIS=m
CONFIG_DRM_VIA=m
CONFIG_DRM_SAVAGE=m
CONFIG_VGASTATE=m
CONFIG_VIDEO_OUTPUT_CONTROL=m
CONFIG_FB=y
CONFIG_FIRMWARE_EDID=y
CONFIG_FB_DDC=m
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
# CONFIG_FB_CFB_REV_PIXELS_IN_BYTE is not set
CONFIG_FB_SYS_FILLRECT=m
CONFIG_FB_SYS_COPYAREA=m
CONFIG_FB_SYS_IMAGEBLIT=m
CONFIG_FB_SYS_FOPS=m
CONFIG_FB_DEFERRED_IO=y
CONFIG_FB_SVGALIB=m
# CONFIG_FB_MACMODES is not set
CONFIG_FB_BACKLIGHT=y
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y

#
# Frame buffer hardware drivers
#
CONFIG_FB_CIRRUS=m
CONFIG_FB_PM2=m
CONFIG_FB_PM2_FIFO_DISCONNECT=y
CONFIG_FB_CYBER2000=m
CONFIG_FB_ARC=m
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
CONFIG_FB_VGA16=m
# CONFIG_FB_UVESA is not set
CONFIG_FB_VESA=y
CONFIG_FB_EFI=y
# CONFIG_FB_IMAC is not set
CONFIG_FB_HECUBA=m
CONFIG_FB_HGA=m
# CONFIG_FB_HGA_ACCEL is not set
CONFIG_FB_S1D13XXX=m
CONFIG_FB_NVIDIA=m
CONFIG_FB_NVIDIA_I2C=y
# CONFIG_FB_NVIDIA_DEBUG is not set
CONFIG_FB_NVIDIA_BACKLIGHT=y
# CONFIG_FB_RIVA is not set
CONFIG_FB_I810=m
# CONFIG_FB_I810_GTF is not set
CONFIG_FB_LE80578=m
CONFIG_FB_CARILLO_RANCH=m
CONFIG_FB_INTEL=m
# CONFIG_FB_INTEL_DEBUG is not set
CONFIG_FB_INTEL_I2C=y
CONFIG_FB_MATROX=m
CONFIG_FB_MATROX_MILLENIUM=y
CONFIG_FB_MATROX_MYSTIQUE=y
CONFIG_FB_MATROX_G=y
CONFIG_FB_MATROX_I2C=m
CONFIG_FB_MATROX_MAVEN=m
CONFIG_FB_MATROX_MULTIHEAD=y
CONFIG_FB_RADEON=m
CONFIG_FB_RADEON_I2C=y
CONFIG_FB_RADEON_BACKLIGHT=y
# CONFIG_FB_RADEON_DEBUG is not set
CONFIG_FB_ATY128=m
CONFIG_FB_ATY128_BACKLIGHT=y
CONFIG_FB_ATY=m
CONFIG_FB_ATY_CT=y
CONFIG_FB_ATY_GENERIC_LCD=y
CONFIG_FB_ATY_GX=y
CONFIG_FB_ATY_BACKLIGHT=y
CONFIG_FB_S3=m
CONFIG_FB_SAVAGE=m
CONFIG_FB_SAVAGE_I2C=y
# CONFIG_FB_SAVAGE_ACCEL is not set
CONFIG_FB_SIS=m
CONFIG_FB_SIS_300=y
CONFIG_FB_SIS_315=y
CONFIG_FB_NEOMAGIC=m
CONFIG_FB_KYRO=m
CONFIG_FB_3DFX=m
# CONFIG_FB_3DFX_ACCEL is not set
CONFIG_FB_VOODOO1=m
CONFIG_FB_VT8623=m
CONFIG_FB_CYBLA=m
CONFIG_FB_TRIDENT=m
# CONFIG_FB_TRIDENT_ACCEL is not set
CONFIG_FB_ARK=m
CONFIG_FB_PM3=m
CONFIG_FB_GEODE=y
CONFIG_FB_GEODE_LX=m
CONFIG_FB_GEODE_GX=m
# CONFIG_FB_GEODE_GX_SET_FBSIZE is not set
CONFIG_FB_GEODE_GX1=m
CONFIG_FB_SM501=m
CONFIG_FB_VIRTUAL=m
CONFIG_BACKLIGHT_LCD_SUPPORT=y
# CONFIG_LCD_CLASS_DEVICE is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
# CONFIG_BACKLIGHT_CORGI is not set
CONFIG_BACKLIGHT_PROGEAR=m

#
# Display device support
#
CONFIG_DISPLAY_SUPPORT=m

#
# Display hardware drivers
#

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
# CONFIG_VGACON_SOFT_SCROLLBACK is not set
CONFIG_VIDEO_SELECT=y
CONFIG_MDA_CONSOLE=m
CONFIG_DUMMY_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE=y
# CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY is not set
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
# CONFIG_LOGO is not set

#
# Sound
#
CONFIG_SOUND=m

#
# Advanced Linux Sound Architecture
#
CONFIG_SND=m
CONFIG_SND_TIMER=m
CONFIG_SND_PCM=m
CONFIG_SND_HWDEP=m
CONFIG_SND_RAWMIDI=m
CONFIG_SND_SEQUENCER=m
CONFIG_SND_SEQ_DUMMY=m
CONFIG_SND_OSSEMUL=y
CONFIG_SND_MIXER_OSS=m
CONFIG_SND_PCM_OSS=m
CONFIG_SND_PCM_OSS_PLUGINS=y
CONFIG_SND_SEQUENCER_OSS=y
CONFIG_SND_RTCTIMER=m
CONFIG_SND_SEQ_RTCTIMER_DEFAULT=y
# CONFIG_SND_DYNAMIC_MINORS is not set
CONFIG_SND_SUPPORT_OLD_API=y
CONFIG_SND_VERBOSE_PROCFS=y
# CONFIG_SND_VERBOSE_PRINTK is not set
# CONFIG_SND_DEBUG is not set

#
# Generic devices
#
CONFIG_SND_MPU401_UART=m
CONFIG_SND_OPL3_LIB=m
CONFIG_SND_OPL4_LIB=m
CONFIG_SND_VX_LIB=m
CONFIG_SND_AC97_CODEC=m
CONFIG_SND_DUMMY=m
CONFIG_SND_VIRMIDI=m
CONFIG_SND_MTPAV=m
CONFIG_SND_MTS64=m
CONFIG_SND_SERIAL_U16550=m
CONFIG_SND_MPU401=m
CONFIG_SND_PORTMAN2X4=m
CONFIG_SND_AD1848_LIB=m
CONFIG_SND_CS4231_LIB=m
CONFIG_SND_SB_COMMON=m
CONFIG_SND_SB8_DSP=m
CONFIG_SND_SB16_DSP=m

#
# ISA devices
#
CONFIG_SND_ADLIB=m
CONFIG_SND_AD1816A=m
CONFIG_SND_AD1848=m
CONFIG_SND_ALS100=m
CONFIG_SND_AZT2320=m
CONFIG_SND_CMI8330=m
CONFIG_SND_CS4231=m
CONFIG_SND_CS4232=m
CONFIG_SND_CS4236=m
CONFIG_SND_DT019X=m
CONFIG_SND_ES968=m
CONFIG_SND_ES1688=m
CONFIG_SND_ES18XX=m
CONFIG_SND_SC6000=m
CONFIG_SND_GUS_SYNTH=m
CONFIG_SND_GUSCLASSIC=m
CONFIG_SND_GUSEXTREME=m
CONFIG_SND_GUSMAX=m
CONFIG_SND_INTERWAVE=m
CONFIG_SND_INTERWAVE_STB=m
CONFIG_SND_OPL3SA2=m
CONFIG_SND_OPTI92X_AD1848=m
CONFIG_SND_OPTI92X_CS4231=m
CONFIG_SND_OPTI93X=m
CONFIG_SND_MIRO=m
CONFIG_SND_SB8=m
CONFIG_SND_SB16=m
CONFIG_SND_SBAWE=m
CONFIG_SND_SB16_CSP=y
CONFIG_SND_SB16_CSP_FIRMWARE_IN_KERNEL=y
CONFIG_SND_SGALAXY=m
CONFIG_SND_SSCAPE=m
CONFIG_SND_WAVEFRONT=m
CONFIG_SND_WAVEFRONT_FIRMWARE_IN_KERNEL=y

#
# PCI devices
#
CONFIG_SND_AD1889=m
CONFIG_SND_ALS300=m
CONFIG_SND_ALS4000=m
CONFIG_SND_ALI5451=m
CONFIG_SND_ATIIXP=m
CONFIG_SND_ATIIXP_MODEM=m
CONFIG_SND_AU8810=m
CONFIG_SND_AU8820=m
CONFIG_SND_AU8830=m
CONFIG_SND_AZT3328=m
CONFIG_SND_BT87X=m
# CONFIG_SND_BT87X_OVERCLOCK is not set
CONFIG_SND_CA0106=m
CONFIG_SND_CMIPCI=m
CONFIG_SND_CS4281=m
CONFIG_SND_CS5530=m
CONFIG_SND_CS5535AUDIO=m
CONFIG_SND_DARLA20=m
CONFIG_SND_GINA20=m
CONFIG_SND_LAYLA20=m
CONFIG_SND_DARLA24=m
CONFIG_SND_GINA24=m
CONFIG_SND_LAYLA24=m
CONFIG_SND_MONA=m
CONFIG_SND_MIA=m
CONFIG_SND_ECHO3G=m
CONFIG_SND_INDIGO=m
CONFIG_SND_INDIGOIO=m
CONFIG_SND_INDIGODJ=m
CONFIG_SND_EMU10K1=m
CONFIG_SND_EMU10K1X=m
CONFIG_SND_ENS1370=m
CONFIG_SND_ENS1371=m
CONFIG_SND_ES1938=m
CONFIG_SND_ES1968=m
CONFIG_SND_FM801=m
CONFIG_SND_FM801_TEA575X_BOOL=y
CONFIG_SND_FM801_TEA575X=m
CONFIG_SND_HDA_INTEL=m
# CONFIG_SND_HDA_HWDEP is not set
CONFIG_SND_HDA_CODEC_REALTEK=y
CONFIG_SND_HDA_CODEC_ANALOG=y
CONFIG_SND_HDA_CODEC_SIGMATEL=y
CONFIG_SND_HDA_CODEC_VIA=y
CONFIG_SND_HDA_CODEC_ATIHDMI=y
CONFIG_SND_HDA_CODEC_CONEXANT=y
CONFIG_SND_HDA_CODEC_CMEDIA=y
CONFIG_SND_HDA_CODEC_SI3054=y
CONFIG_SND_HDA_GENERIC=y
CONFIG_SND_HDA_POWER_SAVE=y
CONFIG_SND_HDA_POWER_SAVE_DEFAULT=0
CONFIG_SND_HDSP=m
CONFIG_SND_HDSPM=m
CONFIG_SND_ICE1712=m
CONFIG_SND_ICE1724=m
CONFIG_SND_INTEL8X0=m
CONFIG_SND_INTEL8X0M=m
CONFIG_SND_KORG1212=m
CONFIG_SND_MAESTRO3=m
CONFIG_SND_MIXART=m
CONFIG_SND_NM256=m
CONFIG_SND_PCXHR=m
CONFIG_SND_RIPTIDE=m
CONFIG_SND_RME32=m
CONFIG_SND_RME96=m
CONFIG_SND_RME9652=m
CONFIG_SND_SONICVIBES=m
CONFIG_SND_TRIDENT=m
CONFIG_SND_VIA82XX=m
CONFIG_SND_VIA82XX_MODEM=m
CONFIG_SND_VX222=m
CONFIG_SND_YMFPCI=m
CONFIG_SND_AC97_POWER_SAVE=y
CONFIG_SND_AC97_POWER_SAVE_DEFAULT=0

#
# SPI devices
#

#
# USB devices
#
CONFIG_SND_USB_AUDIO=m
CONFIG_SND_USB_USX2Y=m
CONFIG_SND_USB_CAIAQ=m
CONFIG_SND_USB_CAIAQ_INPUT=y

#
# PCMCIA devices
#
CONFIG_SND_VXPOCKET=m
CONFIG_SND_PDAUDIOCF=m

#
# System on Chip audio support
#
# CONFIG_SND_SOC is not set

#
# SoC Audio support for SuperH
#

#
# Open Sound System
#
CONFIG_SOUND_PRIME=m
CONFIG_SOUND_TRIDENT=m
# CONFIG_SOUND_MSNDCLAS is not set
# CONFIG_SOUND_MSNDPIN is not set
CONFIG_SOUND_OSS=m
# CONFIG_SOUND_TRACEINIT is not set
# CONFIG_SOUND_DMAP is not set
CONFIG_SOUND_SSCAPE=m
CONFIG_SOUND_VMIDI=m
CONFIG_SOUND_TRIX=m
CONFIG_SOUND_MSS=m
CONFIG_SOUND_MPU401=m
CONFIG_SOUND_PAS=m
CONFIG_SOUND_PSS=m
CONFIG_PSS_MIXER=y
CONFIG_SOUND_SB=m
CONFIG_SOUND_YM3812=m
CONFIG_SOUND_UART6850=m
CONFIG_SOUND_AEDSP16=m
CONFIG_SC6600=y
CONFIG_SC6600_JOY=y
CONFIG_SC6600_CDROM=4
CONFIG_SC6600_CDROMBASE=0x0
# CONFIG_AEDSP16_MSS is not set
# CONFIG_AEDSP16_SBPRO is not set
CONFIG_SOUND_KAHLUA=m
CONFIG_AC97_BUS=m
CONFIG_HID_SUPPORT=y
CONFIG_HID=m
# CONFIG_HID_DEBUG is not set
CONFIG_HIDRAW=y

#
# USB Input Devices
#
CONFIG_USB_HID=m
CONFIG_USB_HIDINPUT_POWERBOOK=y
# CONFIG_HID_FF is not set
CONFIG_USB_HIDDEV=y

#
# USB HID Boot Protocol drivers
#
CONFIG_USB_KBD=m
CONFIG_USB_MOUSE=m
CONFIG_USB_SUPPORT=y
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB_ARCH_HAS_OHCI=y
CONFIG_USB_ARCH_HAS_EHCI=y
CONFIG_USB=m
# CONFIG_USB_DEBUG is not set

#
# Miscellaneous USB options
#
CONFIG_USB_DEVICEFS=y
CONFIG_USB_DEVICE_CLASS=y
# CONFIG_USB_DYNAMIC_MINORS is not set
CONFIG_USB_SUSPEND=y
# CONFIG_USB_PERSIST is not set
# CONFIG_USB_OTG is not set

#
# USB Host Controller Drivers
#
CONFIG_USB_EHCI_HCD=m
CONFIG_USB_EHCI_SPLIT_ISO=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
CONFIG_USB_EHCI_TT_NEWSCHED=y
CONFIG_USB_ISP116X_HCD=m
CONFIG_USB_OHCI_HCD=m
# CONFIG_USB_OHCI_HCD_SSB is not set
# CONFIG_USB_OHCI_BIG_ENDIAN_DESC is not set
# CONFIG_USB_OHCI_BIG_ENDIAN_MMIO is not set
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_UHCI_HCD=m
CONFIG_USB_U132_HCD=m
CONFIG_USB_SL811_HCD=m
CONFIG_USB_SL811_CS=m
CONFIG_USB_R8A66597_HCD=m

#
# USB Device Class drivers
#
CONFIG_USB_ACM=m
CONFIG_USB_PRINTER=m

#
# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support'
#
#
# may also be needed; see USB_STORAGE Help for more information
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
CONFIG_USB_STORAGE_DATAFAB=y
CONFIG_USB_STORAGE_FREECOM=y
CONFIG_USB_STORAGE_ISD200=y
CONFIG_USB_STORAGE_DPCM=y
CONFIG_USB_STORAGE_USBAT=y
CONFIG_USB_STORAGE_SDDR09=y
CONFIG_USB_STORAGE_SDDR55=y
CONFIG_USB_STORAGE_JUMPSHOT=y
CONFIG_USB_STORAGE_ALAUDA=y
CONFIG_USB_STORAGE_KARMA=y
# CONFIG_USB_LIBUSUAL is not set

#
# USB Imaging devices
#
CONFIG_USB_MDC800=m
CONFIG_USB_MICROTEK=m
CONFIG_USB_MON=y

#
# USB port drivers
#
CONFIG_USB_USS720=m

#
# USB Serial Converter support
#
CONFIG_USB_SERIAL=m
CONFIG_USB_SERIAL_GENERIC=y
CONFIG_USB_SERIAL_AIRCABLE=m
CONFIG_USB_SERIAL_AIRPRIME=m
CONFIG_USB_SERIAL_ARK3116=m
CONFIG_USB_SERIAL_BELKIN=m
CONFIG_USB_SERIAL_CH341=m
# CONFIG_USB_SERIAL_WHITEHEAT is not set
CONFIG_USB_SERIAL_DIGI_ACCELEPORT=m
CONFIG_USB_SERIAL_CP2101=m
CONFIG_USB_SERIAL_CYPRESS_M8=m
CONFIG_USB_SERIAL_EMPEG=m
CONFIG_USB_SERIAL_FTDI_SIO=m
CONFIG_USB_SERIAL_FUNSOFT=m
CONFIG_USB_SERIAL_VISOR=m
CONFIG_USB_SERIAL_IPAQ=m
CONFIG_USB_SERIAL_IR=m
# CONFIG_USB_SERIAL_EDGEPORT is not set
# CONFIG_USB_SERIAL_EDGEPORT_TI is not set
CONFIG_USB_SERIAL_GARMIN=m
CONFIG_USB_SERIAL_IPW=m
CONFIG_USB_SERIAL_KEYSPAN_PDA=m
CONFIG_USB_SERIAL_KLSI=m
CONFIG_USB_SERIAL_KOBIL_SCT=m
CONFIG_USB_SERIAL_MCT_U232=m
CONFIG_USB_SERIAL_MOS7720=m
CONFIG_USB_SERIAL_MOS7840=m
CONFIG_USB_SERIAL_NAVMAN=m
CONFIG_USB_SERIAL_PL2303=m
CONFIG_USB_SERIAL_OTI6858=m
CONFIG_USB_SERIAL_HP4X=m
CONFIG_USB_SERIAL_SAFE=m
# CONFIG_USB_SERIAL_SAFE_PADDED is not set
CONFIG_USB_SERIAL_SIERRAWIRELESS=m
# CONFIG_USB_SERIAL_TI is not set
CONFIG_USB_SERIAL_CYBERJACK=m
CONFIG_USB_SERIAL_XIRCOM=m
CONFIG_USB_SERIAL_OPTION=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_USB_SERIAL_DEBUG=m
CONFIG_USB_EZUSB=y

#
# USB Miscellaneous drivers
#
CONFIG_USB_ADUTUX=m
CONFIG_USB_AUERSWALD=m
CONFIG_USB_RIO500=m
CONFIG_USB_LEGOTOWER=m
CONFIG_USB_LCD=m
CONFIG_USB_BERRY_CHARGE=m
CONFIG_USB_LED=m
CONFIG_USB_CYPRESS_CY7C63=m
CONFIG_USB_CYTHERM=m
CONFIG_USB_PHIDGET=m
CONFIG_USB_PHIDGETKIT=m
CONFIG_USB_PHIDGETMOTORCONTROL=m
CONFIG_USB_PHIDGETSERVO=m
CONFIG_USB_IDMOUSE=m
CONFIG_USB_FTDI_ELAN=m
CONFIG_USB_APPLEDISPLAY=m
CONFIG_USB_SISUSBVGA=m
CONFIG_USB_SISUSBVGA_CON=y
CONFIG_USB_LD=m
CONFIG_USB_TRANCEVIBRATOR=m
CONFIG_USB_IOWARRIOR=m
CONFIG_USB_TEST=m

#
# USB DSL modem support
#
CONFIG_USB_ATM=m
CONFIG_USB_SPEEDTOUCH=m
CONFIG_USB_CXACRU=m
CONFIG_USB_UEAGLEATM=m
CONFIG_USB_XUSBATM=m

#
# USB Gadget Support
#
CONFIG_USB_GADGET=m
# CONFIG_USB_GADGET_DEBUG is not set
# CONFIG_USB_GADGET_DEBUG_FILES is not set
# CONFIG_USB_GADGET_DEBUG_FS is not set
CONFIG_USB_GADGET_SELECTED=y
# CONFIG_USB_GADGET_AMD5536UDC is not set
# CONFIG_USB_GADGET_ATMEL_USBA is not set
# CONFIG_USB_GADGET_FSL_USB2 is not set
CONFIG_USB_GADGET_NET2280=y
CONFIG_USB_NET2280=m
# CONFIG_USB_GADGET_PXA2XX is not set
# CONFIG_USB_GADGET_M66592 is not set
# CONFIG_USB_GADGET_GOKU is not set
# CONFIG_USB_GADGET_LH7A40X is not set
# CONFIG_USB_GADGET_OMAP is not set
# CONFIG_USB_GADGET_S3C2410 is not set
# CONFIG_USB_GADGET_AT91 is not set
# CONFIG_USB_GADGET_DUMMY_HCD is not set
CONFIG_USB_GADGET_DUALSPEED=y
CONFIG_USB_ZERO=m
CONFIG_USB_ETH=m
CONFIG_USB_ETH_RNDIS=y
CONFIG_USB_GADGETFS=m
CONFIG_USB_FILE_STORAGE=m
# CONFIG_USB_FILE_STORAGE_TEST is not set
CONFIG_USB_G_SERIAL=m
CONFIG_USB_MIDI_GADGET=m
CONFIG_MMC=m
# CONFIG_MMC_DEBUG is not set
# CONFIG_MMC_UNSAFE_RESUME is not set

#
# MMC/SD Card Drivers
#
CONFIG_MMC_BLOCK=m
CONFIG_MMC_BLOCK_BOUNCE=y
CONFIG_SDIO_UART=m

#
# MMC/SD Host Controller Drivers
#
CONFIG_MMC_SDHCI=m
CONFIG_MMC_RICOH_MMC=m
CONFIG_MMC_WBSD=m
CONFIG_MMC_TIFM_SD=m
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=m

#
# LED drivers
#
CONFIG_LEDS_NET48XX=m
CONFIG_LEDS_WRAP=m

#
# LED Triggers
#
CONFIG_LEDS_TRIGGERS=y
CONFIG_LEDS_TRIGGER_TIMER=m
CONFIG_LEDS_TRIGGER_IDE_DISK=y
CONFIG_LEDS_TRIGGER_HEARTBEAT=m
CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_USER_MAD=m
CONFIG_INFINIBAND_USER_ACCESS=m
CONFIG_INFINIBAND_USER_MEM=y
CONFIG_INFINIBAND_ADDR_TRANS=y
CONFIG_INFINIBAND_MTHCA=m
CONFIG_INFINIBAND_MTHCA_DEBUG=y
CONFIG_INFINIBAND_AMSO1100=m
# CONFIG_INFINIBAND_AMSO1100_DEBUG is not set
CONFIG_INFINIBAND_CXGB3=m
# CONFIG_INFINIBAND_CXGB3_DEBUG is not set
CONFIG_MLX4_INFINIBAND=m
CONFIG_INFINIBAND_IPOIB=m
# CONFIG_INFINIBAND_IPOIB_CM is not set
CONFIG_INFINIBAND_IPOIB_DEBUG=y
# CONFIG_INFINIBAND_IPOIB_DEBUG_DATA is not set
CONFIG_INFINIBAND_SRP=m
CONFIG_INFINIBAND_ISER=m
CONFIG_EDAC=y

#
# Reporting subsystems
#
# CONFIG_EDAC_DEBUG is not set
CONFIG_EDAC_MM_EDAC=m
CONFIG_EDAC_AMD76X=m
CONFIG_EDAC_E7XXX=m
CONFIG_EDAC_E752X=m
CONFIG_EDAC_I82875P=m
CONFIG_EDAC_I82975X=m
CONFIG_EDAC_I3000=m
CONFIG_EDAC_I82860=m
CONFIG_EDAC_R82600=m
CONFIG_EDAC_I5000=m
CONFIG_RTC_LIB=m
CONFIG_RTC_CLASS=m

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# I2C RTC drivers
#
CONFIG_RTC_DRV_DS1307=m
CONFIG_RTC_DRV_DS1374=m
CONFIG_RTC_DRV_DS1672=m
CONFIG_RTC_DRV_MAX6900=m
CONFIG_RTC_DRV_RS5C372=m
CONFIG_RTC_DRV_ISL1208=m
CONFIG_RTC_DRV_X1205=m
CONFIG_RTC_DRV_PCF8563=m
CONFIG_RTC_DRV_PCF8583=m
CONFIG_RTC_DRV_M41T80=m
# CONFIG_RTC_DRV_M41T80_WDT is not set

#
# SPI RTC drivers
#
CONFIG_RTC_DRV_RS5C348=m
CONFIG_RTC_DRV_MAX6902=m

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=m
CONFIG_RTC_DRV_DS1553=m
CONFIG_RTC_DRV_STK17TA8=m
CONFIG_RTC_DRV_DS1742=m
CONFIG_RTC_DRV_M48T86=m
CONFIG_RTC_DRV_M48T59=m
CONFIG_RTC_DRV_V3020=m

#
# on-CPU RTC drivers
#
# CONFIG_DMADEVICES is not set
# CONFIG_AUXDISPLAY is not set
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=m
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m
CONFIG_LGUEST=m

#
# Userspace I/O
#
CONFIG_UIO=m
CONFIG_UIO_CIF=m
CONFIG_VIRTIO=y
CONFIG_VIRTIO_RING=y

#
# Firmware Drivers
#
CONFIG_EDD=m
CONFIG_EFI_VARS=m
CONFIG_DELL_RBU=m
CONFIG_DCDBAS=m
CONFIG_DMIID=y

#
# File systems
#
CONFIG_EXT2_FS=m
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
# CONFIG_EXT2_FS_XIP is not set
CONFIG_EXT3_FS=m
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_EXT4DEV_FS=m
CONFIG_EXT4DEV_FS_XATTR=y
CONFIG_EXT4DEV_FS_POSIX_ACL=y
CONFIG_EXT4DEV_FS_SECURITY=y
CONFIG_JBD=m
# CONFIG_JBD_DEBUG is not set
CONFIG_JBD2=m
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=m
CONFIG_REISERFS_FS=m
# CONFIG_REISERFS_CHECK is not set
# CONFIG_REISERFS_PROC_INFO is not set
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_JFS_FS=m
CONFIG_JFS_POSIX_ACL=y
CONFIG_JFS_SECURITY=y # CONFIG_JFS_DEBUG is not set # CONFIG_JFS_STATISTICS is not set CONFIG_FS_POSIX_ACL=y CONFIG_XFS_FS=m CONFIG_XFS_QUOTA=y CONFIG_XFS_SECURITY=y CONFIG_XFS_POSIX_ACL=y CONFIG_XFS_RT=y CONFIG_GFS2_FS=m CONFIG_GFS2_FS_LOCKING_NOLOCK=m CONFIG_GFS2_FS_LOCKING_DLM=m CONFIG_OCFS2_FS=m CONFIG_OCFS2_DEBUG_MASKLOG=y # CONFIG_OCFS2_DEBUG_FS is not set CONFIG_MINIX_FS=m CONFIG_ROMFS_FS=m CONFIG_INOTIFY=y CONFIG_INOTIFY_USER=y CONFIG_QUOTA=y CONFIG_QUOTA_NETLINK_INTERFACE=y CONFIG_PRINT_QUOTA_WARNING=y CONFIG_QFMT_V1=m CONFIG_QFMT_V2=m CONFIG_QUOTACTL=y CONFIG_DNOTIFY=y CONFIG_AUTOFS_FS=m CONFIG_AUTOFS4_FS=m CONFIG_FUSE_FS=m CONFIG_GENERIC_ACL=y # # CD-ROM/DVD Filesystems # CONFIG_ISO9660_FS=m CONFIG_JOLIET=y CONFIG_ZISOFS=y CONFIG_UDF_FS=m CONFIG_UDF_NLS=y # # DOS/FAT/NT Filesystems # CONFIG_FAT_FS=m CONFIG_MSDOS_FS=m CONFIG_VFAT_FS=m CONFIG_FAT_DEFAULT_CODEPAGE=437 CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1" CONFIG_NTFS_FS=m # CONFIG_NTFS_DEBUG is not set CONFIG_NTFS_RW=y # # Pseudo filesystems # CONFIG_PROC_FS=y CONFIG_PROC_KCORE=y CONFIG_PROC_SYSCTL=y CONFIG_SYSFS=y CONFIG_TMPFS=y CONFIG_TMPFS_POSIX_ACL=y CONFIG_HUGETLBFS=y CONFIG_HUGETLB_PAGE=y CONFIG_CONFIGFS_FS=m # # Miscellaneous filesystems # CONFIG_ADFS_FS=m # CONFIG_ADFS_FS_RW is not set CONFIG_AFFS_FS=m CONFIG_ECRYPT_FS=m CONFIG_HFS_FS=m CONFIG_HFSPLUS_FS=m CONFIG_BEFS_FS=m # CONFIG_BEFS_DEBUG is not set CONFIG_BFS_FS=m CONFIG_EFS_FS=m CONFIG_JFFS2_FS=m CONFIG_JFFS2_FS_DEBUG=0 CONFIG_JFFS2_FS_WRITEBUFFER=y # CONFIG_JFFS2_FS_WBUF_VERIFY is not set # CONFIG_JFFS2_SUMMARY is not set CONFIG_JFFS2_FS_XATTR=y CONFIG_JFFS2_FS_POSIX_ACL=y CONFIG_JFFS2_FS_SECURITY=y # CONFIG_JFFS2_COMPRESSION_OPTIONS is not set CONFIG_JFFS2_ZLIB=y # CONFIG_JFFS2_LZO is not set CONFIG_JFFS2_RTIME=y # CONFIG_JFFS2_RUBIN is not set CONFIG_CRAMFS=m CONFIG_VXFS_FS=m CONFIG_HPFS_FS=m CONFIG_QNX4FS_FS=m CONFIG_SYSV_FS=m CONFIG_UFS_FS=m # CONFIG_UFS_FS_WRITE is not set # CONFIG_UFS_DEBUG is not set CONFIG_NETWORK_FILESYSTEMS=y 
CONFIG_NFS_FS=m CONFIG_NFS_V3=y CONFIG_NFS_V3_ACL=y CONFIG_NFS_V4=y # CONFIG_NFS_DIRECTIO is not set CONFIG_NFSD=m CONFIG_NFSD_V2_ACL=y CONFIG_NFSD_V3=y CONFIG_NFSD_V3_ACL=y CONFIG_NFSD_V4=y CONFIG_NFSD_TCP=y CONFIG_LOCKD=m CONFIG_LOCKD_V4=y CONFIG_EXPORTFS=m CONFIG_NFS_ACL_SUPPORT=m CONFIG_NFS_COMMON=y CONFIG_SUNRPC=m CONFIG_SUNRPC_GSS=m CONFIG_SUNRPC_XPRT_RDMA=m CONFIG_SUNRPC_BIND34=y CONFIG_RPCSEC_GSS_KRB5=m CONFIG_RPCSEC_GSS_SPKM3=m # CONFIG_SMB_FS is not set CONFIG_CIFS=m # CONFIG_CIFS_STATS is not set # CONFIG_CIFS_WEAK_PW_HASH is not set CONFIG_CIFS_XATTR=y CONFIG_CIFS_POSIX=y # CONFIG_CIFS_DEBUG2 is not set # CONFIG_CIFS_EXPERIMENTAL is not set CONFIG_NCP_FS=m # CONFIG_NCPFS_PACKET_SIGNING is not set # CONFIG_NCPFS_IOCTL_LOCKING is not set # CONFIG_NCPFS_STRONG is not set CONFIG_NCPFS_NFS_NS=y CONFIG_NCPFS_OS2_NS=y # CONFIG_NCPFS_SMALLDOS is not set CONFIG_NCPFS_NLS=y CONFIG_NCPFS_EXTRAS=y CONFIG_CODA_FS=m # CONFIG_CODA_FS_OLD_API is not set CONFIG_AFS_FS=m # CONFIG_AFS_DEBUG is not set CONFIG_9P_FS=m # # Partition Types # CONFIG_PARTITION_ADVANCED=y CONFIG_ACORN_PARTITION=y # CONFIG_ACORN_PARTITION_CUMANA is not set # CONFIG_ACORN_PARTITION_EESOX is not set CONFIG_ACORN_PARTITION_ICS=y # CONFIG_ACORN_PARTITION_ADFS is not set # CONFIG_ACORN_PARTITION_POWERTEC is not set CONFIG_ACORN_PARTITION_RISCIX=y CONFIG_OSF_PARTITION=y CONFIG_AMIGA_PARTITION=y CONFIG_ATARI_PARTITION=y CONFIG_MAC_PARTITION=y CONFIG_MSDOS_PARTITION=y CONFIG_BSD_DISKLABEL=y CONFIG_MINIX_SUBPARTITION=y CONFIG_SOLARIS_X86_PARTITION=y CONFIG_UNIXWARE_DISKLABEL=y CONFIG_LDM_PARTITION=y # CONFIG_LDM_DEBUG is not set CONFIG_SGI_PARTITION=y CONFIG_ULTRIX_PARTITION=y CONFIG_SUN_PARTITION=y CONFIG_KARMA_PARTITION=y CONFIG_EFI_PARTITION=y # CONFIG_SYSV68_PARTITION is not set CONFIG_NLS=y CONFIG_NLS_DEFAULT="iso8859-1" CONFIG_NLS_CODEPAGE_437=m CONFIG_NLS_CODEPAGE_737=m CONFIG_NLS_CODEPAGE_775=m CONFIG_NLS_CODEPAGE_850=m CONFIG_NLS_CODEPAGE_852=m CONFIG_NLS_CODEPAGE_855=m CONFIG_NLS_CODEPAGE_857=m 
CONFIG_NLS_CODEPAGE_860=m CONFIG_NLS_CODEPAGE_861=m CONFIG_NLS_CODEPAGE_862=m CONFIG_NLS_CODEPAGE_863=m CONFIG_NLS_CODEPAGE_864=m CONFIG_NLS_CODEPAGE_865=m CONFIG_NLS_CODEPAGE_866=m CONFIG_NLS_CODEPAGE_869=m CONFIG_NLS_CODEPAGE_936=m CONFIG_NLS_CODEPAGE_950=m CONFIG_NLS_CODEPAGE_932=m CONFIG_NLS_CODEPAGE_949=m CONFIG_NLS_CODEPAGE_874=m CONFIG_NLS_ISO8859_8=m CONFIG_NLS_CODEPAGE_1250=m CONFIG_NLS_CODEPAGE_1251=m CONFIG_NLS_ASCII=m CONFIG_NLS_ISO8859_1=m CONFIG_NLS_ISO8859_2=m CONFIG_NLS_ISO8859_3=m CONFIG_NLS_ISO8859_4=m CONFIG_NLS_ISO8859_5=m CONFIG_NLS_ISO8859_6=m CONFIG_NLS_ISO8859_7=m CONFIG_NLS_ISO8859_9=m CONFIG_NLS_ISO8859_13=m CONFIG_NLS_ISO8859_14=m CONFIG_NLS_ISO8859_15=m CONFIG_NLS_KOI8_R=m CONFIG_NLS_KOI8_U=m CONFIG_NLS_UTF8=m CONFIG_DLM=m CONFIG_DLM_DEBUG=y CONFIG_INSTRUMENTATION=y CONFIG_PROFILING=y CONFIG_OPROFILE=m # CONFIG_KPROBES is not set # CONFIG_MARKERS is not set # # Kernel hacking # CONFIG_TRACE_IRQFLAGS_SUPPORT=y # CONFIG_PRINTK_TIME is not set CONFIG_ENABLE_WARN_DEPRECATED=y CONFIG_ENABLE_MUST_CHECK=y CONFIG_MAGIC_SYSRQ=y CONFIG_UNUSED_SYMBOLS=y CONFIG_DEBUG_FS=y # CONFIG_HEADERS_CHECK is not set CONFIG_DEBUG_KERNEL=y # CONFIG_DEBUG_SHIRQ is not set CONFIG_DETECT_SOFTLOCKUP=y # CONFIG_SCHED_DEBUG is not set # CONFIG_SCHEDSTATS is not set CONFIG_TIMER_STATS=y # CONFIG_DEBUG_SLAB is not set # CONFIG_DEBUG_RT_MUTEXES is not set # CONFIG_RT_MUTEX_TESTER is not set # CONFIG_DEBUG_SPINLOCK is not set # CONFIG_DEBUG_MUTEXES is not set # CONFIG_DEBUG_LOCK_ALLOC is not set # CONFIG_PROVE_LOCKING is not set # CONFIG_LOCK_STAT is not set # CONFIG_DEBUG_SPINLOCK_SLEEP is not set # CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set # CONFIG_DEBUG_KOBJECT is not set # CONFIG_DEBUG_HIGHMEM is not set CONFIG_DEBUG_BUGVERBOSE=y # CONFIG_DEBUG_INFO is not set # CONFIG_DEBUG_VM is not set # CONFIG_DEBUG_LIST is not set # CONFIG_DEBUG_SG is not set # CONFIG_FRAME_POINTER is not set # CONFIG_FORCED_INLINING is not set # CONFIG_BOOT_PRINTK_DELAY is not set # 
CONFIG_RCU_TORTURE_TEST is not set # CONFIG_FAULT_INJECTION is not set # CONFIG_SAMPLES is not set CONFIG_EARLY_PRINTK=y # CONFIG_DEBUG_STACKOVERFLOW is not set # CONFIG_DEBUG_STACK_USAGE is not set # # Page alloc debug is incompatible with Software Suspend on i386 # # CONFIG_DEBUG_RODATA is not set # CONFIG_4KSTACKS is not set CONFIG_X86_FIND_SMP_CONFIG=y CONFIG_X86_MPPARSE=y CONFIG_DOUBLEFAULT=y # # Security options # CONFIG_KEYS=y # CONFIG_KEYS_DEBUG_PROC_KEYS is not set CONFIG_SECURITY=y CONFIG_SECURITY_NETWORK=y CONFIG_SECURITY_NETWORK_XFRM=y CONFIG_SECURITY_CAPABILITIES=y CONFIG_SECURITY_FILE_CAPABILITIES=y CONFIG_SECURITY_SELINUX=y CONFIG_SECURITY_SELINUX_BOOTPARAM=y CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=0 CONFIG_SECURITY_SELINUX_DISABLE=y CONFIG_SECURITY_SELINUX_DEVELOP=y CONFIG_SECURITY_SELINUX_AVC_STATS=y CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1 # CONFIG_SECURITY_SELINUX_ENABLE_SECMARK_DEFAULT is not set # CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX is not set CONFIG_XOR_BLOCKS=m CONFIG_ASYNC_CORE=m CONFIG_ASYNC_MEMCPY=m CONFIG_ASYNC_XOR=m CONFIG_CRYPTO=y CONFIG_CRYPTO_ALGAPI=y CONFIG_CRYPTO_AEAD=m CONFIG_CRYPTO_BLKCIPHER=m CONFIG_CRYPTO_HASH=y CONFIG_CRYPTO_MANAGER=y CONFIG_CRYPTO_HMAC=y CONFIG_CRYPTO_XCBC=m CONFIG_CRYPTO_NULL=m CONFIG_CRYPTO_MD4=m CONFIG_CRYPTO_MD5=y CONFIG_CRYPTO_SHA1=m CONFIG_CRYPTO_SHA256=m CONFIG_CRYPTO_SHA512=m CONFIG_CRYPTO_WP512=m CONFIG_CRYPTO_TGR192=m CONFIG_CRYPTO_GF128MUL=m CONFIG_CRYPTO_ECB=m CONFIG_CRYPTO_CBC=m CONFIG_CRYPTO_PCBC=m CONFIG_CRYPTO_LRW=m CONFIG_CRYPTO_XTS=m # CONFIG_CRYPTO_CRYPTD is not set CONFIG_CRYPTO_DES=m CONFIG_CRYPTO_FCRYPT=m CONFIG_CRYPTO_BLOWFISH=m CONFIG_CRYPTO_TWOFISH=m CONFIG_CRYPTO_TWOFISH_COMMON=m CONFIG_CRYPTO_TWOFISH_586=m CONFIG_CRYPTO_SERPENT=m CONFIG_CRYPTO_AES=m CONFIG_CRYPTO_AES_586=m CONFIG_CRYPTO_CAST5=m CONFIG_CRYPTO_CAST6=m CONFIG_CRYPTO_TEA=m CONFIG_CRYPTO_ARC4=m CONFIG_CRYPTO_KHAZAD=m CONFIG_CRYPTO_ANUBIS=m CONFIG_CRYPTO_SEED=m CONFIG_CRYPTO_DEFLATE=m 
CONFIG_CRYPTO_MICHAEL_MIC=m
CONFIG_CRYPTO_CRC32C=m
CONFIG_CRYPTO_CAMELLIA=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_AUTHENC=m
CONFIG_CRYPTO_HW=y
CONFIG_CRYPTO_DEV_PADLOCK=m
CONFIG_CRYPTO_DEV_PADLOCK_AES=m
CONFIG_CRYPTO_DEV_PADLOCK_SHA=m
CONFIG_CRYPTO_DEV_GEODE=m

#
# Library routines
#
CONFIG_BITREVERSE=y
CONFIG_CRC_CCITT=m
CONFIG_CRC16=m
CONFIG_CRC_ITU_T=m
CONFIG_CRC32=y
CONFIG_CRC7=m
CONFIG_LIBCRC32C=m
CONFIG_AUDIT_GENERIC=y
CONFIG_ZLIB_INFLATE=m
CONFIG_ZLIB_DEFLATE=m
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_REED_SOLOMON=m
CONFIG_REED_SOLOMON_DEC16=y
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=m
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m
CONFIG_PLIST=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CHECK_SIGNATURE=y

SysRq : Show State
  task                PC stack   pid father
init S f7c41b38 0 1 0
f7c3f7b0 00000082 00000002 f7c41b38 f7c41b30 00000000 33e47d83 00000236 f7c3f918 c2010940 00000000 00082722 f7c41b5c c04c6ec0 000000fd 00000001 00000000 00000000 f7c41b5c 00082c03 0000000b f7c41f9c c02bbe7f 0000000b
Call Trace:
 [] schedule_timeout+0x70/0x8d
 [] process_timeout+0x0/0x5
 [] schedule_timeout+0x6b/0x8d
 [] do_select+0x372/0x3c9
 [] __pollwait+0x0/0xac
 [] default_wake_function+0x0/0x8
 [] do_get_write_access+0x2f9/0x332 [jbd]
 [] __ext3_get_inode_loc+0x10f/0x2dd [ext3]
 [] security_sock_rcv_skb+0xc/0xd
 [] tcp_v4_rcv+0x463/0x887
 [] journal_get_write_access+0x21/0x26 [jbd]
 [] __alloc_skb+0x49/0xf5
 [] __netdev_alloc_skb+0x1c/0x35
 [] e1000_alloc_rx_buffers+0x1cc/0x296 [e1000]
 [] netif_receive_skb+0x2e4/0x350
 [] e1000_clean_rx_irq+0x422/0x452 [e1000]
 [] e1000_clean_rx_irq+0x0/0x452 [e1000]
 [] e1000_clean+0x1e3/0x20d [e1000]
 [] __d_lookup+0x96/0xd9
 [] do_lookup+0x4f/0x140
 [] dput+0x15/0xdc
 [] __link_path_walk+0xa23/0xb46
 [] core_sys_select+0x285/0x2a2
 [] do_path_lookup+0x162/0x1c4
 [] getname+0x59/0xad
 [] cp_new_stat64+0xfc/0x10e
 [] sys_select+0xa4/0x187
 [] sysenter_past_esp+0x6b/0xa1
=======================
kthreadd S 00000086 0 2 0
f7c3f1d0 00000046 00000003 00000086 dfac9e54
dfac9e98 e00667ec 0000012f f7c3f338 c2010940 00000000 dfac9ea0 0000032c 00000000 00000286 00000000 c011ee96 00000000 c0349d58 000015d1 dfac9e80 00000000 c0134e37 c0134dd6
Call Trace:
 [] complete+0x36/0x44
 [] kthreadd+0x61/0xf5
 [] kthreadd+0x0/0xf5
 [] kernel_thread_helper+0x7/0x10
=======================
migration/0 S 00000096 0 3 2
f7c3e610 00000046 00000250 00000096 f0999f30 f0999f64 50b71309 00000237 f7c3e778 c2010940 00000000 f0999f6c 00000000 00000000 00000296 00000000 c011ee96 00000000 c2010d58 c012145b c2010940 00000000 c012159a 00000000
Call Trace:
 [] complete+0x36/0x44
 [] migration_thread+0x0/0x1c4
 [] migration_thread+0x13f/0x1c4
 [] complete+0x36/0x44
 [] migration_thread+0x0/0x1c4
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
ksoftirqd/0 S f7c69fb8 0 4 2
f7c3e030 00000046 00000002 f7c69fb8 f7c69fb0 00000000 00000003 c01330f4 f7c3e198 c2010940 00000000 0008259f c012921e 00000001 000000ff 00000000 00000000 00000000 00000000 c0129113 00000000 00000000 c0129135 00000000
Call Trace:
 [] __rcu_process_callbacks+0xfa/0x174
 [] tasklet_action+0x58/0xb8
 [] ksoftirqd+0x0/0xa4
 [] ksoftirqd+0x22/0xa4
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
watchdog/0 S f7c6dfb8 0 5 2
f7c6b7f0 00000046 00000002 f7c6dfb8 f7c6dfb0 00000000 c011dac1 df72d50a f7c6b958 c2010940 00000000 0008274a 00000246 00000000 000000fd 00000001 00000000 00000000 00000000 c01556a2 00000000 00000000 c01556dd 00000063
Call Trace:
 [] __update_rq_clock+0x1a/0xf3
 [] watchdog+0x0/0x4a
 [] watchdog+0x3b/0x4a
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
migration/1 S 00000096 0 6 2
f7c6b210 00000046 00000003 00000096 f05fff30 f05fff64 37f5ffba 00000237 f7c6b378 c2018940 00000001 f05fff6c 00000000 00000000 00000296 00000000 c011ee96 00000000 c2018d58 c012145b c2018940 00000000 c012159a 00000000
Call Trace:
 [] complete+0x36/0x44
 [] migration_thread+0x0/0x1c4
 [] migration_thread+0x13f/0x1c4
 [] complete+0x36/0x44
 [] migration_thread+0x0/0x1c4
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
ksoftirqd/1 S 00000282 0 7 2
f7c6ac30 00000046 00000000 00000282 c2015ee0 00000000 36b8173e 00000237 f7c6ad98 c2018940 00000001 00000005 00000000 00000000 c0368a68 00000046 00000246 c0129113 00000001 c0129113 00000000 00000000 c0129135 00000001
Call Trace:
 [] ksoftirqd+0x0/0xa4
 [] ksoftirqd+0x0/0xa4
 [] ksoftirqd+0x22/0xa4
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
watchdog/1 S f7c75fb8 0 8 2
f7c6a650 00000046 00000002 f7c75fb8 f7c75fb0 00000000 c011dac1 e4ae4558 f7c6a7b8 c2018940 00000001 00082757 00000246 00000000 000000ff 00000000 00000000 00000000 00000001 c01556a2 00000000 00000000 c01556dd 00000063
Call Trace:
 [] __update_rq_clock+0x1a/0xf3
 [] watchdog+0x0/0x4a
 [] watchdog+0x3b/0x4a
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
events/0 S c200d9c0 0 9 2
f7c77830 00000046 00016dd6 c200d9c0 00000002 f7c79f9c 7e3c70da 00000237 f7c77998 c2010940 00000000 0008274a 14076365 00000000 000000ff 00000000 00000000 00016dd6 f7c4bec0 c01326e3 f7c79fd0 00000000 c013276b 00000000
Call Trace:
 [] worker_thread+0x0/0xc5
 [] worker_thread+0x88/0xc5
 [] autoremove_wake_function+0x0/0x35
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
events/1 R running 0 10 2
khelper S 00000060 0 11 2
f7c77250 00000046 c0104b00 00000060 00000286 00000000 5f1147fa 0000012e f7c773b8 c2018940 00000001 c0131996 000005f9 00000000 c0131ecc 00000001 00000246 c0135188 f7c4be40 c01326e3 f7c9bfd0 00000000 c013276b 00000000
Call Trace:
 [] kernel_thread_helper+0x0/0x10
 [] __call_usermodehelper+0x30/0x4b
 [] run_workqueue+0xdc/0x109
 [] prepare_to_wait+0x12/0x49
 [] worker_thread+0x0/0xc5
 [] worker_thread+0x88/0xc5
 [] autoremove_wake_function+0x0/0x35
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
kblockd/0 S f7ce7fa0 0 43 2
f7cd8230 00000046 00000002 f7ce7fa0 f7ce7f98 00000000 498b2be3 00000237 f7cd8398 c2010940 00000000 f74191f8 00000000 00000000 c0131ecc 00000000 00000246 c0135188 f7c7d840 c01326e3 f7ce7fd0 00000000 c013276b 00000000
Call Trace:
 [] run_workqueue+0xdc/0x109
 [] prepare_to_wait+0x12/0x49
 [] worker_thread+0x0/0xc5
 [] worker_thread+0x88/0xc5
 [] autoremove_wake_function+0x0/0x35
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
kblockd/1 S f7ce9fa0 0 44 2
f7c76c70 00000046 00000002 f7ce9fa0 f7ce9f98 00000000 37459602 00000214 f7c76dd8 c2018940 00000001 f74191f8 00007cc0 00000000 c0131ecc 00000000 00000246 c0135188 f7c7d800 c01326e3 f7ce9fd0 00000000 c013276b 00000000
Call Trace:
 [] run_workqueue+0xdc/0x109
 [] prepare_to_wait+0x12/0x49
 [] worker_thread+0x0/0xc5
 [] worker_thread+0x88/0xc5
 [] autoremove_wake_function+0x0/0x35
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
kacpid S f7d11fa0 0 47 2
f7d0f7b0 00000046 00000002 f7d11fa0 f7d11f98 00000000 f7d0f7b0 c0344160 f7d0f918 c2010940 00000000 fffedb65 f7d11fb8 00000000 000000ff 00000000 00000000 00000000 f7c7d600 c01326e3 f7d11fd0 00000000 c013276b 00000000
Call Trace:
 [] worker_thread+0x0/0xc5
 [] worker_thread+0x88/0xc5
 [] autoremove_wake_function+0x0/0x35
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
kacpi_notify S f7d15fa0 0 48 2
f7d137f0 00000046 00000002 f7d15fa0 f7d15f98 00000000 f7d137f0 c0344160 f7d13958 c2010940 00000000 fffedb67 f7d15fb8 00000000 000000ff 00000000 00000000 00000000 f7c4bd80 c01326e3 f7d15fd0 00000000 c013276b 00000000
Call Trace:
 [] worker_thread+0x0/0xc5
 [] worker_thread+0x88/0xc5
 [] autoremove_wake_function+0x0/0x35
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
kseriod S c035ca68 0 126 2
f7cd8810 00000046 000007fb c035ca68 f7d6bf7c c02bacf2 072393ba 00000003 f7cd8978 c2010940 00000000 00000000 00023232 00000000 c035ca68 00000000 00000246 c0135188 f7d6bfb4 c024613f f7d6bfc8 00000246 c02463d8 c2010940
Call Trace:
 [] klist_next+0x24/0x6c
 [] prepare_to_wait+0x12/0x49
 [] serio_thread+0x0/0x311
 [] serio_thread+0x299/0x311
 [] autoremove_wake_function+0x0/0x35
 [] serio_thread+0x0/0x311
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
pdflush S f7dbdf98 0 165 2
f7d54cf0 00000046 00000002 f7dbdf98 f7dbdf90 00000000 c01601fa 00000000 f7d54e58 c2010940 00000000 fffedc41 c011d6ab c2010960 000000ff 00000000 00000000 00000000 00000001 f7dbdfc4 00000000 00000000 c01602c7 f7c41f48
Call Trace:
 [] pdflush+0x0/0x1d2
 [] enqueue_task+0x52/0x5d
 [] pdflush+0xcd/0x1d2
 [] pdflush+0x0/0x1d2
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
pdflush S f7dbff98 0 166 2
f7d12c30 00000046 00000002 f7dbff98 f7dbff90 00000000 00082a7e 00000000 f7d12d98 c2010940 00000000 00082598 000003ed 00000000 000000ff 00000000 00000000 00000000 00000001 f7dbffc4 00000000 00000000 c01602c7 f7c41f48
Call Trace:
 [] pdflush+0xcd/0x1d2
 [] pdflush+0x0/0x1d2
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
kswapd0 S f7dc1f40 0 167 2
f7d552d0 00000046 00000002 f7dc1f40 f7dc1f38 00000000 c0162b51 f7dc1fb4 f7d55438 c2010940 00000000 fffedc41 00000000 00000000 000000ff 00000000 00000000 00000000 f7d552d0 c0162b51 f7dc1fb4 c034e460 c0162c0d f7d554e0
Call Trace:
 [] kswapd+0x0/0x40a
 [] kswapd+0x0/0x40a
 [] kswapd+0xbc/0x40a
 [] __switch_to+0x9d/0x11c
 [] schedule+0x588/0x5ec
 [] autoremove_wake_function+0x0/0x35
 [] kswapd+0x0/0x40a
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
aio/0 S f7d79fa0 0 168 2
f7d54710 00000046 00000002 f7d79fa0 f7d79f98 00000000 f7d54710 c0344160 f7d54878 c2010940 00000000 fffedc43 f7d79fb8 00000000 000000ff 00000000 00000000 00000000 f7ce0240 c01326e3 f7d79fd0 00000000 c013276b 00000000
Call Trace:
 [] worker_thread+0x0/0xc5
 [] worker_thread+0x88/0xc5
 [] autoremove_wake_function+0x0/0x35
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
aio/1 S 00000000 0 169 2
f7d12650 00000046 f7d13420 00000000 f7d12650 00000000 352ee0b6 00000000 f7d127b8 c2018940 00000001 f7d73fc0 00001bd3 00000000 00000001 00000001 00000246 c0135188 f7d16fc0 c01326e3 f7d73fd0 00000000 c013276b 00000000
Call Trace:
 [] prepare_to_wait+0x12/0x49
 [] worker_thread+0x0/0xc5
 [] worker_thread+0x88/0xc5
 [] autoremove_wake_function+0x0/0x35
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
ksuspend_usbd S f74fdfa0 0 567 2
f7fd79b0 00000046 00000002 f74fdfa0 f74fdf98 00000000 049bfc26 00000001 f7fd7b18 c2010940 00000000 fffee0b3 f88595d3 00000000 000000ff 00000000 00000000 00000000 f7ff4900 c01326e3 f74fdfd0 00000000 c013276b 00000000
Call Trace:
 [] usb_autosuspend_work+0x0/0xc [usbcore]
 [] worker_thread+0x0/0xc5
 [] worker_thread+0x88/0xc5
 [] autoremove_wake_function+0x0/0x35
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
khubd S c20159c0 0 590 2
f7fe2030 00000046 00000b14 c20159c0 00000002 f7527f3c b3400da2 00000000 f7fe2198 c2018940 00000001 fffede71 e7dda0f8 ffffffff 000000ff 00000000 00000000 00000b14 f8871588 00000000 f7527fbc 00000000 f8855571 f7527fc0
Call Trace:
 [] hub_thread+0xa34/0xad8 [usbcore]
 [] __switch_to+0x9d/0x11c
 [] schedule+0x588/0x5ec
 [] autoremove_wake_function+0x0/0x35
 [] hub_thread+0x0/0xad8 [usbcore]
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
scsi_tgtd/0 S 00000000 0 647 2
f741ed30 00000046 f7c3f3e0 00000000 f741ed30 00000000 8609bb4a 00000000 f741ee98 c2010940 00000000 00000092 000011d8 00000000 6f645cd3 00000000 00000246 c0135188 f7fb4280 c01326e3 f7535fd0 00000000 c013276b 00000000
Call Trace:
 [] prepare_to_wait+0x12/0x49
 [] worker_thread+0x0/0xc5
 [] worker_thread+0x88/0xc5
 [] autoremove_wake_function+0x0/0x35
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
scsi_tgtd/1 S 00000000 0 660 2
f7d1e690 00000046 f7fe2240 00000000 f7d1e690 00000000 708eb084 00000000 f7d1e7f8 c2018940 00000001 00000092 00726adb 00000000 867da4d5 00000000 00000246 c0135188 f74acf40 c01326e3 f7529fd0 00000000 c013276b 00000000
Call Trace:
 [] prepare_to_wait+0x12/0x49
 [] worker_thread+0x0/0xc5
 [] worker_thread+0x88/0xc5
 [] autoremove_wake_function+0x0/0x35
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
scsi_eh_0 S f7567f64 0 738 2
f7406cb0 00000046 00000002 f7567f64 f7567f5c 00000000 00000000 00000001 f7406e18 c2018940 00000001 fffede50 00000000 00000000 000000ff 00000000 00000000 00000000 f7d52000 f7d52000 00000000 00000000 f882cc62 f7406cb0
Call Trace:
 [] scsi_error_handler+0x46/0x455 [scsi_mod]
 [] schedule+0x588/0x5ec
 [] scsi_error_handler+0x0/0x455 [scsi_mod]
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
qla2xxx_0_dpc S f7517f98 0 752 2
f7fe2bf0 00000046 00000002 f7517f98 f7517f90 00000000 f7d522b8 040800e0 f7fe2d58 c2010940 00000000 fffedf94 f7d52278 00000100 000000ff 00000000 00000000 00000000 f7d52278 f891d5f5 f7d52278 00000000 f891d626 f7451d38
Call Trace:
 [] qla2x00_do_dpc+0x0/0x459 [qla2xxx]
 [] qla2x00_do_dpc+0x31/0x459 [qla2xxx]
 [] qla2x00_do_dpc+0x0/0x459 [qla2xxx]
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
scsi_wq_0 S c200d9c0 0 753 2
f7cb4d30 00000046 00000afb c200d9c0 00000002 f7623f9c 1520edb6 00000001 f7cb4e98 c2010940 00000000 fffedf9a 1710f1a6 00000000 000000ff 00000000 00000000 00000afb f7ff4e00 c01326e3 f7623fd0 00000000 c013276b 00000000
Call Trace:
 [] worker_thread+0x0/0xc5
 [] worker_thread+0x88/0xc5
 [] autoremove_wake_function+0x0/0x35
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
fc_wq_0 S 00000000 0 754 2
f7412130 00000046 f7fca400 00000000 f7412130 00000000 d72ba61d 00000000 f7412298 c2010940 00000000 00000092 000009b2 00000000 d72b84b3 00000000 00000246 c0135188 f7ff4640 c01326e3 f75b7fd0 00000000 c013276b 00000000
Call Trace:
 [] prepare_to_wait+0x12/0x49
 [] worker_thread+0x0/0xc5
 [] worker_thread+0x88/0xc5
 [] autoremove_wake_function+0x0/0x35
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
fc_dl_0 S 00000000 0 755 2
f741f310 00000046 f7fca400 00000000 f741f310 00000000 d72cfc9a 00000000 f741f478 c2010940 00000000 00000092 00000c7a 00000000 d72bcc2a 00000000 00000246 c0135188 f7ff4940 c01326e3 f751ffd0 00000000 c013276b 00000000
Call Trace:
 [] prepare_to_wait+0x12/0x49
 [] worker_thread+0x0/0xc5
 [] worker_thread+0x88/0xc5
 [] autoremove_wake_function+0x0/0x35
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
kjournald S f7471f94 0 1265 2
f7407870 00000046 00000002 f7471f94 f7471f8c 00000000 f703d800 00000003 f74079d8 c2010940 00000000 00082650 00000001 00000286 000000fd 00000001 00000000 00000000 f74c8e14 f74c8e00 f7471fcc 00000000 f896bab2 00000000
Call Trace:
 [] kjournald+0x15e/0x1d4 [jbd]
 [] autoremove_wake_function+0x0/0x35
 [] kjournald+0x0/0x1d4 [jbd]
 [] kthread+0x38/0x5e
 [] kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================
udevd S f76afb38 0 1464 1
f7d61310 00000082 00000002 f76afb38 f76afb30 00000000 14f30f21 0000012f f7d61478 c2018940 00000001 0003d589 00000000 00000000 000000ff 00000000 00000000 00000000 7fffffff f7d66080 00000008 f76aff9c c02bbe22 000000ff
Call Trace:
 [] schedule_timeout+0x13/0x8d
 [] apic_timer_interrupt+0x28/0x30
 [] remove_wait_queue+0xf/0x34
 [] add_wait_queue+0x12/0x32
 [] inotify_poll+0x3a/0x40
 [] do_select+0x372/0x3c9
 [] __pollwait+0x0/0xac []
default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] update_curr+0x62/0xef [] task_tick_fair+0x11/0x42 [] lapic_next_event+0xc/0x10 [] clockevents_program_event+0xe0/0xee [] number+0x159/0x22f [] skb_copy_datagram_iovec+0x53/0x1d3 [] vsnprintf+0x442/0x47e [] flush_tlb_page+0x3e/0x62 [] do_wp_page+0x1e0/0x4c8 [] flush_tlb_page+0x3e/0x62 [] handle_mm_fault+0x609/0x688 [] number+0x159/0x22f [] core_sys_select+0x285/0x2a2 [] flush_tlb_page+0x3e/0x62 [] do_wp_page+0x1e0/0x4c8 [] mntput_no_expire+0x11/0x66 [] link_path_walk+0xa9/0xb3 [] handle_mm_fault+0x609/0x688 [] default_wake_function+0x0/0x8 [] permission+0xa3/0xef [] shrink_dcache_parent+0x22/0xc0 [] remove_wait_queue+0xf/0x34 [] do_wait+0x9b1/0xa47 [] sys_select+0xd6/0x187 [] sys_wait4+0x31/0x34 [] sysenter_past_esp+0x6b/0xa1 ======================= portmap S c2016540 0 2538 1 f7cd93d0 00000082 c2016560 c2016540 00000000 c025f8ec 33dcc7c7 00000236 f7cd9538 c2018940 00000001 c2016554 00001089 00000000 f705bb80 00000038 00000000 00082225 7fffffff ffffffff f74643c0 00000000 c02bbe22 00000246 Call Trace: [] process_backlog+0x6e/0xc9 [] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] add_wait_queue+0x12/0x32 [] tcp_poll+0x19/0x120 [] udp_poll+0x10/0xd5 [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] udp_sendmsg+0x4b1/0x5a7 [] inet_sendmsg+0x3b/0x45 [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35 [] autoremove_wake_function+0x0/0x35 [] sys_sendmsg+0x194/0x1f9 [] sys_recvmsg+0x150/0x1d2 [] mod_timer+0x19/0x36 [] sk_reset_timer+0xc/0x16 [] __tcp_push_pending_frames+0x72d/0x7d2 [] do_sync_read+0xc7/0x10a [] _spin_lock_bh+0x8/0x18 [] sys_socketcall+0x240/0x261 [] mntput_no_expire+0x11/0x66 [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1 ======================= syslogd S 00001000 0 2662 1 f7d2a0f0 
00000086 f8a976c2 00001000 f896776d c20f8b00 67e0d33d 00000237 f7d2a258 c2010940 00000000 f70ffb60 00000000 00000000 0074800d f78b8158 00000000 f7949ba8 7fffffff f745f980 00000001 f70fff9c c02bbe22 f7949d68 Call Trace: [] __ext3_get_inode_loc+0x10f/0x2dd [ext3] [] do_get_write_access+0x2f9/0x332 [jbd] [] schedule_timeout+0x13/0x8d [] __ext3_journal_dirty_metadata+0x16/0x3a [ext3] [] remove_wait_queue+0xf/0x34 [] add_wait_queue+0x12/0x32 [] datagram_poll+0x14/0xb1 [] do_select+0x372/0x3c9 [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] __ext3_get_inode_loc+0x10f/0x2dd [ext3] [] __ext3_journal_dirty_metadata+0x16/0x3a [ext3] [] journal_get_write_access+0x21/0x26 [jbd] [] ext3_mark_iloc_dirty+0x27b/0x2e1 [ext3] [] ext3_mark_inode_dirty+0x20/0x27 [ext3] [] __wake_up+0x32/0x42 [] journal_stop+0x15d/0x169 [jbd] [] __ext3_journal_stop+0x19/0x34 [ext3] [] ext3_ordered_write_end+0xf4/0x122 [ext3] [] generic_file_buffered_write+0x1ac/0x5a1 [] generic_getxattr+0x3e/0x44 [] generic_getxattr+0x0/0x44 [] __generic_file_aio_write_nolock+0x491/0x4f0 [] core_sys_select+0x285/0x2a2 [] ext3_file_write+0x24/0x8f [ext3] [] do_sync_readv_writev+0xc1/0xfe [] autoremove_wake_function+0x0/0x35 [] do_notify_resume+0x50c/0x60c [] rw_copy_check_uvector+0x50/0xaa [] do_readv_writev+0xb5/0x180 [] pipe_write+0x0/0x3e6 [] sys_select+0xd6/0x187 [] recalc_sigpending+0xb/0x30 [] sigprocmask+0xa1/0xc7 [] sys_rt_sigprocmask+0x4b/0xc7 [] sysenter_past_esp+0x6b/0xa1 ======================= klogd R running 0 2668 1 sshd S f7787b38 0 2770 1 f7ffa690 00200082 00000002 f7787b38 f7787b30 00000000 00000000 f78b8040 f7ffa7f8 c2010940 00000000 ffff0872 f896776d c018fec4 000000ff 00000000 00000000 00000000 7fffffff f762ac00 00000004 f7787f9c c02bbe22 00001000 Call Trace: [] do_get_write_access+0x2f9/0x332 [jbd] [] __mark_inode_dirty+0x24/0x144 [] schedule_timeout+0x13/0x8d [] ext3_new_blocks+0x4c6/0x5cd [ext3] [] add_wait_queue+0x12/0x32 [] tcp_poll+0x19/0x120 [] free_poll_entry+0xe/0x16 [] 
do_select+0x372/0x3c9 [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] do_get_write_access+0x2f9/0x332 [jbd] [] __ext3_get_inode_loc+0x10f/0x2dd [ext3] [] __find_get_block_slow+0x3a/0x11a [] __ext3_journal_dirty_metadata+0x16/0x3a [ext3] [] journal_get_write_access+0x21/0x26 [jbd] [] ext3_mark_iloc_dirty+0x27b/0x2e1 [ext3] [] ext3_mark_inode_dirty+0x20/0x27 [ext3] [] __ext3_journal_stop+0x19/0x34 [ext3] [] __mark_inode_dirty+0x24/0x144 [] block_write_end+0x4d/0x55 [] journal_stop+0x15d/0x169 [jbd] [] ext3_generic_write_end+0x72/0x7c [ext3] [] __ext3_journal_stop+0x19/0x34 [ext3] [] ext3_ordered_write_end+0xf4/0x122 [ext3] [] generic_file_buffered_write+0x1ac/0x5a1 [] __alloc_skb+0x49/0xf5 [] core_sys_select+0x285/0x2a2 [] __alloc_pages+0x59/0x2d5 [] kunmap_atomic+0x66/0x95 [] flush_tlb_page+0x3e/0x62 [] do_wp_page+0x1e0/0x4c8 [] handle_mm_fault+0x609/0x688 [] __wake_up+0x32/0x42 [] skb_dequeue+0x39/0x3f [] sys_select+0xd6/0x187 [] mntput_no_expire+0x11/0x66 [] filp_close+0x51/0x58 [] sysenter_past_esp+0x6b/0xa1 [] unix_create1+0xbb/0xfa ======================= rpc.statd S f77fbb38 0 2804 1 f7407290 00000086 00000002 f77fbb38 f77fbb30 00000000 00000001 c20159c0 f74073f8 c2018940 00000001 ffff0254 c2015a08 00000000 000000ff 00000000 00000000 00000000 7fffffff f7c54cc0 00000008 f77fbf9c c02bbe22 00000246 Call Trace: [] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] add_wait_queue+0x12/0x32 [] tcp_poll+0x19/0x120 [] udp_poll+0x10/0xd5 [] do_select+0x372/0x3c9 [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] local_bh_enable+0x83/0x91 [] dev_queue_xmit+0x287/0x2ae [] ip_cork_release+0x19/0x76 [] ip_push_pending_frames+0x33c/0x346 [] ip_generic_getfrag+0x0/0x96 [] udp_push_pending_frames+0x2b7/0x31e [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] memcpy_toiovec+0x27/0x4a [] 
skb_copy_datagram_iovec+0x53/0x1d3 [] skb_recv_datagram+0x75/0x1a3 [] skb_release_all+0xa3/0xfa [] udp_recvmsg+0x183/0x1d0 [] sock_common_recvmsg+0x3e/0x54 [] sock_recvmsg+0xe5/0x100 [] autoremove_wake_function+0x0/0x35 [] core_sys_select+0x285/0x2a2 [] move_addr_to_user+0x50/0x68 [] sys_recvfrom+0x108/0x12b [] sys_bind+0x6e/0x99 [] inet_create+0x1eb/0x27e [] _spin_lock_bh+0x8/0x18 [] inet_sock_destruct+0x16a/0x1af [] _spin_lock_bh+0x8/0x18 [] sys_select+0xd6/0x187 [] mntput_no_expire+0x11/0x66 [] filp_close+0x51/0x58 [] sysenter_past_esp+0x6b/0xa1 ======================= rpciod/0 S f713dfa0 0 2809 2 f7d1f830 00000046 00000002 f713dfa0 f713df98 00000000 f8c70647 00000000 f7d1f998 c2010940 00000000 0003d596 f79fe500 f770c240 000000ff 00000000 00000000 00000000 f770c240 c01326e3 f713dfd0 00000000 c013276b 00000000 Call Trace: [] xs_udp_connect_worker4+0x0/0xda [sunrpc] [] worker_thread+0x0/0xc5 [] worker_thread+0x88/0xc5 [] autoremove_wake_function+0x0/0x35 [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= rpciod/1 S f713ffa0 0 2810 2 f741f8f0 00000046 00000002 f713ffa0 f713ff98 00000000 f8c70647 00000000 f741fa58 c2018940 00000001 0003d597 f79fe500 f770c280 000000ff 00000000 00000000 00000000 f770c280 c01326e3 f713ffd0 00000000 c013276b 00000000 Call Trace: [] xs_udp_connect_worker4+0x0/0xda [sunrpc] [] worker_thread+0x0/0xc5 [] worker_thread+0x88/0xc5 [] autoremove_wake_function+0x0/0x35 [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= rpc.idmapd S f711df10 0 2826 1 f7fd6230 00000086 00000002 f711df10 f711df08 00000000 edab2ccb 00000235 f7fd6398 c2018940 00000001 000825c7 f711df34 f7c70000 000000ff 00000000 00000000 00000000 f711df34 00082a71 000003e7 00000202 c02bbe7f 00000001 Call Trace: [] schedule_timeout+0x70/0x8d [] process_timeout+0x0/0x5 [] schedule_timeout+0x6b/0x8d [] sys_epoll_wait+0x150/0x3be [] do_gettimeofday+0x31/0xce [] default_wake_function+0x0/0x8 [] 
sysenter_past_esp+0x6b/0xa1 [] unix_create1+0xbb/0xfa ======================= getty S f715feac 0 3141 1 f7c760b0 00000082 f715fea4 f715feac f715fea4 00000000 00000000 c0234dd1 f7c76218 c2018940 00000001 ffff04a8 00000020 00000008 000000ff 00000000 00000000 00000400 7fffffff f7011000 f715ff40 bf8fdd6b c02bbe22 c2015134 Call Trace: [] do_con_write+0x1774/0x17cd [] schedule_timeout+0x13/0x8d [] __pagevec_free+0x14/0x1a [] release_pages+0x13e/0x146 [] vgacon_set_cursor_size+0xe1/0xf9 [] add_wait_queue+0x12/0x32 [] read_chan+0x310/0x58c [] default_wake_function+0x0/0x8 [] read_chan+0x0/0x58c [] tty_read+0x64/0xac [] tty_read+0x0/0xac [] vfs_read+0x9f/0x14b [] sys_read+0x41/0x67 [] sysenter_past_esp+0x6b/0xa1 [] unix_create1+0xbb/0xfa ======================= getty S 00000000 0 3142 1 f7cc01b0 00000082 ffffffff 00000000 c0231d4a f74cba00 e72cb073 00000009 f7cc0318 c2010940 00000000 f71d5edc 0005d7d6 00000000 f7197800 f71af000 2034eda0 00000008 7fffffff f71af000 f71d5f40 bf8fdd6b c02bbe22 c200d134 Call Trace: [] notify_update+0x1f/0x22 [] schedule_timeout+0x13/0x8d [] __pagevec_free+0x14/0x1a [] release_pages+0x13e/0x146 [] add_wait_queue+0x12/0x32 [] read_chan+0x310/0x58c [] default_wake_function+0x0/0x8 [] read_chan+0x0/0x58c [] tty_read+0x64/0xac [] tty_read+0x0/0xac [] vfs_read+0x9f/0x14b [] sys_read+0x41/0x67 [] sysenter_past_esp+0x6b/0xa1 [] unix_create1+0xbb/0xfa ======================= getty S f71c7eac 0 3143 1 f7d0e030 00000082 00000002 f71c7eac f71c7ea4 00000000 00000000 00000286 f7d0e198 c2010940 00000000 ffff04aa c04ec260 00000001 000000ff 00000000 00000000 00000000 7fffffff f740f800 f71c7f40 bfc608cb c02bbe22 f7fa3e58 Call Trace: [] schedule_timeout+0x13/0x8d [] tty_wakeup+0x4c/0x50 [] tasklet_action+0x58/0xb8 [] add_wait_queue+0x12/0x32 [] read_chan+0x310/0x58c [] default_wake_function+0x0/0x8 [] read_chan+0x0/0x58c [] tty_read+0x64/0xac [] tty_read+0x0/0xac [] vfs_read+0x9f/0x14b [] sys_read+0x41/0x67 [] sysenter_past_esp+0x6b/0xa1 [] unix_create1+0xbb/0xfa 
=======================
sshd S f71cfb38 0 3487 2770 f7d618f0 00000082 00000002 f71cfb38 f71cfb30 00000000 a26a51cf 00000237 f7d61a58 c2010940 00000000 000827aa 000087bf 00000000 00000003 c03560b0 c03560b0 00000000 7fffffff f7433900 00000008 f71cff9c c02bbe22 00000003
Call Trace:
[] schedule_timeout+0x13/0x8d [] tty_ldisc_deref+0x55/0x64 [] tty_poll+0x53/0x60 [] do_select+0x372/0x3c9 [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] __qdisc_run+0x9e/0x164 [] dev_queue_xmit+0x287/0x2ae [] ip_finish_output+0x1d3/0x20b [] ip_queue_xmit+0x31e/0x361 [] e1000_clean+0x1e3/0x20d [e1000] [] tcp_transmit_skb+0x661/0x694 [] lock_timer_base+0x19/0x35 [] __mod_timer+0x9a/0xa4 [] sk_reset_timer+0xc/0x16 [] __tcp_push_pending_frames+0x72d/0x7d2 [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] tcp_sendmsg+0x927/0xa18 [] core_sys_select+0x285/0x2a2 [] sock_aio_write+0xe3/0xef [] do_sync_write+0xc7/0x10a [] autoremove_wake_function+0x0/0x35 [] read_chan+0x0/0x58c [] sys_select+0xd6/0x187 [] sys_write+0x41/0x67 [] sysenter_past_esp+0x6b/0xa1
=======================
bash R running 0 4005 3487
gfs2_scand S c200d9c0 0 4328 2 f7fd6810 00000046 000161a5 c200d9c0 00000002 f7205f70 7e71c2c0 00000237 f7fd6978 c2010940 00000000 0008274a 13ffabdb 00000000 000000ff 00000000 00000000 000161a5 f7205f98 00082c1f 00000000 00000000 c02bbe7f 00000000
Call Trace:
[] schedule_timeout+0x70/0x8d [] process_timeout+0x0/0x5 [] schedule_timeout+0x6b/0x8d [] gfs2_scand+0x0/0x5b [gfs2] [] gfs2_scand+0x4e/0x5b [gfs2] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10
=======================
glock_workque S f71f1fa0 0 4331 2 f7fd73d0 00000046 00000002 f71f1fa0 f71f1f98 00000000 f7fd73d0 f7490040 f7fd7538 c2010940 00000000 0000dd5d f71f1fb8 00000000 000000ff 00000000 00000000 00000000 f71c8e00 c01326e3 f71f1fd0 00000000 c013276b 00000000
Call Trace:
[] worker_thread+0x0/0xc5 []
worker_thread+0x88/0xc5 [] autoremove_wake_function+0x0/0x35 [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10
=======================
glock_workque S f7127fa0 0 4332 2 f7d2b290 00000046 00000002 f7127fa0 f7127f98 00000000 f7d2b290 f7424580 f7d2b3f8 c2018940 00000001 0000dd6b f7127fb8 00000000 000000ff 00000000 00000000 00000000 f71c8fc0 c01326e3 f7127fd0 00000000 c013276b 00000000
Call Trace:
[] worker_thread+0x0/0xc5 [] worker_thread+0x88/0xc5 [] autoremove_wake_function+0x0/0x35 [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10
=======================
ccsd S c200df80 0 4342 1 f7412710 00000086 c012bf83 c200df80 00000002 c0138050 67dce113 00000237 f7412878 c2010940 00000000 0000000a 00000000 00000000 00000046 00000000 00000000 00000001 7fffffff f7729500 00000009 f716ff9c c02bbe22 00000246
Call Trace:
[] run_timer_softirq+0x30/0x195 [] hrtimer_interrupt+0x198/0x1c0 [] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_select+0x372/0x3c9 [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] kunmap_atomic+0x54/0x95 [] kunmap_atomic+0x66/0x95 [] sched_clock+0x8/0x18 [] __update_rq_clock+0x1a/0xf3 [] update_curr+0x62/0xef [] task_tick_fair+0x11/0x42 [] enqueue_entity+0x2b/0x3d [] update_curr+0x62/0xef [] __switch_to+0xfa/0x11c [] schedule+0x588/0x5ec [] schedule_timeout+0x13/0x8d [] __alloc_skb+0x49/0xf5 [] sock_alloc_send_skb+0x6e/0x196 [] sock_def_readable+0x12/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] core_sys_select+0x285/0x2a2 [] sock_aio_write+0xe3/0xef [] __wake_up+0x32/0x42 [] do_sync_write+0xc7/0x10a [] __wake_up+0x32/0x42 [] sys_select+0xd6/0x187 [] mntput_no_expire+0x11/0x66 [] filp_close+0x51/0x58 [] sysenter_past_esp+0x6b/0xa1
=======================
ccsd S 00000020 0 4343 1 f7d2acb0 00000086 c028c04c 00000020 00000000 f1028080 f7f1e368 00000232 f7d2ae18 c2010940 00000000
5cb5d57b 00020fdd 00000000 f71ee130 c029018c 00000034 c011ec17 7fffffff f7729080 0000000a f77a3f9c c02bbe22 f0dafbf8
Call Trace:
[] tcp_send_ack+0xe8/0xec [] tcp_v4_do_rcv+0x2b/0x348 [] task_rq_lock+0x3b/0x5e [] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_select+0x372/0x3c9 [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] sk_reset_timer+0xc/0x16 [] tcp_rcv_established+0x5c9/0x62b [] net_rx_action+0x9f/0x198 [] tcp_v4_do_rcv+0x2b/0x348 [] tcp_v4_rcv+0x814/0x887 [] dev_queue_xmit+0x287/0x2ae [] ip_local_deliver_finish+0x113/0x1b6 [] ip_rcv_finish+0x272/0x291 [] __ext3_get_inode_loc+0x10f/0x2dd [ext3] [] netif_receive_skb+0x2e4/0x350 [] process_backlog+0x6e/0xc9 [] net_rx_action+0x9f/0x198 [] tcp_rcv_established+0x101/0x62b [] local_bh_enable+0x83/0x91 [] dev_queue_xmit+0x287/0x2ae [] ip_finish_output+0x1d3/0x20b [] ip_queue_xmit+0x31e/0x361 [] core_sys_select+0x285/0x2a2 [] tcp_transmit_skb+0x661/0x694 [] journal_stop+0x15d/0x169 [jbd] [] mod_timer+0x19/0x36 [] sk_reset_timer+0xc/0x16 [] __tcp_push_pending_frames+0x72d/0x7d2 [] release_pages+0x13e/0x146 [] _spin_lock_bh+0x8/0x18 [] tcp_close+0x527/0x546 [] sys_select+0xd6/0x187 [] mntput_no_expire+0x11/0x66 [] filp_close+0x51/0x58 [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S c200d9c0 0 4347 1 f7d60750 00000082 00000d04 c200d9c0 00000002 f7141bd4 9e30a1b7 00000237 f7d608b8 c2010940 00000000 0008282a 14826f0a 00000000 000000ff 00000000 00000000 00000d04 f7141bfc 000827f8 f7c52080 00000000 c02bbe7f 00000246
Call Trace:
[] schedule_timeout+0x70/0x8d [] add_wait_queue+0x12/0x32 [] process_timeout+0x0/0x5 [] schedule_timeout+0x6b/0x8d [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 []
default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] autoremove_wake_function+0x0/0x35 [] autoremove_wake_function+0x0/0x35 [] move_addr_to_user+0x50/0x68 [] sys_recvmsg+0x1be/0x1d2 [] sk_free+0xa9/0xb6 [] do_gettimeofday+0x31/0xce [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S f78b8158 0 4348 1 f7d60170 00000082 001a0007 f78b8158 00000000 f78edb70 eb8fde28 0000007a f7d602d8 c2010940 00000000 00000000 00000000 00000000 f78b80b0 f78edb70 c1042160 00000000 7fffffff ffffffff 00000000 00000000 c02bbe22 001a000e
Call Trace:
[] schedule_timeout+0x13/0x8d [] __find_get_block_slow+0x110/0x11a [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] __find_get_block+0x176/0x180 [] __getblk+0x23/0x27e [] __d_lookup+0x96/0xd9 [] cache_alloc_refill+0x58/0x476 [] unix_find_other+0x11a/0x16c [] __alloc_skb+0x49/0xf5 [] unix_write_space+0xf/0x63 [] sock_wfree+0x21/0x36 [] skb_release_all+0xa3/0xfa [] unix_dgram_sendmsg+0x42d/0x467 [] sock_sendmsg+0xc9/0xe4 [] find_lock_page+0x19/0x7f [] filemap_fault+0x213/0x384 [] __alloc_pages+0x59/0x2d5 [] permission+0xa3/0xef [] flush_tlb_page+0x3e/0x62 [] __do_fault+0x32d/0x36e [] try_to_wake_up+0x2b7/0x2c1 [] handle_mm_fault+0x2cf/0x688 [] pick_next_task_fair+0x1f/0x2d [] update_curr+0x62/0xef [] do_page_fault+0x1f7/0x5a8 [] activate_task+0x1c/0x28 [] sched_setscheduler+0x25b/0x2ae [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S f7167e1c 0 4349 1 f7d2b870 00000082 00000002 f7167e1c f7167e14 00000000 c0169120 08185100 f7d2b9d8 c2010940 00000000 0000ded5 00000000 00000000 000000ff 00000000 00000000 00000000 f741a774 f7167e9c c04d8870 08185100 c013d69a c034f728
Call Trace:
[] find_extend_vma+0x12/0x49 [] futex_wait+0x192/0x2a1 [] kunmap_atomic+0x54/0x95 [] kunmap_atomic+0x66/0x95 [] flush_tlb_page+0x3e/0x62 []
task_rq_lock+0x3b/0x5e [] try_to_wake_up+0x2b7/0x2c1 [] default_wake_function+0x0/0x8 [] do_futex+0x69/0x957 [] set_next_entity+0x11/0x38 [] mntput_no_expire+0x11/0x66 [] sys_futex+0xc2/0xd4 [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S c200d9c0 0 4353 1 f7cb5310 00000082 00016832 c200d9c0 00000002 f71fdbd4 28699197 0000012f f7cb5478 c2010940 00000000 0003d032 001f1993 00000000 000000ff 00000000 00000000 00016832 7fffffff ffffffff f7cc4680 00000000 c02bbe22 00000000
Call Trace:
[] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] __wake_up_common+0x32/0x5c [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35 [] autoremove_wake_function+0x0/0x35 [] sys_sendmsg+0x1e5/0x1f9 [] sys_recvmsg+0x1be/0x1d2 [] try_to_wake_up+0x2b7/0x2c1 [] sched_clock+0x8/0x18 [] __switch_to+0x9d/0x11c [] _spin_lock_bh+0x8/0x18 [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] sock_setsockopt+0x47e/0x496 [] sched_clock+0x8/0x18 [] __switch_to+0x9d/0x11c [] sys_socketcall+0x240/0x261 [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S c200e540 0 4354 1 f7d12070 00000082 c200e560 c200e540 00000000 c025f8ec 284caf3c 0000012f f7d121d8 c2010940 00000000 c200e554 000082c7 00000000 f7fb9000 f7fb9000 f753e980 0003d297 7fffffff ffffffff f7cc4c80 00000000 c02bbe22 00000000
Call Trace:
[] process_backlog+0x6e/0xc9 [] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] _spin_lock_bh+0x8/0x18 [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 []
sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35 [] autoremove_wake_function+0x0/0x35 [] schedule+0x588/0x5ec [] sys_sendmsg+0x1e5/0x1f9 [] sys_recvmsg+0x1be/0x1d2 [] __wake_up_common+0x32/0x5c [] get_futex_key+0x6e/0x122 [] futex_wake+0xa6/0xb0 [] do_futex+0x7c/0x957 [] sock_setsockopt+0x47e/0x496 [] sched_clock+0x8/0x18 [] __switch_to+0x9d/0x11c [] schedule+0x588/0x5ec [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S f76d7bd8 0 4361 1 f7cb4170 00000082 00000002 f76d7bd8 f76d7bd0 00000000 3bed7aa8 0000012f f7cb42d8 c2018940 00000001 0000e0b7 00000000 00000000 000000ff 00000000 00000000 00000000 7fffffff ffffffff f7d1a840 00000000 c02bbe22 00000000
Call Trace:
[] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] __wake_up_common+0x32/0x5c [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35 [] autoremove_wake_function+0x0/0x35 [] sys_sendmsg+0x1e5/0x1f9 [] sys_recvmsg+0x1be/0x1d2 [] number+0x159/0x22f [] enqueue_task_fair+0x16/0x24 [] activate_task+0x1c/0x28 [] try_to_wake_up+0x2b7/0x2c1 [] _spin_lock_bh+0x8/0x18 [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] sock_setsockopt+0x47e/0x496 [] sched_clock+0x8/0x18 [] sys_socketcall+0x240/0x261 [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S c2016540 0 4362 1 f7cb4750 00000082 c2016560 c2016540 00000000 c025f8ec 4f86a901 0000012f f7cb48b8 c2010940 00000000 c2016554 000009ee 00000000 f7fb9000 f7fb9000 f708a380 0003d33c 7fffffff ffffffff f7d1af00 00000000 c02bbe22 00000001
Call Trace:
[] process_backlog+0x6e/0xc9 [] schedule_timeout+0x13/0x8d []
add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] _spin_lock_bh+0x8/0x18 [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35 [] autoremove_wake_function+0x0/0x35 [] update_curr+0x62/0xef [] sched_clock+0x8/0x18 [] enqueue_task+0x52/0x5d [] activate_task+0x1c/0x28 [] try_to_wake_up+0x2b7/0x2c1 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] wake_futex+0x3b/0x45 [] futex_wake+0xa6/0xb0 [] do_futex+0x7c/0x957 [] sock_setsockopt+0x47e/0x496 [] sched_clock+0x8/0x18 [] __switch_to+0x9d/0x11c [] schedule+0x588/0x5ec [] sys_futex+0xc2/0xd4 [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S c200d9c0 0 4363 1 f7cb58f0 00000082 0002b572 c200d9c0 00000002 f71edbd4 69d6f449 0000007b f7cb5a58 c2010940 00000000 0000e0b7 14cd9b81 00000000 000000ff 00000000 00000000 0002b572 7fffffff ffffffff f7d1a3c0 00000000 c02bbe22 00000001
Call Trace:
[] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] autoremove_wake_function+0x15/0x35 [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35 [] autoremove_wake_function+0x0/0x35 [] schedule+0x588/0x5ec [] find_extend_vma+0x12/0x49 [] get_futex_key+0x6e/0x122 [] enqueue_task+0x52/0x5d [] activate_task+0x1c/0x28 [] try_to_wake_up+0x2b7/0x2c1 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] wake_futex+0x3b/0x45 [] futex_wake+0xa6/0xb0 [] do_futex+0x7c/0x957 [] sock_setsockopt+0x47e/0x496 [] sched_clock+0x8/0x18 []
sys_futex+0xc2/0xd4 [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S f7225bd8 0 4364 1 f7cc0d70 00000082 00000002 f7225bd8 f7225bd0 00000000 000000e6 00000001 f7cc0ed8 c2010940 00000000 0000e0b7 c020d6f9 f7c2e5c4 000000ff 00000000 00000000 00000000 7fffffff ffffffff f743fa80 00000000 c02bbe22 00000000
Call Trace:
[] acpi_ex_pci_config_space_handler+0x0/0x5c [] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] __wake_up_common+0x32/0x5c [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35 [] autoremove_wake_function+0x0/0x35 [] sys_sendmsg+0x1e5/0x1f9 [] sys_recvmsg+0x1be/0x1d2 [] try_to_wake_up+0x2b7/0x2c1 [] finish_task_switch+0x25/0x81 [] _spin_lock_bh+0x8/0x18 [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] sock_setsockopt+0x47e/0x496 [] sched_clock+0x8/0x18 [] sys_socketcall+0x240/0x261 [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S f740e480 0 4365 1 f7cc1930 00000082 f7419158 f740e480 f7419168 00011210 a5b3d03e 0000007b f7cc1a98 c2010940 00000000 f7237c00 000f350f 00000000 f7419158 00000000 c01d5d4e 00000004 7fffffff ffffffff f743f300 00000000 c02bbe22 f7617380
Call Trace:
[] get_request+0x20e/0x2bb [] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35
[] sync_page+0x0/0x40 [] autoremove_wake_function+0x0/0x35 [] __lock_page+0x58/0x5e [] sys_sendmsg+0x1e5/0x1f9 [] sys_recvmsg+0x1be/0x1d2 [] __wake_up_common+0x32/0x5c [] __wake_up_sync+0x35/0x4b [] _spin_lock_bh+0x8/0x18 [] update_curr+0x62/0xef [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] sock_setsockopt+0x47e/0x496 [] pick_next_task_fair+0x1f/0x2d [] sched_clock+0x8/0x18 [] __switch_to+0x9d/0x11c [] schedule+0x588/0x5ec [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S c200e540 0 4366 1 f7fe37b0 00000082 c200e560 c200e540 00000000 c025f8ec a5a2c89c 0000007b f7fe3918 c2010940 00000000 c200e554 0000b1c9 00000000 f7fb9000 f7fb9000 f7c56c80 0000e1b3 7fffffff ffffffff f7cdc0c0 00000000 c02bbe22 00000000
Call Trace:
[] process_backlog+0x6e/0xc9 [] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] _spin_lock_bh+0x8/0x18 [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35 [] autoremove_wake_function+0x0/0x35 [] sys_sendmsg+0x1e5/0x1f9 [] sys_recvmsg+0x1be/0x1d2 [] try_to_wake_up+0x2b7/0x2c1 [] __sigqueue_alloc+0x33/0x59 [] update_curr+0x62/0xef [] _spin_lock_bh+0x8/0x18 [] update_curr+0x62/0xef [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] sock_setsockopt+0x47e/0x496 [] pick_next_task_fair+0x1f/0x2d [] sched_clock+0x8/0x18 [] __switch_to+0x9d/0x11c [] schedule+0x588/0x5ec [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S f7159bd8 0 4496 1 f7ffb250 00000082 00000002 f7159bd8 f7159bd0 00000000 9e539f00 0000012b f7ffb3b8 c2018940 00000001 0003c411 000031fe 00000000 000000ff 00000000 00000000 00000000 7fffffff ffffffff f7d18bc0 00000000 c02bbe22 00000000
Call Trace:
[]
schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] __wake_up_common+0x32/0x5c [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35 [] autoremove_wake_function+0x0/0x35 [] sys_sendmsg+0x1e5/0x1f9 [] sys_recvmsg+0x1be/0x1d2 [] __sigqueue_alloc+0x33/0x59 [] __wake_up_sync+0x35/0x4b [] _spin_lock_bh+0x8/0x18 [] update_curr+0x62/0xef [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] sock_setsockopt+0x47e/0x496 [] pick_next_task_fair+0x1f/0x2d [] sched_clock+0x8/0x18 [] __switch_to+0x9d/0x11c [] sys_socketcall+0x240/0x261 [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S c200d9c0 0 4498 1 f7ffac70 00000082 0002b572 c200d9c0 00000002 f7181bd4 b2d4e5a4 0000012b f7ffadd8 c2010940 00000000 0003c411 148872bf 00000000 000000ff 00000000 00000000 0002b572 7fffffff ffffffff f7cb65c0 00000000 c02bbe22 00000000
Call Trace:
[] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35 [] autoremove_wake_function+0x0/0x35 [] enqueue_task+0x52/0x5d [] activate_task+0x1c/0x28 [] try_to_wake_up+0x2b7/0x2c1 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] wake_futex+0x3b/0x45 [] futex_wake+0xa6/0xb0 [] do_futex+0x7c/0x957 [] sock_setsockopt+0x47e/0x496 [] sys_futex+0xc2/0xd4 [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S c200d9c0 0 5266 1 f7250230
00000082 00016434 c200d9c0 00000002 f7393bd4 7364a80d 0000012e f7250398 c2010940 00000000 0003cf9d 00000000 00000000 000000ff 00000000 00000000 00016434 7fffffff ffffffff f7d1a780 00000000 c02bbe22 00000202
Call Trace:
[] schedule_timeout+0x13/0x8d [] __lock_page+0x58/0x5e [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] autoremove_wake_function+0x15/0x35 [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] set_next_entity+0x11/0x38 [] autoremove_wake_function+0x0/0x35 [] sync_page+0x0/0x40 [] autoremove_wake_function+0x0/0x35 [] __lock_page+0x58/0x5e [] sys_sendmsg+0x1e5/0x1f9 [] sys_recvmsg+0x1be/0x1d2 [] __sigqueue_alloc+0x33/0x59 [] update_curr+0x62/0xef [] _spin_lock_bh+0x8/0x18 [] update_curr+0x62/0xef [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] sock_setsockopt+0x47e/0x496 [] pick_next_task_fair+0x1f/0x2d [] sched_clock+0x8/0x18 [] __switch_to+0x9d/0x11c [] schedule+0x588/0x5ec [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S c200e540 0 5268 1 f74138b0 00000082 c200e560 c200e540 00000000 c025f8ec 735d498c 0000012e f7413a18 c2010940 00000000 c200e554 0000f2e9 00000000 f7fb9000 f7fb9000 f773ae80 0003cf9d 7fffffff ffffffff f753c6c0 00000000 c02bbe22 00000000
Call Trace:
[] process_backlog+0x6e/0xc9 [] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] _spin_lock_bh+0x8/0x18 [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] set_next_entity+0x11/0x38 [] autoremove_wake_function+0x0/0x35 []
autoremove_wake_function+0x0/0x35 [] __lock_page+0x58/0x5e [] sys_sendmsg+0x1e5/0x1f9 [] sys_recvmsg+0x1be/0x1d2 [] update_curr+0x62/0xef [] _spin_lock_bh+0x8/0x18 [] update_curr+0x62/0xef [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] sock_setsockopt+0x47e/0x496 [] pick_next_task_fair+0x1f/0x2d [] sched_clock+0x8/0x18 [] __switch_to+0x9d/0x11c [] schedule+0x588/0x5ec [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S c200d9c0 0 5282 1 f741e170 00000082 00016832 c200d9c0 00000002 f727dbd4 971fd024 0000012e f741e2d8 c2010940 00000000 0003d032 14989283 00000000 000000ff 00000000 00000000 00016832 7fffffff ffffffff f753c180 00000000 c02bbe22 00000202
Call Trace:
[] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] autoremove_wake_function+0x15/0x35 [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35 [] autoremove_wake_function+0x0/0x35 [] sys_sendmsg+0x1e5/0x1f9 [] sys_recvmsg+0x1be/0x1d2 [] __sigqueue_alloc+0x33/0x59 [] update_curr+0x62/0xef [] _spin_lock_bh+0x8/0x18 [] update_curr+0x62/0xef [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] sock_setsockopt+0x47e/0x496 [] pick_next_task_fair+0x1f/0x2d [] sched_clock+0x8/0x18 [] __switch_to+0x9d/0x11c [] schedule+0x588/0x5ec [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S c200e540 0 5285 1 f7269250 00000082 c200e560 c200e540 00000000 c025f8ec 9706f1bd 0000012e f72693b8 c2010940 00000000 c200e554 0000db1f 00000000 f7fb9000 f7fb9000 f75ae780 0003d033 7fffffff ffffffff f7439500 00000000 c02bbe22 00000000
Call Trace:
[] process_backlog+0x6e/0xc9 [] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b []
do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] _spin_lock_bh+0x8/0x18 [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] set_next_entity+0x11/0x38 [] autoremove_wake_function+0x0/0x35 [] sync_page+0x0/0x40 [] autoremove_wake_function+0x0/0x35 [] __lock_page+0x58/0x5e [] sys_sendmsg+0x1e5/0x1f9 [] sys_recvmsg+0x1be/0x1d2 [] __sigqueue_alloc+0x33/0x59 [] update_curr+0x62/0xef [] _spin_lock_bh+0x8/0x18 [] update_curr+0x62/0xef [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] sock_setsockopt+0x47e/0x496 [] pick_next_task_fair+0x1f/0x2d [] sched_clock+0x8/0x18 [] __switch_to+0x9d/0x11c [] schedule+0x588/0x5ec [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S 00000000 0 5364 1 f73cb290 00000082 00000000 00000000 00000000 00000000 277b9dbd 0000012f f73cb3f8 c2010940 00000000 00000000 001c13f2 00000000 00000000 00000000 00000000 00000000 7fffffff ffffffff f746d0c0 00000000 c02bbe22 00000000
Call Trace:
[] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] autoremove_wake_function+0x15/0x35 [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35 [] autoremove_wake_function+0x0/0x35 [] lapic_next_event+0xc/0x10 [] clockevents_program_event+0xe0/0xee [] sys_sendmsg+0x1e5/0x1f9 [] sys_recvmsg+0x1be/0x1d2 [] _spin_lock_bh+0x8/0x18 [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] sock_setsockopt+0x47e/0x496 [] sched_clock+0x8/0x18 [] __switch_to+0x9d/0x11c [] schedule+0x588/0x5ec [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S c200e540 0 5365 1 f73ca0f0 00000082 c200e560 c200e540 00000000 c025f8ec 27590f8b 0000012f f73ca258 c2010940 00000000 c200e554 0000c51b 00000000 f7fb9000 f7fb9000 f7fdf180 0003d293 7fffffff ffffffff f7442500 00000000 c02bbe22 00000000
Call Trace:
[] process_backlog+0x6e/0xc9 [] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] _spin_lock_bh+0x8/0x18 [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35 [] kunmap_atomic+0x54/0x95 [] kunmap_atomic+0x66/0x95 [] get_page_from_freelist+0x30a/0x387 [] sys_sendmsg+0x1e5/0x1f9 [] vm_normal_page+0xd/0x3e [] follow_page+0x16f/0x1c7 [] get_user_pages+0x291/0x2fd [] sched_clock+0x8/0x18 [] __switch_to+0x9d/0x11c [] schedule+0x588/0x5ec [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S 00000000 0 5374 1 f7cd99b0 00000082 00000000 00000000 00000000 00000000 2841a22d 0000012f f7cd9b18 c2010940 00000000 00000000 00181e1a 00000000 00000000 00000000 00000000 00000000 7fffffff ffffffff f7465140 00000000 c02bbe22 00000000
Call Trace:
[] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35 [] autoremove_wake_function+0x0/0x35 [] sys_sendmsg+0x1e5/0x1f9 [] sys_recvmsg+0x1be/0x1d2 [] _spin_lock_bh+0x8/0x18 [] _spin_lock_bh+0x8/0x18 []
release_sock+0x12/0x8e [] sock_setsockopt+0x47e/0x496 [] sched_clock+0x8/0x18 [] __switch_to+0x9d/0x11c [] schedule+0x588/0x5ec [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
aisexec S c200e540 0 5376 1 f7264610 00000082 c200e560 c200e540 00000000 c025f8ec 28220b0f 0000012f f7264778 c2010940 00000000 c200e554 0000a4ef 00000000 f7fb9000 f7fb9000 f753e880 0003d296 7fffffff ffffffff f7433c00 00000000 c02bbe22 00000000
Call Trace:
[] process_backlog+0x6e/0xc9 [] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] _spin_lock_bh+0x8/0x18 [] autoremove_wake_function+0x15/0x35 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_sendmsg+0xc9/0xe4 [] autoremove_wake_function+0x0/0x35 [] autoremove_wake_function+0x0/0x35 [] sys_sendmsg+0x1e5/0x1f9 [] sys_recvmsg+0x1be/0x1d2 [] _spin_lock_bh+0x8/0x18 [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] sock_setsockopt+0x47e/0x496 [] sched_clock+0x8/0x18 [] __switch_to+0x9d/0x11c [] schedule+0x588/0x5ec [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
groupd S c200d9c0 0 4351 1 f7d13210 00000082 00000d79 c200d9c0 00000002 f74b3bd4 284f89cd 0000012f f7d13378 c2010940 00000000 0003d032 00000000 00000000 000000ff 00000000 00000000 00000d79 7fffffff ffffffff f7ccac00 00000000 c02bbe22 00000001
Call Trace:
[] schedule_timeout+0x13/0x8d [] balance_tasks+0x4d/0xf2 [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 []
default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] __wake_up_common+0x32/0x5c [] __wake_up+0x32/0x42 [] sock_def_readable+0x39/0x63 [] unix_stream_sendmsg+0x24c/0x30b [] sock_aio_write+0xe3/0xef [] autoremove_wake_function+0x0/0x35 [] vfs_write+0x113/0x14d [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
fenced S f715bbd8 0 4355 1 f7d2a6d0 00000082 00000002 f715bbd8 f715bbd0 00000000 a581433e 0000007b f7d2a838 c2010940 00000000 0000e88b 00000000 00000000 000000ff 00000000 00000000 00000000 7fffffff ffffffff f7d1a900 00000000 c02bbe22 c01de09c
Call Trace:
[] schedule_timeout+0x13/0x8d [] __next_cpu+0x12/0x21 [] find_busiest_group+0x216/0x60d [] balance_tasks+0x4d/0xf2 [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] try_to_wake_up+0x2b7/0x2c1 [] sched_clock+0x8/0x18 [] check_preempt_wakeup+0x1e/0x7a [] try_to_wake_up+0x2b7/0x2c1 [] __wake_up_common+0x32/0x5c [] __wake_up_sync+0x35/0x4b [] unix_write_space+0x3a/0x63 [] sock_wfree+0x21/0x36 [] skb_release_all+0xa3/0xfa [] unix_stream_recvmsg+0x3aa/0x4bd [] mntput_no_expire+0x11/0x66 [] flush_tlb_page+0x3e/0x62 [] sock_aio_read+0xf5/0x101 [] do_wp_page+0x1e0/0x4c8 [] handle_mm_fault+0x609/0x688 [] skb_dequeue+0x39/0x3f [] skb_queue_purge+0x11/0x17 [] unix_sock_destructor+0xe/0xd2 [] sk_free+0xa0/0xb6 [] skb_dequeue+0x39/0x3f [] d_kill+0x37/0x46 [] dput+0x15/0xdc [] __fput+0x122/0x14c [] mntput_no_expire+0x11/0x66 [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1 [] unix_create1+0xbb/0xfa
=======================
dlm_controld S c20159c0 0 4357 1 f7c76690 00000082 00000761 c20159c0 00000002 f71ebbd4 c8037da6 0000012f f7c767f8 c2018940 00000001 0003d589 ec641c75 ffffffff 000000ff 00000000 00000000 00000761 7fffffff ffffffff f7433780 00000000 c02bbe22 f7433780
Call Trace:
[] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] unix_poll+0x17/0x8b [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] task_tick_rt+0xb/0x4c [] lapic_next_event+0xc/0x10 [] clockevents_program_event+0xe0/0xee [] memcpy_toiovec+0x27/0x4a [] skb_copy_datagram_iovec+0x53/0x1d3 [] __wake_up+0x32/0x42 [] netlink_recvmsg+0x262/0x281 [] sock_recvmsg+0xe5/0x100 [] unix_write_space+0x3a/0x63 [] autoremove_wake_function+0x0/0x35 [] sock_aio_read+0xf5/0x101 [] enqueue_task_fair+0x16/0x24 [] do_sync_read+0xc7/0x10a [] autoremove_wake_function+0x0/0x35 [] sys_recv+0x37/0x3b [] sys_socketcall+0x19c/0x261 [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
gfs_controld S c20159c0 0 4359 1 f7cc1350 00000086 0001621a c20159c0 00000002 f7203bd4 c80342b4 0000012f f7cc14b8 c2018940 00000001 0003d589 ec63ed5b ffffffff 000000ff 00000000 00000000 0001621a 7fffffff ffffffff f7d1a6c0 00000000 c02bbe22 00000001
Call Trace:
[] schedule_timeout+0x13/0x8d [] add_wait_queue+0x12/0x32 [] add_wait_queue+0x12/0x32 [] do_sys_poll+0x222/0x2dc [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] memcpy_toiovec+0x27/0x4a [] skb_copy_datagram_iovec+0x53/0x1d3 [] __wake_up+0x32/0x42 [] netlink_recvmsg+0x262/0x281 [] autoremove_wake_function+0x0/0x35 [] sock_recvmsg+0xe5/0x100 [] autoremove_wake_function+0x0/0x35 [] sys_sendmsg+0x194/0x1f9 [] sys_recvmsg+0x120/0x1d2 [] enqueue_task_fair+0x16/0x24 [] skb_dequeue+0x39/0x3f [] skb_queue_purge+0x11/0x17 [] sk_free+0xa0/0xb6 [] sys_recv+0x37/0x3b [] sys_socketcall+0x19c/0x261 [] sys_poll+0x3a/0x6d [] sysenter_past_esp+0x6b/0xa1
=======================
clurgmgrd S c20159c0 0 4487 1 f7fe31d0 00000086 00000818
c20159c0 00000002 f70f1f1c f7fd6e20 00000000 f7fe3338 c2018940 00000001 0003c412 97619d64 c200d99c 000000ff 00000000 00000000 00000818 00001188 00000001 f7fe31c8 00000000 c012719a 00000246 Call Trace: [] do_wait+0x964/0xa47 [] do_sigaction+0x8a/0x136 [] default_wake_function+0x0/0x8 [] sys_wait4+0x31/0x34 [] sys_waitpid+0x27/0x2b [] sysenter_past_esp+0x6b/0xa1 [] unix_create1+0xbb/0xfa ======================= clurgmgrd S f70c1b38 0 4488 4487 f7fd6df0 00000082 00000002 f70c1b38 f70c1b30 00000000 3cef85c3 00000237 f7fd6f58 c2010940 00000000 f70c1b5c 00000000 00000000 00000286 c012c43e 00000000 00000286 f70c1b5c 00082fef 0000000d f70c1f9c c02bbe7f f726edc4 Call Trace: [] __mod_timer+0x9a/0xa4 [] schedule_timeout+0x70/0x8d [] process_timeout+0x0/0x5 [] schedule_timeout+0x6b/0x8d [] do_select+0x372/0x3c9 [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] update_curr+0x62/0xef [] enqueue_entity+0x1d/0x3d [] enqueue_task_fair+0x16/0x24 [] __find_get_block_slow+0x110/0x11a [] do_wp_page+0x1e0/0x4c8 [] update_curr+0x62/0xef [] update_curr+0x62/0xef [] update_curr+0x62/0xef [] __switch_to+0x9d/0x11c [] schedule+0x588/0x5ec [] mntput_no_expire+0x11/0x66 [] schedule_timeout+0x13/0x8d [] unix_write_space+0xf/0x63 [] sock_wfree+0x21/0x36 [] skb_release_all+0xa3/0xfa [] unix_stream_recvmsg+0x3aa/0x4bd [] core_sys_select+0x285/0x2a2 [] __rmqueue+0x13/0x174 [] alloc_pid+0x211/0x2c4 [] update_curr+0x62/0xef [] update_curr+0x62/0xef [] __switch_to+0x9d/0x11c [] sys_select+0xa4/0x187 [] sysenter_past_esp+0x6b/0xa1 [] unix_create1+0xbb/0xfa ======================= clurgmgrd S f73b3b38 0 5151 4487 f7250df0 00000082 00000002 f73b3b38 f73b3b30 00000000 36648b42 00000237 f7250f58 c2018940 00000001 f73b3b5c 000195c9 00000000 00000286 c012c43e 00000000 00000286 f73b3b5c 00082856 00000006 f73b3f9c c02bbe7f 00000000 Call Trace: [] __mod_timer+0x9a/0xa4 [] schedule_timeout+0x70/0x8d [] process_timeout+0x0/0x5 [] schedule_timeout+0x6b/0x8d [] 
do_select+0x372/0x3c9 [] __pollwait+0x0/0xac [] default_wake_function+0x0/0x8 [] default_wake_function+0x0/0x8 [] __find_get_block_slow+0x110/0x11a [] mntput_no_expire+0x11/0x66 [] link_path_walk+0xa9/0xb3 [] apic_wait_icr_idle+0xe/0x15 [] native_flush_tlb_others+0x68/0x95 [] flush_tlb_page+0x3e/0x62 [] do_wp_page+0x1e0/0x4c8 [] __kfree_skb+0x8/0x63 [] e1000_unmap_and_free_tx_resource+0x1b/0x23 [e1000] [] e1000_clean_tx_irq+0xbc/0x2c5 [e1000] [] handle_mm_fault+0x609/0x688 [] e1000_clean_rx_irq+0x0/0x452 [e1000] [] e1000_clean+0x1e3/0x20d [e1000] [] do_page_fault+0x1f7/0x5a8 [] do_IRQ+0x5a/0x70 [] do_page_fault+0x0/0x5a8 [] error_code+0x72/0x78 [] unix_accept+0x24/0xe0 [] core_sys_select+0x285/0x2a2 [] __alloc_pages+0x59/0x2d5 [] kunmap_atomic+0x66/0x95 [] flush_tlb_page+0x3e/0x62 [] do_wp_page+0x1e0/0x4c8 [] pipe_read+0x2e0/0x338 [] e1000_unmap_and_free_tx_resource+0x1b/0x23 [e1000] [] e1000_clean_tx_irq+0xbc/0x2c5 [e1000] [] do_sync_read+0xc7/0x10a [] handle_mm_fault+0x609/0x688 [] autoremove_wake_function+0x0/0x35 [] e1000_clean+0x1e3/0x20d [e1000] [] sys_select+0xa4/0x187 [] sysenter_past_esp+0x6b/0xa1 ======================= clurgmgrd S f725fe1c 0 5152 4487 f7d558b0 00000082 00000002 f725fe1c f725fe14 00000000 c01374ba 00000001 f7d55a18 c2010940 00000000 00082610 00000286 c0137b92 000000fd 00000001 00000000 00000000 f757b074 f725ff98 f725fe48 0807ae70 c013d6f2 00000000 Call Trace: [] enqueue_hrtimer+0xd7/0xe2 [] hrtimer_start+0x100/0x10c [] futex_wait+0x1ea/0x2a1 [] hrtimer_wakeup+0x0/0x18 [] futex_wait+0x1de/0x2a1 [] default_wake_function+0x0/0x8 [] do_futex+0x69/0x957 [] lock_timer_base+0x19/0x35 [] __mod_timer+0x9a/0xa4 [] getnstimeofday+0x30/0xb8 [] ktime_get_ts+0x16/0x44 [] sys_futex+0xc2/0xd4 [] sysenter_past_esp+0x6b/0xa1 [] unix_create1+0xbb/0xfa ======================= clurgmgrd S b73c9bf8 0 23130 4487 f0a058b0 00000082 b7d3494c b73c9bf8 000000f0 ffffffda 5db68e13 00000237 f0a05a18 c2010940 00000000 f7ffcb7c 00000000 00000000 b7d3494c f757b074 
c2010974 c01bd464 00005af2 00000001 f7d558a8 00000000 c012719a 00000246 Call Trace: [] security_task_wait+0xc/0xd [] do_wait+0x964/0xa47 [] do_fork+0x120/0x1cc [] default_wake_function+0x0/0x8 [] sys_wait4+0x31/0x34 [] sys_waitpid+0x27/0x2b [] sysenter_past_esp+0x6b/0xa1 [] unix_create1+0xbb/0xfa ======================= clurgmgrd S 7db76065 0 23131 4487 f0a04cf0 00000082 c16eeaac 7db76065 7db76065 ffffffda 5d728a38 00000237 f0a04e58 c2010940 00000000 f7ffcb78 00000000 00000000 b7d3494c c200d920 c2010974 c01bd464 00005af0 00000001 f0a058a8 00000000 c012719a 00000246 Call Trace: [] security_task_wait+0xc/0xd [] do_wait+0x964/0xa47 [] do_fork+0x120/0x1cc [] default_wake_function+0x0/0x8 [] sys_wait4+0x31/0x34 [] sys_waitpid+0x27/0x2b [] sysenter_past_esp+0x6b/0xa1 [] unix_create1+0xbb/0xfa ======================= dlm_astd S 00000000 0 4489 2 f7d6d930 00000046 c011d4e6 00000000 00000003 f7fc3994 3caea2de 00000237 f7d6da98 c2018940 00000001 000013d0 00001ad2 00000000 f09b1ec0 f7fc3800 f881e7f5 00000292 00000000 f8a4a145 f881e7f5 00000000 f8a4a16c 00000000 Call Trace: [] __wake_up_common+0x32/0x5c [] gdlm_bast+0x0/0x90 [lock_dlm] [] dlm_astd+0x0/0x13e [dlm] [] gdlm_bast+0x0/0x90 [lock_dlm] [] dlm_astd+0x27/0x13e [dlm] [] dlm_astd+0x0/0x13e [dlm] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= dlm_scand S f716df74 0 4490 2 f7d6c1b0 00000046 00000002 f716df74 f716df6c 00000000 dfa174b8 f8a4af41 f7d6c318 c2010940 00000000 0008276a f716df98 c04c6ec0 000000fd 00000001 00000000 00000000 f716df98 00082c42 00000000 00000000 c02bbe7f 00000282 Call Trace: [] search_bucket+0x2d/0x47 [dlm] [] schedule_timeout+0x70/0x8d [] process_timeout+0x0/0x5 [] schedule_timeout+0x6b/0x8d [] dlm_scand+0x0/0x57 [dlm] [] dlm_scand+0x4a/0x57 [dlm] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= dlm_recv/0 S 00000000 0 4491 2 f7d6d350 00000046 f7fd7000 00000000 f7d6d350 00000000 b2a74cd3 0000012b 
f7d6d4b8 c2010940 00000000 00000092 00000a91 00000000 b2a727cf 0000012b 00000246 c0135188 f7795640 c01326e3 f7211fd0 00000000 c013276b 00000000 Call Trace: [] prepare_to_wait+0x12/0x49 [] worker_thread+0x0/0xc5 [] worker_thread+0x88/0xc5 [] autoremove_wake_function+0x0/0x35 [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= dlm_recv/1 S c20159c0 0 4492 2 f7d6c790 00000046 00000897 c20159c0 00000002 f719df9c f7d6c790 f711e900 f7d6c8f8 c2018940 00000001 0003c412 f7c43fb0 f70c1d00 000000ff 00000000 00000000 00000897 f7fb4140 c01326e3 f719dfd0 00000000 c013276b 00000000 Call Trace: [] worker_thread+0x0/0xc5 [] worker_thread+0x88/0xc5 [] autoremove_wake_function+0x0/0x35 [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= dlm_send S 00000000 0 4493 2 f7ffa0b0 00000046 f7fd7000 00000000 f7ffa0b0 00000000 b2a83987 0000012b f7ffa218 c2010940 00000000 00000092 0000083f 00000000 b2a7bde8 0000012b 00000246 c0135188 f70a4b40 c01326e3 f7715fd0 00000000 c013276b 00000000 Call Trace: [] prepare_to_wait+0x12/0x49 [] worker_thread+0x0/0xc5 [] worker_thread+0x88/0xc5 [] autoremove_wake_function+0x0/0x35 [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= dlm_recoverd S c20159c0 0 4494 2 f7ffb830 00000046 0002b7f0 c20159c0 00000002 f71e7f88 9e5e3c67 0000012b f7ffb998 c2018940 00000001 0003c412 eb7b0536 ffffffff 000000ff 00000000 00000000 0002b7f0 00000000 f705f000 00000001 00000000 f8a56bf0 00000000 Call Trace: [] dlm_recoverd+0x49/0x4cc [dlm] [] dlm_recoverd+0x0/0x4cc [dlm] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= lock_dlm1 S 00000000 0 5278 2 f741e750 00000046 c011d4e6 00000000 00000003 f0fc7e78 3cae9d88 00000237 f741e8b8 c2018940 00000001 00000000 00000000 00000000 f8945000 00000286 f8ca47d3 00000246 f7fc3950 f7fc3800 f7381fc8 f0f99c40 f881f357 00000000 Call Trace: [] 
__wake_up_common+0x32/0x5c [] gfs_glock_cb+0xae/0x143 [gfs] [] gdlm_thread+0xcd/0x600 [lock_dlm] [] schedule+0x588/0x5ec [] default_wake_function+0x0/0x8 [] gdlm_thread1+0x0/0xa [lock_dlm] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= lock_dlm2 S 00000000 0 5279 2 f7264bf0 00000046 c011d4e6 00000000 00000003 f0fc7ddc 3cae9832 00000237 f7264d58 c2018940 00000001 00000000 00000000 00000000 f8945000 00000286 f8ca47d3 00000246 f7fc3950 f7fc3800 f725dfc8 f09b1ec0 f881f357 00000000 Call Trace: [] __wake_up_common+0x32/0x5c [] gfs_glock_cb+0xae/0x143 [gfs] [] gdlm_thread+0xcd/0x600 [lock_dlm] [] schedule+0x588/0x5ec [] default_wake_function+0x0/0x8 [] gdlm_thread2+0x0/0x7 [lock_dlm] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= dlm_recoverd S c20159c0 0 5280 2 f7cd8df0 00000046 00000cfb c20159c0 00000002 f73cff88 82b864b4 0000012e f7cd8f58 c2018940 00000001 0003d030 eb7f2c46 ffffffff 000000ff 00000000 00000000 00000cfb 00000000 f77b1400 00000001 00000000 f8a56bf0 00000000 Call Trace: [] dlm_recoverd+0x49/0x4cc [dlm] [] dlm_recoverd+0x0/0x4cc [dlm] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= gfs_scand S f7253f74 0 5286 2 f7d60d30 00000046 00000002 f7253f74 f7253f6c 00000000 f89529ec f8ca4928 f7d60e98 c2010940 00000000 000825af f7253f98 c04c6ec0 000000fd 00000001 00000000 00000000 f7253f98 00082a91 00000000 00000000 c02bbe7f f8ca4928 Call Trace: [] scan_glock+0x0/0x129 [gfs] [] schedule_timeout+0x70/0x8d [] scan_glock+0x0/0x129 [gfs] [] process_timeout+0x0/0x5 [] schedule_timeout+0x6b/0x8d [] gfs_scand+0x0/0x3d [gfs] [] gfs_scand+0x30/0x3d [gfs] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= gfs_glockd S c200d9c0 0 5288 2 f7256c30 00000046 000005d0 c200d9c0 f0e8c930 f0e8c81c 551630e2 00000237 f7256d98 c2010940 00000000 f0a94660 000e59d5 00000000 00000000 f8ca342f 
00000246 c0135188 f8945000 f8c984c1 f73d5fd0 00000000 f8c98545 00000000 Call Trace: [] unlock_on_glock+0x18/0x1f [gfs] [] prepare_to_wait+0x12/0x49 [] gfs_glockd+0x0/0xa8 [gfs] [] gfs_glockd+0x84/0xa8 [gfs] [] autoremove_wake_function+0x0/0x35 [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= gfs_recoverd S f6febf74 0 5292 2 f7d1f250 00000046 00000002 f6febf74 f6febf6c 00000000 f3932c31 00000182 f7d1f3b8 c2010940 00000000 0007f16b f6febf98 c04c6ec0 000000fd 00000001 00000000 00000000 f6febf98 00082bc2 00000000 00000000 c02bbe7f f6febfb8 Call Trace: [] schedule_timeout+0x70/0x8d [] process_timeout+0x0/0x5 [] schedule_timeout+0x6b/0x8d [] gfs_recoverd+0x0/0x3d [gfs] [] gfs_recoverd+0x30/0x3d [gfs] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= gfs_logd S f6fedf3c 0 5293 2 f7d0ebf0 00000046 00000002 f6fedf3c f6fedf34 00000000 c200d9c0 00000000 f7d0ed58 c2010940 00000000 0008276a f6fedf60 c04c6ec0 000000ff 00000000 00000000 00000000 f6fedf60 00082855 f8945000 00000000 c02bbe7f 03916d40 Call Trace: [] schedule_timeout+0x70/0x8d [] process_timeout+0x0/0x5 [] schedule_timeout+0x6b/0x8d [] gfs_logd+0x92/0xa8 [gfs] [] gfs_logd+0x0/0xa8 [gfs] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= gfs_quotad S f6feff54 0 5294 2 f73cb870 00000046 00000002 f6feff54 f6feff4c 00000000 7e3c9079 00000237 f73cb9d8 c2010940 00000000 f6feff78 14077efa 00000000 00000286 c012c43e 00000000 00000286 f6feff78 00082c1f f8945000 0007f15b c02bbe7f f8cb93f3 Call Trace: [] __mod_timer+0x9a/0xa4 [] schedule_timeout+0x70/0x8d [] quota_unlock+0x14/0x79 [gfs] [] process_timeout+0x0/0x5 [] schedule_timeout+0x6b/0x8d [] gfs_quotad+0x146/0x15d [gfs] [] gfs_quotad+0x0/0x15d [gfs] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= gfs_inoded S f6ff1f74 0 5295 2 f7257210 00000046 00000002 f6ff1f74 f6ff1f6c 00000000 
15a59ebd 00000235 f7257378 c2010940 00000000 f6ff1f98 00000ec5 00000000 00000286 c012c43e 00000000 00000286 f6ff1f98 00082bc9 00000000 00000000 c02bbe7f 00000000 Call Trace: [] __mod_timer+0x9a/0xa4 [] schedule_timeout+0x70/0x8d [] call_rwsem_down_write_failed+0x6/0x8 [] process_timeout+0x0/0x5 [] schedule_timeout+0x6b/0x8d [] gfs_inoded+0x0/0x3d [gfs] [] gfs_inoded+0x30/0x3d [gfs] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= lock_dlm1 S f64389a8 0 5369 2 f7256070 00000046 000000c0 f64389a8 f0ef2140 f8bd2000 fcfe4bc3 00000233 f72561d8 c2010940 00000000 f8bd2000 00000000 00000000 f8bd2000 f724bfb8 f8ca47d3 00000246 f74a3350 f74a3200 f724bfc8 f61a69c0 f881f357 00000000 Call Trace: [] gfs_glock_cb+0xae/0x143 [gfs] [] gdlm_thread+0xcd/0x600 [lock_dlm] [] schedule+0x588/0x5ec [] default_wake_function+0x0/0x8 [] gdlm_thread1+0x0/0xa [lock_dlm] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= lock_dlm2 S f64d0e40 0 5370 2 dfa0b8f0 00000046 000000c0 f64d0e40 f080cfc0 f8bd2000 fcfe41f2 00000233 dfa0ba58 c2010940 00000000 f8bd2000 00003977 00000000 f8bd2000 f723ffb8 f8ca47d3 00000246 f74a3350 f74a3200 f723ffc8 f6942c40 f881f357 00000000 Call Trace: [] gfs_glock_cb+0xae/0x143 [gfs] [] gdlm_thread+0xcd/0x600 [lock_dlm] [] schedule+0x588/0x5ec [] default_wake_function+0x0/0x8 [] gdlm_thread2+0x0/0x7 [lock_dlm] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= dlm_recoverd S 00000001 0 5372 2 dfa078b0 00000046 f71e50a0 00000001 f71e5000 f8a5164d 286f333c 0000012f dfa07a18 c2010940 00000000 f768ffe0 00012930 00000000 f71e5000 f8a562da f8a51735 000000ff 00000000 f71e5000 00000001 00000000 f8a56bf0 00000000 Call Trace: [] dlm_recover_waiters_post+0x304/0x30e [dlm] [] wait_status+0x8b/0xd8 [dlm] [] dlm_grant_after_purge+0x1f/0xb5 [dlm] [] dlm_recoverd+0x49/0x4cc [dlm] [] dlm_recoverd+0x0/0x4cc [dlm] [] kthread+0x38/0x5e [] 
kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= gfs_scand S f7151f74 0 5380 2 f70be1b0 00000046 00000002 f7151f74 f7151f6c 00000000 4079ac3a 00000236 f70be318 c2010940 00000000 0008274a f7151f98 c04c6ec0 000000fd 00000001 00000000 00000000 f7151f98 00082c20 00000000 00000000 c02bbe7f f8ca4928 Call Trace: [] schedule_timeout+0x70/0x8d [] scan_glock+0x0/0x129 [gfs] [] process_timeout+0x0/0x5 [] schedule_timeout+0x6b/0x8d [] gfs_scand+0x0/0x3d [gfs] [] gfs_scand+0x30/0x3d [gfs] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= gfs_glockd S f080cfc0 0 5385 2 f72519b0 00000046 f64d0e40 f080cfc0 f8cd43c0 f8ca2a54 fcfd871e 00000233 f7251b18 c2010940 00000000 f64d0e40 00001f60 00000000 00000000 f8ca342f 00000246 c0135188 f8bd2000 f8c984c1 dfa0ffd0 00000000 f8c98545 00000000 Call Trace: [] run_queue+0x159/0x30f [gfs] [] unlock_on_glock+0x18/0x1f [gfs] [] prepare_to_wait+0x12/0x49 [] gfs_glockd+0x0/0xa8 [gfs] [] gfs_glockd+0x84/0xa8 [gfs] [] autoremove_wake_function+0x0/0x35 [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= gfs_recoverd S dfaa5f74 0 5459 2 f7d54130 00000046 00000002 dfaa5f74 dfaa5f6c 00000000 e14aeda8 000001d6 f7d54298 c2010940 00000000 0007f26b dfaa5f98 c04c6ec0 000000ff 00000000 00000000 00000000 dfaa5f98 00082c84 00000000 00000000 c02bbe7f dfaa5fb8 Call Trace: [] schedule_timeout+0x70/0x8d [] process_timeout+0x0/0x5 [] schedule_timeout+0x6b/0x8d [] gfs_recoverd+0x0/0x3d [gfs] [] gfs_recoverd+0x30/0x3d [gfs] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= gfs_logd S dfaa7f3c 0 5460 2 dfa6c030 00000046 00000002 dfaa7f3c dfaa7f34 00000000 85639b11 00000237 dfa6c198 c2010940 00000000 dfaa7f60 00001491 00000000 00000286 c012c43e 00000000 00000286 dfaa7f60 00082855 f8bd2000 00000000 c02bbe7f f8bd2000 Call Trace: [] __mod_timer+0x9a/0xa4 [] schedule_timeout+0x70/0x8d [] process_timeout+0x0/0x5 
[] schedule_timeout+0x6b/0x8d [] gfs_logd+0x92/0xa8 [gfs] [] gfs_logd+0x0/0xa8 [gfs] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= gfs_quotad S dfaa9f54 0 5461 2 f70bed70 00000046 00000002 dfaa9f54 dfaa9f4c 00000000 5e9f6b82 00000204 f70beed8 c2010940 00000000 0008282a dfaa9f78 c04c6ec0 000000fd 00000001 00000000 00000000 dfaa9f78 00082ca5 f8bd2000 0007f209 c02bbe7f 00000282 Call Trace: [] schedule_timeout+0x70/0x8d [] process_timeout+0x0/0x5 [] schedule_timeout+0x6b/0x8d [] gfs_quotad+0x146/0x15d [gfs] [] gfs_quotad+0x0/0x15d [gfs] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= gfs_inoded S dfaabf74 0 5462 2 f7d0e610 00000046 00000002 dfaabf74 dfaabf6c 00000000 5d311e61 00000204 f7d0e778 c2010940 00000000 00081e59 dfaabf98 c04c6ec0 000000ff 00000000 00000000 00000000 dfaabf98 00082c87 00000000 00000000 c02bbe7f dfaabf8c Call Trace: [] schedule_timeout+0x70/0x8d [] process_timeout+0x0/0x5 [] schedule_timeout+0x6b/0x8d [] gfs_inoded+0x0/0x3d [gfs] [] gfs_inoded+0x30/0x3d [gfs] [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= lockd S f68dbeec 0 5584 2 f7412cf0 00000046 00000002 f68dbeec f68dbee4 00000000 f7a24800 f691b2c0 f7412e58 c2010940 00000000 0003d596 f8c7555f 00000000 000000ff 00000000 00000000 00000000 7fffffff f6fdc000 f770c604 f68dbf6c c02bbe22 0000012f Call Trace: [] svc_udp_recvfrom+0x175/0x367 [sunrpc] [] schedule_timeout+0x13/0x8d [] svc_sock_release+0xdd/0x149 [sunrpc] [] add_wait_queue+0x12/0x32 [] svc_recv+0x23d/0x3b9 [sunrpc] [] sched_move_task+0xbf/0xc5 [] default_wake_function+0x0/0x8 [] lockd+0x111/0x225 [lockd] [] schedule_tail+0x18/0x52 [] ret_from_fork+0x6/0x1c [] lockd+0x0/0x225 [lockd] [] lockd+0x0/0x225 [lockd] [] kernel_thread_helper+0x7/0x10 ======================= nfsd4 S f68ddfa0 0 5585 2 f72513d0 00000046 00000002 f68ddfa0 f68ddf98 00000000 674aa5e9 00000216 f7251538 c2010940 00000000 
0007f46b f8db0f74 00000000 000000ff 00000000 00000000 00000000 f71efec0 c01326e3 f68ddfd0 00000000 c013276b 00000000 Call Trace: [] laundromat_main+0x0/0x1d8 [nfsd] [] worker_thread+0x0/0xc5 [] worker_thread+0x88/0xc5 [] autoremove_wake_function+0x0/0x35 [] kthread+0x38/0x5e [] kthread+0x0/0x5e [] kernel_thread_helper+0x7/0x10 ======================= nfsd D c20159c0 0 5586 2 f7269830 00000046 000001a7 c20159c0 00000002 f6999b58 40e26f3d 000001df f7269998 c2018940 00000001 0006b57b ec672099 ffffffff 000000ff 00000000 00000000 000001a7 7fffffff f6999cf0 f6999bd4 7fffffff c02bbe22 00008000 Call Trace: [] schedule_timeout+0x13/0x8d [] generic_file_aio_write_nolock+0x3f/0x92 [] wait_for_common+0x16/0x123 [] wait_for_common+0xb6/0x123 [] default_wake_function+0x0/0x8 [] glock_wait_internal+0x108/0x249 [gfs] [] gfs_glock_nq+0x36c/0x3a5 [gfs] [] gfs_glock_nq_init+0x18/0x2b [gfs] [] gfs_glock_nq_num+0x3f/0x88 [gfs] [] gfs_get_dentry+0xa3/0x2ba [gfs] [] skb_copy_datagram_iovec+0x53/0x1d3 [] __qdisc_run+0x9e/0x164 [] _spin_lock_bh+0x8/0x18 [] skb_release_all+0xa3/0xfa [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] tcp_recvmsg+0x630/0x73c [] gfs_fh_to_dentry+0x69/0x6f [gfs] [] exportfs_decode_fh+0x29/0x1a7 [exportfs] [] update_curr+0x62/0xef [] cache_check+0x59/0x3bd [sunrpc] [] enqueue_entity+0x2b/0x3d [] exp_get_by_name+0x43/0x52 [nfsd] [] sunrpc_cache_lookup+0x3e/0xf4 [sunrpc] [] cache_check+0x59/0x3bd [sunrpc] [] set_current_groups+0x14d/0x159 [] nfsd_setuser+0x125/0x175 [nfsd] [] nfsd_setuser_and_check_port+0x4f/0x57 [nfsd] [] __wake_up+0x32/0x42 [] exp_find+0x5b/0x63 [nfsd] [] rqst_exp_find+0x2e/0xa5 [nfsd] [] fh_verify+0x1e8/0x47f [nfsd] [] nfsd_acceptable+0x0/0xba [nfsd] [] load_balance_start_fair+0x0/0x5 [] update_curr+0x62/0xef [] nfsd_open+0x28/0x163 [nfsd] [] nfsd_write+0x90/0xe7 [nfsd] [] sunrpc_cache_lookup+0x3e/0xf4 [sunrpc] [] nfsd3_proc_write+0x109/0x123 [nfsd] [] nfsd_dispatch+0xd3/0x1a0 [nfsd] [] svcauth_unix_set_client+0x135/0x163 [sunrpc] [] 
svc_process+0x3be/0x670 [sunrpc] [] svc_recv+0x341/0x3b9 [sunrpc] [] nfsd+0x17f/0x28f [nfsd] [] nfsd+0x0/0x28f [nfsd] [] kernel_thread_helper+0x7/0x10 ======================= nfsd D 00000282 0 5587 2 f7cc0790 00000046 c013c948 00000282 f75a7000 00000000 40e1585d 000001df f7cc08f8 c2018940 00000001 f708a280 00002876 00000000 c027d2ab 1e702200 f69d1900 f778c4dc 7fffffff f69d9cf0 f69d9bd4 7fffffff c02bbe22 00000200 Call Trace: [] tick_program_event+0x33/0x52 [] ip_finish_output+0x1d3/0x20b [] schedule_timeout+0x13/0x8d [] smp_apic_timer_interrupt+0x71/0x7d [] qla2x00_start_scsi+0x2df/0x311 [qla2xxx] [] apic_timer_interrupt+0x28/0x30 [] wait_for_common+0xb6/0x123 [] default_wake_function+0x0/0x8 [] glock_wait_internal+0x108/0x249 [gfs] [] gfs_glock_nq+0x36c/0x3a5 [gfs] [] gfs_glock_nq_init+0x18/0x2b [gfs] [] gfs_glock_nq_num+0x3f/0x88 [gfs] [] gfs_get_dentry+0xa3/0x2ba [gfs] [] skb_copy_datagram_iovec+0x53/0x1d3 [] __qdisc_run+0x9e/0x164 [] _spin_lock_bh+0x8/0x18 [] release_sock+0x52/0x8e [] tcp_recvmsg+0x630/0x73c [] gfs_fh_to_dentry+0x69/0x6f [gfs] [] exportfs_decode_fh+0x29/0x1a7 [exportfs] [] update_curr+0x62/0xef [] cache_check+0x59/0x3bd [sunrpc] [] enqueue_entity+0x2b/0x3d [] exp_get_by_name+0x43/0x52 [nfsd] [] sunrpc_cache_lookup+0x3e/0xf4 [sunrpc] [] cache_check+0x59/0x3bd [sunrpc] [] set_current_groups+0x14d/0x159 [] nfsd_setuser+0x125/0x175 [nfsd] [] nfsd_setuser_and_check_port+0x4f/0x57 [nfsd] [] __wake_up+0x32/0x42 [] exp_find+0x5b/0x63 [nfsd] [] rqst_exp_find+0x2e/0xa5 [nfsd] [] fh_verify+0x1e8/0x47f [nfsd] [] nfsd_acceptable+0x0/0xba [nfsd] [] nfsd_open+0x28/0x163 [nfsd] [] nfsd_write+0x90/0xe7 [nfsd] [] sunrpc_cache_lookup+0x3e/0xf4 [sunrpc] [] nfsd3_proc_write+0x109/0x123 [nfsd] [] nfsd_dispatch+0xd3/0x1a0 [nfsd] [] svcauth_unix_set_client+0x135/0x163 [sunrpc] [] svc_process+0x3be/0x670 [sunrpc] [] svc_recv+0x341/0x3b9 [sunrpc] [] nfsd+0x17f/0x28f [nfsd] [] nfsd+0x0/0x28f [nfsd] [] kernel_thread_helper+0x7/0x10 ======================= nfsd R running 0 
5589 2 nfsd D f6ab7d6c 0 5590 2 f7268c70 00000046 f6ab7c40 f6ab7d6c f7ae47b8 c16cab00 40e232cd 000001df f7268dd8 c2018940 00000001 00000001 00002641 00000000 00000001 c011e0cf c2015920 f7269830 7fffffff f6ab7cf0 f6ab7bd4 7fffffff c02bbe22 f7269830 Call Trace: [] enqueue_entity+0x2b/0x3d [] schedule_timeout+0x13/0x8d [] activate_task+0x1c/0x28 [] try_to_wake_up+0x2b7/0x2c1 [] wait_for_common+0xb6/0x123 [] default_wake_function+0x0/0x8 [] glock_wait_internal+0x108/0x249 [gfs] [] gfs_glock_nq+0x36c/0x3a5 [gfs] [] gfs_glock_nq_init+0x18/0x2b [gfs] [] gfs_glock_nq_num+0x3f/0x88 [gfs] [] gfs_get_dentry+0xa3/0x2ba [gfs] [] skb_copy_datagram_iovec+0x53/0x1d3 [] __qdisc_run+0x9e/0x164 [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] tcp_recvmsg+0x630/0x73c [] gfs_fh_to_dentry+0x69/0x6f [gfs] [] exportfs_decode_fh+0x29/0x1a7 [exportfs] [] update_curr+0x62/0xef [] cache_check+0x59/0x3bd [sunrpc] [] enqueue_entity+0x2b/0x3d [] exp_get_by_name+0x43/0x52 [nfsd] [] sunrpc_cache_lookup+0x3e/0xf4 [sunrpc] [] cache_check+0x59/0x3bd [sunrpc] [] set_current_groups+0x14d/0x159 [] nfsd_setuser+0x125/0x175 [nfsd] [] nfsd_setuser_and_check_port+0x4f/0x57 [nfsd] [] __wake_up+0x32/0x42 [] exp_find+0x5b/0x63 [nfsd] [] rqst_exp_find+0x2e/0xa5 [nfsd] [] fh_verify+0x1e8/0x47f [nfsd] [] nfsd_acceptable+0x0/0xba [nfsd] [] load_balance_start_fair+0x0/0x5 [] load_balance_next_fair+0x0/0x5 [] nfsd_open+0x28/0x163 [nfsd] [] nfsd_write+0x90/0xe7 [nfsd] [] sunrpc_cache_lookup+0x3e/0xf4 [sunrpc] [] nfsd3_proc_write+0x109/0x123 [nfsd] [] nfsd_dispatch+0xd3/0x1a0 [nfsd] [] svcauth_unix_set_client+0x135/0x163 [sunrpc] [] svc_process+0x3be/0x670 [sunrpc] [] svc_recv+0x341/0x3b9 [sunrpc] [] nfsd+0x17f/0x28f [nfsd] [] nfsd+0x0/0x28f [nfsd] [] kernel_thread_helper+0x7/0x10 ======================= nfsd D f6af7d6c 0 5591 2 f7264030 00000046 f6af7c40 f6af7d6c f7ae47b8 c16cab00 40dfd904 000001df f7264198 c2018940 00000001 00000000 00002e66 00000000 f6af7e14 00000005 f8c989c4 0062ed3c 7fffffff f6af7cf0 
f6af7bd4 7fffffff c02bbe22 00000000 Call Trace: [] gfs_dgetblk+0x26/0x2b [gfs] [] schedule_timeout+0x13/0x8d [] generic_file_aio_write_nolock+0x3f/0x92 [] wait_for_common+0x16/0x123 [] wait_for_common+0xb6/0x123 [] default_wake_function+0x0/0x8 [] glock_wait_internal+0x108/0x249 [gfs] [] gfs_glock_nq+0x36c/0x3a5 [gfs] [] gfs_glock_nq_init+0x18/0x2b [gfs] [] gfs_glock_nq_num+0x3f/0x88 [gfs] [] gfs_get_dentry+0xa3/0x2ba [gfs] [] skb_copy_datagram_iovec+0x53/0x1d3 [] __qdisc_run+0x9e/0x164 [] skb_release_all+0xa3/0xfa [] _spin_lock_bh+0x8/0x18 [] release_sock+0x12/0x8e [] tcp_recvmsg+0x630/0x73c [] dev_queue_xmit+0x287/0x2ae [] gfs_fh_to_dentry+0x69/0x6f [gfs] [] exportfs_decode_fh+0x29/0x1a7 [exportfs] [] update_curr+0x62/0xef [] cache_check+0x59/0x3bd [sunrpc] [] enqueue_entity+0x2b/0x3d [] exp_get_by_name+0x43/0x52 [nfsd] [] sunrpc_cache_lookup+0x3e/0xf4 [sunrpc] [] cache_check+0x59/0x3bd [sunrpc] [] set_current_groups+0x14d/0x159 [] nfsd_setuser+0x125/0x175 [nfsd] [] nfsd_setuser_and_check_port+0x4f/0x57 [nfsd] [] __wake_up+0x32/0x42 [] exp_find+0x5b/0x63 [nfsd] [] rqst_exp_find+0x2e/0xa5 [nfsd] [] fh_verify+0x1e8/0x47f [nfsd] [] nfsd_acceptable+0x0/0xba [nfsd] [] update_curr+0x62/0xef [] nfsd_open+0x28/0x163 [nfsd] [] nfsd_write+0x90/0xe7 [nfsd] [] sunrpc_cache_lookup+0x3e/0xf4 [sunrpc] [] nfsd3_proc_write+0x109/0x123 [nfsd] [] nfsd_dispatch+0xd3/0x1a0 [nfsd] [] svcauth_unix_set_client+0x135/0x163 [sunrpc] [] svc_process+0x3be/0x670 [sunrpc] [] svc_recv+0x341/0x3b9 [sunrpc] [] nfsd+0x17f/0x28f [nfsd] [] nfsd+0x0/0x28f [nfsd] [] kernel_thread_helper+0x7/0x10 ======================= nfsexport.sh S 7aee7065 0 23280 23131 f0c4b7b0 00000082 c16d5e4c 7aee7065 7aee7065 c200d980 5f756026 00000237 f0c4b918 c2010940 00000000 f744b080 00000000 00000000 00000000 c200d920 c2010974 c01bd464 ffffffff 00000001 f0c4b7a8 00000000 c012719a 00000246 Call Trace: [] security_task_wait+0xc/0xd [] do_wait+0x964/0xa47 [] do_sigaction+0x8a/0x136 [] do_fork+0x120/0x1cc [] 
default_wake_function+0x0/0x8 [] sys_wait4+0x31/0x34 [] sys_waitpid+0x27/0x2b [] sysenter_past_esp+0x6b/0xa1 [] unix_create1+0xbb/0xfa ======================= nfsexport.sh S 7bfb2065 0 23282 23130 f05d1870 00000086 c161076c 7bfb2065 7bfb2065 c200d980 5f81f760 00000237 f05d19d8 c2010940 00000000 f74f4080 0000dc82 00000000 00000000 c200d920 c2010974 c01bd464 ffffffff 00000001 f05d1868 00000000 c012719a c0103039 Call Trace: [] security_task_wait+0xc/0xd [] do_wait+0x964/0xa47 [] __switch_to+0x9d/0x11c [] do_sigaction+0x8a/0x136 [] do_fork+0x120/0x1cc [] default_wake_function+0x0/0x8 [] sys_wait4+0x31/0x34 [] sys_waitpid+0x27/0x2b [] sysenter_past_esp+0x6b/0xa1 [] unix_create1+0xbb/0xfa ======================= nfsexport.sh S 7b566065 0 23298 23280 f6b653d0 00000086 c1608b2c 7b566065 7b566065 c200d980 688d37c7 00000237 f6b65538 c2010940 00000000 f76bb080 00000000 00000000 00000000 c200d920 c2010974 c01bd464 ffffffff 00000001 f6b653c8 00000000 c012719a c0103039 Call Trace: [] security_task_wait+0xc/0xd [] do_wait+0x964/0xa47 [] __switch_to+0x9d/0x11c [] do_sigaction+0x8a/0x136 [] do_fork+0x120/0x1cc [] default_wake_function+0x0/0x8 [] sys_wait4+0x31/0x34 [] sys_waitpid+0x27/0x2b [] sysenter_past_esp+0x6b/0xa1 ======================= nfsexport.sh S f05e5f20 0 23300 23282 f6b64810 00000082 00000002 f05e5f20 f05e5f18 00000000 f6b4c680 00000001 f6b64978 c2010940 00000000 000826c1 a66f21b9 c200d99c 000000fd 00000001 00000000 00000000 ffffffff 00000001 f6b64808 00000000 c012719a c0103039 Call Trace: [] do_wait+0x964/0xa47 [] __switch_to+0x9d/0x11c [] do_sigaction+0x8a/0x136 [] do_fork+0x120/0x1cc [] default_wake_function+0x0/0x8 [] sys_wait4+0x31/0x34 [] sys_waitpid+0x27/0x2b [] sysenter_past_esp+0x6b/0xa1 ======================= sleep S c200e044 0 23354 23298 f6b4cc30 00000086 f6b53f38 c200e044 c200dfb8 00000000 687b0076 00000237 f6b4cd98 c2010940 00000000 00000237 00000000 00000000 f70c4900 c200dfb8 00000000 00000286 f6b53f38 00000001 00000001 00000000 c02bc21a 00000001 Call 
Trace: [] do_nanosleep+0x48/0x73 [] hrtimer_nanosleep+0x39/0xb7 [] hrtimer_wakeup+0x0/0x18 [] do_nanosleep+0x3d/0x73 [] sys_nanosleep+0x8b/0x96 [] sys_brk+0xc5/0xcd [] sysenter_past_esp+0x6b/0xa1 [] unix_create1+0xbb/0xfa ======================= sleep S f6b53f3c 0 23355 23300 f6b4c650 00000086 f6b55f38 f6b53f3c c200dfb8 00000000 688c5aad 00000237 f6b4c7b8 c2010940 00000000 00000237 00000000 00000000 f70c4ac0 c200dfb8 00000000 00000286 f6b55f38 00000001 00000001 00000000 c02bc21a 00000001 Call Trace: [] do_nanosleep+0x48/0x73 [] hrtimer_nanosleep+0x39/0xb7 [] hrtimer_wakeup+0x0/0x18 [] do_nanosleep+0x3d/0x73 [] sys_nanosleep+0x8b/0x96 [] sys_brk+0xc5/0xcd [] sysenter_past_esp+0x6b/0xa1 [] unix_create1+0xbb/0xfa ======================= Clocksource tsc unstable (delta = 9374030724 ns) Time: acpi_pm clocksource has been installed. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From p_pavlos at freemail.gr Wed Feb 13 10:33:47 2008 From: p_pavlos at freemail.gr (Pavlos Parissis) Date: Wed, 13 Feb 2008 12:33:47 +0200 Subject: [Linux-cluster] Netmask bit set to 32 in service ip Message-ID: <20080213123347.ad6mnbk1coo44kwo@easymail-app.hol.gr> Hi, I have a question which may sound stupid: why is the netmask for the service IP set to /32 when the IP belongs to a network with a different netmask?
Regards, Pavlos 2: bond0: mtu 1500 qdisc noqueue link/ether 00:19:bb:3b:6c:b1 brd ff:ff:ff:ff:ff:ff inet 10.10.21.133/26 brd 10.10.21.191 scope global bond0 inet 10.10.21.138/32 scope global bond0 <<================= service ip 3: bond1: mtu 1500 qdisc noqueue link/ether 00:1a:4b:ff:9c:e8 brd ff:ff:ff:ff:ff:ff inet 10.10.21.69/27 brd 10.10.21.95 scope global bond1 inet 10.10.21.71/32 scope global bond1 <<================= service ip 4: bond2: mtu 1500 qdisc noqueue link/ether 00:1a:4b:ff:9c:ea brd ff:ff:ff:ff:ff:ff inet 10.10.21.228/27 brd 10.10.21.255 scope global bond2 5: eth0: mtu 1500 qdisc pfifo_fast master bond0 qlen 1000 link/ether 00:19:bb:3b:6c:b1 brd ff:ff:ff:ff:ff:ff 6: eth1: mtu 1500 qdisc pfifo_fast master bond0 qlen 1000 link/ether 00:19:bb:3b:6c:b1 brd ff:ff:ff:ff:ff:ff 7: eth2: mtu 1500 qdisc pfifo_fast master bond1 qlen 1000 link/ether 00:1a:4b:ff:9c:e8 brd ff:ff:ff:ff:ff:ff 8: eth3: mtu 1500 qdisc pfifo_fast master bond1 qlen 1000 link/ether 00:1a:4b:ff:9c:e8 brd ff:ff:ff:ff:ff:ff 9: eth4: mtu 1500 qdisc pfifo_fast master bond2 qlen 1000 link/ether 00:1a:4b:ff:9c:ea brd ff:ff:ff:ff:ff:ff 10: eth5: mtu 1500 qdisc pfifo_fast master bond2 qlen 1000 link/ether 00:1a:4b:ff:9c:ea brd ff:ff:ff:ff:ff:ff 11: eth6: mtu 1500 qdisc noop qlen 1000 link/ether 00:19:bb:3b:18:38 brd ff:ff:ff:ff:ff:ff 12: eth7: mtu 1500 qdisc noop qlen 1000 link/ether 00:19:bb:3b:18:3a brd ff:ff:ff:ff:ff:ff ocsi2# grep ' 10.10.21.138' /etc/cluster/cluster.conf ocsi2# grep '10.10.21.138' /etc/cluster/cluster.conf ocsi2# cat /etc/cluster/cluster.conf
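[Editorial note on the /32 question above: a common explanation is that the cluster's IP resource agent adds the service (virtual) IP as a secondary host address. With a /32 prefix the kernel creates only a local host entry for that one address and does not install a second, overlapping subnet route; outbound traffic for the subnet still uses the route belonging to the bond's permanent /26 or /27 address. A minimal sketch with Python's standard `ipaddress` module, using the addresses from the `ip addr` output quoted above, illustrates the arithmetic behind this:]

```python
import ipaddress

# Values taken from the 'ip addr' output in the mail:
# the service IP appears as a /32 secondary, the permanent address as /26.
vip = ipaddress.ip_interface("10.10.21.138/32")      # service (cluster) IP
primary = ipaddress.ip_interface("10.10.21.133/26")  # bond0 permanent address

# A /32 network covers exactly one host, so adding the address
# installs no new subnet route that could shadow an existing one...
vip_is_host_only = vip.network.num_addresses == 1

# ...while the service IP still falls inside the subnet already routed
# via bond0's permanent /26 address, so routing is unaffected.
routed_by_primary = vip.ip in primary.network

print(vip_is_host_only, routed_by_primary)
```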