<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
</head>
<body bgcolor="#FFFFFF" text="#000000">
I am running CentOS with a GFS2 filesystem on a Dell EqualLogic
SAN. I created the filesystem by mapping an RDM through VMware to
the guest OS. I used pvcreate, vgcreate, lvcreate, and mkfs.gfs2 to
create the filesystem and the underlying LVM layout. I've
included the log I kept to document the process below.<br>
<br>
I've already increased the size of the LUN on the SAN. Now, how do
I increase the size of the GFS2 filesystem and the LVM logical volume beneath it?
Do I need to do something with the PV and VG as well? <br>
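<br>
My best guess at the sequence, pieced together from the pvresize,
lvextend, and gfs2_grow man pages (the device and mount point names are
just my setup, as shown in the log below, and I haven't actually run
this yet), is:<br>
<pre># make the guest kernel notice the larger LUN
echo 1 > /sys/block/sdb/device/rescan
# grow the PV to fill the device, then the LV, then the mounted filesystem
pvresize /dev/sdb
lvextend -l +100%FREE /dev/gdcache_vg/gdcache_lv
gfs2_grow /data     # run on one node only, with /data mounted</pre>
Is that the right order, or am I missing a step?<br>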
<br>
Thanks in advance for your help.<br>
<br>
Wes<br>
<br>
<br>
Here is the log of the process I used to create the filesystem:<br>
<br>
<blockquote>
<div class="field-items">
<div class="field-item odd">
<p>With the RDM created and all the daemons started (luci,
ricci, cman), now I can configure GFS2. Make sure they are
running on all of the nodes.<br>
We can even see the RDM on the guest systems:</p>
<pre>[root@test03]# ls /dev/sdb
/dev/sdb
[root@test04]# ls /dev/sdb
/dev/sdb</pre>
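<p>(If I wanted to be extra careful that sdb really is the same LUN on
both guests, I suppose I could compare the persistent device links,
something like:)</p>
<pre>[root@test03]# ls -l /dev/disk/by-id/ | grep sdb
[root@test04]# ls -l /dev/disk/by-id/ | grep sdb</pre>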
<p>So we are doing this using lvm clustering: <a
href="http://emrahbaysal.blogspot.com/2011/03/gfs-cluster-on-vmware-vsphere-rhel.html"
title="http://emrahbaysal.blogspot.com/2011/03/gfs-cluster-on-vmware-vsphere-rhel.html">http://emrahbaysal.blogspot.com/2011/03/gfs-cluster-on-vmware-vsphere-rhel.html</a><br>
and <a
href="http://linuxdynasty.org/215/howto-setup-gfs2-with-clustering/"
title="http://linuxdynasty.org/215/howto-setup-gfs2-with-clustering/">http://linuxdynasty.org/215/howto-setup-gfs2-with-clustering/</a><br>
<br>
We've already set up the GFS daemons and fencing and whatnot.<br>
Before we create the LVM2 volumes and proceed to
GFS2, we need to enable clustering in LVM2.</p>
<pre>[root@test03]# lvmconf --enable-cluster</pre>
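<p>As far as I can tell, all that does is set locking_type = 3 (cluster
locking) in /etc/lvm/lvm.conf, so it's easy to double-check:</p>
<pre>[root@test03]# grep locking_type /etc/lvm/lvm.conf</pre>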
<p>I try to create the cluster FS:</p>
<pre>[root@test03]# pvcreate /dev/sdb
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
Physical volume "/dev/sdb" successfully created</pre>
<p>One internet source says:</p>
<pre>>> That indicates that you have cluster locking enabled but that the cluster LVM
>> daemon (clvmd) is not running.</pre>
<p>So let's start it,</p>
<pre>[root@test03]# service clvmd status
clvmd is stopped
[root@test03]# service clvmd start
Starting clvmd:
Activating VG(s): 2 logical volume(s) in volume group "VolGroup00" now active
clvmd not running on node test04
[ OK ]
[root@test03]# chkconfig clvmd on</pre>
<p>Okay, over on the other node:</p>
<pre>[root@test04]# service clvmd status
clvmd is stopped
[root@test04]# service clvmd start
Starting clvmd: clvmd could not connect to cluster manager
Consult syslog for more information
[root@test04]# service cman status
groupd is stopped
[root@test04]# service cman start
Starting cluster:
Loading modules... done
Mounting configfs... done
Starting ccsd... done
Starting cman... done
Starting daemons... done
Starting fencing... done
[ OK ]
[root@test04]# chkconfig cman on
[root@test04]# service luci status
luci is running...
[root@test04]# service ricci status
ricci (pid 4381) is running...
[root@test04]# chkconfig ricci on
[root@test04]# chkconfig luci on
[root@test04]# service clvmd start
Starting clvmd:
Activating VG(s): 2 logical volume(s) in volume group "VolGroup00" now active
[ OK ]</pre>
<p>And this time, no complaints:</p>
<pre>[root@test03]# service clvmd restart
Restarting clvmd: [ OK ]</pre>
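<p>For reference, a quick sanity check at this point that both nodes
really have joined the cluster (both should be listed as members):</p>
<pre>[root@test03]# cman_tool status
[root@test03]# cman_tool nodes</pre>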
<p>Try again with pvcreate:</p>
<pre>[root@test03]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created</pre>
<p>Create volume group:</p>
<pre>[root@test03]# vgcreate gdcache_vg /dev/sdb
Clustered volume group "gdcache_vg" successfully created</pre>
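<p>If you want to confirm the VG really is clustered, the trailing "c"
in the Attr column of vgs is the clustered flag:</p>
<pre>[root@test03]# vgs gdcache_vg</pre>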
<p>Create logical volume:</p>
<pre>[root@test03]# lvcreate -n gdcache_lv -L 2T gdcache_vg
Logical volume "gdcache_lv" created</pre>
<p>Create GFS filesystem, ahem, GFS2 filesystem. I screwed
this up the first time.</p>
<pre>[root@test03]# mkfs.gfs2 -j 8 -p lock_dlm -t gdcluster:gdcache -j 4 /dev/mapper/gdcache_vg-gdcache_lv
This will destroy any data on /dev/mapper/gdcache_vg-gdcache_lv.
It appears to contain a gfs filesystem.
Are you sure you want to proceed? [y/n] y
Device: /dev/mapper/gdcache_vg-gdcache_lv
Blocksize: 4096
Device Size 2048.00 GB (536870912 blocks)
Filesystem Size: 2048.00 GB (536870910 blocks)
Journals: 4
Resource Groups: 8192
Locking Protocol: "lock_dlm"
Lock Table: "gdcluster:gdcache"
UUID: 0542628C-D8B8-2480-F67D-081435F38606</pre>
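<p>(Note to self: I passed -j twice in that command; the later -j 4 is
what took effect, hence the "Journals: 4" line above. If this cluster
ever grows beyond four nodes, my understanding is that journals can be
added to the mounted filesystem later, roughly:)</p>
<pre>[root@test03]# gfs2_jadd -j 2 /data</pre>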
<p>Okay! And! Finally! We mount it!</p>
<pre>[root@test03]# mount /dev/mapper/gdcache_vg-gdcache_lv /data
/sbin/mount.gfs: fs is for a different cluster
/sbin/mount.gfs: error mounting lockproto lock_dlm</pre>
<p>Wawawwah. Bummer.<br>
/var/log/messages says:</p>
<pre>Jan 19 14:21:05 test03 gfs_controld[3369]: mount: fs requires cluster="gdcluster" current="gdao_cluster"</pre>
<p>Someone on the interwebs concurs:</p>
<p> the cluster name defined in /etc/cluster/cluster.conf is
different from the one tagged on the GFS volume.</p>
<p>Okay, so looking at cluster.conf:</p>
<pre>[root@test03]# vi /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="25" name="gdao_cluster"></pre>
<p>Let's change that to match the cluster name I gave with -t in the
mkfs.gfs2 command above:</p>
<pre>[root@test03]# vi /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="25" name="gdcluster"></pre>
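<p>(In hindsight, I believe the tidier way to change cluster.conf is to
bump config_version in the file (say to 26) and push the new config to
the other nodes instead of restarting everything, something like the
following, though I didn't try it here:)</p>
<pre>[root@test03]# ccs_tool update /etc/cluster/cluster.conf
[root@test03]# cman_tool version -r 26</pre>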
<p>And restart some stuff:</p>
<pre>[root@test03]# /etc/init.d/gfs2 stop
[root@test03]# service luci stop
Shutting down luci: [ OK ]
[root@test03]# service ricci stop
Shutting down ricci: [ OK ]
[root@test03]# service cman stop
Stopping cluster:
Stopping fencing... done
Stopping cman... failed
/usr/sbin/cman_tool: Error leaving cluster: Device or resource busy
[FAILED]
[root@test03]# cman_tool leave force
[root@test03]# service cman stop
Stopping cluster:
Stopping fencing... done
Stopping cman... done
Stopping ccsd... done
Unmounting configfs... done
[ OK ]</pre>
<p>AAAARRRRGGGHGHHH</p>
<pre>[root@test03]# service ricci start
Starting ricci: [ OK ]
[root@test03]# service luci start
Starting luci: [ OK ]
Point your web browser to <a href="https://test03.gdao.ucsc.edu:8084" title="https://test03.gdao.ucsc.edu:8084">https://test03.gdao.ucsc.edu:8084</a> to access luci
[root@test03]# service gfs2 start
[root@test03]# service cman start
Starting cluster:
Loading modules... done
Mounting configfs... done
Starting ccsd... done
Starting cman... done
Starting daemons... done
Starting fencing... failed
[FAILED]</pre>
<p>I had to reboot. </p>
<pre>[root@test03]# service luci status
luci is running...
[root@test03]# service ricci status
ricci (pid 4385) is running...
[root@test03]# service cman status
cman is running.
[root@test03]# service gfs2 status</pre>
<p>Okay, again?</p>
<pre>[root@test03]# mount /dev/mapper/gdcache_vg-gdcache_lv /data</pre>
<p>Did that just work? And on test04:</p>
<pre>[root@test04]# mount /dev/mapper/gdcache_vg-gdcache_lv /data</pre>
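<p>A quick way to confirm both mounts really came up as gfs2:</p>
<pre>[root@test03]# mount | grep /data
[root@test04]# mount | grep /data</pre>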
<p>Okay, how about a test:</p>
<pre>[root@test03]# touch /data/killme</pre>
<p>And then we look on the other node:</p>
<pre>[root@test04]# ls /data
killme</pre>
<p>Holy shit. <br>
I've been working so hard for this moment that I don't
completely know what to do now.<br>
Question is, now that I have two working nodes, can I
duplicate it?</p>
<p> Okay, finish up:</p>
<pre>[root@test03]# chkconfig rgmanager on
[root@test03]# service rgmanager start
Starting Cluster Service Manager: [ OK ]
[root@test03]# vi /etc/fstab
/dev/mapper/gdcache_vg-gdcache_lv /data gfs2 defaults,noatime,nodiratime 0 0</pre>
<p>and on the other node:</p>
<pre>[root@test04]# chkconfig rgmanager on
[root@test04]# service rgmanager start
Starting Cluster Service Manager:
[root@test04]# vi /etc/fstab
/dev/mapper/gdcache_vg-gdcache_lv /data gfs2 defaults,noatime,nodiratime 0 0</pre>
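<p>One more thing I think is worth doing on both nodes, so the fstab
entry gets mounted by the gfs2 init script after cman and clvmd are up
rather than by the normal boot-time mount:</p>
<pre>[root@test03]# chkconfig gfs2 on
[root@test04]# chkconfig gfs2 on</pre>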
<p> And it works. Hell, yeah.<br>
</p>
</blockquote>
<br>
</body>
</html>