[Linux-cluster] Fwd: GFS volume hangs on 3 nodes after gfs_grow

Alan A alan.zg at gmail.com
Fri Sep 26 19:38:45 UTC 2008



On Fri, Sep 26, 2008 at 2:06 PM, Alan A <alan.zg at gmail.com> wrote:

>
> I have been able to recreate the problem with gfs_grow. Here is the output
> of the test command and the actual command, along with /var/log/messages -
> all from node3. I am opening a ticket with RH and will give you the
> ticket number afterwards.
>
>
> [root at dev03 /]# gfs_grow -v -T /lvm_test2
> FS: Mount Point: /lvm_test2
> FS: Device: /dev/mapper/gfs_sdb1-gfs_sdb1
> FS: Options: rw,hostdata=jid=2:id=262146:first=0
> FS: Size: 1572864
> RGRP: Current Resource Group List:
> RI: Addr 1328945, RgLen 15, Start 1328960, DataLen 243904, BmapLen 60976
> RI: Addr 1310720, RgLen 2, Start 1310722, DataLen 18220, BmapLen 4555
> RI: Addr 1250100, RgLen 4, Start 1250104, DataLen 60616, BmapLen 15154
> RI: Addr 1189480, RgLen 4, Start 1189484, DataLen 60616, BmapLen 15154
> RI: Addr 1128860, RgLen 4, Start 1128864, DataLen 60616, BmapLen 15154
> RI: Addr 1068240, RgLen 4, Start 1068244, DataLen 60616, BmapLen 15154
> RI: Addr 1007620, RgLen 4, Start 1007624, DataLen 60616, BmapLen 15154
> RI: Addr 947000, RgLen 4, Start 947004, DataLen 60616, BmapLen 15154
> RI: Addr 886380, RgLen 4, Start 886384, DataLen 60616, BmapLen 15154
> RI: Addr 825760, RgLen 4, Start 825764, DataLen 60616, BmapLen 15154
> RI: Addr 765140, RgLen 4, Start 765144, DataLen 60616, BmapLen 15154
> RI: Addr 704512, RgLen 4, Start 704516, DataLen 60624, BmapLen 15156
> RI: Addr 545589, RgLen 4, Start 545593, DataLen 60612, BmapLen 15153
> RI: Addr 484970, RgLen 4, Start 484974, DataLen 60612, BmapLen 15153
> RI: Addr 424351, RgLen 4, Start 424355, DataLen 60612, BmapLen 15153
> RI: Addr 363732, RgLen 4, Start 363736, DataLen 60612, BmapLen 15153
> RI: Addr 303113, RgLen 4, Start 303117, DataLen 60612, BmapLen 15153
> RI: Addr 242494, RgLen 4, Start 242498, DataLen 60612, BmapLen 15153
> RI: Addr 181875, RgLen 4, Start 181879, DataLen 60612, BmapLen 15153
> RI: Addr 121256, RgLen 4, Start 121260, DataLen 60612, BmapLen 15153
> RI: Addr 60637, RgLen 4, Start 60641, DataLen 60612, BmapLen 15153
> RI: Addr 17, RgLen 4, Start 21, DataLen 60616, BmapLen 15154
> RGRP: 22 Resource groups in total
> JRNL: Current Journal List:
> JI: Addr 671744 NumSeg 2048 SegSize 16
> JI: Addr 638976 NumSeg 2048 SegSize 16
> JI: Addr 606208 NumSeg 2048 SegSize 16
> JRNL: 3 Journals in total
> DEV: Size: 1703936
> RGRP: New Resource Group List:
> RI: Addr 1572864, RgLen 9, Start 1572873, DataLen 131060, BmapLen 32765
> RGRP: 1 Resource groups in total
>
> [root at dev03 /]# gfs_grow -v /lvm_test2
> FS: Mount Point: /lvm_test2
> FS: Device: /dev/mapper/gfs_sdb1-gfs_sdb1
> FS: Options: rw,hostdata=jid=2:id=262146:first=0
> FS: Size: 1572864
> RGRP: Current Resource Group List:
> RI: Addr 1328945, RgLen 15, Start 1328960, DataLen 243904, BmapLen 60976
> RI: Addr 1310720, RgLen 2, Start 1310722, DataLen 18220, BmapLen 4555
> RI: Addr 1250100, RgLen 4, Start 1250104, DataLen 60616, BmapLen 15154
> RI: Addr 1189480, RgLen 4, Start 1189484, DataLen 60616, BmapLen 15154
> RI: Addr 1128860, RgLen 4, Start 1128864, DataLen 60616, BmapLen 15154
> RI: Addr 1068240, RgLen 4, Start 1068244, DataLen 60616, BmapLen 15154
> RI: Addr 1007620, RgLen 4, Start 1007624, DataLen 60616, BmapLen 15154
> RI: Addr 947000, RgLen 4, Start 947004, DataLen 60616, BmapLen 15154
> RI: Addr 886380, RgLen 4, Start 886384, DataLen 60616, BmapLen 15154
> RI: Addr 825760, RgLen 4, Start 825764, DataLen 60616, BmapLen 15154
> RI: Addr 765140, RgLen 4, Start 765144, DataLen 60616, BmapLen 15154
> RI: Addr 704512, RgLen 4, Start 704516, DataLen 60624, BmapLen 15156
> RI: Addr 545589, RgLen 4, Start 545593, DataLen 60612, BmapLen 15153
> RI: Addr 484970, RgLen 4, Start 484974, DataLen 60612, BmapLen 15153
> RI: Addr 424351, RgLen 4, Start 424355, DataLen 60612, BmapLen 15153
> RI: Addr 363732, RgLen 4, Start 363736, DataLen 60612, BmapLen 15153
> RI: Addr 303113, RgLen 4, Start 303117, DataLen 60612, BmapLen 15153
> RI: Addr 242494, RgLen 4, Start 242498, DataLen 60612, BmapLen 15153
> RI: Addr 181875, RgLen 4, Start 181879, DataLen 60612, BmapLen 15153
> RI: Addr 121256, RgLen 4, Start 121260, DataLen 60612, BmapLen 15153
> RI: Addr 60637, RgLen 4, Start 60641, DataLen 60612, BmapLen 15153
> RI: Addr 17, RgLen 4, Start 21, DataLen 60616, BmapLen 15154
> RGRP: 22 Resource groups in total
> JRNL: Current Journal List:
> JI: Addr 671744 NumSeg 2048 SegSize 16
> JI: Addr 638976 NumSeg 2048 SegSize 16
> JI: Addr 606208 NumSeg 2048 SegSize 16
> JRNL: 3 Journals in total
> DEV: Size: 1703936
> RGRP: New Resource Group List:
> RI: Addr 1572864, RgLen 9, Start 1572873, DataLen 131060, BmapLen 32765
> RGRP: 1 Resource groups in total
> Preparing to write new FS information...
> Done.
>
>
>
>
>
> Node3
>
> /var/log/messages
>
> Sep 26 13:28:13 dev03 clvmd: Cluster LVM daemon started - connected to CMAN
> Sep 26 13:28:17 dev03 kernel: GFS 0.1.23-5.el5_2.2 (built Aug 14 2008 17:08:35) installed
> Sep 26 13:28:17 dev03 kernel: Trying to join cluster "lock_dlm", "test1_cluster:gfs_fs1"
> Sep 26 13:28:17 dev03 kernel: Joined cluster. Now mounting FS...
> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_fs1.2: jid=2: Trying to acquire journal lock...
> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_fs1.2: jid=2: Looking at journal...
> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_fs1.2: jid=2: Done
> Sep 26 13:28:18 dev03 kernel: Trying to join cluster "lock_dlm", "test1_cluster:gfs_sdb1"
> Sep 26 13:28:18 dev03 kernel: Joined cluster. Now mounting FS...
> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_sdb1.2: jid=2: Trying to acquire journal lock...
> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_sdb1.2: jid=2: Looking at journal...
> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_sdb1.2: jid=2: Done
>
>
> On Fri, Sep 26, 2008 at 1:59 PM, Bob Peterson <rpeterso at redhat.com> wrote:
>
>> ----- "Alan A" <alan.zg at gmail.com> wrote:
>> | Again, thanks for the prompt response, Bob.
>> |
>> | I will try to reproduce the problem with gfs_grow.
>> |
>> | One more question regarding GFS - what steps would you recommend
>> | (if any) for growing and shrinking an active GFS volume?
>>
>> Hi Alan,
>>
>> Neither GFS nor GFS2 volumes can be shrunk.  Eventually I
>> need to start working on a gfs2_shrink tool for GFS2, but I
>> don't think GFS will ever be able to shrink.
>>
>> As for growing, it sounds like you're already familiar with
>> that.  You just do something like:
>>
>> lvresize or lvextend the logical volume
>> mount the gfs volume to a mount point
>> gfs_grow /your/mount/point
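>>
>> For example, a minimal sequence might look like this (the VG/LV names
>> and the size here are only placeholders - substitute your own):
>>
>>   lvextend -L +10G /dev/your_vg/your_lv        # grow the logical volume
>>   mount -t gfs /dev/your_vg/your_lv /your/mount/point   # skip if mounted
>>   gfs_grow -T /your/mount/point                # -T: test run, writes nothing
>>   gfs_grow /your/mount/point                   # grow the filesystem for real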
>>
>> It's probably safest to do gfs_grow when there is not a lot of
>> system activity.  For example, at night when the system is not
>> being beaten up by lots of I/O.
>>
>> Regards,
>>
>> Bob Peterson
>> Red Hat Clustering & GFS
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>
>
>
> --
> Alan A.
>



-- 
Alan A.

