[Linux-cluster] GFS reserved blocks?

Shawn Hood shawnlhood at gmail.com
Mon Oct 13 20:02:59 UTC 2008


I actually just ran the reclaim on a live filesystem and it seems to
be working okay now.  Hopefully this isn't problematic, as a large
number of operations in the GFS tool suite operate on mounted
filesystems.

Shawn
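For anyone hitting the same symptom, a minimal sketch of the check-and-reclaim sequence discussed in this thread (the mount point is an example; the usual advice, per Red Hat, is to quiesce applications on the filesystem before reclaiming, though as noted above it appears to run on a live mount):

```shell
#!/bin/sh
# Sketch only: assumes a GFS filesystem mounted at $MNT (example path).
MNT=/l1load1

# Show per-type block usage. An "inodes" row at 100% used with plenty of
# free data blocks matches the metadata-reclaim symptom in this thread.
gfs_tool df "$MNT"

# Compare against the generic view; df and du can disagree wildly here.
df -h "$MNT"

# Reclaim unlinked metadata blocks back to the free-data pool.
gfs_tool reclaim "$MNT"
```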

On Mon, Oct 13, 2008 at 4:00 PM, Jason Huddleston
<jason.huddleston at verizon.com> wrote:
> Shawn,
>   I have been seeing the same thing on one of my clusters (shown below)
> under Red Hat 4.6. I found some details on this under an article on the
> open-shared root web site
> (http://www.open-sharedroot.org/faq/troubleshooting-guide/file-systems/gfs/file-system-full)
> and an article in Red Hat's knowledge base
> (http://kbase.redhat.com/faq/FAQ_78_10697.shtm). It seems to be a bug in the
> reclaim of metadata blocks when an inode is released. I saw a patch
> (bz298931) released for this in the 2.99.10 cluster release notes but it was
> reverted (bz298931) a few days after it was submitted. The only suggestion
> that I have gotten back from Red Hat is to shut down the app so the GFS
> drives are not being accessed and then run the "gfs_tool reclaim <mount
> point>" command.
>
> [root at omzdwcdrp003 ~]# gfs_tool df /l1load1
> /l1load1:
> SB lock proto = "lock_dlm"
> SB lock table = "DWCDR_prod:l1load1"
> SB ondisk format = 1309
> SB multihost format = 1401
> Block size = 4096
> Journals = 20
> Resource Groups = 6936
> Mounted lock proto = "lock_dlm"
> Mounted lock table = "DWCDR_prod:l1load1"
> Mounted host data = ""
> Journal number = 13
> Lock module flags =
> Local flocks = FALSE
> Local caching = FALSE
> Oopses OK = FALSE
>
> Type           Total          Used           Free           use%
> ------------------------------------------------------------------------
> inodes         155300         155300         0              100%
> metadata       2016995        675430         1341565        33%
> data           452302809      331558847      120743962      73%
> [root at omzdwcdrp003 ~]# df -h /l1load1
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/mapper/l1load1--vg-l1load1--lv
>                    1.7T  1.3T  468G  74% /l1load1
> [root at omzdwcdrp003 ~]# du -sh /l1load1
> 18G     /l1load1
>
> ----
> Jason Huddleston, RHCE
> ----
> PS-USE-Linux
> Partner Support - Unix Support and Engineering
> Verizon Information Processing Services
>
>
>
> Shawn Hood wrote:
>>
>> Does GFS reserve blocks for the superuser, a la ext3's "Reserved block
>> count"?  I've had a ~1.1TB FS report that it's full with df reporting
>> ~100GB remaining.
>>
>>
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>



-- 
Shawn Hood
910.670.1819 m
