[Linux-cluster] GFS reserved blocks?

Jason Huddleston jason.huddleston at verizon.com
Mon Oct 13 20:39:26 UTC 2008


Sweet. Maybe your notes will save someone else some time. I know the FAQ 
was a great resource for me when I set up my first GFS cluster.

---
Jay

Shawn Hood wrote:
> Someone give me write access to the FAQ!  I've been compiling these
> undocumented (or hard to find) bits of knowledge for some time now.
>
>
> Shawn
>
> On Mon, Oct 13, 2008 at 4:29 PM, Kevin Anderson <kanderso at redhat.com> wrote:
>   
>> For gfs, the recommended solution is to periodically run gfs_tool
>> reclaim on your filesystems at a time of your choosing.  Depending on
>> the frequency of your deletes, this might be once a day or once a week.
>> The only downside is that during the reclaim operation, the filesystem
>> is locked from other activities.  As the reclaim is relatively fast,
>> this doesn't really cause a problem, but scheduling the command to be
>> run during "idle" times of the day will mitigate the impact.
>>
>> We attempted to come up with a method of doing this automatically, but
>> there are deadlock issues between gfs and the vfs layer that prevent it
>> from being implemented.  In addition, there is still the question of
>> when the right time to do the reclaim is, and that would be
>> application specific.
>>
>> So, just run gfs_tool reclaim if your storage is being consumed by
>> metadata.
>>
>> Kevin
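
[A sketch, not from the thread: one way to schedule the periodic reclaim
Kevin describes during an idle window via cron. The mount point, time, and
log path are placeholders, and the piped "y" is only needed if your
gfs_tool build asks for confirmation -- check interactively first.]

# /etc/cron.d/gfs-reclaim (hypothetical)
# Reclaim freed metadata blocks on /l1load1 every night at 03:30.
30 3 * * * root echo y | /sbin/gfs_tool reclaim /l1load1 >> /var/log/gfs-reclaim.log 2>&1
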
>>
>> On Mon, 2008-10-13 at 15:18 -0500, Jason Huddleston wrote:
>>     
>>> I've been watching mine do this for about two months now. I think it
>>> started when I upgraded from RHEL 4.5 to 4.6. The app team only has
>>> about 18 GB used on that 1.7 TB filesystem, but they create and delete
>>> a lot of files because that is the loading area they use when new data
>>> comes in. In the last month I have seen it go up to 70-85% used, but
>>> it usually comes back down to about 50% within 24 hours.
>>> Hopefully they will find a fix for this soon.
>>>
>>> ---
>>> Jay
>>>
>>> Shawn Hood wrote:
>>>       
>>>> I actually just ran the reclaim on a live filesystem and it seems to
>>>> be working okay now.  Hopefully this isn't problematic, as a large
>>>> number of operations in the GFS tool suite operate on mounted
>>>> filesystems.
>>>>
>>>> Shawn
>>>>
>>>> On Mon, Oct 13, 2008 at 4:00 PM, Jason Huddleston
>>>> <jason.huddleston at verizon.com> wrote:
>>>>
>>>>         
>>>>> Shawn,
>>>>>   I have been seeing the same thing on one of my clusters (shown below)
>>>>> under Red Hat 4.6. I found some details on this in an article on the
>>>>> open-sharedroot web site
>>>>> (http://www.open-sharedroot.org/faq/troubleshooting-guide/file-systems/gfs/file-system-full)
>>>>> and an article in Red Hat's knowledge base
>>>>> (http://kbase.redhat.com/faq/FAQ_78_10697.shtm). It seems to be a bug in
>>>>> the reclaim of metadata blocks when an inode is released. I saw a patch
>>>>> for this (bz298931) in the 2.99.10 cluster release notes, but it was
>>>>> reverted a few days after it was submitted. The only suggestion I have
>>>>> gotten back from Red Hat is to shut down the app so the GFS drives are
>>>>> not being accessed and then run the "gfs_tool reclaim <mount point>"
>>>>> command.
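
[A sketch of that workaround, not a verbatim procedure from Red Hat: the
service name is a placeholder for whatever application writes to the GFS
mount.]

service loader-app stop          # quiesce the application using /l1load1
gfs_tool reclaim /l1load1        # return freed metadata blocks; answer the confirmation prompt if asked
service loader-app start
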
>>>>>
>>>>> [root at omzdwcdrp003 ~]# gfs_tool df /l1load1
>>>>> /l1load1:
>>>>> SB lock proto = "lock_dlm"
>>>>> SB lock table = "DWCDR_prod:l1load1"
>>>>> SB ondisk format = 1309
>>>>> SB multihost format = 1401
>>>>> Block size = 4096
>>>>> Journals = 20
>>>>> Resource Groups = 6936
>>>>> Mounted lock proto = "lock_dlm"
>>>>> Mounted lock table = "DWCDR_prod:l1load1"
>>>>> Mounted host data = ""
>>>>> Journal number = 13
>>>>> Lock module flags =
>>>>> Local flocks = FALSE
>>>>> Local caching = FALSE
>>>>> Oopses OK = FALSE
>>>>>
>>>>> Type           Total          Used           Free           use%
>>>>> ------------------------------------------------------------------------
>>>>> inodes         155300         155300         0              100%
>>>>> metadata       2016995        675430         1341565        33%
>>>>> data           452302809      331558847      120743962      73%
>>>>> [root at omzdwcdrp003 ~]# df -h /l1load1
>>>>> Filesystem            Size  Used Avail Use% Mounted on
>>>>> /dev/mapper/l1load1--vg-l1load1--lv
>>>>>                    1.7T  1.3T  468G  74% /l1load1
>>>>> [root at omzdwcdrp003 ~]# du -sh /l1load1
>>>>> 18G     /l1load1
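
[For reference, a quick way to confirm that a reclaim actually returns
blocks is to compare the gfs_tool df counters before and after -- a sketch
using the mount point from the output above.]

gfs_tool df /l1load1 | grep -E 'inodes|metadata|data'   # note the Used/Free columns
# ... run gfs_tool reclaim /l1load1 at a quiet time ...
gfs_tool df /l1load1 | grep -E 'inodes|metadata|data'   # Free should increase if blocks were reclaimed
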
>>>>>
>>>>> ----
>>>>> Jason Huddleston, RHCE
>>>>> ----
>>>>> PS-USE-Linux
>>>>> Partner Support - Unix Support and Engineering
>>>>> Verizon Information Processing Services
>>>>>
>>>>>
>>>>>
>>>>> Shawn Hood wrote:
>>>>>
>>>>>           
>>>>>> Does GFS reserve blocks for the superuser, a la ext3's "Reserved block
>>>>>> count"?  I've had a ~1.1TB FS report that it's full while df reports
>>>>>> ~100GB remaining.
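
[Side note, not part of the thread: the ext3 figure Shawn is comparing
against can be read with tune2fs; the device name here is just a
placeholder.]

tune2fs -l /dev/sdX1 | grep -i 'reserved block count'
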
>>>>>>
>>>>>>
>>>>>>
>>>>>>             


