<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
I've been watching mine do this for about two months now. I think it
started when I upgraded from RHEL 4.5 to 4.6. The app team only has
about 18 GB used on that 1.7TB drive, but they create and delete a lot
of files because that is the loading area they use when new data comes
in. In the last month I have seen usage go up to 70-85%, but it
usually comes back down to about 50% within 24 hours. Hopefully
they will find a fix for this soon.<br>
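<br>
One way to watch the swing is to log both views of the usage from
cron; this is just a minimal monitoring sketch (mount point taken from
Jason's output below, log path arbitrary):<br>
<pre wrap="">date                           >> /var/tmp/gfs-usage.log
df -h /l1load1                 >> /var/tmp/gfs-usage.log
gfs_tool df /l1load1 | tail -5 >> /var/tmp/gfs-usage.log
</pre>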
<br>
---<br>
Jay<br>
<br>
Shawn Hood wrote:
<blockquote
cite="mid:cfe2fc960810131302j3a3e4d40x7a787bc8c1951770@mail.gmail.com"
type="cite">
<pre wrap="">I actually just ran the reclaim on a live filesystem and it seems to
be working okay now. Hopefully this isn't problematic, as a large
number of operations in the GFS tool suite operate on mounted
filesystems.
Shawn
On Mon, Oct 13, 2008 at 4:00 PM, Jason Huddleston
<a class="moz-txt-link-rfc2396E" href="mailto:jason.huddleston@verizon.com"><jason.huddleston@verizon.com></a> wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Shawn,
I have been seeing the same thing on one of my clusters (shown below)
under Red Hat 4.6. I found some details on this under an article on the
open-shared root web site
(<a class="moz-txt-link-freetext" href="http://www.open-sharedroot.org/faq/troubleshooting-guide/file-systems/gfs/file-system-full">http://www.open-sharedroot.org/faq/troubleshooting-guide/file-systems/gfs/file-system-full</a>)
and an article in Red Hat's knowledge base
(<a class="moz-txt-link-freetext" href="http://kbase.redhat.com/faq/FAQ_78_10697.shtm">http://kbase.redhat.com/faq/FAQ_78_10697.shtm</a>). It seems to be a bug in the
reclaim of metadata blocks when an inode is released. I saw a patch
(bz298931) released for this in the 2.99.10 cluster release notes but it was
reverted (bz298931) a few days after it was submitted. The only suggestion
that I have gotten back from Red Hat is to shutdown the app so the GFS
drives are not being accessed and then run the "gfs_tool reclaim <mount
point>" command.
[root@omzdwcdrp003 ~]# gfs_tool df /l1load1
/l1load1:
  SB lock proto = "lock_dlm"
  SB lock table = "DWCDR_prod:l1load1"
  SB ondisk format = 1309
  SB multihost format = 1401
  Block size = 4096
  Journals = 20
  Resource Groups = 6936
  Mounted lock proto = "lock_dlm"
  Mounted lock table = "DWCDR_prod:l1load1"
  Mounted host data = ""
  Journal number = 13
  Lock module flags =
  Local flocks = FALSE
  Local caching = FALSE
  Oopses OK = FALSE

  Type       Total        Used         Free         use%
  ------------------------------------------------------------------------
  inodes     155300       155300       0            100%
  metadata   2016995      675430       1341565      33%
  data       452302809    331558847    120743962    73%
[root@omzdwcdrp003 ~]# df -h /l1load1
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/l1load1--vg-l1load1--lv
                      1.7T  1.3T  468G  74% /l1load1
[root@omzdwcdrp003 ~]# du -sh /l1load1
18G     /l1load1
----
Jason Huddleston, RHCE
----
PS-USE-Linux
Partner Support - Unix Support and Engineering
Verizon Information Processing Services
Shawn Hood wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Does GFS reserve blocks for the superuser, a la ext3's "Reserved block
count"? I've had a ~1.1TB FS report that it's full with df reporting
~100GB remaining.
</pre>
</blockquote>
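<pre wrap="">(For comparison, ext3's root reservation can be checked with tune2fs;
a minimal example, device name hypothetical:

    tune2fs -l /dev/sda1 | grep -i "reserved block count"

In this thread the missing space turned out to be unreclaimed GFS
metadata rather than any such reservation.)
</pre>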
<pre wrap="">
--
Linux-cluster mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Linux-cluster@redhat.com">Linux-cluster@redhat.com</a>
<a class="moz-txt-link-freetext" href="https://www.redhat.com/mailman/listinfo/linux-cluster">https://www.redhat.com/mailman/listinfo/linux-cluster</a>
</pre>
</blockquote>
<pre wrap=""><!---->
</pre>
</blockquote>
<br>
</body>
</html>