Directory Size problem

Dan Track dan.track at gmail.com
Fri Dec 1 12:09:25 UTC 2006


On 11/30/06, Steven W. Orr <steveo at syslang.net> wrote:
> On Thursday, Nov 30th 2006 at 08:51 -0000, quoth Dan Track:
>
> =>Thank you very much for the gem of a tip. That is the issue.
> =>
> =>Many Thanks
> =>Dan
>
> Sorry, but I believe that this is not the issue. The size of a directory
> is in fact determined by the maximum number of files that the directory
> ever contained. (Something in the back of my mind seems to remember
> something about the number of files in a directory exceeding 188 causing
> the directory to go from 1 block to 2 blocks. I'm probably wrong about the
> exact number.) Your problem is in the 3Gig region so the size of the
> directory is not an issue. Don't confuse the size of the directory with
> the size of the content of the directory.
>
> Every file has two counters on it in the file system. One counter is the
> number of links on the file. Every file is allowed to have multiple names.
> That's what the ln command is all about. (That's ln without the -s
> option.) If you delete a file then the file doesn't actually get its
> storage reclaimed unless the last name to the file is deleted. That's why
> the system call to delete a file is called unlink(2). The other counter
> that exists on a file is the open count. If a file is opened and then
> deleted, the file storage is not reclaimed until the last open channel is
> closed. If a file is opened by multiple channels then this phenomenon will
> occur.
>
> Just for fun, create a big file for yourself:
>
> dd if=/dev/zero of=/tmp/foo count=1M
> ln /tmp/foo /tmp/bar
> { read ; sleep 1000; } < /tmp/foo &
> { read ; sleep 1000; } < /tmp/bar &
> rm /tmp/foo
>
> Play with these commands and you'll see all of this stuff come into play.
>
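The two counters Steven describes can also be reproduced as a self-contained sketch. This is an illustration only, not his exact commands; the paths and sizes below are arbitrary:

```shell
#!/bin/sh
# Illustrative sketch of the two counters described above; paths and
# sizes are arbitrary, not from the original message.
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/big" bs=1M count=8 2>/dev/null

exec 3< "$tmpdir/big"        # open count: hold the file on descriptor 3
rm "$tmpdir/big"             # link count: unlink(2) removes the last name

leftover=$(ls -A "$tmpdir")  # the directory now lists nothing, yet the
                             # blocks stay allocated until the last open
exec 3<&-                    # descriptor is closed
rmdir "$tmpdir"
```

While the descriptor is open, du on the directory no longer sees the file, but df still counts its blocks, which is exactly the mismatch described earlier in the thread.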

Hi

Thanks for the explanation. I had assumed that the person was right, as
I had no other idea why I would have this problem. If I run du or df -h
I get to see where the disk usage is being taken up, and du shows that
it's /var/spool/mqueue. But if I run ls -lhd /var/spool/mqueue it
reports the directory to be only 300k. Can you suggest any other
possibility?

I've run the sync command as well, but to no avail.
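The difference between the two commands can be sketched as follows (paths and sizes are arbitrary): ls -lhd reports the size of the directory entry itself, while du -sh sums the contents under it.

```shell
#!/bin/sh
# Minimal sketch contrasting the size of a directory entry with the
# size of its contents; paths and sizes are arbitrary.
d=$(mktemp -d)
dd if=/dev/zero of="$d/file" bs=1M count=4 2>/dev/null

ls -lhd "$d"                 # size of the directory entry itself (small)
du -sh "$d"                  # total size of what the directory contains

contents_kb=$(du -sk "$d" | cut -f1)
rm -r "$d"
```

A directory reported at 300k by ls -lhd can therefore still hold gigabytes of file content.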

Thanks in advance
Dan




More information about the fedora-list mailing list