What uses these 50 GB?

Eric Sandeen sandeen at redhat.com
Sun Aug 17 19:36:23 UTC 2014


On 8/17/14, 2:05 PM, Roland Olbricht wrote:
> Hello Eric,
> 
> thank you for the quick reply and the explanations.
> 
>> dumpe2fs -h output might show us that.
> 
> Filesystem volume name:   <none>

...

> Filesystem created:       Wed Apr 16 06:31:53 2014
> Last mount time:          Wed Apr 16 06:48:26 2014
> Last write time:          Wed Apr 16 06:48:26 2014
> Mount count:              1
> Maximum mount count:      -1
> Last checked:             Wed Apr 16 06:31:53 2014
> Check interval:           0 (<none>)
> Lifetime writes:          133 MB
> Reserved blocks uid:      0 (user root)
> Reserved blocks gid:      0 (group root)
> First inode:              11
> Inode size:               256
> Required extra isize:     28
> Desired extra isize:      28
> Journal inode:            8
> First orphan inode:       8388637

...

Ok, pretty standard, and it appears that you mkfs'd it in April,
mounted it once, and have never unmounted it since.  (Nor has it ever
crashed, apparently.)

Also, the last line above shows that there is at least one orphan
inode: an inode that is still open but has been unlinked.  Its blocks
aren't freed until the file is closed; that may be where all the
space is.
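
(A quick sanity check, with /mnt/yourdisk standing in for wherever
/dev/sdc is actually mounted: compare what the filesystem reports as
used against what the visible files add up to:

  df -h /mnt/yourdisk
  sudo du -sh /mnt/yourdisk

If df reports far more used space than du can account for, the
difference is usually being held by open-but-unlinked files.)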

> 
>> It could also be open but unlinked files, or unprocessed orphan inodes
>> after a crash.  Have you run e2fsck?
> 
> No, not yet. Should I do this routinely? Running
> 
> sudo e2fsck -fn /dev/sdc
> 
> gives the following result:
> 
> Warning!  /dev/sdc is mounted.
> Warning: skipping journal recovery because doing a read-only filesystem check.

and so the rest isn't useful.  You'd have to unmount it to check it.
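
(Sketching it out, using the same device name as your command above:

  sudo umount /dev/sdc
  sudo e2fsck -f /dev/sdc

e2fsck on the unmounted device will replay the journal if needed and
process the orphan inode list, freeing the blocks those inodes hold.)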

...

> So I assume the best thing to do here would be to unmount the disk and do an e2fsck?

Probably so, or figure out who has that orphan inode open... if that process lets
go, the space may free up; if it doesn't let go, you won't be able to unmount...
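
(To find the holder without unmounting, something like this should
work, again with /mnt/yourdisk standing in for the real mount point:

  sudo lsof +L1 /mnt/yourdisk

That lists open files whose link count has dropped to zero, along with
the PID and command holding them; once that process closes the file or
exits, the kernel can free the space.)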

-Eric




