A hard link problem
m.roth2006 at rcn.com
Tue Apr 14 14:34:35 UTC 2009
We're backing up using rsync and hard links. The problem is that the fs is filling up *fast*.
According to df, the filesystem has 154814444 1K-blocks, with 108694756 used and 38255576 available (74%).
According to du -s -k, I've got 123176708K in use, which appears larger than df's used figure (unless it's too early in the morning for me to read that right).
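For what it's worth, du already charges each hard-linked inode only once per invocation, which matters when comparing its total against df. A quick sanity check on a scratch directory (the paths here are made up for illustration) might look like:

```shell
# Create a 64K file plus a hard link to it in a scratch directory;
# du should count the data blocks once, not twice.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/a" bs=1024 count=64 2>/dev/null
ln "$tmp/a" "$tmp/b"
du -s -k "$tmp"        # roughly the file's 64K plus directory overhead, not 128K
rm -rf "$tmp"
```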
Now, ls -Ri | wc -l on one directory shows 10765 entries, while ls -Ri | awk '{print $1;}' | sort -u | wc -l in the same directory shows 3274 unique inodes, so yeah, there are a lot of hard links. What I need to figure out, so that we don't blow out the filesystem, is how much space is *really* in use. I'd like something a bit faster and more elegant than, say, ls -Ri | awk '{print $1;}' > filelist, then a shell script that loops over each inode $inum in the list running find /backup -inum $inum -ls | awk '{print $7;}' >> total, and finally awk '{total += $1;} END {print total;}' total.
That would be a mess....
Suggestions?
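One single-pass sketch of the same computation, assuming GNU find (for -printf) and that /backup sits on one filesystem (inode numbers are only unique per filesystem):

```shell
# Print "inode size" for every regular file under /backup,
# collapse hard links (same inode => identical line, so sort -u drops
# the repeats), then sum the surviving sizes.
find /backup -xdev -type f -printf '%i %s\n' \
    | sort -u \
    | awk '{total += $2} END {printf "%d bytes\n", total}'
```

Note that %s is the apparent size in bytes; to count allocated blocks instead (closer to what df reports), GNU find's %b directive (512-byte blocks) could be summed the same way.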
mark
More information about the redhat-list mailing list