A hard link problem

mark m.roth2006 at rcn.com
Wed Apr 15 04:03:10 UTC 2009


Cameron Simpson wrote:
> On 14Apr2009 21:30, mark <m.roth2006 at rcn.com> wrote:
> | Cameron Simpson wrote:
> | > On 14Apr2009 10:34, m.roth2006 at rcn.com <m.roth2006 at rcn.com> wrote:
> | > | We're backing up using rsync and hard links. The problem is that the
> | > | fs is filling up *fast*.
> | > 
> | > Is this the usual scheme of:
> | >   - hard link previous backup tree to new backup tree
> | >   - rsync to new backup tree
> | > ?
> | > (I gather you can nearly do that in one rsync step these days.)
> | > Anyway, is that what you mean?
> | 
> | You can do it in one step with rsync these days. However, there's still too
> | much.
> 
> Have you checked if there are files whose content is the same but have
> changed metadata (mtime, ownership, etc.)? You could run an md5sum over

When I do an ll -R on one set of directories, I get just over 10k files. When
I do ls -liR | sort -u, I get about 3500.
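To spell that out a bit (directory names made up, GNU find assumed): the
snapshots are the usual one-step --link-dest kind, roughly

    rsync -a --delete --link-dest=/backups/daily.1/ /data/ /backups/daily.0/

and the two counts above are more or less the difference between

    find /backups/daily.0 -type f | wc -l                            # ~10k names
    find /backups/daily.0 -type f -printf '%i\n' | sort -u | wc -l   # ~3500 inodes

so most of those names are just hard links back into the previous tree.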
<snip>
> | I'm in the process, probably successfully, of convincing my boss to let
> | me do compressed tar backups of the rsync'd directories, with
> | beginning-of-month and weekly fulls, and the rest incrementals.
> 
> I presume keeping a bunch of hardlinked backups and tarring the older
> stuff? That way you have easy restore, partial restore and browsing.

Right. I don't think we need to keep anything older than a week or two that
isn't tarred and feathered, er, compressed.
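
Concretely, what I have in mind is along these lines (GNU tar; the archive
and snapshot-file names are made up):

    # start of the month/week: no .snar file yet, so this is a full backup
    tar czf /archive/2009-04-full.tar.gz \
        --listed-incremental=/archive/2009-04.snar /backups/current
    # rest of the period: same .snar file, so only changed files go in
    tar czf /archive/2009-04-15-incr.tar.gz \
        --listed-incremental=/archive/2009-04.snar /backups/current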
> 
> I also have a script for pruning forests of this kind of backup; it loops
> (optional) removing the oldest backup until sufficient free space is
> available, keeping a specifiable minimum number of backups around. You
> may find it useful, or adapt it.

It's more that we need a retention policy: keep things for so long, then off
to tape somewhere.
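
Though for the record, the shape of such a loop is simple enough. This isn't
Cameron's script, just the idea, with made-up paths and thresholds:

    #!/bin/sh
    # Drop the oldest snapshot until there is enough free space,
    # but never keep fewer than MIN_KEEP snapshots.
    DIR=/backups           # dated snapshot directories live here
    MIN_KEEP=14            # minimum number of snapshots to keep
    WANT_FREE=10485760     # target free space in 1K blocks (~10GB)

    while :; do
        free=$(df -Pk "$DIR" | awk 'NR==2 {print $4}')
        n=$(ls -1d "$DIR"/????-??-?? 2>/dev/null | wc -l)
        [ "$free" -ge "$WANT_FREE" ] && break
        [ "$n" -le "$MIN_KEEP" ] && break
        rm -rf "$(ls -1d "$DIR"/????-??-?? | head -n 1)"
    done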

Actually, we need an enterprise backup solution, but they haven't sprung for
that yet.
	
	mark



