[rhn-users] nfs limit on tarred files?

cbeerse at gmail.com cbeerse at gmail.com
Wed Jun 8 10:12:47 UTC 2005


Doctor Khumalo wrote:

> Hi there -
> 
> I'm having a problem with NFS mounts. This occurs in all flavors of Red 
> Hat. Here is what happens:
> 
> I have an nfs mount from a server (python:/backup) to a directory called 
> /backup on my local server, monty. After installing RHEL 4 on monty, I 
> mount the nfs server (python:/backup) and then try to copy over tarred 
> directories from the newly mounted /backup. The directory is 34 GB and 
> is tarred into a single .tar file. When I try to untar the 
> directory/files to the / partition, it freezes and stops copying files 
> over. Here's what I do:
> 
> cd /
> tar -xf /backups/files.tar

As far as I can tell, either tar or nfs (or both) cannot handle files that large.

The best thing to do is to use tar in a pipeline. Start on the machine where the 
tar-file (or the actual data) is and do something like this (a one-liner!):

cat large_file.tar | ( rsh targetmachine '( cd /path/to/target; tar xvf - ) ' )


You can use `ssh` in place of `rsh`; most current systems do not allow rsh and 
have ssh available by default. If you want to pick up the source tree itself, 
not the tarfile, the command is:

tar cvf - . | ( ssh targetmachine '( cd /path/to/target; tar xvf - ) ' )
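
Applied to your situation (assuming monty can ssh into python as a user that can 
read the tarfile, and that it lives under /backup on python), a pull in the other 
direction would look something like:

ssh python 'cat /backup/files.tar' | ( cd / ; tar xvf - )

Here the data travels over the ssh connection instead of over the NFS mount, so 
the local tar only ever sees a stream on its stdin.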



btw, in the above examples tar has no limit on the amount of data, since it does 
not write to (nor read from) a single large file; it works on a stream of data. It 
also never touches the nfs filesystem, so any file-size limits there are avoided too.
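
If the link between the two machines is slow, the same stream can be compressed 
on the fly. A rough sketch, with gzip added on both ends of the pipeline (same 
placeholder names as above):

tar cvf - . | gzip -c | ssh targetmachine '( cd /path/to/target ; gunzip -c | tar xvf - )'

The compression only costs some CPU on each end; the tar stream itself is unchanged.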


CBee



