rsync or rdist

Herta Van den Eynde herta.vandeneynde at gmail.com
Tue Mar 11 12:48:28 UTC 2008


On 11/03/2008, Aubin, Jean-Francois <jean-francois.aubin at cgi.com> wrote:
>
> use rsync with the switches -ropcl.
>   Ex: rsync -ropcl --stats --progress /rep_src username at other_srv:/storage/archive
>
> rsync with this method is very safe; it verifies checksums.  We also use
> it with a generated ssh key.
>
>
> J-F Aubin
>
> Le mardi 11 mars 2008 à 12:47 +0100, Herta Van den Eynde a écrit :
>
> > On 10/03/2008, peter winterflood <peter.winterflood at ossi.co.uk> wrote:
> > >
> > > Herta Van den Eynde wrote:
> > > > On 10/03/2008, Rodrick Brown <rbrown at ballistasec.com> wrote:
> > > >
> > > >> tar cvfp - . | ssh -c blowfish remote '(cd /storage/archive; tar xvf -)'
> > > >>
> > > >>
> > > >> -----Original Message-----
> > > >> From: redhat-list-bounces at redhat.com [mailto:
> > > >> redhat-list-bounces at redhat.com] On Behalf Of Mad Unix
> > > >> Sent: Monday, March 10, 2008 9:29 AM
> > > >> To: General Red Hat Linux discussion list
> > > >> Subject: Re: rsync or rdist
> > > >>
> > > >> Anyone have a script to do the remote transfer ...
> > > >>
> > > >> On Mon, Mar 10, 2008 at 3:17 PM, Herta Van den Eynde <
> > > >> herta.vandeneynde at gmail.com> wrote:
> > > >>
> > > >>
> > > >>> On 10/03/2008, Mad Unix <madunix at gmail.com> wrote:
> > > >>>
> > > >>>> I need a script to transfer archive log files from the Production
> > > >>>> site Server1 to the DR site Server2 on the same subnet.
> > > >>>> I want to sync the files between /arc and /storage/archive on both
> > > >>>> servers ....
> > > >>>>
> > > >>>> --
> > > >>>> madunix
> > > >>>>
> > > >>> AFAIK, rdist copies entire files. rsync only copies the blocks
> > > >>> that are different.
> > > >>>
> > > >>> Note also that you can run rsync through ssh for a more secure
> > > >>> transfer.
> > > >>>
> > > >>> Kind regards,
> > > >>>
> > > >>> Herta
> > > >>>
> > > >>> --
> > > >>> "Life on Earth may be expensive,
> > > >>> but it comes with a free ride around the Sun."
> > > >>
> > > >> --
> > > >> madunix
> > > >> --
> > > >>
> > > >>
> > > > Looks like a complicated way to do what a simple 'scp -pr source
> > > > target' will accomplish.  Or am I missing something?
> > > >
> > > > Rodrick does have a point, though: if you simply want to copy new
> > > > files from server A to server B, a simple copy will be faster than
> > > > rsync, as you don't need the comparison phase.  But scp will be
> > > > faster than the tar - transfer - untar.
> > > >
> > > > Kind regards,
> > > >
> > > > Herta
> > > >
> > > >
> > > >
> > >
> > > Well, if scp inherits the same limitation as rcp -r, then it won't
> > > take links with it.
> > > tar picks up all links, but does not follow them.
> > >
> > > I would always use a variation of the tar command given above for
> > > complete directory copies from one system to another; however, I
> > > would add the "B" modifier to the example given above to ensure
> > > that tar blocks for pipes/network.
> > >
> > > However, rsync would be a much better option if, say, a DR host
> > > needs to be kept in sync with a production host, as rsync can be
> > > configured to do incremental updates, i.e. only copy changes, and
> > > where files are deleted on the source, delete them at the
> > > destination, maintaining a complete mirror of two directories
> > > across a network.
> > > It could be cron'd to run every few minutes.
> > >
> > > regards peter
> > >
> >
> > You're right, Peter.
> > Both scp and rsync ignore softlinks to files, and hardlinks are
> > converted to regular files.  Named pipes aren't copied properly either.
> >
> > Kind regards,
> >
> > Herta


Minor correction: scp copies both soft links and hard links as new files
(i.e. each with its own inode).
rsync -ropcl copies a soft link as a soft link, but a hard link is still
copied as a separate file.  Looks like tar is the only one that will
copy those files correctly.
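
For what it's worth, here is a sketch of the tar-over-ssh copy discussed
earlier in the thread, with the "B" modifier Peter suggested (user name,
host name, and paths are just placeholders, and the blowfish cipher from
the original example may not be available on every ssh build):

  cd /arc
  tar cpf - . | ssh user@server2 'cd /storage/archive && tar xpBf -'

Because tar archives the links themselves instead of following them, soft
links arrive as soft links, and hard links are preserved as long as both
names live under the copied tree.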

FWIW, this isn't just semantics.  I remember copying a directory tree
which had, in one of its subdirectories, a soft link back to the directory
I was copying.  It caused a recursive loop which would have filled the disk.
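
And for the mirroring scenario Peter describes, a rough sketch of a
cron-driven rsync (schedule, user, and host are only examples; the -l in
-ropcl keeps soft links as links rather than following them, which also
avoids loops like the one above, and --delete removes files at the
destination that have disappeared from the source):

  # crontab entry on server1, e.g. every 15 minutes
  */15 * * * * rsync -ropcl --delete -e ssh /arc/ user@server2:/storage/archive/

The trailing slash on /arc/ syncs the directory's contents rather than
creating an extra /arc level under /storage/archive, and the passwordless
ssh key J-F mentions is what lets it run unattended.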

Kind regards,

Herta

-- 
"Life on Earth may be expensive,
but it comes with a free ride around the Sun."


