Way to make distant servers appear to have the same data?
Phil Meyer
pmeyer at themeyerfarm.com
Tue Jan 23 21:57:34 UTC 2007
David Timms wrote:
> What is the most effective, most robust way to allow servers that are
> quite distant, and on slow networks, to "appear" to have the same content?
>
> In my example, the content is read/write at 4 sites. Hopefully the
> system should make caching possible for files that were originally at
> another site. If a file were not already cached, then it would get
> loaded across the slow network.
>
> Red Hat Global File System appears to be designed to do this:
> http://linux.sys-con.com/read/166309_2.htm
> but it talks about storage area network or LAN connections rather
> than slow WAN links.
>
> http://www.drbd.org/ - RAID across machines?
>
> I did see a few other projects designed to solve this sort of problem,
> but I am having trouble finding them now {search hints ?}
>
> http://www.coda.cs.cmu.edu/ljpaper/lj.html
>
> Has anybody used / appraised coda ?
>
> David Timms.
>
In the olden days, we used to require users to 'promote' content from a
staging server to a production server. The act of 'promotion' included
an rsync to the remote servers, and then a 'roll out' of that content at
the same moment on all servers once all had acknowledged receipt of the
promotion. It was all automated and not hard to do.
Steps:
  1. rsync staging/content primary::staging/content
  2. Wait for completions.
  3. On all servers at once:
       mv content content.$$ ; mv staging/content content

Roll back was easy:
  mv content.NNNNN content
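The roll-out and roll-back steps above can be sketched as a short script. This is a purely local simulation (one temporary directory stands in for a production server, and the directory names are placeholders, not from the original setup); in the real flow the content would first arrive via rsync from the staging server:

```shell
#!/bin/sh
# Local sketch of the promote / roll-out / roll-back cycle.
# In the real setup the first step was an rsync to each remote
# server's staging area; here everything happens in one temp
# directory, so all paths and file contents are placeholders.
set -e
WORK=$(mktemp -d)
cd "$WORK"

# "Production" tree serving the old content, plus the freshly
# promoted tree sitting in staging/.
mkdir -p content staging/content
echo "old page" > content/index.html
echo "new page" > staging/content/index.html

STAMP=$$   # unique suffix, as in the original 'content.$$'

# Roll out: two renames, so the switch is near-instantaneous
# rather than a slow copy while the site is live.
mv content "content.$STAMP"
mv staging/content content
echo "after roll-out:  $(cat content/index.html)"

# Roll back: set the new tree aside and restore the saved one.
mv content "content.undone"
mv "content.$STAMP" content
echo "after roll-back: $(cat content/index.html)"
```

The key point the rename trick captures is that the old tree is never deleted, only set aside under a unique suffix, which is what makes the roll-back a single `mv`.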
I have been out of that game for a few years, but it was easy enough then.
Nowadays you have much more dynamic content, more middleware, broader
use of server-based cookies, etc.
It can be difficult to ensure that all of that data is preserved across
remote servers.
But for fairly static content, the old way should work just fine.
Good Luck!
More information about the fedora-list mailing list