[Spacewalk-list] Spacewalk 2.1 - Redundancy and multi-DC kickstarts

Vamegh Hedayati vamegh at gmail.com
Wed May 14 10:38:39 UTC 2014


Hi All,

I have read through quite a bit of documentation regarding the two things
we are trying to achieve with Spacewalk.

1. The main requirement is the ability to build servers in another DC
(let's say DC2) while still having the profiles registered with the
original Spacewalk server in DC1. Kickstarting from the original DC is
simply not possible, because there is no layer 2 transport between the two
DCs (layer 3 exists). That is to say, DC1 cannot respond to DHCP requests
from a server in DC2, since it never sees the broadcast MAC address;
however, DC1 and DC2 can reach each other once the server has been assigned
an IP address.

2. We also need redundancy, even if it is only a failover pair.

Before I go into detail, some background information: the Spacewalk
installation has been updated to 2.1 and uses PostgreSQL. It is a default
build following the instructions on the Spacewalk wiki, but all of the
server builds are automated and use the Spacewalk API where necessary, both
to kickstart (using snippets) and to de-register a server being removed.
Practically all of the client servers are CentOS 6, and the Spacewalk
server itself is on CentOS 6.5. Spacewalk Proxy servers are not currently
deployed, but are being actively tested for deployment to separate DCs.
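
For context, this is roughly what our de-registration automation does via
the XML-RPC API (the hostname, credentials and profile name below are
placeholders):

    #!/usr/bin/python
    # De-register a server from Spacewalk before it is removed.
    import xmlrpclib

    SPACEWALK_URL = "https://spacewalk-dc1.example.com/rpc/api"  # placeholder

    client = xmlrpclib.Server(SPACEWALK_URL)
    key = client.auth.login("apiuser", "apipassword")  # placeholder credentials

    # Look up the system profile(s) by name and delete the matching IDs.
    systems = client.system.getId(key, "web01.example.com")
    client.system.deleteSystems(key, [s["id"] for s in systems])

    client.auth.logout(key)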

From what I can tell, there are a few ways to achieve what I need, but what
I really require is more information.

1. Kickstart across multiple DCs:
Has anyone got Spacewalk Proxy to kickstart servers using its own TFTP/DHCP,
and can you provide any guidance on this?
Has anyone set up Spacewalk Proxy and used Cobbler replication?
Another possible solution is to use the API (namespace kickstart.profile,
method downloadRenderedKickstart), although this should not be necessary,
as the proxy should see the profiles from the master Spacewalk instance; a
minimal sketch of this call follows after these questions.
Has anyone used ISS in a master-to-master replication setup, using
different masters to kickstart servers while sharing the client information
across all of the Spacewalk instances?
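
Regarding the downloadRenderedKickstart option above, this is a minimal
sketch of what I have in mind, assuming a build host in DC2 fetches the
rendered profile from DC1 and serves it locally (the profile label, output
path and credentials are placeholders):

    #!/usr/bin/python
    # Fetch the fully rendered kickstart (snippets expanded) from the
    # Spacewalk master in DC1 and write it out for a local build host.
    import xmlrpclib

    client = xmlrpclib.Server("https://spacewalk-dc1.example.com/rpc/api")
    key = client.auth.login("apiuser", "apipassword")  # placeholders

    # Returns the final kickstart file for the given profile label as a string.
    ks = client.kickstart.profile.downloadRenderedKickstart(key, "centos6-base")

    with open("/var/lib/tftpboot/ks/centos6-base.cfg", "w") as f:
        f.write(ks)

    client.auth.logout(key)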

Most importantly, what would be your recommended method? Which approach
would be easiest to maintain and is known to work?

2. Redundancy
There is not much information out there about ISS with either
master-to-master or master-to-slave replication. In fact, there is very
little information about ISS and its current state. I have read a few
comments saying it is virtually impossible to set up master-to-master
replication for Spacewalk; is this still true for Spacewalk 2.1? So far I
have come across the following bits of information:

http://www.redhat.com/pdf/ISS_Best_Practices_Whitepaper.pdf
https://fedorahosted.org/spacewalk/wiki/InterSpacewalkServerSync

The Red Hat whitepaper briefly touches on master-to-master replication
(bi-directional sync), but not in any great depth.

I have not yet configured ISS; will it also replicate the database across
the Spacewalk server instances?
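
For what it is worth, this is how I currently imagine driving the
slave-side sync, based on the wiki page above; the --iss-parent flag is
taken from that page, and the master FQDN and channel labels are
placeholders, so please correct me if I have this wrong:

    #!/usr/bin/python
    # Pull channel content from the ISS master, e.g. run nightly from cron.
    import subprocess

    MASTER = "spacewalk-dc1.example.com"  # placeholder master FQDN
    CHANNELS = ["centos6-x86_64-base", "centos6-x86_64-updates"]  # placeholders

    for label in CHANNELS:
        subprocess.check_call(["satellite-sync",
                               "--iss-parent=%s" % MASTER,
                               "-c", label])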

Are there any other methods to achieve Spacewalk redundancy (a failover
pair is fine)?

Thank you in advance,

Kind Regards,

Vamegh