[Spacewalk-list] Redundant spacewalk servers

Paul Robert Marino prmarino1 at gmail.com
Wed Dec 6 07:08:16 UTC 2017


Well, PostgreSQL is a different issue. I did use an external PostgreSQL
cluster front ended by pgpool; the databases used master/slave replication
in hot-standby mode. The downside is that, due to the design of the Java
database interface, nearly all queries go to the master, so you can't take
full advantage of the load balancing for read-only queries. The upside is
that pgpool reduces the number of idle connections left over from processes
that leave open unused connections.
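
Roughly, the pgpool-II side of that would look something like the
following (hostnames, pool sizes, and timeouts are placeholders, not
values I actually ran with):

    # /etc/pgpool-II/pgpool.conf (excerpt)
    backend_hostname0 = 'pgmaster.example.com'   # primary
    backend_port0     = 5432
    backend_weight0   = 1
    backend_hostname1 = 'pgslave.example.com'    # hot-standby replica
    backend_port1     = 5432
    backend_weight1   = 1

    # replication is done by PostgreSQL itself (streaming master/slave)
    master_slave_mode     = on
    master_slave_sub_mode = 'stream'
    load_balance_mode     = on    # mostly unused by the Java stack, see above

    # the connection pooling is what really helps Spacewalk
    num_init_children    = 90
    max_pool             = 4
    connection_life_time = 300    # recycle backend connections (seconds)
    client_idle_limit    = 600    # drop clients that sit idle (seconds)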

On Tue, Dec 5, 2017 at 4:36 PM, Chris.Bone at concur.com
<Chris.Bone at concur.com> wrote:

> Thanks for that, impressive for off the top of your head!
>
> What I was most interested in was the use of a single PostgreSQL database
> by two instances of Spacewalk. Were there any issues directly related to
> that?
>
> I am thinking as a first step to have an active/passive Spacewalk, but
> with each server using the same database.
>
>
>
> --
>
> Regards,
>
> Chris Bone
>
>
>
> *From: *Paul Robert Marino <prmarino1 at gmail.com>
> *Date: *Monday, December 4, 2017 at 13:25
> *To: *Chris Bone <Chris.Bone at concur.com>
> *Cc: *Spacewalk Userlist <spacewalk-list at redhat.com>
> *Subject: *Re: Redundant spacewalk servers
>
>
>
> I'm CCing the user list to get some feedback on whether the community would
> like me to write a howto for the site. It would take me a day or two to do,
> so before I put in the effort I would like to know if there is enough
> interest to justify the work.
>
>
>
> Well I had some luck doing it.
>
> To be clear, I am not currently working with Spacewalk, so this will be
> off the top of my head.
>
>
>
> First of all, yes, it can be partially done. There are a few services which
> can only run on one host at a time, but the web interface can be clustered
> behind a sticky or a non-sticky load balancer.
>
> By sticky I mean that the load balancer dynamically creates persistent
> DNAT rules to route each client to the same backend server.
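>
> If you use keepalived/LVS as the load balancer, stickiness is just the
> persistence timeout on the virtual server. A rough sketch (addresses,
> ports, and weights are placeholders):
>
>     virtual_server 192.0.2.10 443 {
>         delay_loop 10
>         lb_algo wlc
>         lb_kind NAT                  # DNAT to the backends
>         persistence_timeout 600      # "sticky": same client -> same backend
>         protocol TCP
>
>         real_server 10.0.0.11 443 {
>             weight 1
>             TCP_CHECK { connect_timeout 5 }
>         }
>         real_server 10.0.0.12 443 {
>             weight 1
>             TCP_CHECK { connect_timeout 5 }
>         }
>     }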
>
> The key to load balancing the web interface, the API, and Cobbler is to use
> either a clustered file system or a NAS to share some directories between
> the hosts. The list of directories that need to be shared I originally
> found in the RHN Satellite 5 documentation on access.redhat.com. It turns
> out that that part is very easy to do and is actually covered in the
> documentation on active/passive clustering; the only thing missing is the
> load balancer. I personally used keepalived for that because we already
> used it for a lot of other things. Also, in my case I used a Gluster NAS
> cluster, since my site was mission critical and we used it for several
> other things, like RHEV, as well.
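>
> From memory the big one is /var/satellite (the package store); check the
> Satellite 5 docs for the full list. The Cobbler paths below are my best
> guess, and the Gluster server and volume names are placeholders:
>
>     # /etc/fstab (excerpt) - mount the shared directories on both nodes
>     gluster01:/spacewalk-pkgs    /var/satellite     glusterfs  defaults,_netdev  0 0
>     gluster01:/spacewalk-cobbler /var/lib/cobbler   glusterfs  defaults,_netdev  0 0
>     gluster01:/spacewalk-tftp    /var/lib/tftpboot  glusterfs  defaults,_netdev  0 0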
>
>
>
> Now for the tricky parts.
>
> Taskomatic (the scheduler, based on Quartz) and rhnsearchd (an index cache
> builder) can only be run on one host at a time. The reason is that
> rhnsearchd is a dated design that writes its cache to files on disk, which
> are then read by the Java API as needed. There are plenty of off-the-shelf
> tools out there which can do this better, but it is deeply embedded in the
> Spacewalk code, so it would be hard to replace.
>
> Taskomatic is based on Quartz, which is a Java library for rolling your own
> scheduler. It is very easy to code against, but sadly its capabilities are
> somewhat limited, and every time I've seen it used in a project it has been
> buggy. Frankly, if my Java coding skills were better I would probably write
> a proposed patch to replace it with the scheduler from SOS Berlin.
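>
> In practice this just means those two services stay stopped on whichever
> node is currently passive and only get started when a node takes over as
> master (the keepalived notify scripts further down handle that). The
> service names, as I remember them:
>
>     # on the passive node, keep the singleton services down
>     systemctl stop taskomatic rhn-search
>     systemctl disable taskomatic rhn-search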
>
>
>
>
>
> The next part that is really tricky is osa-dispatcher.
>
> The trick here is clustering jabberd, which is doable. That said, I never
> got around to it, but I did do the first stage, which is moving the
> sessions from the Berkeley DB files to the PostgreSQL database.
> Interestingly, that is also how you stabilize it, because the bug that
> causes the database to break is essentially tied to the Berkeley DB
> backend, so switching to a PostgreSQL backend fixes it and allows for
> clustering. The next step is jabberd itself. I never got to this, but
> jabberd does support clustering in the sense that a message can be posted
> by a client of one server and then transmitted to one or more clients on
> another server.
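>
> The Berkeley DB to PostgreSQL switch is basically a jabberd2 configuration
> change; something along these lines in sm.xml, with the matching authreg
> change in c2s.xml (the connection strings are placeholders, and the
> jabberd2 PostgreSQL schema has to be loaded into the database first):
>
>     <!-- /etc/jabberd/sm.xml (excerpt): storage moved off Berkeley DB -->
>     <storage>
>       <driver>pgsql</driver>
>       <pgsql>
>         <conninfo>host=pgmaster.example.com dbname=jabberd user=jabberd password=CHANGEME</conninfo>
>       </pgsql>
>     </storage>
>
>     <!-- /etc/jabberd/c2s.xml (excerpt): same for authentication/registration -->
>     <authreg>
>       <module>pgsql</module>
>       <pgsql>
>         <conninfo>host=pgmaster.example.com dbname=jabberd user=jabberd password=CHANGEME</conninfo>
>       </pgsql>
>     </authreg>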
>
>
>
> Now for connecting clients you could use Piranha (the old Red Hat LVS tool)
> or keepalived. Myself, I used keepalived, with scripts on the master/backup
> notify hooks to start and stop Taskomatic and rhnsearchd, and an added
> monitoring script for the other services which would try to restart failed
> services; if one of them still failed to restart after the 3rd check it
> would fail over. The trick here is that the monitoring script should only
> check Taskomatic and rhnsearchd while the box is in the master state.
> Additionally, if using keepalived, remember to set both nodes to the
> backup state and set nopreempt, to prevent it from continually trying to
> go back to a failed master. You also need a check for the HTTPS load
> balancer as well.
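>
> A very rough sketch of that keepalived setup (the interface, VIP, and
> script names are placeholders; the notify script would just start or stop
> Taskomatic and rhnsearchd):
>
>     vrrp_script chk_spacewalk {
>         script "/usr/local/bin/check_spacewalk.sh"  # restart failed services, exit non-zero if it can't
>         interval 10
>         fall 3           # fail over after the 3rd failed check
>     }
>
>     vrrp_instance SPACEWALK {
>         state BACKUP     # both nodes start as backup...
>         nopreempt        # ...so a recovered node does not steal the VIP back
>         interface eth0
>         virtual_router_id 51
>         priority 100     # give the preferred node a higher priority
>         virtual_ipaddress {
>             192.0.2.10/24
>         }
>         track_script {
>             chk_spacewalk
>         }
>         notify_master "/usr/local/bin/spacewalk-singletons.sh start"
>         notify_backup "/usr/local/bin/spacewalk-singletons.sh stop"
>         notify_fault  "/usr/local/bin/spacewalk-singletons.sh stop"
>     }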
>
>
>
>  Sent from my BlackBerry - the most secure mobile device
>
> *From:* Chris.Bone at concur.com
>
> *Sent:* November 27, 2017 5:45 PM
>
> *To:* prmarino1 at gmail.com
>
> *Subject:* Redundant spacewalk servers
>
>
>
> Hello Paul,
>
> I found a post from 2012 where you said you were going to try setting up
> redundant live/live spacewalk servers using a shared postgresql database.
>
> How did it go?!
>
> I am in the process of moving to a live/standby configuration, but would
> be much happier with live/live with a standby database. I can’t see why it
> would not work, but also can’t see a good way to show that there is no
> corruption going on.
>
>
>
> --
>
> Regards,
>
> Chris Bone
>
>
> ------------------------------
>
> This e-mail message is authorized for use by the intended recipient only
> and may contain information that is privileged and confidential. If you
> received this message in error, please call us immediately at (425)
> 590-5000 and ask to speak to the message sender. Please do not copy,
> disseminate, or retain this message unless you are the intended recipient.
> In addition, to ensure the security of your data, please do not send any
> unencrypted credit card or personally identifiable information to this
> email address. Thank you.
>

