[Spacewalk-list] [External] Spacewalk 2.7 and PgPool-II integration problems

Paul Robert Marino prmarino1 at gmail.com
Thu Feb 15 21:08:39 UTC 2018


Now that you've provided more logs, it looks like a max connections issue in
PostgreSQL; it's probably not tuned correctly, but my previous advice still
stands.
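
A quick way to confirm that (a rough sketch, assuming shell access on the DB
master as the postgres user) is to compare the live session count against the
configured limit while the installer runs:

su - postgres -c "psql -c 'SELECT count(*) FROM pg_stat_activity;'"
su - postgres -c "psql -c 'SHOW max_connections;'"

If the count sits at the limit the whole time the installer is running, the
connection sizing is what needs tuning.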

On Thu, Feb 15, 2018 at 3:41 PM, Eduardo Capistrán <eduardocap at herbalife.com> wrote:

> Here we go, these were the steps that we followed to install Spacewalk:
>
>
>
> To set up the Spacewalk repo:
>
> rpm -Uvh http://yum.spacewalkproject.org/2.6/RHEL/7/x86_64/spacewalk-repo-2.6-0.el7.noarch.rpm
>
>
>
> EPEL repo:
>
> rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
>
>
>
> The contrib package on the PostgreSQL master and slave nodes:
>
> yum install postgresql-contrib
>
>
>
> On spacewalk server:
>
> yum install spacewalk-setup-postgresql
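>
> To make the setup step repeatable, spacewalk-setup can also read an answer
> file; a minimal sketch (key names as in the Spacewalk install guide, and the
> host value is a placeholder, so verify both against your version):
>
> cat > /root/answers.txt <<EOF
> db-backend = postgresql
> db-host = <pgpool-or-db-host>
> db-port = 5432
> db-name = spaceschema
> db-user = spaceuser
> db-password = spacepw
> EOF
>
> spacewalk-setup --external-postgresql --answer-file=/root/answers.txt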
>
>
>
> And we created a new database on the PostgreSQL master node with this
> command:
>
> su - postgres -c 'PGPASSWORD=spacepw; createdb -E UTF8 spaceschema ; createlang plpgsql spaceschema ; createlang pltclu spaceschema ; yes $PGPASSWORD | createuser -P -sDR spaceuser'
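>
> To sanity-check that the database and user were actually created (a quick
> generic verification, nothing Spacewalk-specific):
>
> su - postgres -c 'psql -l | grep spaceschema'
> su - postgres -c "psql -c '\du spaceuser'"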
>
>
>
> After that, as suggested by the Spacewalk guide:
>
> ·         pg_hba.conf (on the PostgreSQL master/slave nodes):
>
> local   spaceschema     spaceuser                               md5
>
> host    spaceschema     spaceuser       192.168.0.0/16          md5
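>
> pg_hba.conf changes only take effect after a reload; assuming the systemd
> unit is simply named postgresql (it may carry a version suffix on some
> platforms):
>
> systemctl reload postgresql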
>
>
>
> Increase the maximum number of connections to 600:
>
> echo max_connections = 600 >>/var/lib/pgsql/data/postgresql.conf
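>
> Note that max_connections requires a full server restart, not a reload, and
> since the echo above appends a second max_connections line, PostgreSQL will
> use the last occurrence; to apply and confirm which value actually won:
>
> systemctl restart postgresql
>
> su - postgres -c "psql -c 'SHOW max_connections;'"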
>
>
>
> When we try to set up Spacewalk again, the output is:
>
> # spacewalk-setup --external-postgresql
>
> ** Database: Setting up database connection for PostgreSQL backend.
>
> ** Database: Populating database.
>
> The Database has schema.  Would you like to clear the database [Y]? Y
>
> ** Database: Clearing database.
>
> ** Database: Shutting down spacewalk services that may be using DB.
>
> ** Database: Services stopped.  Clearing DB.
>
> ** Database: Re-populating database.
>
> *** Progress: ###########################
>
> * Configuring tomcat.
>
> * Setting up users and groups.
>
> ** GPG: Initializing GPG and importing key.
>
> * Performing initial configuration.
>
> * Configuring apache SSL virtual host.
>
> ** Skipping SSL virtual host configuration.
>
> * Configuring jabberd.
>
> * Creating SSL certificates.
>
> ** SSL: Generating CA certificate.
>
> ** SSL: Deploying CA certificate.
>
> ** SSL: Generating server certificate.
>
> ** SSL: Storing SSL certificates.
>
> * Deploying configuration files.
>
> * Update configuration in database.
>
> * Setting up Cobbler..
>
> * Restarting services.
>
>    ((%%))
>
> Tomcat failed to start properly or the installer ran out of tries.  Please
> check /var/log/tomcat*/catalina.out for errors.
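>
> The actual failure is usually near the end of that log; a quick way to look
> (the path is the one the installer itself points at):
>
> tail -n 50 /var/log/tomcat*/catalina.out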
>
>
>
>
>
> Output of some lines of the PostgreSQL logs…
>
> 2018-02-13 13:36:18.975 EST [3221] FATAL:  sorry, too many clients already
>
> 2018-02-13 13:36:18.976 EST [3222] FATAL:  sorry, too many clients already
>
> 2018-02-13 13:36:18.977 EST [3223] FATAL:  sorry, too many clients already
>
> 2018-02-13 13:37:22.100 EST [3271] WARNING:  there is no transaction in progress
>
> 2018-02-13 13:37:22.103 EST [3271] WARNING:  there is no transaction in progress
>
> 2018-02-13 13:37:22.120 EST [3271] WARNING:  there is no transaction in progress
>
> 2018-02-13 13:37:22.147 EST [3271] WARNING:  there is no transaction in progress
>
> 2018-02-13 13:37:22.179 EST [3271] WARNING:  there is no transaction in progress
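>
> When the "too many clients" errors appear, a breakdown of who is holding the
> slots can be pulled on the master (a generic diagnostic query):
>
> su - postgres -c "psql -c 'SELECT datname, usename, count(*) FROM pg_stat_activity GROUP BY datname, usename;'"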
>
>
>
> To set up the Pgpool-II cluster, we followed these steps:
>
>
>
> ·         Pgpool-II version 3.6.7 (subaruboshi) installed with three nodes.
>
>
>
> The Spacewalk guide says 600 connections need to be available, so in
> pgpool.conf we set:
>
> # - Concurrent session and pool size -
>
> num_init_children = 200
>                                    # Number of concurrent sessions allowed
>                                    # (change requires restart)
>                                    # (originally set to 32)
>
> max_pool = 3
>                                    # Number of connection pool caches per connection
>                                    # (change requires restart)
>
> Where num_init_children * max_pool = connections permitted (http://www.pgpool.net/mediawiki/index.php/Relationship_between_max_pool,_num_init_children,_and_max_connections)
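>
> Working through that rule with our values (a rough check, assuming
> PostgreSQL's default superuser_reserved_connections = 3):
>
> # num_init_children * max_pool = 200 * 3 = 600 backend slots pgpool may open
> # max_connections = 600, so there is no headroom left for anything else:
> # walsenders for the slave, direct psql sessions, and the reserved superuser
> # slots all compete for the same 600 connections, which would explain the
> # "sorry, too many clients already" errors in the PostgreSQL log above.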
>
>
>
> Output of some lines of the Pgpool-II logs…
>
> 2018-01-27 10:09:04: pid 22707: LOG:  pool_send_and_wait: Error or notice message from backend: : DB node id: 0 backend pid: 3266 statement: "drop schema if exists rpm cascade ;" message: "drop cascades to 5 other objects"
>
> 2018-01-27 10:09:04: pid 22707: LOG:  backend [0]: NOTICE: drop cascades to 5 other objects
>
> 2018-01-27 10:09:04: pid 22707: LOG:  pool_send_and_wait: Error or notice message from backend: : DB node id: 0 backend pid: 3266 statement: "drop schema if exists rhn_exception cascade ;" message: "drop cascades to 3 other objects"
>
> 2018-01-27 10:09:04: pid 22707: LOG:  backend [0]: NOTICE: drop cascades to 3 other objects
>
> 2018-01-27 10:09:04: pid 22707: LOG:  pool_send_and_wait: Error or notice message from backend: : DB node id: 0 backend pid: 3266 statement: "drop schema if exists rhn_config cascade ;" message: "drop cascades to 8 other objects"
>
> 2018-01-27 10:09:04: pid 22707: LOG:  backend [0]: NOTICE: drop cascades to 8 other objects
>
> 2018-01-27 10:09:04: pid 22707: LOG:  pool_send_and_wait: Error or notice message from backend: : DB node id: 0 backend pid: 3266 statement: "drop schema if exists rhn_server cascade ;" message: "drop cascades to 20 other objects"
>
> 2018-01-27 10:09:04: pid 22707: LOG:  backend [0]: NOTICE: drop cascades to 20 other objects
>
> 2018-01-27 10:09:04: pid 22707: LOG:  pool_send_and_wait: Error or notice message from backend: : DB node id: 0 backend pid: 3266 statement: "drop schema if exists rhn_entitlements cascade ;" message: "drop cascades to 8 other objects"
>
> 2018-01-27 10:09:04: pid 22707: LOG:  backend [0]: NOTICE: drop cascades to 8 other objects
>
> 2018-01-27 10:09:04: pid 22707: LOG:  pool_send_and_wait: Error or notice message from backend: : DB node id: 0 backend pid: 3266 statement: "drop schema if exists rhn_bel cascade ;" message: "schema "rhn_bel" does not exist, skipping"
>
> 2018-01-27 10:09:04: pid 22707: LOG:  backend [0]: NOTICE: schema "rhn_bel" does not exist, skipping
>
> 2018-01-27 10:09:04: pid 22707: LOG:  pool_send_and_wait: Error or notice message from backend: : DB node id: 0 backend pid: 3266 statement: "drop schema if exists rhn_cache cascade ;" message: "drop cascades to 3 other objects"
>
>
>
> And:
>
>
>
> 2018-01-27 10:10:27: pid 22162: LOG:  fork a new child process with pid: 22935
>
> 2018-01-27 10:10:27: pid 22636: ERROR:  unable to read data from frontend
>
> 2018-01-27 10:10:27: pid 22636: DETAIL:  EOF encountered with frontend
>
> 2018-01-27 10:10:27: pid 22929: ERROR:  unable to read data from frontend
>
> 2018-01-27 10:10:27: pid 22929: DETAIL:  EOF encountered with frontend
>
> 2018-01-27 10:10:27: pid 22740: LOG:  selecting backend connection
>
> 2018-01-27 10:10:27: pid 22740: DETAIL:  failback event detected, discarding existing connections
>
> 2018-01-27 10:10:27: pid 22811: ERROR:  unable to read data from frontend
>
> 2018-01-27 10:10:27: pid 22811: DETAIL:  EOF encountered with frontend
>
> 2018-01-27 10:10:27: pid 22757: ERROR:  unable to read data from frontend
>
> 2018-01-27 10:10:27: pid 22757: DETAIL:  EOF encountered with frontend
>
> 2018-01-27 10:10:27: pid 22925: ERROR:  unable to read data from frontend
>
> 2018-01-27 10:10:27: pid 22925: DETAIL:  EOF encountered with frontend
>
> 2018-01-27 10:10:27: pid 22749: ERROR:  unable to read data from frontend
>
> 2018-01-27 10:10:27: pid 22749: DETAIL:  EOF encountered with frontend
>
> 2018-01-27 10:10:27: pid 22752: ERROR:  unable to read data from frontend
>
> 2018-01-27 10:10:27: pid 22752: DETAIL:  EOF encountered with frontend
>
> 2018-01-27 10:10:27: pid 22919: ERROR:  unable to read data from frontend
>
> 2018-01-27 10:10:27: pid 22919: DETAIL:  EOF encountered with frontend
>
>
>
> Regards,
>
>
>
>
>
> *Eduardo Capistran* | Principal Linux Engineer | IS Server Operations
>
> *Herbalife Nutrition* | Camino al ITESO No. 8900 INT 1A, Col El Mante, Tlaquepaque, Jalisco 45609
>
> x6452 + 5954 | +52 (33) 37705400 x5954 | +52 (1) 333-808-6131 | eduardocap at herbalife.com
>
>
>
> *From:* Avi Miller [mailto:avi.miller at oracle.com]
> *Sent:* Thursday, February 15, 2018 11:17 AM
> *To:* Eduardo Capistrán
> *Cc:* spacewalk-list at redhat.com; Server Ops_Linux
> *Subject:* Re: [External] [Spacewalk-list] Spacewalk 2.7 and PgPool-II
> integration problems
>
>
>
> Hey Eduardo
>
>
>
> On 15 Feb 2018, at 7:12 pm, Eduardo Capistrán <eduardocap at herbalife.com> wrote:
>
>
>
> Thanks for your reply Avi,
>
>
>
> We have installed Spacewalk 2.7 on Oracle Linux using the embedded
> PostgreSQL method; that has worked without problems since version 1.9.
>
>
>
> That's good to know.
>
>
>
> We are now trying to move to an HA architecture and we are facing these
> kinds of problems.
>
> Do you know of any success stories using a PostgreSQL cluster with Pgpool-II?
>
>
>
> To be honest, we don't test (or support) PostgreSQL or pgpool. We do test
> and support Spacewalk with Oracle Database RAC, but I guess that's not an
> option here. :)
>
>
>
> I would suggest providing the exact configuration steps you followed in
> more detail so that any PostgreSQL/pgpool experts can chime in, because the
> original email didn't have much detail on how you configured Spacewalk
> during installation.
>
>
>
> Cheers,
>
> Avi
>
>
>
> --
> Oracle <http://www.oracle.com>
> Avi Miller | Product Management Director | +61 (3) 8616 3496
> Oracle Linux and Virtualization
> 417 St Kilda Road, Melbourne, Victoria 3004 Australia
>
>
>
> _______________________________________________
> Spacewalk-list mailing list
> Spacewalk-list at redhat.com
> https://www.redhat.com/mailman/listinfo/spacewalk-list
>

