[Spacewalk-list] Spacewalk slow opens full list of nodes (Systems) 250nodes

Matt Moldvan matt at moldvan.com
Tue Mar 29 19:58:41 UTC 2016


Ah, yeah, that seems to be an expensive operation: grabbing all of the systems
and cross-referencing the errata and package tables involved, so it makes
sense that it would slow things down.  Glad you got the UI working a little
better :D
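
If you want a feel for how big that cross-reference actually is on your side,
something like this should show the pending-errata counts per system.  I'm
assuming the stock table name rhnServerNeededCache and a database called
spaceschema here; check db_name in /etc/rhn/rhn.conf and adjust if yours
differs:

sudo -u postgres psql spaceschema -c "
  SELECT server_id, COUNT(DISTINCT errata_id) AS pending_errata
    FROM rhnServerNeededCache
   GROUP BY server_id
   ORDER BY pending_errata DESC
   LIMIT 20;"

If most systems really are sitting at ~1000 pending errata, that join is going
to stay heavy no matter how much the database is tuned.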

On Tue, Mar 29, 2016 at 2:49 PM Konstantin Raskoshnyi <konrasko at gmail.com>
wrote:

> Wow, I checked your profile; it works at the same speed as mine.
>
> I think the main problem is that we have a lot of errata pending, ~1000 per
> host.
>
> Thank you!
>
> On Tue, Mar 29, 2016 at 11:34 AM, Matt Moldvan <matt at moldvan.com> wrote:
>
>> I was watching the load time in Chrome developer tools for 500 systems; it
>> took 10.22 seconds to fully load.  Our Postgres tunables look like:
>>
>> default_statistics_target = 50 # pgtune wizard 2015-12-08
>> maintenance_work_mem = 960MB # pgtune wizard 2015-12-08
>> constraint_exclusion = on # pgtune wizard 2015-12-08
>> checkpoint_completion_target = 0.9 # pgtune wizard 2015-12-08
>> effective_cache_size = 6GB # pgtune wizard 2015-12-08
>> work_mem = 3840kB # pgtune wizard 2015-12-08
>> wal_buffers = 8MB # pgtune wizard 2015-12-08
>> checkpoint_segments = 256 # pgtune wizard 2015-12-08
>> shared_buffers = 3840MB # pgtune wizard 2015-12-08
>> checkpoint_timeout = 30min # range 30s-1h
>>
>> The above works relatively well, considering we have ~5,000 systems
>> registered and the Jabber services are also using it for OSAD information.
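>>
>> After a restart, you can confirm the values actually took effect (plain
>> psql, nothing Spacewalk-specific):
>>
>> sudo -u postgres psql -c "SHOW shared_buffers;"
>> sudo -u postgres psql -c "SHOW effective_cache_size;"
>> sudo -u postgres psql -c "SHOW work_mem;"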
>>
>> I don't recall if autovacuum is set to on in the default config, but I
>> also saw some benefit from running vacuum on a few tables in the Spacewalk
>> database (rhnserver, rhnerrata, etc).
>>
>> autovacuum = on # Enable autovacuum subprocess?  'on'
>> autovacuum_max_workers = 30 # max number of autovacuum subprocesses
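>>
>> For reference, manually vacuuming those goes something like this; replace
>> spaceschema with whatever db_name is set to in /etc/rhn/rhn.conf:
>>
>> sudo -u postgres psql spaceschema -c "VACUUM ANALYZE rhnServer;"
>> sudo -u postgres psql spaceschema -c "VACUUM ANALYZE rhnErrata;"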
>>
>>
>> On Tue, Mar 29, 2016 at 1:53 PM Konstantin Raskoshnyi <konrasko at gmail.com>
>> wrote:
>>
>>> Yep, already did.
>>>
>>> default_statistics_target = 100
>>> maintenance_work_mem = 2GB
>>> checkpoint_completion_target = 0.9
>>> effective_cache_size = 44GB
>>> work_mem = 2048MB
>>> wal_buffers = 16MB
>>> checkpoint_segments = 32
>>> shared_buffers = 14GB
>>> max_connections = 100
>>>
>>> A full load of 250 nodes now takes about 20 seconds; it was 150.
>>> What's your load time, if you can share it?
>>> Thanks!
>>>
>>> On Tue, Mar 29, 2016 at 10:44 AM, Matt Moldvan <matt at moldvan.com> wrote:
>>>
>>>> Sounds like since you're giving Tomcat more resources it's able to do
>>>> its job better, and now Postgres needs some tuning.  I ran pgtune on mine
>>>> for 2,000 clients as such:
>>>>
>>>> as the postgres user:
>>>> pgtune -i data/postgresql.conf  -o ./data/postgresql.conf.new -c 2000
>>>>
>>>> You might be able to get away with fewer, but it's worth a try if you
>>>> want a usable UI without having to update (though the update was
>>>> relatively painless for me, if I recall correctly).
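>>>>
>>>> If it helps, rolling out the pgtune output goes roughly like this; I'm
>>>> assuming the stock /var/lib/pgsql/data layout, so adjust the paths to
>>>> your PGDATA:
>>>>
>>>> # as root
>>>> cp /var/lib/pgsql/data/postgresql.conf /var/lib/pgsql/data/postgresql.conf.bak
>>>> cp /var/lib/pgsql/data/postgresql.conf.new /var/lib/pgsql/data/postgresql.conf
>>>> spacewalk-service stop
>>>> systemctl restart postgresql    # or "service postgresql restart" on older init
>>>> spacewalk-service start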
>>>>
>>>> On Tue, Mar 29, 2016 at 12:50 PM Konstantin Raskoshnyi <
>>>> konrasko at gmail.com> wrote:
>>>>
>>>>> With these settings Java doesn't eat the memory anymore, but Postgres
>>>>> eats 100% CPU.
>>>>> Loading the page with these settings takes 100 seconds.
>>>>> With the defaults it was 70.
>>>>>
>>>>> On Mon, Mar 28, 2016 at 8:53 PM, Matt Moldvan <matt at moldvan.com>
>>>>> wrote:
>>>>>
>>>>>> At those values I'm not surprised you see some slowness... try
>>>>>> Xmx/Xms of 4G and a PermSize of 512m, MaxPermSize of 1024m.  The PermSize
>>>>>> is what helps the most, I think, as the default is only 64MB if it's not
>>>>>> specified directly.  I'm not a Tomcat expert, but it felt faster after I set
>>>>>> the values that way...
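>>>>>>
>>>>>> Roughly, that would be something like this in /etc/tomcat6/tomcat6.conf
>>>>>> (or /etc/sysconfig/tomcat, depending on how your Tomcat is packaged),
>>>>>> keeping whatever other flags you already have:
>>>>>>
>>>>>> JAVA_OPTS="$JAVA_OPTS -Xms4g -Xmx4g -XX:PermSize=512m -XX:MaxPermSize=1024m"
>>>>>>
>>>>>> ...and restart Tomcat afterwards for it to take effect.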
>>>>>>
>>>>>> On Mon, Mar 28, 2016 at 10:31 PM Konstantin Raskoshnyi <
>>>>>> konrasko at gmail.com> wrote:
>>>>>>
>>>>>>> I have these values for JAVA_OPTS:
>>>>>>>
>>>>>>> tomcat   20101  0.3  0.7 4277092 513852 ?      Ssl  Mar25  16:36
>>>>>>> java -ea -Xms256m -Xmx256m -Djava.awt.headless=true
>>>>>>> -Dorg.xml.sax.driver=org.apache.xerces.parsers.SAXParser
>>>>>>> -Dorg.apache.tomcat.util.http.Parameters.MAX_COUNT=1024 -XX:MaxNewSize=256
>>>>>>> I tried to increase Xms & Xmx, but didn't see any improvement.
>>>>>>>
>>>>>>> On Fri, Mar 25, 2016 at 8:09 PM, Matt Moldvan <matt at moldvan.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> You mentioned you did some Tomcat tuning, but what are your values
>>>>>>>> set to?  I noticed a considerable speed-up when I tinkered with PermSize
>>>>>>>> and similar variables, like below.  I think the defaults are pretty low and
>>>>>>>> not enough memory is allocated off the bat.  Also, while the page is loading,
>>>>>>>> what do you notice in top, atop, or htop?
>>>>>>>>
>>>>>>>> tomcat    2161     1  0 Mar24 ?        00:04:59
>>>>>>>> /usr/lib/jvm/java/bin/java -XX:NewRatio=4 -XX:PermSize=1024m
>>>>>>>> -XX:MaxPermSize=2048m -XX:NewSize=2048m -XX:MaxNewSize=2048m -Xms8g -Xmx8g
>>>>>>>> -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
>>>>>>>> -Dsun.rmi.dgc.client.gcInterval=3600000
>>>>>>>> -Dsun.rmi.dgc.server.gcInterval=3600000...rest truncated
>>>>>>>>
>>>>>>>> On Fri, Mar 25, 2016 at 9:07 PM Konstantin Raskoshnyi <
>>>>>>>> konrasko at gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Sorry, asking again... is there any possibility to fix this bug without
>>>>>>>>> upgrading the system?
>>>>>>>>> It's a physical host.  Today it's almost not working (the Systems
>>>>>>>>> menu), even if I list 25 hosts or search for a specific one.  top
>>>>>>>>> doesn't show any processes locking up the system.
>>>>>>>>> Thanks!
>>>>>>>>>
>>>>>>>>> On Fri, Mar 25, 2016 at 10:41 AM, Matt Moldvan <matt at moldvan.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Yes, I think with the way the dependencies are set in the packages,
>>>>>>>>>> updating one will potentially update the rest, so it's best to do the
>>>>>>>>>> update off hours and, if possible (if it's a VM, for example), take a
>>>>>>>>>> snapshot or a backup of your config and database first.
>>>>>>>>>>
>>>>>>>>>> On Fri, Mar 25, 2016 at 12:14 PM Konstantin Raskoshnyi <
>>>>>>>>>> konrasko at gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Yep.  I double-checked the version, and it's 2.3.
>>>>>>>>>>> Is it necessary to update all the packages with yum?
>>>>>>>>>>> It's the main deployment server; I don't want to run into any
>>>>>>>>>>> trouble :).
>>>>>>>>>>> Thanks!
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Friday, March 25, 2016, Matt Moldvan <matt at moldvan.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Cool, I thought you were already at 2.4 from your earlier
>>>>>>>>>>>> reply.  Either way, good luck, and let us know if it solves your issue,
>>>>>>>>>>>> in case anyone has the same question in the future.
>>>>>>>>>>>>
>>>>>>>>>>>> > The spacewalk version is 2.4, Oct 7th, 2015 Thanks!
>>>>>>>>>>>>
>>>>>>>>>>>> On Thu, Mar 24, 2016 at 11:15 PM Konstantin Raskoshnyi <
>>>>>>>>>>>> konrasko at gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Matt!
>>>>>>>>>>>>>
>>>>>>>>>>>>> I tried to tune Tomcat, but it looks like this is my problem:
>>>>>>>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1214437
>>>>>>>>>>>>>
>>>>>>>>>>>>> I'm going to upgrade to version 2.4.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Mar 24, 2016 at 8:06 PM, Matt Moldvan <
>>>>>>>>>>>>> matt at moldvan.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> What are your Tomcat settings like?  We have maxThreads set
>>>>>>>>>>>>>> to 2048 for the 8009 and 8080 connectors in /etc/tomcat6/server.xml:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>     <Connector port="8080" protocol="HTTP/1.1"
>>>>>>>>>>>>>> connectionTimeout="20000" redirectPort="8443" maxThreads="2048"
>>>>>>>>>>>>>> maxKeepAliveRequests="1024" URIEncoding="UTF-8" address="127.0.0.1"/>
>>>>>>>>>>>>>>     <Connector port="8009" protocol="AJP/1.3"
>>>>>>>>>>>>>> redirectPort="8443" URIEncoding="UTF-8" address="127.0.0.1"
>>>>>>>>>>>>>> maxThreads="2048" maxConnections="2048" connectionTimeout="600"
>>>>>>>>>>>>>> keepAliveTimeout="600"/>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Also, /etc/tomcat6/tomcat6.conf has some settings for
>>>>>>>>>>>>>> JAVA_OPTS that are interesting for tuning purposes:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> JAVA_OPTS="-XX:NewRatio=4 -XX:PermSize=1024m
>>>>>>>>>>>>>> -XX:MaxPermSize=2048m -XX:NewSize=2048m -XX:MaxNewSize=2048m -Xms8g -Xmx8g
>>>>>>>>>>>>>> -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
>>>>>>>>>>>>>> -Dsun.rmi.dgc.client.gcInterval=3600000
>>>>>>>>>>>>>> -Dsun.rmi.dgc.server.gcInterval=3600000"
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Listing 500 systems on our master took ~10.22 seconds on an 8
>>>>>>>>>>>>>> vCPU/32GB RAM VMware VM, with an external Postgres database VM that has 8
>>>>>>>>>>>>>> vCPU/16GB RAM, so yours could be much quicker with some additional tuning,
>>>>>>>>>>>>>> I would think.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Mar 24, 2016 at 10:01 PM Konstantin Raskoshnyi <
>>>>>>>>>>>>>> konrasko at gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The spacewalk version is 2.4, Oct 7th, 2015
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Mar 24, 2016 at 6:21 PM, William H. ten Bensel <
>>>>>>>>>>>>>>> WHTENBEN at up.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> What version of Spacewalk are you running?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Bill
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mar 24, 2016, at 20:15, Konstantin Raskoshnyi <
>>>>>>>>>>>>>>>> konrasko at gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi Community!
>>>>>>>>>>>>>>>> We have a Spacewalk server on SCLinux 7.1 with Java 1.7 and
>>>>>>>>>>>>>>>> Postgres.
>>>>>>>>>>>>>>>> When I open the Systems menu (list servers) and list all 250
>>>>>>>>>>>>>>>> nodes, it takes about 90 seconds.
>>>>>>>>>>>>>>>> The Java process shows 400% CPU.
>>>>>>>>>>>>>>>> The server has 64 GB of RAM and 24 cores.
>>>>>>>>>>>>>>>> I thought the problem was in Postgres, so I pulled the query
>>>>>>>>>>>>>>>> from the log file, but the query runs in about 8 seconds.
>>>>>>>>>>>>>>>> All the rest of the time, Java is doing something...
>>>>>>>>>>>>>>>> Any solutions?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>>>>>