[Spacewalk-list] Taskomatic runs indefinitely without ever generating repodata

Florence Savary florence.savary.fs at gmail.com
Fri Jul 6 07:38:34 UTC 2018


Hello,

Matt, thank you for your feedback.

I tried your /etc/rhn/rhn.conf but again, it didn't change anything.

In /var/log/rhn/tasko/sat/channel-repodata-bunch, I have only the following
line:
Run with id 3438467 handles the whole task queue.



I asked our DBA to look for locks in the database, but there aren't any.

Regarding the echo 'select label,name,modified,last_synced from rhnchannel'
| sudo spacewalk-sql -i command, I get an output looking like yours,
except there are more rows with no last_synced value (only 55 of our 323
channels have one).
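
For reference, the never-synced channels can be counted directly with the
same table and columns as in your query (just the obvious spacewalk-sql
invocation, to be run on the Spacewalk server):

```shell
# Count channels that have never recorded a sync, then list the most
# recently modified of them; table/column names are the ones used in the
# select above (rhnchannel, last_synced).
echo "select count(*) from rhnchannel where last_synced is null;" \
    | sudo spacewalk-sql -i

echo "select label, modified from rhnchannel where last_synced is null
      order by modified desc limit 10;" | sudo spacewalk-sql -i
```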



Regarding strace, the following output repeats indefinitely for the
taskomatic PID (19881):

read(5, 0x7ffdf4421610, 1024)           = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(7, 0x7ffdf44219ff, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
wait4(19883, 0x7ffdf4421a14, WNOHANG, NULL) = 0
nanosleep({0, 100000000}, NULL)         = 0
read(5, 0x7ffdf4421610, 1024)           = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(7, 0x7ffdf44219ff, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
wait4(19883, 0x7ffdf4421a14, WNOHANG, NULL) = 0
nanosleep({0, 100000000}, NULL)         = 0

It looks like taskomatic's wrapper is just polling: EAGAIN only means the
non-blocking descriptors have no data yet, and the wait4/nanosleep pair
shows it checking every 100 ms whether its child Java process (19883) has
exited. So the actual blockage would be inside the Java process itself.
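
To illustrate the shape of that loop (only a sketch for reading the trace,
not Spacewalk code), the repeating syscalls can be classified like this;
note the nanosleep argument of 100000000 ns, i.e. the wrapper only wakes
every 0.1 s:

```python
import re

# One iteration of the repeating strace output from the wrapper (PID 19881):
# it polls two non-blocking descriptors, checks whether the child JVM
# (PID 19883) has exited, then sleeps 100 ms.
STRACE_SAMPLE = """\
read(5, 0x7ffdf4421610, 1024)           = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(7, 0x7ffdf44219ff, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
wait4(19883, 0x7ffdf4421a14, WNOHANG, NULL) = 0
nanosleep({0, 100000000}, NULL)         = 0
"""

def classify(lines):
    """Count syscalls per name, so the shape of the poll loop is obvious."""
    counts = {}
    for line in lines:
        m = re.match(r"(\w+)\(", line)
        if m:
            counts[m.group(1)] = counts.get(m.group(1), 0) + 1
    return counts

counts = classify(STRACE_SAMPLE.splitlines())
print(counts)  # {'read': 1, 'recvfrom': 1, 'wait4': 1, 'nanosleep': 1}

# nanosleep({0, 100000000}) is 0.1 s, so the wrapper polls ~10x per second;
# EAGAIN is "no data yet" on a non-blocking fd, not a missing resource.
poll_interval_s = 100000000 / 1e9
print(poll_interval_s)  # 0.1
```

In other words, the wrapper looks idle and healthy; whatever is stuck is
below it, in the JVM.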

The strace output of the associated Java process (PID 19883):

strace: Process 19883 attached
futex(0x7fa2610909d0, FUTEX_WAIT, 19884, NULL
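
The futex wait only shows the JVM's main thread parked (apparently waiting
on thread 19884), so the interesting state is inside the JVM itself. I
suppose a Java thread dump would show more; assuming the standard JDK
tools are available on the server, something like:

```shell
# SIGQUIT makes the JVM write a full thread dump to its stdout log
# (for taskomatic this should be /var/log/rhn/rhn_taskomatic_daemon.log)
# without terminating the process.
kill -3 19883

# Alternatively, if a full JDK is installed, capture the dump to a file:
jstack 19883 > /tmp/taskomatic-threads.txt
```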



What would you do from here?

Regards,
Florence


2018-07-05 21:30 GMT+02:00 Matt Moldvan <matt at moldvan.com>:

> Is there anything interesting in /var/log/rhn/tasko/sat/channel-repodata-bunch?
> Do you have any hung reposync processes?  Any lingering Postgres locks that
> might be an issue?
>
> It's odd that the run would only take 1 second, unless something is wrong
> with the database or its data...
>
> What do you see from a spacewalk-sql command like below?
>
> echo 'select label,name,modified,last_synced from rhnchannel' | sudo
> spacewalk-sql -i
>
>               label               |               name               |
>         modified            |        last_synced
>
> ----------------------------------+-------------------------
> ---------+-------------------------------+----------------------------
>
>  ovirt-x86_64-stable-6-nonprod    | ovirt-x86_64-stable-6-nonprod    |
> 2015-09-14 13:46:44.147134-05 |
>
>  extras7-x86_64-nonprod           | extras7-x86_64-nonprod           |
> 2017-11-06 10:26:30.011283-06 |
>
>  centos7-x86_64-all               | centos7-x86_64-all               |
> 2015-11-11 08:50:58.831234-06 | 2018-07-05 11:01:08.857-05
>
>  perl-5.16.x-all                  | perl-5.16.x-all                  |
> 2015-09-11 13:25:15.002198-05 | 2015-09-11 13:29:21.361-05
>
>  ovirt-x86_64-stable-6            | ovirt-x86_64-stable-6            |
> 2015-09-14 13:30:55.172-05    |
>
>  ovirt-x86_64-stable-6-prod       | ovirt-x86_64-stable-6-prod       |
> 2015-09-14 13:48:06.637063-05 |
>
>  other6-x86_64-all                | other6-x86_64-all                |
> 2015-07-28 09:20:38.156104-05 |
>
>  epel5-x86_64-all                 | epel5-x86_64-all                 |
> 2016-10-04 18:20:44.846312-05 | 2017-04-17 12:57:36.859-05
>
>  passenger6-x86_64-prod           | passenger6-x86_64-prod           |
> 2016-04-22 14:35:45.395518-05 |
>
>  perl-5.16.x-nonprod              | perl-5.16.x-nonprod              |
> 2015-09-11 13:27:32.261063-05 |
>
>  perl-5.16.x-prod                 | perl-5.16.x-prod                 |
> 2015-09-11 13:26:40.584715-05 | 2015-09-11 13:29:38.537-05
>
>  other6-x86_64-nonprod            | other6-x86_64-nonprod            |
> 2015-07-23 15:00:03.733479-05 |
>
>  other6-x86_64-prod               | other6-x86_64-prod               |
> 2015-07-21 15:10:48.719528-05 |
>
>  epel5-x86_64-prod                | epel5-x86_64-prod                |
> 2016-10-04 18:25:38.655383-05 |
>
>  passenger6-x86_64-all            | passenger6-x86_64-all            |
> 2016-04-20 11:37:19.002493-05 | 2016-04-20 11:58:42.312-05
>
>  docker7-x86_64-prod              | docker7-x86_64-prod              |
> 2017-08-03 11:42:08.474496-05 |
>
>  centos5-x86_64-nonprod           | centos5-x86_64-nonprod           |
> 2015-06-22 16:16:17.372799-05 |
>
>  other7-x86_64-nonprod            | other7-x86_64-nonprod            |
> 2016-07-14 13:03:10.320136-05 |
>
>  mongo3.2-centos6-x86_64-all      | mongo3.2-centos6-x86_64-all      |
> 2016-08-22 12:21:40.722182-05 | 2018-07-01 12:27:03.019-05
>
>  centos5-x86_64-prod              | centos5-x86_64-prod              |
> 2015-06-22 16:20:41.474486-05 |
>
>  passenger6-x86_64-nonprod        | passenger6-x86_64-nonprod        |
> 2016-04-20 12:29:24.677227-05 |
>
>  other7-x86_64-prod               | other7-x86_64-prod               |
> 2016-07-14 13:03:47.284295-05 |
>
>  cloudera5.7-x86_64-nonprod       | cloudera5.7-x86_64-nonprod       |
> 2016-05-09 12:10:16.496626-05 | 2016-06-20 13:11:20.62-05
>
>  epel5-x86_64-nonprod             | epel5-x86_64-nonprod             |
> 2016-10-04 18:25:09.844486-05 |
>
>  epel6-x86_64-prod                | epel6-x86_64-prod                |
> 2016-03-18 11:52:45.9199-05   | 2016-08-23 05:07:37.967-05
>
>  spacewalk6-client-all            | spacewalk6-client-all            |
> 2017-05-02 20:53:38.867018-05 | 2018-07-01 22:02:11.386-05
>
>  docker7-x86_64-nonprod           | docker7-x86_64-nonprod           |
> 2017-04-07 15:13:44.158973-05 |
>
>  mongo3.2-centos6-x86_64-nonprod  | mongo3.2-centos6-x86_64-nonprod  |
> 2016-08-22 12:34:18.095059-05 |
>
>  mongo3.2-centos6-x86_64-prod     | mongo3.2-centos6-x86_64-prod     |
> 2016-08-22 12:42:19.161165-05 |
>
>  local6-x86_64-all                | local6-x86_64-all                |
> 2015-09-30 08:55:37.657412-05 | 2016-04-19 07:00:23.632-05
>
>  centos5-x86_64-all               | centos5-x86_64-all               |
> 2015-06-22 15:20:22.085465-05 | 2017-04-17 13:09:39.635-05
>
>  spacewalk5-client-nonprod        | spacewalk5-client-nonprod        |
> 2017-05-02 20:53:20.430795-05 |
>
>  spacewalk5-client-prod           | spacewalk5-client-prod           |
> 2017-05-02 20:53:28.980968-05 |
>
>  spacewalk5-client-all            | spacewalk5-client-all            |
> 2017-05-02 20:53:08.276664-05 | 2018-07-05 10:10:11.665-05
>
>  spacewalk7-client-prod           | spacewalk7-client-prod           |
> 2017-05-02 20:54:32.321635-05 | 2018-07-05 11:01:14.499-05
>
>  epel6-x86_64-nonprod             | epel6-x86_64-nonprod             |
> 2016-03-18 11:52:14.915108-05 | 2018-07-05 10:10:08.774-05
>
>  centos7-x86_64-prod              | centos7-x86_64-prod              |
> 2015-11-11 09:02:06.69758-06  |
>
>  puppetlabs6-x86_64-prod          | puppetlabs6-x86_64-prod          |
> 2016-04-22 13:46:22.233841-05 | 2018-07-01 13:30:47.635-05
>
>  puppetlabs5-x86_64-nonprod       | puppetlabs5-x86_64-nonprod       |
> 2018-03-26 15:21:59.007749-05 | 2018-07-01 13:00:03.401-05
>
>  puppetlabs5-x86_64-prod          | puppetlabs5-x86_64-prod          |
> 2018-03-26 15:24:23.86552-05  | 2018-07-01 13:30:39.025-05
>
>  puppetlabs5-x86_64-all           | puppetlabs5-x86_64-all           |
> 2018-03-26 15:19:04.647981-05 | 2018-07-01 13:31:25.065-05
>
>  other5-x86_64-all                | other5-x86_64-all                |
> 2015-08-10 14:16:01.092867-05 |
>
>  other5-x86_64-nonprod            | other5-x86_64-nonprod            |
> 2015-08-10 14:18:05.114541-05 |
>
>  other5-x86_64-prod               | other5-x86_64-prod               |
> 2015-08-10 14:19:03.728982-05 |
>
>  centos6-x86_64-nonprod           | centos6-x86_64-nonprod           |
> 2015-06-22 16:24:07.137207-05 |
>
>  centos6-x86_64-prod              | centos6-x86_64-prod              |
> 2015-06-22 16:28:51.324002-05 |
>
>  extras7-x86_64-all               | extras7-x86_64-all               |
> 2017-08-16 09:13:26.8122-05   | 2018-07-05 10:05:10.626-05
>
>  centos6-x86_64-gitlab-ce-nonprod | centos6-x86_64-gitlab-ce-nonprod |
> 2017-04-17 11:43:36.609036-05 | 2018-07-05 10:04:57.277-05
>
>  spacewalk7-server-all            | spacewalk7-server-all            |
> 2017-03-28 15:22:31.851414-05 | 2018-07-05 11:11:31.564-05
>
>  local5-x86_64-all                | local5-x86_64-all                |
> 2016-02-24 12:19:36.791459-06 |
>
>  local5-x86_64-nonprod            | local5-x86_64-nonprod            |
> 2016-02-24 12:20:19.404008-06 |
>
>  local5-x86_64-prod               | local5-x86_64-prod               |
> 2016-02-24 12:20:45.098532-06 |
>
>  local6-x86_64-nonprod            | local6-x86_64-nonprod            |
> 2016-08-22 20:49:56.7376-05   |
>
>  local7-x86_64-all                | local7-x86_64-all                |
> 2016-07-14 13:00:32.511851-05 |
>
>  local7-x86_64-nonprod            | local7-x86_64-nonprod            |
> 2016-07-14 13:02:06.932169-05 |
>
>  local7-x86_64-prod               | local7-x86_64-prod               |
> 2016-07-14 13:02:38.496912-05 |
>
>  puppetlabs6-x86_64-all           | puppetlabs6-x86_64-all           |
> 2016-04-20 08:27:56.026914-05 | 2018-07-01 13:30:36.771-05
>
>  spacewalk7-client-nonprod        | spacewalk7-client-nonprod        |
> 2017-05-02 20:54:22.659512-05 | 2018-07-05 11:10:25.009-05
>
>  docker7-x86_64-all               | docker7-x86_64-all               |
> 2017-03-22 12:50:15.332561-05 | 2018-07-05 13:00:02.988-05
>
>  spacewalk7-client-all            | spacewalk7-client-all            |
> 2017-05-02 20:54:13.5076-05   | 2018-07-05 10:04:59.748-05
>
>  local6-x86_64-prod               | local6-x86_64-prod               |
> 2015-09-30 08:59:12.679727-05 |
>
>  centos6-x86_64-gitlab-ee-nonprod | centos6-x86_64-gitlab-ee-nonprod |
> 2016-04-14 11:39:01.432444-05 | 2018-07-05 11:12:20.525-05
>
>  mysqltools6-x86_64-all           | mysqltools6-x86_64-all           |
> 2016-03-17 12:41:37.44854-05  | 2018-07-05 12:00:02.319-05
>
>  mysqltools6-x86_64-nonprod       | mysqltools6-x86_64-nonprod       |
> 2016-03-17 12:58:35.036373-05 |
>
>  mysqltools6-x86_64-prod          | mysqltools6-x86_64-prod          |
> 2016-03-17 12:59:10.969162-05 |
>
>  spacewalk7-server-nonprod        | spacewalk7-server-nonprod        |
> 2017-03-28 15:23:02.210349-05 | 2018-07-05 11:12:47.471-05
>
>  spacewalk7-server-prod           | spacewalk7-server-prod           |
> 2017-03-28 15:23:29.309042-05 | 2017-05-02 20:56:45.247-05
>
>  epel7-x86_64-prod                | epel7-x86_64-prod                |
> 2016-03-22 09:48:38.060213-05 | 2018-07-05 09:57:25.861-05
>
>  puppetlabs6-x86_64-nonprod       | puppetlabs6-x86_64-nonprod       |
> 2016-04-20 12:28:55.337125-05 | 2018-07-01 13:30:43.362-05
>
>  newrelic-noarch-nover            | newrelic-noarch-nover            |
> 2016-10-13 13:54:38.621333-05 | 2016-10-13 14:09:41.778-05
>
>  other7-x86_64-all                | other7-x86_64-all                |
> 2016-07-14 13:01:25.848215-05 | 2018-07-05 14:00:03.714-05
>
>  spacewalk6-client-nonprod        | spacewalk6-client-nonprod        |
> 2017-05-02 20:53:50.507298-05 |
>
>  spacewalk6-client-prod           | spacewalk6-client-prod           |
> 2017-05-02 20:54:00.685324-05 |
>
>  spacewalk6-server-all            | spacewalk6-server-all            |
> 2018-06-22 23:11:30.637054-05 | 2018-07-05 11:01:11.543-05
>
>  puppetlabs7-x86_64-prod          | puppetlabs7-x86_64-prod          |
> 2016-07-14 13:29:04.67033-05  | 2018-07-01 13:31:29.425-05
>
>  spacewalk6-server-nonprod        | spacewalk6-server-nonprod        |
> 2018-06-22 23:17:20.660409-05 |
>
>  spacewalk6-server-prod           | spacewalk6-server-prod           |
> 2018-06-22 23:18:02.738869-05 |
>
>  puppetlabs7-x86_64-nonprod       | puppetlabs7-x86_64-nonprod       |
> 2016-07-14 13:28:34.475051-05 | 2018-07-01 13:16:25.948-05
>
>  epel6-x86_64-all                 | epel6-x86_64-all                 |
> 2016-03-18 11:50:17.587171-05 | 2018-07-05 11:07:42.644-05
>
>  centos6-x86_64-gitlab-ee         | centos6-x86_64-gitlab-ee         |
> 2015-12-24 13:21:10.493684-06 | 2018-07-05 11:08:30.039-05
>
>  puppetlabs7-x86_64-all           | puppetlabs7-x86_64-all           |
> 2016-07-14 12:54:59.388232-05 | 2018-07-01 13:32:02.745-05
>
>  epel7-x86_64-nonprod             | epel7-x86_64-nonprod             |
> 2016-03-22 09:47:34.668867-05 | 2017-04-21 11:08:24.573-05
>
>  centos6-x86_64-all               | centos6-x86_64-all               |
> 2015-06-22 15:19:13.053429-05 | 2018-07-02 01:12:57.768-05
>
>  epel7-x86_64-all                 | epel7-x86_64-all                 |
> 2016-03-22 09:44:48.748142-05 | 2018-07-05 09:11:28.553-05
>
>  centos7-x86_64-nonprod           | centos7-x86_64-nonprod           |
> 2015-10-21 22:02:28.107902-05 |
>
> (85 rows)
>
> On Thu, Jul 5, 2018 at 11:48 AM Gerald Vogt <vogt at spamcop.net> wrote:
>
>> On 05.07.18 16:05, Matt Moldvan wrote:
>> > How is the server utilization with respect to disk I/O (something like
>> > iotop or htop might help here)?  Maybe there is something else blocking
>>
>> My server is basically idle. 99% idle, little disk i/o. It doesn't do
>> anything really.
>>
>> > and the server doesn't have enough resources to complete.  Have you
>> > tried running an strace against the running process?
>>
>> If it doesn't have enough resources, shouldn't there be an exception?
>>
>> For me, it looks more like something doesn't make it into the database
>> and thus into the persistent state. For instance, I now have the repodata
>> task at "RUNNING" for three days:
>>
>> Channel Repodata:       2018-07-02 08:13:10 CEST        RUNNING
>>
>> The log file shows this regarding repodata:
>>
>> > # fgrep -i repodata rhn_taskomatic_daemon.log
>> > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02 08:13:10,584
>> [Thread-12] INFO  com.redhat.rhn.taskomatic.TaskoQuartzHelper - Job
>> single-channel-repodata-bunch-0 scheduled succesfully.
>> > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02 08:13:10,636
>> [DefaultQuartzScheduler_Worker-8] INFO  com.redhat.rhn.taskomatic.TaskoJob
>> - single-channel-repodata-bunch-0: bunch channel-repodata-bunch STARTED
>> > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02 08:13:10,651
>> [DefaultQuartzScheduler_Worker-8] DEBUG com.redhat.rhn.taskomatic.TaskoJob
>> - single-channel-repodata-bunch-0: task channel-repodata started
>> > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02 08:13:10,793
>> [DefaultQuartzScheduler_Worker-8] INFO  com.redhat.rhn.taskomatic.task.ChannelRepodata
>> - In the queue: 4
>> > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02 08:13:11,102
>> [DefaultQuartzScheduler_Worker-8] DEBUG com.redhat.rhn.taskomatic.TaskoJob
>> - channel-repodata (single-channel-repodata-bunch-0) ... running
>> > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02 08:13:11,103
>> [DefaultQuartzScheduler_Worker-8] INFO  com.redhat.rhn.taskomatic.TaskoJob
>> - single-channel-repodata-bunch-0: bunch channel-repodata-bunch FINISHED
>>
>> So according to the logs, the repodata bunch has finished. According to
>> the web interface, it has not. Nothing has been updated in
>> /var/cache/rhn/repodata/ either. In addition, those four channels which
>> were still in the queue haven't been updated now either.
>>
>> Thanks,
>>
>> Gerald
>>
>>
>>
>> >
>> > I also had an (well, many) issue(s) with our Spacewalk server before
>> > disabling snapshots in /etc/rhn/rhn.conf.  I also increased the number
>> > of workers and max repodata work items:
>> >
>> > # system snapshots enabled
>> > enable_snapshots = 0
>> > ...
>> > taskomatic.maxmemory=6144
>> > taskomatic.errata_cache_max_work_items = 500
>> > taskomatic.channel_repodata_max_work_items = 50
>> > taskomatic.channel_repodata_workers = 5
>> >
>> >
>> >
>> > On Thu, Jul 5, 2018 at 4:38 AM Florence Savary
>> > <florence.savary.fs at gmail.com> wrote:
>> >
>> >     Hello,
>> >
>> >     Thanks for sharing your configuration files. They differ very little
>> >     from mine. I just changed the number of workers in rhn.conf, but it
>> >     didn't change anything.
>> >
>> >     I deleted all the channel clones not used by any system and dating
>> >     from before May 2018, in order to lower the number of channels
>> >     in the queue. There were 127 channels in the queue before this
>> >     deletion (indicated in /var/log/rhn/rhn_taskomatic_daemon.log), and
>> >     there are 361 of them now... I must admit I'm confused... I hoped
>> >     it would reduce the number of channels to process and thus "help"
>> >     taskomatic, but obviously I was wrong.
>> >
>> >     I also noticed that the repodata regeneration seems to work fine for
>> >     existing channels that are not clones, but it is not working for new
>> >     channels that are not clones (and not working for new clones either,
>> >     but nothing new there).
>> >
>> >     Has anyone got any other ideas (even the tiniest)?
>> >
>> >     Regards,
>> >     Florence
>> >
>> >
>> >     2018-07-04 15:21 GMT+02:00 Paul Dias - BCX <paul.dias at bcx.co.za>:
>> >
>> >         Hi,
>> >
>> >         Let me post the settings I have on my CentOS 6 server. I can't
>> >         remember exactly, but I may have one or two others; this is
>> >         from the top of my head.
>> >
>> >         /etc/rhn/rhn.conf
>> >
>> >         # Added by paul dias, increase number of taskomatic workers, 20180620
>> >         taskomatic.channel_repodata_workers = 3
>> >         taskomatic.java.maxmemory=4096
>> >
>> >         /etc/sysconfig/tomcat6
>> >
>> >         JAVA_OPTS="-ea -Xms256m -Xmx512m -Djava.awt.headless=true
>> >         -Dorg.xml.sax.driver=org.apache.xerces.parsers.SAXParser
>> >         -Dorg.apache.tomcat.util.http.Parameters.MAX_COUNT=1024
>> >         -XX:MaxNewSize=256 -XX:-UseConcMarkSweepGC
>> >         -Dnet.sf.ehcache.skipUpdateCheck=true
>> >         -Djavax.sql.DataSource.Factory=org.apache.commons.dbcp.BasicDataSourceFactory"
>> >
>> >         /etc/tomcat/server.xml
>> >
>> >         <!-- Define an AJP 1.3 Connector on port 8009 -->
>> >         <Connector port="8009" protocol="AJP/1.3"
>> >             redirectPort="8443" URIEncoding="UTF-8" address="127.0.0.1"
>> >             maxThreads="256" connectionTimeout="20000"/>
>> >
>> >         <Connector port="8009" protocol="AJP/1.3"
>> >             redirectPort="8443" URIEncoding="UTF-8" address="::1"
>> >             maxThreads="256" connectionTimeout="20000"/>
>> >
>> >         /usr/share/rhn/config-defaults/rhn_taskomatic_daemon.conf
>> >
>> >         # Initial Java Heap Size (in MB)
>> >         wrapper.java.initmemory=512
>> >
>> >         # Maximum Java Heap Size (in MB)
>> >         wrapper.java.maxmemory=1512
>> >         # Adjusted by paul 20180620
>> >
>> >         wrapper.ping.timeout=0
>> >         # adjusted by paul dias 20180620
>> >
>> >
>> >         Regards,
>> >
>> >         Paul Dias
>> >         Technical Consultant
>> >         6th Floor, 8 Boundary Road
>> >         Newlands
>> >         Cape Town
>> >         7700
>> >         T: +27 (0) 21 681 3149
>> >         BCX
>> >
>> >         This e-mail is subject to the BCX electronic communication legal
>> >         notice, available at: https://www.bcx.co.za/disclaimers
>> >
>> >         From: Paul Dias - BCX
>> >         Sent: 02 July 2018 06:53 PM
>> >         To: spacewalk-list at redhat.com
>> >         Subject: Re: [Spacewalk-list] Taskomatic runs indefinitely
>> >         without ever generating repodata
>> >
>> >         What I have noticed: if you use
>> >         "spacecmd softwarechannel_regenerateyumcache <channel name>" and
>> >         then go to tasks and run the single repodata bunch, you will see
>> >         it actually start and generate the channel cache for the channel
>> >         you used spacecmd on; this works every time.
>> >
>> >
>> >         But yes, the task logs just show the repodata bunch running
>> >         forever.
>> >
>> >         Regards,
>> >         Paul Dias
>> >         6th Floor, 8 Boundary Road
>> >         Newlands
>> >         Cape Town
>> >         7700
>> >         T: +27 (0) 21 681 3149
>> >         BCX
>> >
>> >         ------------------------------------------------------------------------
>> >
>> >         From: Gerald Vogt <vogt at spamcop.net>
>> >         Sent: Monday, 02 July 2018 9:45 AM
>> >         To: spacewalk-list at redhat.com
>> >         Subject: Re: [Spacewalk-list] Taskomatic runs indefinitely
>> >         without ever generating repodata
>> >
>> >         After letting the upgraded server sit for a while, it seems
>> >         only a few of the task schedules actually finish. By now, only
>> >         these tasks show up in the task engine status page:
>> >
>> >         Changelog Cleanup:       2018-07-01 23:00:00 CEST        FINISHED
>> >         Clean Log History:       2018-07-01 23:00:00 CEST        FINISHED
>> >         Compare Config Files:    2018-07-01 23:00:00 CEST        FINISHED
>> >         Daily Summary Mail:      2018-07-01 23:00:00 CEST        FINISHED
>> >         Daily Summary Queue:     2018-07-01 23:00:00 CEST        FINISHED
>> >
>> >         All the other tasks have disappeared from the list by now.
>> >
>> >         The repo-sync tasks seem to work; new packages appear in the
>> >         channel. However, the repo build is not running, or rather it
>> >         never seems to properly finish.
>> >
>> >         If I start it manually, it seems to do its work:
>> >
>> >         > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02
>> 08:13:10,584 [Thread-12] INFO  com.redhat.rhn.taskomatic.TaskoQuartzHelper
>> - Job single-channel-repodata-bunch-0 scheduled succesfully.
>> >         > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02
>> 08:13:10,636 [DefaultQuartzScheduler_Worker-8] INFO
>> com.redhat.rhn.taskomatic.TaskoJob - single-channel-repodata-bunch-0:
>> bunch channel-repodata-bunch STARTED
>> >         > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02
>> 08:13:10,651 [DefaultQuartzScheduler_Worker-8] DEBUG
>> com.redhat.rhn.taskomatic.TaskoJob - single-channel-repodata-bunch-0:
>> task channel-repodata started
>> >         > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02
>> 08:13:10,793 [DefaultQuartzScheduler_Worker-8] INFO
>> com.redhat.rhn.taskomatic.task.ChannelRepodata - In the queue: 4
>> >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02
>> 08:13:11,102 [DefaultQuartzScheduler_Worker-8] DEBUG
>> com.redhat.rhn.taskomatic.TaskoJob - channel-repodata
>> (single-channel-repodata-bunch-0) ... running
>> >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02
>> 08:13:11,103 [DefaultQuartzScheduler_Worker-8] INFO
>> com.redhat.rhn.taskomatic.TaskoJob - single-channel-repodata-bunch-0:
>> bunch channel-repodata-bunch FINISHED
>> >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02
>> 08:13:11,137 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - File Modified Date:2018-06-23 03:48:50 CEST
>> >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02
>> 08:13:11,137 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - Channel Modified Date:2018-07-02 03:45:39 CEST
>> >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02
>> 08:13:11,211 [Thread-678] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - File Modified Date:2018-06-23 04:09:51 CEST
>> >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02
>> 08:13:11,213 [Thread-678] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - Channel Modified Date:2018-07-02 03:47:55 CEST
>> >         > INFO   | jvm 1    | 2018/07/02 08:13:19 | 2018-07-02
>> 08:13:19,062 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - Generating new repository metadata for channel
>> 'epel6-centos6-x86_64'(sha1) 14401 packages, 11613 errata
>> >         > INFO   | jvm 1    | 2018/07/02 08:13:21 | 2018-07-02
>> 08:13:21,193 [Thread-678] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - Generating new repository metadata for channel
>> 'epel7-centos7-x86_64'(sha1) 16282 packages, 10176 errata
>> >         > INFO   | jvm 1    | 2018/07/02 08:40:12 | 2018-07-02
>> 08:40:12,351 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - Repository metadata generation for 'epel6-centos6-x86_64' finished in
>> 1613 seconds
>> >         > INFO   | jvm 1    | 2018/07/02 08:40:12 | 2018-07-02
>> 08:40:12,457 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - File Modified Date:2018-06-19 06:28:57 CEST
>> >         > INFO   | jvm 1    | 2018/07/02 08:40:12 | 2018-07-02
>> 08:40:12,457 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - Channel Modified Date:2018-07-02 04:30:05 CEST
>> >         > INFO   | jvm 1    | 2018/07/02 08:40:12 | 2018-07-02
>> 08:40:12,691 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - Generating new repository metadata for channel
>> 'postgresql96-centos7-x86_64'(sha256) 1032 packages, 0 errata
>> >         > INFO   | jvm 1    | 2018/07/02 08:41:51 | 2018-07-02
>> 08:41:51,710 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - Repository metadata generation for 'postgresql96-centos7-x86_64' finished
>> in 98 seconds
>> >         > INFO   | jvm 1    | 2018/07/02 08:41:51 | 2018-07-02
>> 08:41:51,803 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - File Modified Date:2018-06-20 05:08:38 CEST
>> >         > INFO   | jvm 1    | 2018/07/02 08:41:51 | 2018-07-02
>> 08:41:51,803 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - Channel Modified Date:2018-07-02 04:00:00 CEST
>> >         > INFO   | jvm 1    | 2018/07/02 08:41:51 | 2018-07-02
>> 08:41:51,923 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - Generating new repository metadata for channel
>> 'postgresql10-centos6-x86_64'(sha512) 436 packages, 0 errata
>> >         > INFO   | jvm 1    | 2018/07/02 08:42:26 | 2018-07-02
>> 08:42:26,479 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - Repository metadata generation for 'postgresql10-centos6-x86_64' finished
>> in 34 seconds
>> >         > INFO   | jvm 1    | 2018/07/02 08:45:01 | 2018-07-02
>> 08:45:01,697 [Thread-678] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>> - Repository metadata generation for 'epel7-centos7-x86_64' finished in
>> 1900 seconds
>> >
>> >         Yet the task remains in RUNNING. And for whatever reason it
>> >         only seems to work for some channels. I find a total of 20
>> >         repos syncing in the logs of the updated server, compared to
>> >         42 in the logs of the old one. I don't really see the
>> >         difference between the 20 repos that sync and the other 22
>> >         that don't. First I suspected channels with custom quartz
>> >         schedules, but then I found channels in both groups.
>> >
>> >         So I don't know how to troubleshoot this any further. The
>> >         repodata task which I started 1.5 hours ago is still at
>> >         "RUNNING". The channels for which the sync works have been
>> >         updated. I don't know why it is still running. Server load is
>> >         back down...
>> >
>> >         Thanks,
>> >
>> >         Gerald
>> >
>> >         On 22.06.18 19:12, Gerald Vogt wrote:
>> >         > I have the same problem after upgrading from 2.6 to 2.8 on
>> >         > CentOS 6.9. I have even increased the memory as suggested by
>> >         > that link, but it makes no difference. None of the scheduled
>> >         > tasks are running. I can run a bunch manually, but the
>> >         > scheduler doesn't seem to work. Last execution times on the
>> >         > task engine status page are still at timestamps from before
>> >         > the upgrade. -Gerald
>> >         >
>> >         >
>> >         >
>> >         > On 22.06.18 14:15, Avi Miller wrote:
>> >         >> Hi,
>> >         >>
>> >         >>> On 22 Jun 2018, at 5:51 pm, Florence Savary
>> >         >>> <florence.savary.fs at gmail.com> wrote:
>> >         >>>
>> >         >>> When using taskotop, we can see a line for the
>> >         >>> channel-repodata task, we see it is running, but there is
>> >         >>> never any channel displayed in the Channel column. We can
>> >         >>> also see the task marked as running in the Admin tab of the
>> >         >>> WebUI, but if we let it, it never stops. The task runs
>> >         >>> indefinitely, without ever doing anything.
>> >         >>
>> >         >> If you've never modified the default memory settings,
>> >         >> Taskomatic is probably running out of memory and the task is
>> >         >> crashing. This is a known issue, particularly when you sync
>> >         >> large repos.
>> >         >>
>> >         >> I would suggest increasing the memory assigned to Taskomatic
>> >         >> to see if that resolves the issue. You will need to restart
>> >         >> it after making these changes:
>> >         >> https://docs.oracle.com/cd/E92593_01/E90695/html/swk24-issues-memory.html
>> >         >>
>> >         >> Cheers,
>> >         >> Avi
>> >         >>
>> >         >> --
>> >         >> Oracle
>> >         >> Avi Miller | Product Management Director | +61 (3) 8616 3496
>> >         >> Oracle Linux and Virtualization
>> >         >> 417 St Kilda Road, Melbourne, Victoria 3004 Australia
>> >         >>
>> >         >>
>> >         >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>>
>>
>
> _______________________________________________
> Spacewalk-list mailing list
> Spacewalk-list at redhat.com
> https://www.redhat.com/mailman/listinfo/spacewalk-list
>

