Hello all,

For your information, it seems we have found the solution here. Gerald, could you check the table called rhnRepoRegenQueue in your database? We found that something kept filling this table with duplicate rows: we had 45,000,000 rows in it, and it was growing endlessly. We think that something went wrong one day during repodata regeneration, and the systems registered on the Spacewalk server could then no longer access the repodata files; each time a system tried to access the repodata, one more row was added to the table, filling it endlessly (we also have an rhn-check process running every five minutes on the systems, which probably didn't help).

If that is also your case, Gerald, you should be able to delete all rows first to clean the table a bit, and right after that modify every row to force the regeneration with the SQL command: update rhnRepoRegenQueue set force = 'Y';
Like us, you may have to run this command several times to catch new rows as they appear, but by forcing the ones already present, the regeneration process should run normally, enabling the systems to access the repodata and making them stop filling the table.

Bit by bit, we managed to get the repodata bunch to complete, and eventually the table rhnRepoRegenQueue was empty. That being said, we still don't know the root cause of this behavior and what led to such a huge table in the db. I will monitor taskomatic carefully over the next days and see what happens... We may face the same problem sooner or later and have to clean the table and force the regeneration again...

Not sure if I made myself clear.

Thanks to all for your help anyway.

Regards,
Florence
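
P.S. For anyone who wants to reproduce the cleanup, the whole session looked roughly like this (a sketch only; run it through spacewalk-sql as in the examples further down this thread, and note that the exact schema may differ between Spacewalk versions):

    -- how big has the queue grown?
    select count(*) from rhnRepoRegenQueue;
    -- drop the duplicate backlog
    delete from rhnRepoRegenQueue;
    -- force regeneration for the rows that reappear (repeat as needed)
    update rhnRepoRegenQueue set force = 'Y';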

2018-07-19 13:17 GMT+02:00 Brian Long <briandlong@gmail.com>:

Gerald,

Months ago I ran into a similar issue on our SW 2.6 server, and I found that, because of a full-filesystem issue, our database had become corrupt. When SW 2.7 came out, I set up a fresh Spacewalk VM, fresh channels, etc., and made sure everything was working. A coworker and I migrated all our clients to the SW 2.7 server, and everything has been good so far. SW 2.8 was released right after we performed the migration, but I'm wary of performing an upgrade given all the emails I've seen about issues with SW 2.8. When the time comes, I'll probably shut down all Spacewalk and Postgres services, snapshot my VM, perform the upgrade, and do a bunch of testing with the non-prod clients before I remove the snapshot. :)

/Brian/

On Wed, Jul 18, 2018 at 9:19 AM, Gerald Vogt <vogt@spamcop.net> wrote:

On 14.07.18 16:07, Paul Dias - BCX wrote:
> Hi,
>
> you can put tomcat in debug mode and, if I remember correctly, run it without calling the service (which forks it into the background) so that it displays in the console; that way you can see what is happening when
I don't think taskomatic is a Tomcat process. I can run taskomatic on the console, but that only prints all the log lines on stdout and nothing further.

> you run a job. Also, in /usr/share/rhn/config-defaults/rhn_taskomatic there are options to increase the level of logging, from what I can see. I can't remember clearly, but there was a post about troubleshooting that I went through a couple of weeks ago; I can't remember the address, to be honest!

Even with DEBUG log level it doesn't show anything beyond the fact that it starts the channel-repodata task but never finishes... I have no idea what it is actually doing there...

I am stumped. Currently, we are unable to update our spacewalk server...
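
A generic next step (just a sketch, not something we have tried yet) would be a thread dump of the taskomatic JVM to see where the channel-repodata worker is blocked, assuming a JDK with jstack is available on the box:

    # taskomatic runs under the Java Service Wrapper; find its JVM pid
    pgrep -f taskomatic
    # dump all thread stacks; alternatively "kill -3 <pid>" writes the
    # dump into the wrapper log
    jstack <pid> > /tmp/taskomatic-threads.txt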

Thanks,

Gerald

> Regards,
> *Paul Dias*
> 6th Floor, 8 Boundary Road
> Newlands
> Cape Town
> 7700
> T: +27 (0) 21 681 3149
>
> *Meet your future today.*
> BCX

------------------------------------------------------------------------
*From:* Gerald Vogt <vogt@spamcop.net>
*Sent:* Friday, 13 July 2018 8:56 AM
*To:* spacewalk-list@redhat.com
*Subject:* Re: [Spacewalk-list] Taskomatic runs indefinitely without ever generating repodata

Does anyone have any idea how to troubleshoot this? Any debug logging we could enable to find out what's really going on and where it's hanging?

Thanks,

Gerald

On 06.07.18 09:20, Gerald Vogt wrote:
On 05.07.18 21:30, Matt Moldvan wrote:
> Is there anything interesting in /var/log/rhn/tasko/sat/channel-repodata-bunch?  Do you have any hung

There is currently only a single file with this content:

spacewalk:channel-repodata-bunch(996)# ls -l
total 4
-rw-r--r--. 1 root root 130 Jul  2 08:13 channel-repodata_15408487_out
spacewalk:channel-repodata-bunch(997)# cat channel-repodata_15408487_out
2018-07-02 08:13:10,793 [DefaultQuartzScheduler_Worker-8] INFO com.redhat.rhn.taskomatic.task.ChannelRepodata - In the queue: 4

> reposync processes?  Any lingering Postgres locks that might be an issue?

No reposync processes. All postgres processes say "idle", so I guess there are no locks. Or how do I check for lingering locks?
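
For reference, one standard way to look for ungranted locks in Postgres is a query along these lines (column names differ slightly before 9.2, where the activity view uses procpid and current_query instead of pid and query):

    select l.pid, l.mode, l.granted, a.query
      from pg_locks l
      join pg_stat_activity a on a.pid = l.pid
     where not l.granted;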

> It's odd that the run would only take 1 second, unless something is wrong with the database or its data...
>
> What do you see from a spacewalk-sql command like below?

I see all the channels:

                label                |                      name                      |           modified            |        last_synced
-------------------------------------+------------------------------------------------+-------------------------------+----------------------------
   icinga-epel7-x86_64               | ICINGA stable release for epel-7 (x86_64)      | 2016-02-15 10:07:59.822942+01 | 2018-07-06 02:30:55.482+02
   epel7-centos7-x86_64              | EPEL 7 for CentOS 7 (x86_64)                   | 2014-07-21 08:16:26.367135+02 | 2018-07-06 04:01:52.148+02
   centos6-x86_64-extras             | CentOS 6 Extras (x86_64)                       | 2012-08-23 06:46:05.145629+02 | 2018-06-21 10:25:26.104+02
   grafana-epe7-x86_64               | Grafana stable release for epel-7 (x86_64)     | 2016-05-06 08:29:49.308149+02 | 2018-06-21 04:58:15.022+02
   spacewalk26-client-centos6-x86_64 | Spacewalk Client 2.6 for CentOS 6 (x86_64)     | 2017-04-25 13:44:49.266738+02 | 2018-06-21 10:41:07.369+02
   globus-el6-x86_64                 | Globus Toolkit 6 (el6)                         | 2016-05-13 15:23:31.807011+02 | 2018-07-06 03:34:49.95+02
   internet2                         | perfSONAR RPM Repository                       | 2017-06-27 06:56:33.675378+02 | 2018-06-22 10:24:41.702+02
   postgresql94-centos6-x86_64       | PostgreSQL 9.4 for CentOS 6 (x86_64)           | 2015-01-28 14:09:41.856451+01 | 2018-06-21 10:42:01.413+02
   spacewalk26-server-centos6-x86_64 | Spacewalk Server 2.6 for CentOS 6 (x86_64)     | 2017-04-25 13:39:38.250769+02 | 2018-06-21 10:36:17.46+02
   centos7-x86_64-fasttrack          | CentOS 7 FastTrack (x86_64)                    | 2014-07-21 08:16:26.017642+02 | 2018-06-21 10:26:29.571+02
   spacewalk26-client-centos7-x86_64 | Spacewalk Client 2.6 for CentOS 7 (x86_64)     | 2017-04-25 13:46:00.107344+02 | 2018-06-22 10:22:28.484+02
   centos7-x86_64-centosplus         | CentOS 7 Plus (x86_64)                         | 2014-07-21 08:16:25.467309+02 | 2018-06-21 10:25:19.884+02
   centos6-x86_64-centosplus         | CentOS 6 Plus (x86_64)                         | 2012-08-23 07:18:00.349338+02 | 2018-06-21 10:36:04.08+02
   docker-ce-centos7-x86_64          | Docker CE Stable for CentOS 7 (x86_64)         | 2017-09-28 12:52:45.858354+02 | 2018-07-06 04:30:05.442+02
   postgresql10-centos7-x86_64       | PostgreSQL 10 for CentOS 7 (x86_64)            | 2018-02-12 14:48:14.617235+01 | 2018-02-12 15:06:16.464+01
   bareos162-centos7-x86_64          | Bareos 16.2 for CentOS 7 (x86_64)              | 2017-09-26 14:37:16.533773+02 | 2018-06-21 04:59:21.954+02
   docker-ce-edge-centos7-x86_64     | Docker CE Edge for CentOS 7 (x86_64)           | 2017-12-29 09:58:14.581069+01 | 2018-06-21 04:59:39.796+02
   beegfs6-centos7-x86_64            | BeeGFS 6 for CentOS 7 (x86_64)                 | 2018-03-19 14:08:08.389588+01 | 2018-06-21 04:59:43.132+02
   icinga-epel6-x86_64               | ICINGA stable release for epel-6 (x86_64)      | 2018-01-15 15:41:31.138875+01 | 2018-07-06 02:30:28.142+02
   openstack-pike-centos7            | OpenStack Pike for CentOS 7                    | 2017-10-05 09:10:22.575224+02 | 2018-06-21 05:36:35.43+02
   globus-el7-x86_64                 | Globus Toolkit 6 (el7)                         | 2017-09-28 13:00:07.32028+02  | 2018-07-06 03:31:22.806+02
   postgresql10-centos6-x86_64       | PostgreSQL 10 for CentOS 6 (x86_64)            | 2018-02-12 14:48:55.970013+01 | 2018-07-06 04:02:04.03+02
   ceph-jewel-centos7                | CentOS 7 Ceph Jewel (x86_64)                   | 2018-02-12 12:15:28.8976+01   | 2018-07-06 05:30:07.085+02
   spacewalk28-server-centos6-x86_64 | Spacewalk Server 2.8 for CentOS 6 (x86_64)     | 2018-06-22 18:05:55.190988+02 | 2018-07-06 06:18:15.016+02
   spacewalk28-client-centos7-x86_64 | Spacewalk Client 2.8 for CentOS 7 (x86_64)     | 2018-06-22 18:05:55.575963+02 | 2018-07-06 06:18:22.41+02
   puppet5-el7-x86_64                | Puppet 5 for EL 7 (x86_64)                     | 2018-03-28 14:20:52.254978+02 | 2018-06-21 06:01:31.357+02
   centos7-qemu-ev                   | CentOS 7 QEMU EV (x86_64)                      | 2018-02-12 12:15:06.116673+01 | 2018-07-06 05:30:12.078+02
   bareos172-centos7-x86_64          | Bareos 17.2 for CentOS 7 (x86_64)              | 2018-05-08 14:18:56.708206+02 | 2018-06-22 10:24:48.431+02
   openstack-queens-centos7          | OpenStack Queens for CentOS 7                  | 2018-03-28 13:08:27.607498+02 | 2018-07-06 05:30:44.123+02
   elrepo-centos7                    | ELRepo for CentOS 7                            | 2017-09-18 12:03:42.302442+02 | 2018-06-21 05:01:30.303+02
   spacewalk28-client-centos6-x86_64 | Spacewalk Client 2.8 for CentOS 6 (x86_64)     | 2018-06-22 18:05:53.475193+02 | 2018-07-06 06:18:07.158+02
   centos7-x86_64-extras             | CentOS 7 Extras (x86_64)                       | 2014-07-21 08:16:25.841121+02 | 2018-06-21 10:26:27.879+02
   internet2-web100_kernel           | perfSONAR Web100 Kernel RPM Repository         | 2017-06-27 06:57:03.825602+02 | 2018-06-21 10:24:50.96+02
   centos6-x86_64-updates            | CentOS 6 Updates (x86_64)                      | 2012-08-23 06:46:05.264195+02 | 2018-06-21 10:34:24.866+02
   centos7-x86_64-updates            | CentOS 7 Updates (x86_64)                      | 2014-07-21 08:16:26.196397+02 | 2018-07-02 09:48:02.273+02
   centos6-x86_64-fasttrack          | CentOS 6 FastTrack (x86_64)                    | 2012-08-23 06:46:05.205228+02 | 2018-06-22 10:24:43.51+02
   postgresql92-centos6-x86_64       | PostgreSQL 9.2 for CentOS 6 (x86_64)           | 2012-09-12 08:15:27.194188+02 | 2018-07-06 03:47:12.311+02
   epel6-centos6-x86_64              | EPEL 6 for CentOS 6 (x86_64)                   | 2012-08-23 06:46:30.597753+02 | 2018-07-06 03:55:48.834+02
   jpackage5.0-generic               | JPackage 5.0 for generic                       | 2014-07-02 10:32:24.985979+02 | 2018-07-06 03:46:46.084+02
   hp-spp-rhel-7                     | HP Software Delivery Repository for SPP RHEL 7 | 2015-04-16 14:18:33.041249+02 | 2018-07-06 05:31:01.633+02
   owncloud-centos7-noarch           | ownCloud for CentOS 7                          | 2015-01-28 13:53:41.415573+01 | 2018-06-21 05:36:40.901+02
   centos7-x86_64-scl                | CentOS 7 SCL (x86_64)                          | 2016-04-15 11:26:29.042925+02 | 2018-06-21 05:14:49.359+02
   postgresql96-centos6-x86_64       | PostgreSQL 9.6 for CentOS 6 (x86_64)           | 2017-02-09 15:31:54.632728+01 | 2018-06-21 05:00:39.353+02
   postgresql96-centos7-x86_64       | PostgreSQL 9.6 for CentOS 7 (x86_64)           | 2017-02-09 15:35:13.136001+01 | 2018-07-06 04:30:19.645+02
   centos6-x86_64                    | CentOS 6 (x86_64)                              | 2012-08-23 06:46:04.610089+02 | 2018-07-05 22:03:30.089+02
   centos7-x86_64                    | CentOS 7 (x86_64)                              | 2014-07-21 08:16:24.172395+02 | 2018-07-05 22:08:45.242+02

-Gerald

> echo 'select label,name,modified,last_synced from rhnchannel' | sudo spacewalk-sql -i
>
> label | name | modified | last_synced
> ----------------------------------+----------------------------------+-------------------------------+----------------------------
> ovirt-x86_64-stable-6-nonprod| ovirt-x86_64-stable-6-nonprod| 2015-09-14 13:46:44.147134-05 |
> extras7-x86_64-nonprod | extras7-x86_64-nonprod | 2017-11-06 10:26:30.011283-06 |
> centos7-x86_64-all | centos7-x86_64-all | 2015-11-11 08:50:58.831234-06 | 2018-07-05 11:01:08.857-05
> perl-5.16.x-all| perl-5.16.x-all| 2015-09-11 13:25:15.002198-05 | 2015-09-11 13:29:21.361-05
> ovirt-x86_64-stable-6| ovirt-x86_64-stable-6| 2015-09-14 13:30:55.172-05|
> ovirt-x86_64-stable-6-prod | ovirt-x86_64-stable-6-prod | 2015-09-14 13:48:06.637063-05 |
> other6-x86_64-all| other6-x86_64-all| 2015-07-28 09:20:38.156104-05 |
> epel5-x86_64-all | epel5-x86_64-all | 2016-10-04 18:20:44.846312-05 | 2017-04-17 12:57:36.859-05
> passenger6-x86_64-prod | passenger6-x86_64-prod | 2016-04-22 14:35:45.395518-05 |
> perl-5.16.x-nonprod| perl-5.16.x-nonprod| 2015-09-11 13:27:32.261063-05 |
> perl-5.16.x-prod | perl-5.16.x-prod | 2015-09-11 13:26:40.584715-05 | 2015-09-11 13:29:38.537-05
> other6-x86_64-nonprod| other6-x86_64-nonprod| 2015-07-23 15:00:03.733479-05 |
> other6-x86_64-prod | other6-x86_64-prod | 2015-07-21 15:10:48.719528-05 |
> epel5-x86_64-prod| epel5-x86_64-prod| 2016-10-04 18:25:38.655383-05 |
> passenger6-x86_64-all| passenger6-x86_64-all| 2016-04-20 11:37:19.002493-05 | 2016-04-20 11:58:42.312-05
> docker7-x86_64-prod| docker7-x86_64-prod| 2017-08-03 11:42:08.474496-05 |
> centos5-x86_64-nonprod | centos5-x86_64-nonprod | 2015-06-22 16:16:17.372799-05 |
> other7-x86_64-nonprod| other7-x86_64-nonprod| 2016-07-14 13:03:10.320136-05 |
> mongo3.2-centos6-x86_64-all| mongo3.2-centos6-x86_64-all| 2016-08-22 12:21:40.722182-05 | 2018-07-01 12:27:03.019-05
> centos5-x86_64-prod| centos5-x86_64-prod| 2015-06-22 16:20:41.474486-05 |
> passenger6-x86_64-nonprod| passenger6-x86_64-nonprod| 2016-04-20 12:29:24.677227-05 |
> other7-x86_64-prod | other7-x86_64-prod | 2016-07-14 13:03:47.284295-05 |
> cloudera5.7-x86_64-nonprod | cloudera5.7-x86_64-nonprod | 2016-05-09 12:10:16.496626-05 | 2016-06-20 13:11:20.62-05
> epel5-x86_64-nonprod | epel5-x86_64-nonprod | 2016-10-04 18:25:09.844486-05 |
> epel6-x86_64-prod| epel6-x86_64-prod| 2016-03-18 11:52:45.9199-05 | 2016-08-23 05:07:37.967-05
> spacewalk6-client-all| spacewalk6-client-all| 2017-05-02 20:53:38.867018-05 | 2018-07-01 22:02:11.386-05
> docker7-x86_64-nonprod | docker7-x86_64-nonprod | 2017-04-07 15:13:44.158973-05 |
> mongo3.2-centos6-x86_64-nonprod| mongo3.2-centos6-x86_64-nonprod| 2016-08-22 12:34:18.095059-05 |
> mongo3.2-centos6-x86_64-prod | mongo3.2-centos6-x86_64-prod | 2016-08-22 12:42:19.161165-05 |
> local6-x86_64-all| local6-x86_64-all| 2015-09-30 08:55:37.657412-05 | 2016-04-19 07:00:23.632-05
> centos5-x86_64-all | centos5-x86_64-all | 2015-06-22 15:20:22.085465-05 | 2017-04-17 13:09:39.635-05
> spacewalk5-client-nonprod| spacewalk5-client-nonprod| 2017-05-02 20:53:20.430795-05 |
> spacewalk5-client-prod | spacewalk5-client-prod | 2017-05-02 20:53:28.980968-05 |
> spacewalk5-client-all| spacewalk5-client-all| 2017-05-02 20:53:08.276664-05 | 2018-07-05 10:10:11.665-05
> spacewalk7-client-prod | spacewalk7-client-prod | 2017-05-02 20:54:32.321635-05 | 2018-07-05 11:01:14.499-05
> epel6-x86_64-nonprod | epel6-x86_64-nonprod | 2016-03-18 11:52:14.915108-05 | 2018-07-05 10:10:08.774-05
> centos7-x86_64-prod| centos7-x86_64-prod| 2015-11-11 09:02:06.69758-06|
> puppetlabs6-x86_64-prod| puppetlabs6-x86_64-prod| 2016-04-22 13:46:22.233841-05 | 2018-07-01 13:30:47.635-05
> puppetlabs5-x86_64-nonprod | puppetlabs5-x86_64-nonprod | 2018-03-26 15:21:59.007749-05 | 2018-07-01 13:00:03.401-05
> puppetlabs5-x86_64-prod| puppetlabs5-x86_64-prod| 2018-03-26 15:24:23.86552-05| 2018-07-01 13:30:39.025-05
> puppetlabs5-x86_64-all | puppetlabs5-x86_64-all | 2018-03-26 15:19:04.647981-05 | 2018-07-01 13:31:25.065-05
> other5-x86_64-all| other5-x86_64-all| 2015-08-10 14:16:01.092867-05 |
> other5-x86_64-nonprod| other5-x86_64-nonprod| 2015-08-10 14:18:05.114541-05 |
> other5-x86_64-prod | other5-x86_64-prod | 2015-08-10 14:19:03.728982-05 |
> centos6-x86_64-nonprod | centos6-x86_64-nonprod | 2015-06-22 16:24:07.137207-05 |
> centos6-x86_64-prod| centos6-x86_64-prod| 2015-06-22 16:28:51.324002-05 |
> extras7-x86_64-all | extras7-x86_64-all | 2017-08-16 09:13:26.8122-05 | 2018-07-05 10:05:10.626-05
> centos6-x86_64-gitlab-ce-nonprod | centos6-x86_64-gitlab-ce-nonprod | 2017-04-17 11:43:36.609036-05 | 2018-07-05 10:04:57.277-05
> spacewalk7-server-all| spacewalk7-server-all| 2017-03-28 15:22:31.851414-05 | 2018-07-05 11:11:31.564-05
> local5-x86_64-all| local5-x86_64-all| 2016-02-24 12:19:36.791459-06 |
> local5-x86_64-nonprod| local5-x86_64-nonprod| 2016-02-24 12:20:19.404008-06 |
> local5-x86_64-prod | local5-x86_64-prod | 2016-02-24 12:20:45.098532-06 |
> local6-x86_64-nonprod| local6-x86_64-nonprod| 2016-08-22 20:49:56.7376-05 |
> local7-x86_64-all| local7-x86_64-all| 2016-07-14 13:00:32.511851-05 |
> local7-x86_64-nonprod| local7-x86_64-nonprod| 2016-07-14 13:02:06.932169-05 |
> local7-x86_64-prod | local7-x86_64-prod | 2016-07-14 13:02:38.496912-05 |
> puppetlabs6-x86_64-all | puppetlabs6-x86_64-all | 2016-04-20 08:27:56.026914-05 | 2018-07-01 13:30:36.771-05
> spacewalk7-client-nonprod| spacewalk7-client-nonprod| 2017-05-02 20:54:22.659512-05 | 2018-07-05 11:10:25.009-05
> docker7-x86_64-all | docker7-x86_64-all | 2017-03-22 12:50:15.332561-05 | 2018-07-05 13:00:02.988-05
> spacewalk7-client-all| spacewalk7-client-all| 2017-05-02 20:54:13.5076-05 | 2018-07-05 10:04:59.748-05
> local6-x86_64-prod | local6-x86_64-prod | 2015-09-30 08:59:12.679727-05 |
> centos6-x86_64-gitlab-ee-nonprod | centos6-x86_64-gitlab-ee-nonprod | 2016-04-14 11:39:01.432444-05 | 2018-07-05 11:12:20.525-05
> mysqltools6-x86_64-all | mysqltools6-x86_64-all | 2016-03-17 12:41:37.44854-05| 2018-07-05 12:00:02.319-05
> mysqltools6-x86_64-nonprod | mysqltools6-x86_64-nonprod | 2016-03-17 12:58:35.036373-05 |
> mysqltools6-x86_64-prod| mysqltools6-x86_64-prod| 2016-03-17 12:59:10.969162-05 |
> spacewalk7-server-nonprod| spacewalk7-server-nonprod| 2017-03-28 15:23:02.210349-05 | 2018-07-05 11:12:47.471-05
> spacewalk7-server-prod | spacewalk7-server-prod | 2017-03-28 15:23:29.309042-05 | 2017-05-02 20:56:45.247-05
> epel7-x86_64-prod| epel7-x86_64-prod| 2016-03-22 09:48:38.060213-05 | 2018-07-05 09:57:25.861-05
> puppetlabs6-x86_64-nonprod | puppetlabs6-x86_64-nonprod | 2016-04-20 12:28:55.337125-05 | 2018-07-01 13:30:43.362-05
> newrelic-noarch-nover| newrelic-noarch-nover| 2016-10-13 13:54:38.621333-05 | 2016-10-13 14:09:41.778-05
> other7-x86_64-all| other7-x86_64-all| 2016-07-14 13:01:25.848215-05 | 2018-07-05 14:00:03.714-05
> spacewalk6-client-nonprod| spacewalk6-client-nonprod| 2017-05-02 20:53:50.507298-05 |
> spacewalk6-client-prod | spacewalk6-client-prod | 2017-05-02 20:54:00.685324-05 |
> spacewalk6-server-all| spacewalk6-server-all| 2018-06-22 23:11:30.637054-05 | 2018-07-05 11:01:11.543-05
> puppetlabs7-x86_64-prod| puppetlabs7-x86_64-prod| 2016-07-14 13:29:04.67033-05| 2018-07-01 13:31:29.425-05
> spacewalk6-server-nonprod| spacewalk6-server-nonprod| 2018-06-22 23:17:20.660409-05 |
> spacewalk6-server-prod | spacewalk6-server-prod | 2018-06-22 23:18:02.738869-05 |
> puppetlabs7-x86_64-nonprod | puppetlabs7-x86_64-nonprod | 2016-07-14 13:28:34.475051-05 | 2018-07-01 13:16:25.948-05
> epel6-x86_64-all | epel6-x86_64-all | 2016-03-18 11:50:17.587171-05 | 2018-07-05 11:07:42.644-05
> centos6-x86_64-gitlab-ee | centos6-x86_64-gitlab-ee | 2015-12-24 13:21:10.493684-06 | 2018-07-05 11:08:30.039-05
> puppetlabs7-x86_64-all | puppetlabs7-x86_64-all | 2016-07-14 12:54:59.388232-05 | 2018-07-01 13:32:02.745-05
> epel7-x86_64-nonprod | epel7-x86_64-nonprod | 2016-03-22 09:47:34.668867-05 | 2017-04-21 11:08:24.573-05
> centos6-x86_64-all | centos6-x86_64-all | 2015-06-22 15:19:13.053429-05 | 2018-07-02 01:12:57.768-05
> epel7-x86_64-all | epel7-x86_64-all | 2016-03-22 09:44:48.748142-05 | 2018-07-05 09:11:28.553-05
> centos7-x86_64-nonprod | centos7-x86_64-nonprod | 2015-10-21 22:02:28.107902-05 |
> (85 rows)

On Thu, Jul 5, 2018 at 11:48 AM Gerald Vogt <vogt@spamcop.net> wrote:

    On 05.07.18 16:05, Matt Moldvan wrote:
     > How is the server utilization with respect to disk I/O (something like iotop or htop might help here)?  Maybe there is something else blocking

    My server is basically idle: 99% idle, little disk I/O. It doesn't really do anything.

     > and the server doesn't have enough resources to complete.  Have you tried running an strace against the running process?

    If it doesn't have enough resources, shouldn't there be an exception?

    For me, it looks more like something doesn't make it into the database and thus into the persistent state. For instance, I have now had the repodata task at "RUNNING" for three days:

    Channel Repodata:       2018-07-02 08:13:10 CEST        RUNNING

    The log file shows this regarding repodata:

     > # fgrep -i repodata rhn_taskomatic_daemon.log
     > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02 08:13:10,584 [Thread-12] INFO  com.redhat.rhn.taskomatic.TaskoQuartzHelper - Job single-channel-repodata-bunch-0 scheduled succesfully.
     > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02 08:13:10,636 [DefaultQuartzScheduler_Worker-8] INFO  com.redhat.rhn.taskomatic.TaskoJob - single-channel-repodata-bunch-0: bunch channel-repodata-bunch STARTED
     > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02 08:13:10,651 [DefaultQuartzScheduler_Worker-8] DEBUG com.redhat.rhn.taskomatic.TaskoJob - single-channel-repodata-bunch-0: task channel-repodata started
     > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02 08:13:10,793 [DefaultQuartzScheduler_Worker-8] INFO  com.redhat.rhn.taskomatic.task.ChannelRepodata - In the queue: 4
     > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02 08:13:11,102 [DefaultQuartzScheduler_Worker-8] DEBUG com.redhat.rhn.taskomatic.TaskoJob - channel-repodata (single-channel-repodata-bunch-0) ... running
     > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02 08:13:11,103 [DefaultQuartzScheduler_Worker-8] INFO  com.redhat.rhn.taskomatic.TaskoJob - single-channel-repodata-bunch-0: bunch channel-repodata-bunch FINISHED

    So according to the logs the repodata bunch has finished; according to the web interface it has not. Nothing has been updated in /var/cache/rhn/repodata/ either. In addition, those four channels which were still being updated haven't been updated now either.

    Thanks,

    Gerald

     >
     > I also had a (well, many) issue(s) with our Spacewalk server before disabling snapshots in /etc/rhn/rhn.conf.  I also increased the number of workers and max repodata work items:
     >
     > # system snapshots enabled
     > enable_snapshots = 0
     > ...
     > taskomatic.maxmemory=6144
     > taskomatic.errata_cache_max_work_items = 500
     > taskomatic.channel_repodata_max_work_items = 50
     > taskomatic.channel_repodata_workers = 5
     >
     > On Thu, Jul 5, 2018 at 4:38 AM Florence Savary <florence.savary.fs@gmail.com> wrote:
     >
     >     Hello,
     >
     >     Thanks for sharing your configuration files. They differ very little from mine. I just changed the number of workers in rhn.conf, but it didn't change anything.
     >
     >     I deleted all the channel clones not used by any system and dating from before May 2018, in order to lower the number of channels in the queue. There were 127 channels in the queue before this deletion (indicated in /var/log/rhn/rhn_taskomatic_daemon.log), and there are 361 of them now... I must admit I'm confused... I hoped it would reduce the number of channels to process and thus "help" taskomatic, but obviously I was wrong.
     >
     >     I also noticed that the repodata regeneration seems to work fine for existing channels that are not clones, but it is not working for new channels that are not clones (and not working for new clones, but nothing new there).
     >
     >     Has anyone got any other idea (even the tiniest)?
     >
     >     Regards,
     >     Florence
     >
     >     2018-07-04 15:21 GMT+02:00 Paul Dias - BCX <paul.dias@bcx.co.za>:
     >
     >         Hi,
     >
     >         Let me post the settings that I have on my CentOS 6 server. I can't remember exactly; I may have one or two others, but this is from the top of my head.
     >
     >         /etc/rhn/rhn.conf
     >         # Added by paul dias: increase number of taskomatic workers 20180620
     >         taskomatic.channel_repodata_workers = 3
     >         taskomatic.java.maxmemory=4096
     >
     >         /etc/sysconfig/tomcat6
     >         JAVA_OPTS="-ea -Xms256m -Xmx512m -Djava.awt.headless=true -Dorg.xml.sax.driver=org.apache.xerces.parsers.SAXParser -Dorg.apache.tomcat.util.http.Parameters.MAX_COUNT=1024 -XX:MaxNewSize=256 -XX:-UseConcMarkSweepGC -Dnet.sf.ehcache.skipUpdateCheck=true -Djavax.sql.DataSource.Factory=org.apache.commons.dbcp.BasicDataSourceFactory"
     >
     >         /etc/tomcat/server.xml
     >         <!-- Define an AJP 1.3 Connector on port 8009 -->
     >         <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" URIEncoding="UTF-8" address="127.0.0.1" maxThreads="256" connectionTimeout="20000"/>
     >         <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" URIEncoding="UTF-8" address="::1" maxThreads="256" connectionTimeout="20000"/>
     >
     >         /usr/share/rhn/config-defaults/rhn_taskomatic_daemon.conf
     >         # Initial Java Heap Size (in MB)
     >         wrapper.java.initmemory=512
     >
     >         # Maximum Java Heap Size (in MB)
     >         wrapper.java.maxmemory=1512
     >         # Adjusted by paul 20180620
     >
     >         wrapper.ping.timeout=0
     >         # adjusted paul dias 20180620
     >
     >         Regards,
     >         *Paul Dias*
     >         Technical Consultant
     >         6th Floor, 8 Boundary Road
     >         Newlands
     >         Cape Town
     >         7700
     >         T: +27 (0) 21 681 3149
     >
     >         *Meet your future today.*
     >         BCX
     >
     >         This e-mail is subject to the BCX electronic communication legal notice, available at: https://www.bcx.co.za/disclaimers
     >
     ><br>
     >         *From:*Paul Dias - BCX<br>
     >         *Sent:* 02 July 2018 06:53 PM<br>
     ><br>
     ><br>
     >         *To:* <a href="mailto:spacewalk-list@redhat.com" target="_blank">spacewalk-list@redhat.com</a><br>
    <mailto:<a href="mailto:spacewalk-list@redhat.com" target="_blank">spacewalk-list@redhat.<wbr>com</a>> <mailto:<a href="mailto:spacewalk-list@redhat.com" target="_blank">spacewalk-list@redhat.<wbr>com</a><br>
    <mailto:<a href="mailto:spacewalk-list@redhat.com" target="_blank">spacewalk-list@redhat.<wbr>com</a>>><br>
     >         *Subject:* Re: [Spacewalk-list] Taskomatic runs indefinitely<br>
     >         without ever generating repodata____<br>
     ><br>
     >         __ __<br>
     >         What I have noticed: if you use "spacecmd softchannel_generateyumcache <channel name>" and then go to tasks and run the single repodata bunch, you will see that it actually starts and generates the channel cache for you on the channel you used spacecmd on. This works every time.
     >
     >         But yes, the task logs just show the repodata bunch running forever.
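     >
     >         As a rough sketch, that workaround can be looped over all channels (spacecmd subcommand names vary between versions, so check "spacecmd help" for the exact names; this loop is illustrative only):
     >
     >             for c in $(spacecmd -q softwarechannel_list); do
     >                 spacecmd softchannel_generateyumcache "$c"
     >             done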
     >
     >         Regards,
     >         *Paul Dias*
     >         6th Floor, 8 Boundary Road
     >         Newlands
     >         Cape Town
     >         7700
     >         T: +27 (0) 21 681 3149
     >
     >         *Meet your future today.*
     >         BCX
     ><br>
     >       -----------------------------<wbr>------------------------------<wbr>-------------<br>
     ><br>
     >         *From:*Gerald Vogt <<a href="mailto:vogt@spamcop.net" target="_blank">vogt@spamcop.net</a><br>
    <mailto:<a href="mailto:vogt@spamcop.net" target="_blank">vogt@spamcop.net</a>> <mailto:<a href="mailto:vogt@spamcop.net" target="_blank">vogt@spamcop.net</a><br>
    <mailto:<a href="mailto:vogt@spamcop.net" target="_blank">vogt@spamcop.net</a>>>><br>
     >         *Sent:* Monday, 02 July 2018 9:45 AM<br>
     >         *To:* <a href="mailto:spacewalk-list@redhat.com" target="_blank">spacewalk-list@redhat.com</a><br>
    <mailto:<a href="mailto:spacewalk-list@redhat.com" target="_blank">spacewalk-list@redhat.<wbr>com</a>> <mailto:<a href="mailto:spacewalk-list@redhat.com" target="_blank">spacewalk-list@redhat.<wbr>com</a><br>
    <mailto:<a href="mailto:spacewalk-list@redhat.com" target="_blank">spacewalk-list@redhat.<wbr>com</a>>><br>
     >         *Subject:* Re: [Spacewalk-list] Taskomatic runs indefinitely<br>
     >         without ever generating repodata____<br>
     ><br>
     >         ____<br>
     ><br>
     >         After letting the upgraded server sit for a while, it seems only a few of the task schedules actually finish. By now, only these tasks show up on the task engine status page:
     >
     >         Changelog Cleanup:       2018-07-01 23:00:00 CEST     FINISHED
     >         Clean Log History:       2018-07-01 23:00:00 CEST     FINISHED
     >         Compare Config Files:    2018-07-01 23:00:00 CEST     FINISHED
     >         Daily Summary Mail:      2018-07-01 23:00:00 CEST     FINISHED
     >         Daily Summary Queue:     2018-07-01 23:00:00 CEST     FINISHED
     >
     >         All the other tasks have disappeared from the list by now.
     >
     >         The repo-sync tasks seem to work; new packages appear in the channels. However, the repo build is not running, or rather, it never seems to properly finish.
     >
     >         If I start it manually, it seems to do its work:
     >
     >         > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02 08:13:10,584 [Thread-12] INFO  com.redhat.rhn.taskomatic.TaskoQuartzHelper - Job single-channel-repodata-bunch-0 scheduled succesfully.
     >         > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02 08:13:10,636 [DefaultQuartzScheduler_Worker-8] INFO  com.redhat.rhn.taskomatic.TaskoJob - single-channel-repodata-bunch-0: bunch channel-repodata-bunch STARTED
     >         > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02 08:13:10,651 [DefaultQuartzScheduler_Worker-8] DEBUG com.redhat.rhn.taskomatic.TaskoJob - single-channel-repodata-bunch-0: task channel-repodata started
     >         > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02 08:13:10,793 [DefaultQuartzScheduler_Worker-8] INFO  com.redhat.rhn.taskomatic.task.ChannelRepodata - In the queue: 4
     >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02 08:13:11,102 [DefaultQuartzScheduler_Worker-8] DEBUG com.redhat.rhn.taskomatic.TaskoJob - channel-repodata (single-channel-repodata-bunch-0) ... running
     >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02 08:13:11,103 [DefaultQuartzScheduler_Worker-8] INFO  com.redhat.rhn.taskomatic.TaskoJob - single-channel-repodata-bunch-0: bunch channel-repodata-bunch FINISHED
     >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02 08:13:11,137 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - File Modified Date:2018-06-23 03:48:50 CEST
     >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02 08:13:11,137 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - Channel Modified Date:2018-07-02 03:45:39 CEST
     >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02 08:13:11,211 [Thread-678] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - File Modified Date:2018-06-23 04:09:51 CEST
     >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02 08:13:11,213 [Thread-678] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - Channel Modified Date:2018-07-02 03:47:55 CEST
     >         > INFO   | jvm 1    | 2018/07/02 08:13:19 | 2018-07-02 08:13:19,062 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - Generating new repository metadata for channel 'epel6-centos6-x86_64'(sha1) 14401 packages, 11613 errata
     >         > INFO   | jvm 1    | 2018/07/02 08:13:21 | 2018-07-02 08:13:21,193 [Thread-678] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - Generating new repository metadata for channel 'epel7-centos7-x86_64'(sha1) 16282 packages, 10176 errata
     >         > INFO   | jvm 1    | 2018/07/02 08:40:12 | 2018-07-02 08:40:12,351 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - Repository metadata generation for 'epel6-centos6-x86_64' finished in 1613 seconds
     >         > INFO   | jvm 1    | 2018/07/02 08:40:12 | 2018-07-02 08:40:12,457 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - File Modified Date:2018-06-19 06:28:57 CEST
     >         > INFO   | jvm 1    | 2018/07/02 08:40:12 | 2018-07-02 08:40:12,457 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - Channel Modified Date:2018-07-02 04:30:05 CEST
     >         > INFO   | jvm 1    | 2018/07/02 08:40:12 | 2018-07-02 08:40:12,691 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - Generating new repository metadata for channel 'postgresql96-centos7-x86_64'(sha256) 1032 packages, 0 errata
     >         > INFO   | jvm 1    | 2018/07/02 08:41:51 | 2018-07-02 08:41:51,710 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - Repository metadata generation for 'postgresql96-centos7-x86_64' finished in 98 seconds
     >         > INFO   | jvm 1    | 2018/07/02 08:41:51 | 2018-07-02 08:41:51,803 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - File Modified Date:2018-06-20 05:08:38 CEST
     >         > INFO   | jvm 1    | 2018/07/02 08:41:51 | 2018-07-02 08:41:51,803 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - Channel Modified Date:2018-07-02 04:00:00 CEST
     >         > INFO   | jvm 1    | 2018/07/02 08:41:51 | 2018-07-02 08:41:51,923 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - Generating new repository metadata for channel 'postgresql10-centos6-x86_64'(sha512) 436 packages, 0 errata
     >         > INFO   | jvm 1    | 2018/07/02 08:42:26 | 2018-07-02 08:42:26,479 [Thread-677] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - Repository metadata generation for 'postgresql10-centos6-x86_64' finished in 34 seconds
     >         > INFO   | jvm 1    | 2018/07/02 08:45:01 | 2018-07-02 08:45:01,697 [Thread-678] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - Repository metadata generation for 'epel7-centos7-x86_64' finished in 1900 seconds
     >
     >         Yet the task remains in RUNNING. And for whatever reason it only seems to work for some channels: I find a total of 20 repos syncing in the logs of the updated server, compared to 42 repos syncing in the logs of the old one, and I don't really see the difference between the 20 repos that sync and the other 22 that don't. At first I suspected channels with custom quartz schedules, but then I found channels in both groups.
     >
     >         So I don't know how to troubleshoot this any further. The repodata task which I started 1.5 hours ago is still at "RUNNING". The channels for which the sync works have been updated. I don't know why it is still running. Server load is back down...
     >
     >         Thanks,
     >
     >         Gerald
     >
     >         On 22.06.18 19:12, Gerald Vogt wrote:
     >         > I have the same problem after upgrading from 2.6 to 2.8 on CentOS 6.9. I have even increased the memory as suggested by that link, but it makes no difference. None of the scheduled tasks are running. I can run a bunch manually, but the scheduler doesn't seem to work. Last execution times on the task engine status page are still at timestamps from before the upgrade. -Gerald
     >         >
     >         > On 22.06.18 14:15, Avi Miller wrote:
     >         >> Hi,
     >         >>
     >         >>> On 22 Jun 2018, at 5:51 pm, Florence Savary <florence.savary.fs@gmail.com> wrote:
     >         >>>
     >         >>> When using taskotop, we can see a line for the channel-repodata task; we see it is running, but there is never any channel displayed in the Channel column. We can also see the task marked as running in the Admin tab of the WebUI, but if we let it, it never stops. The task runs indefinitely, without ever doing anything.
     >         >>
     >         >> If you've never modified the default memory settings, Taskomatic is probably running out of memory and the task is crashing. This is a known issue, particularly when you sync large repos.
     >         >>
     >         >> I would suggest increasing the memory assigned to Taskomatic to see if that resolves the issue. You will need to restart it after making these changes:
     >         >> https://docs.oracle.com/cd/E92593_01/E90695/html/swk24-issues-memory.html
     >         >>
     >         >> Cheers,
     >         >> Avi
     >         >>
     >         >> --
     >         >> Oracle <http://www.oracle.com>
     >         >> Avi Miller | Product Management Director | +61 (3) 8616 3496
     >         >> Oracle Linux and Virtualization
     >         >> 417 St Kilda Road, Melbourne, Victoria 3004 Australia
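     >
     >         (The memory knobs in question are the ones shown elsewhere in this thread: taskomatic.java.maxmemory in /etc/rhn/rhn.conf and wrapper.java.maxmemory in /usr/share/rhn/config-defaults/rhn_taskomatic_daemon.conf. As a sketch, with a value taken from Paul's settings above:
     >
     >             taskomatic.java.maxmemory = 4096
     >
     >         followed by "service taskomatic restart" on CentOS 6.)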

_______________________________________________
Spacewalk-list mailing list
Spacewalk-list@redhat.com
https://www.redhat.com/mailman/listinfo/spacewalk-list