[Spacewalk-list] Spacewalk metadata not transferred to client

Inter Load interload13 at gmail.com
Fri May 19 15:27:15 UTC 2017


For information:
OS: Red Hat Enterprise Linux Server release 7.3 (Maipo)
Spacewalk version: 2.6

Thanks
Regards
Romain

2017-05-19 17:07 GMT+02:00 Inter Load <interload13 at gmail.com>:

> Hello,
>
> I found the cause of my problem:
> I had updated my Spacewalk server.
> After downgrading that update (yum history undo 12), everything works again!
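> (For anyone hitting the same issue, a minimal sketch of how to identify and
> roll back the offending transaction; the transaction ID 12 is just what it
> happened to be on my server:)
>
>     yum history list       # list recent transactions with their IDs
>     yum history info 12    # confirm which packages the transaction touched
>     yum history undo 12    # roll that transaction back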
>
> Below is the list of packages that were updated.
> Which package could be the problematic one? httpd? Java? Something else?
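> (One rough way to narrow it down, assuming the server can be broken again
> temporarily between tests: re-apply the updates in functional groups and
> re-check metadata delivery from a client after each step, e.g.:)
>
>     yum update httpd httpd-tools mod_ssl      # web server stack first
>     # test from a client: yum clean all && yum repolist
>     yum update tomcat\* java-1.8.0-openjdk\*  # then the Java/servlet stack
>     # test again, and continue group by group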
>
> Thanks
> Regards
> Romain
>
>
>> Removed:
>>   NetworkManager.x86_64                1:1.4.0-19.el7_3
>>   NetworkManager-config-server.x86_64  1:1.4.0-19.el7_3
>>   NetworkManager-libnm.x86_64          1:1.4.0-19.el7_3
>>   NetworkManager-team.x86_64           1:1.4.0-19.el7_3
>>   NetworkManager-tui.x86_64            1:1.4.0-19.el7_3
>>   bind-libs.x86_64                     32:9.9.4-38.el7_3.3
>>   bind-libs-lite.x86_64                32:9.9.4-38.el7_3.3
>>   bind-license.noarch                  32:9.9.4-38.el7_3.3
>>   bind-utils.x86_64                    32:9.9.4-38.el7_3.3
>>   ca-certificates.noarch               0:2017.2.11-70.1.el7_3
>>   container-selinux.noarch             2:2.10-2.el7
>>   device-mapper.x86_64                 7:1.02.135-1.el7_3.4
>>   device-mapper-event.x86_64           7:1.02.135-1.el7_3.4
>>   device-mapper-event-libs.x86_64      7:1.02.135-1.el7_3.4
>>   device-mapper-libs.x86_64            7:1.02.135-1.el7_3.4
>>   dmidecode.x86_64                     1:3.0-2.1.el7_3
>>   docker.x86_64                        2:1.12.6-16.el7
>>   docker-client.x86_64                 2:1.12.6-16.el7
>>   docker-common.x86_64                 2:1.12.6-16.el7
>>   docker-rhel-push-plugin.x86_64       2:1.12.6-16.el7
>>   grubby.x86_64                        0:8.28-21.el7_3
>>   httpd.x86_64                         0:2.4.6-45.el7_3.4
>>   httpd-tools.x86_64                   0:2.4.6-45.el7_3.4
>>   initscripts.x86_64                   0:9.49.37-1.el7_3.1
>>   irqbalance.x86_64                    3:1.0.7-6.el7_3.1
>>   java-1.8.0-openjdk.x86_64            1:1.8.0.131-2.b11.el7_3
>>   java-1.8.0-openjdk-headless.x86_64   1:1.8.0.131-2.b11.el7_3
>>   kernel-tools.x86_64                  0:3.10.0-514.16.1.el7
>>   kernel-tools-libs.x86_64             0:3.10.0-514.16.1.el7
>>   libblkid.x86_64                      0:2.23.2-33.el7_3.2
>>   libgudev1.x86_64                     0:219-30.el7_3.8
>>   libmount.x86_64                      0:2.23.2-33.el7_3.2
>>   libsss_idmap.x86_64                  0:1.14.0-43.el7_3.14
>>   libsss_nss_idmap.x86_64              0:1.14.0-43.el7_3.14
>>   libuuid.x86_64                       0:2.23.2-33.el7_3.2
>>   lvm2.x86_64                          7:2.02.166-1.el7_3.4
>>   lvm2-libs.x86_64                     7:2.02.166-1.el7_3.4
>>   mod_ssl.x86_64                       1:2.4.6-45.el7_3.4
>>   nss.x86_64                           0:3.28.4-1.0.el7_3
>>   nss-sysinit.x86_64                   0:3.28.4-1.0.el7_3
>>   nss-tools.x86_64                     0:3.28.4-1.0.el7_3
>>   nss-util.x86_64                      0:3.28.4-1.0.el7_3
>>   ntpdate.x86_64                       0:4.2.6p5-25.el7_3.2
>>   oci-register-machine.x86_64          1:0-3.11.gitdd0daef.el7
>>   oci-systemd-hook.x86_64              1:0.1.7-2.git2788078.el7
>>   openssh.x86_64                       0:6.6.1p1-35.el7_3
>>   openssh-clients.x86_64               0:6.6.1p1-35.el7_3
>>   openssh-server.x86_64                0:6.6.1p1-35.el7_3
>>   pulseaudio-libs.x86_64               0:6.0-9.el7_3
>>   python-perf.x86_64                   0:3.10.0-514.16.1.el7
>>   selinux-policy.noarch                0:3.13.1-102.el7_3.16
>>   selinux-policy-targeted.noarch       0:3.13.1-102.el7_3.16
>>   sssd-client.x86_64                   0:1.14.0-43.el7_3.14
>>   systemd.x86_64                       0:219-30.el7_3.8
>>   systemd-libs.x86_64                  0:219-30.el7_3.8
>>   systemd-python.x86_64                0:219-30.el7_3.8
>>   systemd-sysv.x86_64                  0:219-30.el7_3.8
>>   tomcat.noarch                        0:7.0.69-11.el7_3
>>   tomcat-el-2.2-api.noarch             0:7.0.69-11.el7_3
>>   tomcat-jsp-2.2-api.noarch            0:7.0.69-11.el7_3
>>   tomcat-lib.noarch                    0:7.0.69-11.el7_3
>>   tomcat-servlet-3.0-api.noarch        0:7.0.69-11.el7_3
>>   tzdata.noarch                        0:2017b-1.el7
>>   tzdata-java.noarch                   0:2017b-1.el7
>>   util-linux.x86_64                    0:2.23.2-33.el7_3.2
>>   yum-rhn-plugin.noarch                0:2.0.1-6.1.el7_3
>>
>> Installed:
>>   NetworkManager.x86_64                1:1.4.0-17.el7_3
>>   NetworkManager-config-server.x86_64  1:1.4.0-17.el7_3
>>   NetworkManager-libnm.x86_64          1:1.4.0-17.el7_3
>>   NetworkManager-team.x86_64           1:1.4.0-17.el7_3
>>   NetworkManager-tui.x86_64            1:1.4.0-17.el7_3
>>   bind-libs.x86_64                     32:9.9.4-38.el7_3.2
>>   bind-libs-lite.x86_64                32:9.9.4-38.el7_3.2
>>   bind-license.noarch                  32:9.9.4-38.el7_3.2
>>   bind-utils.x86_64                    32:9.9.4-38.el7_3.2
>>   ca-certificates.noarch               0:2015.2.6-73.el7
>>   container-selinux.noarch             2:2.9-4.el7
>>   device-mapper.x86_64                 7:1.02.135-1.el7_3.3
>>   device-mapper-event.x86_64           7:1.02.135-1.el7_3.3
>>   device-mapper-event-libs.x86_64      7:1.02.135-1.el7_3.3
>>   device-mapper-libs.x86_64            7:1.02.135-1.el7_3.3
>>   dmidecode.x86_64                     1:3.0-2.el7
>>   docker.x86_64                        2:1.12.6-11.el7
>>   docker-client.x86_64                 2:1.12.6-11.el7
>>   docker-common.x86_64                 2:1.12.6-11.el7
>>   docker-rhel-push-plugin.x86_64       2:1.12.6-11.el7
>>   grubby.x86_64                        0:8.28-18.el7
>>   httpd.x86_64                         0:2.4.6-45.el7
>>   httpd-tools.x86_64                   0:2.4.6-45.el7
>>   initscripts.x86_64                   0:9.49.37-1.el7
>>   irqbalance.x86_64                    3:1.0.7-6.el7
>>   java-1.8.0-openjdk.x86_64            1:1.8.0.121-0.b13.el7_3
>>   java-1.8.0-openjdk-headless.x86_64   1:1.8.0.121-0.b13.el7_3
>>   kernel-tools.x86_64                  0:3.10.0-514.10.2.el7
>>   kernel-tools-libs.x86_64             0:3.10.0-514.10.2.el7
>>   libblkid.x86_64                      0:2.23.2-33.el7
>>   libgudev1.x86_64                     0:219-30.el7_3.7
>>   libmount.x86_64                      0:2.23.2-33.el7
>>   libsss_idmap.x86_64                  0:1.14.0-43.el7_3.11
>>   libsss_nss_idmap.x86_64              0:1.14.0-43.el7_3.11
>>   libuuid.x86_64                       0:2.23.2-33.el7
>>   lvm2.x86_64                          7:2.02.166-1.el7_3.3
>>   lvm2-libs.x86_64                     7:2.02.166-1.el7_3.3
>>   mod_ssl.x86_64                       1:2.4.6-45.el7
>>   nss.x86_64                           0:3.28.2-1.6.el7_3
>>   nss-sysinit.x86_64                   0:3.28.2-1.6.el7_3
>>   nss-tools.x86_64                     0:3.28.2-1.6.el7_3
>>   nss-util.x86_64                      0:3.28.2-1.1.el7_3
>>   ntpdate.x86_64                       0:4.2.6p5-25.el7_3.1
>>   oci-register-machine.x86_64          1:0-1.11.gitdd0daef.el7
>>   oci-systemd-hook.x86_64              1:0.1.4-9.git671c428.el7
>>   openssh.x86_64                       0:6.6.1p1-33.el7_3
>>   openssh-clients.x86_64               0:6.6.1p1-33.el7_3
>>   openssh-server.x86_64                0:6.6.1p1-33.el7_3
>>   pulseaudio-libs.x86_64               0:6.0-8.el7
>>   python-perf.x86_64                   0:3.10.0-514.10.2.el7
>>   selinux-policy.noarch                0:3.13.1-102.el7_3.15
>>   selinux-policy-targeted.noarch       0:3.13.1-102.el7_3.15
>>   sssd-client.x86_64                   0:1.14.0-43.el7_3.11
>>   systemd.x86_64                       0:219-30.el7_3.7
>>   systemd-libs.x86_64                  0:219-30.el7_3.7
>>   systemd-python.x86_64                0:219-30.el7_3.7
>>   systemd-sysv.x86_64                  0:219-30.el7_3.7
>>   tomcat.noarch                        0:7.0.69-10.el7
>>   tomcat-el-2.2-api.noarch             0:7.0.69-10.el7
>>   tomcat-jsp-2.2-api.noarch            0:7.0.69-10.el7
>>   tomcat-lib.noarch                    0:7.0.69-10.el7
>>   tomcat-servlet-3.0-api.noarch        0:7.0.69-10.el7
>>   tzdata.noarch                        0:2017a-1.el7
>>   tzdata-java.noarch                   0:2017a-1.el7
>>   util-linux.x86_64                    0:2.23.2-33.el7
>>   yum-rhn-plugin.noarch                0:2.0.1-6.el7
>
>
> 2017-05-19 12:26 GMT+02:00 Inter Load <interload13 at gmail.com>:
>
>> Hello,
>>
>> I would appreciate your assistance with this problem; I do not want to open
>> a Red Hat ticket for Spacewalk. I hope this is the right forum?
>>
>> We are using Spacewalk to monitor some of our systems.
>> Currently, an exclamation mark is shown in front of the repository names on
>> the client.
>> After cleaning the cache on the client (yum clean all), yum cannot retrieve
>> metadata from the Spacewalk server.
>> The client system shows 0 packages in the base channel.
>>
>>
>>> # yum repolist
>>> Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos
>>> This system is receiving updates from RHN Classic or Red Hat Satellite.
>>> repo id                                  repo name                                status
>>> dev_ppr-rhel-x86_64-server-7             dev_ppr-rhel-x86_64-server-7                  0
>>> dev_ppr-rhel-x86_64-server-7-extras      dev_ppr-rhel-x86_64-server-7-extras           0
>>> dev_ppr-rhel-x86_64-server-7-optional    dev_ppr-rhel-x86_64-server-7-optional         0
>>> dev_ppr-rhel-x86_64-server-7-updates     dev_ppr-rhel-x86_64-server-7-updates          0
>>> dev_ppr-rhel-x86_64-server-7-zabbix      dev_ppr-rhel-x86_64-server-7-zabbix           0
>>> repolist: 0
>>
>> I tried the steps from this Red Hat solution
>> (https://access.redhat.com/solutions/19303), but they did not help.
>> After running "yum check-update", I get the following error message:
>>
>>
>>> One of the configured repositories failed (Unknown),
>>> and yum doesn't have enough cached data to continue. At this point the only
>>> safe thing yum can do is fail. There are a few ways to work "fix" this:
>>>
>>>     1. Contact the upstream for the repository and get them to fix the problem.
>>>
>>>     2. Reconfigure the baseurl/etc. for the repository, to point to a working
>>>        upstream. This is most often useful if you are using a newer
>>>        distribution release than is supported by the repository (and the
>>>        packages for the previous distribution release still work).
>>>
>>>     3. Run the command with the repository temporarily disabled
>>>            yum --disablerepo=<repoid> ...
>>>
>>>     4. Disable the repository permanently, so yum won't use it by default. Yum
>>>        will then just ignore the repository until you permanently enable it
>>>        again or use --enablerepo for temporary usage:
>>>
>>>            yum-config-manager --disable <repoid>
>>>        or
>>>            subscription-manager repos --disable=<repoid>
>>>
>>>     5. Configure the failing repository to be skipped, if it is unavailable.
>>>        Note that yum will try to contact the repo. when it runs most commands,
>>>        so will have to try and fail each time (and thus. yum will be be much
>>>        slower). If it is a very temporary problem though, this is often a nice
>>>        compromise:
>>>
>>>            yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
>>>
>>> failed to retrieve repodata/repomd.xml from dev_ppr-rhel-x86_64-server-7
>>> error was [Errno 14] HTTP Error 400 - Bad Request
>>
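>> (A quick sanity check of where the metadata breaks down; a sketch assuming
>> the default Spacewalk cache layout and the channel label from the error:)
>>
>>     # on the Spacewalk server: does the channel's repomd.xml exist, and
>>     # when was it last regenerated?
>>     ls -l /var/cache/rhn/repodata/dev_ppr-rhel-x86_64-server-7/repomd.xml
>>
>>     # on the client: trace the exact URL yum requests when it fails
>>     URLGRABBER_DEBUG=1 yum repolist 2>&1 | grep -i repomd
>>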
>> The client system still shows 0 packages in the base channel (previously, I
>> had forced the regeneration process after restarting the taskomatic service).
>>
>> I have tried the following ideas (see the consolidated sketch after this list):
>> - On the client system: yum clean all; rm -rf /var/cache/yum/*;
>> rhn-profile-sync; yum update
>> - On the Spacewalk server: spacewalk-service stop; rm -rf
>> /var/cache/rhn/reposync/*; rm -rf /var/cache/rhn/repodata/*; rm -rf
>> /var/cache/rhn/satsync/*; spacewalk-service start
>> - On the Spacewalk server, regenerating the repo data for all channels:
>> for i in $(spacecmd softwarechannel_list); do
>> spacecmd softwarechannel_regenerateyumcache $i; done
>> - Registering a new client
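>>
>> (The whole cleanup/regeneration sequence as one sketch, assuming default
>> paths and cached spacecmd credentials; illustrative rather than exact:)
>>
>>     # --- on the client ---
>>     yum clean all                 # drop all cached metadata
>>     rm -rf /var/cache/yum/*       # remove any leftover cache files
>>     rhn-profile-sync              # re-sync the profile with the server
>>
>>     # --- on the Spacewalk server ---
>>     spacewalk-service stop
>>     rm -rf /var/cache/rhn/reposync/* /var/cache/rhn/repodata/* /var/cache/rhn/satsync/*
>>     spacewalk-service start
>>
>>     # regenerate the yum cache for every software channel
>>     for channel in $(spacecmd -q softwarechannel_list); do
>>         spacecmd softwarechannel_regenerateyumcache "$channel"
>>     done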
>>
>> For information, I noticed that the taskomatic service does not regenerate
>> the repodata after a restart; I have to force the regeneration manually.
>> In /var/log/rhn/rhn_taskomatic_daemon.log, I see the following message:
>>
>>>
>>> INFO: Initializing c3p0 pool...
>>> com.mchange.v2.c3p0.PoolBackedDataSource@a111cc3c [
>>> connectionPoolDataSource ->
>>> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@226fcf3b [
>>> acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay ->
>>> 1000, autoCommitOnClose -> false, automaticTestTable -> null,
>>> breakAfterAcquireFailure -> false, checkoutTimeout -> 0,
>>> connectionCustomizerClassName ->
>>> com.redhat.rhn.common.db.RhnConnectionCustomizer, connectionTesterClassName
>>> -> com.mchange.v2.c3p0.impl.DefaultConnectionTester,
>>> debugUnreturnedConnectionStackTraces -> false, factoryClassLocation ->
>>> null, forceIgnoreUnresolvedTransactions -> false, identityToken ->
>>> 2uut749o7rg7up15sofpp|45a9cb94, idleConnectionTestPeriod -> 300,
>>> initialPoolSize -> 5, maxAdministrativeTaskTime -> 0, maxConnectionAge ->
>>> 0, maxIdleTime -> 300, maxIdleTimeExcessConnections -> 0, maxPoolSize ->
>>> 20, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5,
>>> nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource@135e2207 [
>>> description -> null, driverClass -> null, factoryClassLocation -> null,
>>> identityToken -> 2uut749o7rg7up15sofpp|6c9ab334, jdbcUrl ->
>>> jdbc:postgresql:rhnschema, properties -> {user=******, password=******,
>>> driver_proto=jdbc:postgresql} ], preferredTestQuery -> select 'c3p0 ping'
>>> from dual, propertyCycle -> 0, testConnectionOnCheckin -> false,
>>> testConnectionOnCheckout -> true, unreturnedConnectionTimeout -> 0,
>>> usesTraditionalReflectiveProxies -> false; userOverrides: {} ],
>>> dataSourceName -> null, factoryClassLocation -> null, identityToken ->
>>> 2uut749o7rg7up15sofpp|5727e9b9, numHelperThreads -> 3 ]
>>
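>> (Side note: that c3p0 message is just connection-pool initialization, not an
>> error. To check whether the repodata task itself ever ran, something like
>> the following might help; the grep pattern is an assumption on my part:)
>>
>>     # look for channel repodata activity in the taskomatic log
>>     grep -i 'repodata' /var/log/rhn/rhn_taskomatic_daemon.log | tail
>>
>>     # check when each channel's repomd.xml was last written
>>     find /var/cache/rhn/repodata -name repomd.xml -printf '%TY-%Tm-%Td %TH:%TM  %p\n'
>>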
>> Do you have any idea what could cause this behavior?
>>
>> Thanks a lot
>> Romain
>>
>>
>
>

