[Spacewalk-list] Spacewalk 2.2: Clients are not picking up scheduled tasks

Brian Musson mrbrian at gmail.com
Tue Jun 16 19:26:03 UTC 2015


I checked the properties of the system and the ID corresponds to one of the
systems in question.
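For anyone repeating this check: the client side of the comparison lives in
/etc/sysconfig/rhn/systemid, which is an XML-RPC serialized struct. A minimal
sketch of extracting the ID (the inlined document below is illustrative, not
this client's actual file, which carries more members and a checksum):

```python
import xmlrpc.client

# Illustrative /etc/sysconfig/rhn/systemid content, trimmed to the
# relevant fields (NOT a real client's file).
SYSTEMID_XML = """<?xml version="1.0"?>
<params>
<param>
<value><struct>
<member>
<name>system_id</name>
<value><string>ID-1000010314</string></value>
</member>
<member>
<name>profile_name</name>
<value><string>client.example.com</string></value>
</member>
</struct></value>
</param>
</params>
"""

def read_system_id(xml_text):
    """Parse an XML-RPC serialized systemid document and return the numeric ID."""
    (struct,), _method = xmlrpc.client.loads(xml_text)
    return struct["system_id"].replace("ID-", "")

print(read_system_id(SYSTEMID_XML))  # prints: 1000010314
```

The number printed should match the system ID shown in the Spacewalk web UI
for that profile; a mismatch would point at the duplicate-registration
scenario Robert describes below.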

BM

On Tue, Jun 16, 2015 at 11:01 AM, Robert Paschedag <robert.paschedag at web.de>
wrote:

> Could you verify that the system within Spacewalk has the same ID as the
> client (1000010314)? Maybe this system was forcibly re-registered and you
> now have two systems with the same name.
> On 16.06.2015 4:59 PM, Brian Musson <mrbrian at gmail.com> wrote:
>
> Robert, here is the output from the client:
>
>
>    # service osad status
>    osad (pid  1842) is running...
>
>    # rhn-actions-control --report
>    deploy is enabled
>    diff is enabled
>    upload is enabled
>    mtime_upload is enabled
>    run is enabled
>
>    # rhn_check -vvv
>    D: opening  db environment /var/lib/rpm cdb:mpool:joinenv
>    D: opening  db index       /var/lib/rpm/Packages rdonly mode=0x0
>    D: locked   db index       /var/lib/rpm/Packages
>    D: loading keyring from pubkeys in /var/lib/rpm/pubkeys/*.key
>    D: couldn't find any keys in /var/lib/rpm/pubkeys/*.key
>    D: loading keyring from rpmdb
>    D: opening  db index       /var/lib/rpm/Name rdonly mode=0x0
>    D: added key gpg-pubkey-c105b9de-4e0fd3a3 to keyring
>    D: added key gpg-pubkey-0608b895-4bd22942 to keyring
>    D: added key gpg-pubkey-442df0f8-4783f24a to keyring
>    D: added key gpg-pubkey-548c16bf-4c29a642 to keyring
>    D: added key gpg-pubkey-217521f6-45e8a532 to keyring
>    D: added key gpg-pubkey-863a853d-4f55f54d to keyring
>    D: added key gpg-pubkey-1bb35891-54f75af8 to keyring
>    D: Using legacy gpg-pubkey(s) from rpmdb
>    D: opening  db index       /var/lib/rpm/Providename rdonly mode=0x0
>    D: do_call packages.checkNeedUpdate('rhnsd=1',){}
>    D: opening  db environment /var/lib/rpm cdb:mpool:joinenv
>    D: opening  db index       /var/lib/rpm/Packages rdonly mode=0x0
>    D: loading keyring from pubkeys in /var/lib/rpm/pubkeys/*.key
>    D: couldn't find any keys in /var/lib/rpm/pubkeys/*.key
>    D: loading keyring from rpmdb
>    D: opening  db index       /var/lib/rpm/Name rdonly mode=0x0
>    D: added key gpg-pubkey-c105b9de-4e0fd3a3 to keyring
>    D: added key gpg-pubkey-0608b895-4bd22942 to keyring
>    D: added key gpg-pubkey-442df0f8-4783f24a to keyring
>    D: added key gpg-pubkey-548c16bf-4c29a642 to keyring
>    D: added key gpg-pubkey-217521f6-45e8a532 to keyring
>    D: added key gpg-pubkey-863a853d-4f55f54d to keyring
>    D: added key gpg-pubkey-1bb35891-54f75af8 to keyring
>    D: Using legacy gpg-pubkey(s) from rpmdb
>    D: opening  db index       /var/lib/rpm/Providename rdonly mode=0x0
>    D: closed   db index       /var/lib/rpm/Providename
>    D: closed   db index       /var/lib/rpm/Name
>    D: closed   db index       /var/lib/rpm/Packages
>    D: closed   db environment /var/lib/rpm
>    Loaded plugins: fastestmirror, presto, rhnplugin
>    Config time: 0.038
>    D: login(forceUpdate=False) invoked
>    D: readCachedLogin invoked
>    D: Unable to read pickled loginInfo at: /var/spool/up2date/loginAuth.pkl
>    logging into up2date server
>    D: rpcServer: Calling XMLRPC up2date.login
>    D: writeCachedLogin() invoked
>    D: Wrote pickled loginInfo at 1434032930.09 with expiration of 1434036530.09 seconds.
>    successfully retrieved authentication token from up2date server
>    D: logininfo:{'X-RHN-Server-Id': 1000010314,
>    'X-RHN-Auth-Server-Time': '1434032926.38', 'X-RHN-Auth':
>    '7RfB/XtV6EqZw8hGYqe+dFasQ+3q9QvfIzO+RrKIdd0=',
>    'X-RHN-Auth-Channels': [['centos6-base-x86_64', '20150611060001', '1',
>    '1'], ['spacewalk-client-x86_64', '20150611061143', '0',
>    '1'], ['epel6-x86_64', '20150611061143', '0',
>    '1'], ['centos6-sysops-x86_64', '20150504165024', '0',
>    '1'], ['centos6-updates-x86_64', '20150611060258', '0', '1']],
>    'X-RHN-Auth-User-Id': '', 'X-RHN-Auth-Expire-Offset': '3600.0'}
>    D: rpcServer: Calling XMLRPC up2date.listChannels
>    This system is receiving updates from RHN Classic or Red Hat Satellite.
>    Setting up Package Sacks
>    Loading mirror speeds from cached hostfile
>    pkgsack time: 0.065
>    rpmdb time: 0.000
>    Loading mirror speeds from cached hostfile
>    repo time: 0.000
>    D: local action status: (0, 'rpm database not modified since last
>    update (or package list recently updated)', {})
>    D: rpcServer: Calling XMLRPC registration.welcome_message
>    D: closed   db index       /var/lib/rpm/Providename
>    D: closed   db index       /var/lib/rpm/Name
>    D: closed   db index       /var/lib/rpm/Packages
>    D: closed   db environment /var/lib/rpm
>
>
>
> On Jun 16, 2015, at 05:54, Robert Paschedag <robert.paschedag at web.de>
> wrote:
>
> Just saw that my answer did not get to the list...
>
> Did you try to run rhn_check with -vv or run it through the debugger with
> "python -i -m pdb $(which rhn_check)"?
>
> Robert
>
> On 15.06.2015 18:56, Brian Musson <mrbrian at gmail.com> wrote:
>
>
> Hi, I am trying to get to the bottom of this issue but have hit a wall. The
> action gets scheduled but never gets picked up: the jobs from last week are
> still "Queued". I have a task to apply this same patching to a larger pool
> of systems (almost 800) this week, so any direction or hints would be
> greatly appreciated.
>
>
>
> This action will be executed after 6/11/15 6:00:00 AM PDT
>
> This action's status is: Queued.
>
> This action has not yet been picked up.
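As an aside, hunting stale actions like this across many systems can be done
against records from Spacewalk's XML-RPC API (schedule.listInProgressActions).
A rough, illustrative filter; the field names and sample records below are
assumptions shaped after that call's output, not real data:

```python
from datetime import datetime, timedelta

def stale_action_ids(actions, now, max_age_days=1):
    """Return IDs of actions whose earliest-run time is older than the
    cutoff but which are still sitting in the queue."""
    cutoff = now - timedelta(days=max_age_days)
    return [a["id"] for a in actions if a["earliest"] < cutoff]

# Made-up sample records, shaped like schedule.listInProgressActions output.
sample = [
    {"id": 42, "name": "Errata Update", "earliest": datetime(2015, 6, 11, 6, 0)},
    {"id": 57, "name": "Package Install", "earliest": datetime(2015, 6, 16, 9, 0)},
]

print(stale_action_ids(sample, now=datetime(2015, 6, 16, 12, 0)))  # prints: [42]
```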
>
>
> BM
>
>
> On Fri, Jun 12, 2015 at 1:09 PM, Brian Musson <mrbrian at gmail.com> wrote:
>
>
> Sorry to double post, but I thought it may be useful to show the output of
> the spacewalk proxy's /var/log/rhn/rhn_proxy_broker.log at the time of the
> run.
>
>
> 10.12.82.141 = client
>
>
> spacewalk-proxy# tail -f /var/log/rhn/rhn_proxy_broker.log
>
>
> 2015/06/12 16:06:26 -04:00 11778 10.12.82.41:
> proxy/apacheServer.__call__('New request, component proxy.broker',)
>
> 2015/06/12 16:06:26 -04:00 11778 10.12.82.141: broker/rhnBroker.handler
>
> 2015/06/12 16:06:26 -04:00 11778 10.12.82.141:
> proxy/rhnShared._serverCommo
>
> 2015/06/12 16:06:26 -04:00 11778 10.12.82.141:
> broker/rhnBroker.__handleAction
>
> 2015/06/12 16:06:26 -04:00 11778 10.12.82.141:
> proxy/rhnShared._clientCommo
>
> 2015/06/12 16:06:26 -04:00 11778 10.12.82.141:
> proxy/rhnShared._forwardServer2Client
>
> 2015/06/12 16:06:26 -04:00 11778 10.12.82.141:
> proxy/apacheHandler.handler('Leaving with status code 0',)
>
> 2015/06/12 16:06:26 -04:00 11778 10.12.82.141:
> proxy/apacheHandler.cleanupHandler
>
> 2015/06/12 16:06:26 -04:00 11781 192.168.1.208:
> proxy/apacheServer.__call__('New request, component proxy.broker',)
>
> 2015/06/12 16:06:26 -04:00 11781 10.12.82.141: broker/rhnBroker.handler
>
> 2015/06/12 16:06:26 -04:00 11781 10.12.82.141:
> proxy/rhnShared._serverCommo
>
> 2015/06/12 16:06:26 -04:00 11781 10.12.82.141:
> broker/rhnBroker.__handleAction
>
> 2015/06/12 16:06:26 -04:00 11781 10.12.82.141:
> proxy/rhnShared._clientCommo
>
> 2015/06/12 16:06:26 -04:00 11781 10.12.82.141:
> proxy/rhnShared._forwardServer2Client
>
> 2015/06/12 16:06:26 -04:00 11781 10.12.82.141:
> proxy/apacheHandler.handler('Leaving with status code 0',)
>
> 2015/06/12 16:06:26 -04:00 11781 10.12.82.141:
> proxy/apacheHandler.cleanupHandler
>
> 2015/06/12 16:06:26 -04:00 11777 10.12.72.56:
> proxy/apacheServer.__call__('New request, component proxy.broker',)
>
> 2015/06/12 16:06:26 -04:00 11777 10.12.82.141: broker/rhnBroker.handler
>
> 2015/06/12 16:06:26 -04:00 11777 10.12.82.141:
> proxy/rhnShared._serverCommo
>
> 2015/06/12 16:06:26 -04:00 11777 10.12.82.141:
> broker/rhnBroker.__handleAction
>
> 2015/06/12 16:06:26 -04:00 11777 10.12.82.141:
> proxy/rhnShared._clientCommo
>
> 2015/06/12 16:06:26 -04:00 11777 10.12.82.141:
> proxy/rhnShared._forwardServer2Client
>
> 2015/06/12 16:06:26 -04:00 11777 10.12.82.141:
> proxy/apacheHandler.handler('Leaving with status code 0',)
>
> 2015/06/12 16:06:26 -04:00 11777 10.12.82.141:
> proxy/apacheHandler.cleanupHandler
>
> 2015/06/12 16:06:26 -04:00 11784 10.12.72.41:
> proxy/apacheServer.__call__('New request, component proxy.broker',)
>
> 2015/06/12 16:06:26 -04:00 11784 10.12.82.141: broker/rhnBroker.handler
>
> 2015/06/12 16:06:26 -04:00 11784 10.12.82.141:
> proxy/rhnShared._serverCommo
>
> 2015/06/12 16:06:26 -04:00 11784 10.12.82.141:
> broker/rhnBroker.__handleAction
>
> 2015/06/12 16:06:26 -04:00 11784 10.12.82.141:
> proxy/rhnShared._clientCommo
>
> 2015/06/12 16:06:26 -04:00 11784 10.12.82.141:
> proxy/rhnShared._forwardServer2Client
>
> 2015/06/12 16:06:26 -04:00 11784 10.12.82.141:
> proxy/apacheHandler.handler('Leaving with status code 0',)
>
> 2015/06/12 16:06:26 -04:00 11784 10.12.82.141:
> proxy/apacheHandler.cleanupHandler
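To sift broker logs like the above for a single client across worker PIDs, a
small parsing sketch; the line format is assumed from this excerpt, and the
sample lines are inlined so the snippet stands alone:

```python
import re
from collections import defaultdict

# Line shape assumed from the rhn_proxy_broker.log excerpt above:
# "2015/06/12 16:06:26 -04:00 11778 10.12.82.141: broker/rhnBroker.handler"
LINE_RE = re.compile(
    r"^(?P<date>\S+) (?P<time>\S+) (?P<tz>\S+) (?P<pid>\d+) "
    r"(?P<ip>[\d.]+): (?P<call>\S+)"
)

SAMPLE = """\
2015/06/12 16:06:26 -04:00 11778 10.12.82.141: broker/rhnBroker.handler
2015/06/12 16:06:26 -04:00 11778 10.12.82.141: proxy/rhnShared._serverCommo
2015/06/12 16:06:26 -04:00 11781 10.12.82.141: broker/rhnBroker.handler
"""

def calls_by_pid(text):
    """Group the handler calls in a broker-log excerpt by worker PID."""
    groups = defaultdict(list)
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if m:
            groups[m.group("pid")].append(m.group("call"))
    return dict(groups)

print(calls_by_pid(SAMPLE))
```

Filtering on the `ip` group instead would isolate one client's traffic the
same way the `grep` further down isolates the proxy's.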
>
>
> BM
>
>
> On Fri, Jun 12, 2015 at 12:37 PM, Brian Musson <mrbrian at gmail.com> wrote:
>
>
> My server connects indirectly through a proxy due to network segmentation.
> I checked the proxy for that file but could not find it. I did a tail on the
> spacewalk master and saw lots of messages mentioning the proxy server that
> serves the clients in question:
>
>
> 10.12.13.142 = our proxy
>
> 1000010038 = client ID
>
>
> spacewalk-master# tail -f /var/log/rhn/rhn_server_xmlrpc.log | grep
> 10.12.13.142
>
> 2015/06/12 12:34:14 -07:00 16704 10.12.13.142:
> xmlrpc/queue.get(1000010038, 2, 'checkins enabled')
>
> 2015/06/12 12:34:15 -07:00 16718 10.12.13.142:
> xmlrpc/up2date.listChannels(1000010038,)
>
> 2015/06/12 12:34:15 -07:00 16721 10.12.13.142:
> xmlrpc/registration.welcome_message('lang: None',)
>
>
>
>
> BM
>
>
> On Fri, Jun 12, 2015 at 5:04 AM, Jan Dobes <jdobes at redhat.com> wrote:
>
>
> ----- Original Message -----
>
> From: "Brian Musson" <mrbrian at gmail.com>
>
> To: spacewalk-list at redhat.com
>
> Sent: Thursday, June 11, 2015 8:43:47 PM
>
> Subject: [Spacewalk-list] Spacewalk 2.2: Clients are not picking up
>  scheduled tasks
>
>
> I have about 3000 systems registered in spacewalk, but today we are focusing
> on applying package updates to 22 of them. Of the 22 systems scheduled to
> have security errata applied to them, 20 successfully completed the update
> without error. Unfortunately, there are two systems which have the task
> queued and have not picked it up yet.
>
>
> I have restarted osad and rhnsd, and restarted jabberd on the spacewalk
> master and the proxy through which these failed systems connect. Other
> clients which have successfully updated go through this proxy server as well.
>
>
> When looking at the GUI, the client appears to be healthy.
>
>
> BM
>
>
>
> What will appear in '/var/log/rhn/rhn_server_xmlrpc.log' on the spacewalk
> server when you run rhn_check?
>
>
> --
>
> Jan Dobes
>
> Satellite Engineering, Red Hat
>
>
> _______________________________________________
>
> Spacewalk-list mailing list
>
> Spacewalk-list at redhat.com
>
> https://www.redhat.com/mailman/listinfo/spacewalk-list
>
>
>
>
>
>

