[rhos-list] EXTERNAL: Re: Problems after Update.

Minton, Rich rich.minton at lmco.com
Fri Mar 22 18:00:18 UTC 2013


I just updated another cluster with the new OpenStack releases, and after the update my glance-api-paste.ini and glance-registry-paste.ini files are 0 bytes. I thought the update would have created a save file. This might be a bug.



-rw-r-----. 1 glance glance     0 Mar  5 17:52 glance-api-paste.ini

-rw-r-----. 1 glance glance  5408 Mar  5 17:52 glance-cache.conf

-rw-r-----. 1 glance glance  3117 Mar  5 17:52 glance-registry.conf

-rw-r-----. 1 glance glance     0 Mar  5 17:52 glance-registry-paste.ini

-rw-r-----  1 root   glance  1091 Feb 27 13:16 glance-scrubber.conf
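
For what it's worth, a quick way to spot any other configs an update may have zeroed out (a sketch; the `/etc/glance` path is assumed from the listing above, and `empty_inis` is a hypothetical helper, not part of any OpenStack tool):

```python
import os

def empty_inis(config_dir):
    """Return zero-byte .ini files directly under config_dir.

    Zero-byte config files right after a package update usually mean
    the update clobbered them, as with the glance paste files above.
    """
    return sorted(
        os.path.join(config_dir, name)
        for name in os.listdir(config_dir)
        if name.endswith(".ini")
        and os.path.getsize(os.path.join(config_dir, name)) == 0
    )

# On the affected controller you would run: empty_inis("/etc/glance")
```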



-----Original Message-----
From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On Behalf Of Minton, Rich
Sent: Thursday, March 21, 2013 1:17 PM
To: Russell Bryant
Cc: rhos-list at redhat.com
Subject: Re: [rhos-list] EXTERNAL: Re: Problems after Update.



Found it! The NFS mount to my instances directory was missing. I remounted the directory and I can see all of my instances now.
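
A pre-flight check along these lines would have caught the missing mount before nova-compute started (a sketch; the instances path is from this thread, and `is_mounted` is a hypothetical helper):

```python
import os

def is_mounted(path):
    """True if path is an active mount point (like `mountpoint -q`).

    An unmounted shared instances directory is exactly the failure
    described above: the hypervisor sees an empty directory.
    """
    return os.path.ismount(path)

# On the compute node you would check: is_mounted("/var/lib/nova/instances")
```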



I have one other issue with metadata and ssh keys but I'll create a new thread for that.



Thanks for the help.



-----Original Message-----

From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On Behalf Of Minton, Rich

Sent: Thursday, March 21, 2013 1:03 PM

To: Russell Bryant

Cc: rhos-list at redhat.com

Subject: Re: [rhos-list] EXTERNAL: Re: Problems after Update.



Here is some of what I found in "compute.log" and it doesn't look good.



-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

2013-03-21 12:15:21 42682 WARNING nova.compute.manager [-] Found 5 in the database and 0 on the hypervisor.

2013-03-21 12:15:21 42682 WARNING nova.compute.manager [-] [instance: 2d87a89a-fcbb-4581-a7c3-0e2b2c30b568] Instance shutdown by itself. Calling the stop API.

2013-03-21 12:15:21 42682 WARNING nova.compute.manager [-] [instance: b7e64392-1f28-446f-8fb3-2b75dc895951] Instance shutdown by itself. Calling the stop API.

2013-03-21 12:15:21 42682 INFO nova.virt.libvirt.driver [-] [instance: 2d87a89a-fcbb-4581-a7c3-0e2b2c30b568] Instance destroyed successfully.

2013-03-21 12:15:21 42682 WARNING nova.compute.manager [-] [instance: 40153f6e-e963-4a30-bb0a-3f7e612d5638] Instance shutdown by itself. Calling the stop API.

2013-03-21 12:15:21 42682 INFO nova.virt.libvirt.driver [-] [instance: b7e64392-1f28-446f-8fb3-2b75dc895951] Instance destroyed successfully.

2013-03-21 12:15:22 42682 WARNING nova.compute.manager [-] [instance: a39787b8-2c66-4aaa-b154-42ec05c0e3c8] Instance shutdown by itself. Calling the stop API.

2013-03-21 12:15:22 42682 INFO nova.virt.libvirt.driver [-] [instance: 40153f6e-e963-4a30-bb0a-3f7e612d5638] Instance destroyed successfully.

2013-03-21 12:15:22 42682 WARNING nova.compute.manager [-] [instance: be67a3ef-036b-4df8-90dd-04cbd0ffafdb] Instance shutdown by itself. Calling the stop API.

2013-03-21 12:15:22 42682 INFO nova.virt.libvirt.driver [-] [instance: a39787b8-2c66-4aaa-b154-42ec05c0e3c8] Instance destroyed successfully.

2013-03-21 12:15:23 42682 INFO nova.virt.libvirt.driver [-] [instance: be67a3ef-036b-4df8-90dd-04cbd0ffafdb] Instance destroyed successfully.

2013-03-21 12:15:23 42682 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 237303

2013-03-21 12:15:23 42682 AUDIT nova.compute.resource_tracker [-] Free disk (GB): -14

2013-03-21 12:15:23 42682 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 38

2013-03-21 12:15:23 42682 INFO nova.compute.resource_tracker [-] Compute_service record updated for uvslp-diu01os.lmdit.us.lmco.com

2013-03-21 12:16:02 AUDIT nova.compute.manager [req-a00e05db-32f4-4be0-8885-e789e8474b4f f015ea92bf4848a891b73ae2fbf6c75b e4ba09fbf76c41a29218dc68cacf78d5] [instance: be67a3ef-036b-4df8-90dd-04cbd0ffafdb] Get console output

2013-03-21 12:16:02 42682 ERROR nova.openstack.common.rpc.amqp [-] Exception during message handling

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 276, in _process_data

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     rval = self.proxy.dispatch(ctxt, version, method, **args)

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", line 145, in dispatch

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     temp_level, payload)

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     self.gen.next()

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     return f(*args, **kw)

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 196, in decorated_function

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     kwargs['instance']['uuid'], e, sys.exc_info())

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     self.gen.next()

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 190, in decorated_function

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1931, in get_console_output

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     output = self.driver.get_console_output(instance)

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     temp_level, payload)

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     self.gen.next()

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     return f(*args, **kw)

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1150, in get_console_output

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     libvirt_utils.chown(path, os.getuid())

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/utils.py", line 332, in chown

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     execute('chown', owner, path, run_as_root=True)

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/utils.py", line 53, in execute

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     return utils.execute(*args, **kwargs)

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/utils.py", line 206, in execute

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp     cmd=' '.join(cmd))

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp ProcessExecutionError: Unexpected error while running command.

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp Command: sudo nova-rootwrap /etc/nova/rootwrap.conf chown 162 /var/lib/nova/instances/instance-000000f2/console.log

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp Exit code: 1

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp Stdout: ''

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp Stderr: "/bin/chown: cannot access `/var/lib/nova/instances/instance-000000f2/console.log': No such file or directory\n"

2013-03-21 12:16:02 42682 TRACE nova.openstack.common.rpc.amqp

2013-03-21 12:16:02 42682 ERROR nova.openstack.common.rpc.common [-] Returning exception Unexpected error while running command.

Command: sudo nova-rootwrap /etc/nova/rootwrap.conf chown 162 /var/lib/nova/instances/instance-000000f2/console.log

Exit code: 1

Stdout: ''

Stderr: "/bin/chown: cannot access `/var/lib/nova/instances/instance-000000f2/console.log': No such file or directory\n" to caller

2013-03-21 12:16:02 42682 ERROR nova.openstack.common.rpc.common [-] ['Traceback (most recent call last):\n', '  File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 276, in _process_data\n    rval = self.proxy.dispatch(ctxt, version, method, **args)\n', '  File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", line 145, in dispatch\n    return getattr(proxyobj, method)(ctxt, **kwargs)\n', '  File "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped\n    temp_level, payload)\n', '  File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__\n    self.gen.next()\n', '  File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped\n    return f(*args, **kw)\n', '  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 196, in decorated_function\n    kwargs[\'instance\'][\'uuid\'], e, sys.exc_info())\n', '  File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__\n    self.gen.next()\n', '  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 190, in decorated_function\n    return function(self, context, *args, **kwargs)\n', '  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1931, in get_console_output\n    output = self.driver.get_console_output(instance)\n', '  File "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped\n    temp_level, payload)\n', '  File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__\n    self.gen.next()\n', '  File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped\n    return f(*args, **kw)\n', '  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1150, in get_console_output\n    libvirt_utils.chown(path, os.getuid())\n', '  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/utils.py", line 332, in chown\n    execute(\'chown\', owner, path, run_as_root=True)\n', '  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/utils.py", line 53, in execute\n    return utils.execute(*args, **kwargs)\n', '  File "/usr/lib/python2.6/site-packages/nova/utils.py", line 206, in execute\n    cmd=\' \'.join(cmd))\n', 'ProcessExecutionError: Unexpected error while running command.\nCommand: sudo nova-rootwrap /etc/nova/rootwrap.conf chown 162 /var/lib/nova/instances/instance-000000f2/console.log\nExit code: 1\nStdout: \'\'\nStderr: "/bin/chown: cannot access `/var/lib/nova/instances/instance-000000f2/console.log\': No such file or directory\\n"\n']
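
The root cause in the trace is the missing console.log under the instances directory. A sketch that scans for instance directories in that state (the path is from the trace; `missing_console_logs` is a hypothetical helper, not nova code):

```python
import glob
import os

def missing_console_logs(instances_dir):
    """Return instance directories under instances_dir lacking a console.log.

    With the NFS mount missing, every instance directory (or the whole
    tree) comes up empty, and nova's chown on console.log fails as above.
    """
    return sorted(
        d
        for d in glob.glob(os.path.join(instances_dir, "instance-*"))
        if os.path.isdir(d)
        and not os.path.isfile(os.path.join(d, "console.log"))
    )

# On the compute node: missing_console_logs("/var/lib/nova/instances")
```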



-----Original Message-----

From: Russell Bryant [mailto:rbryant at redhat.com]

Sent: Thursday, March 21, 2013 12:42 PM

To: Minton, Rich

Cc: rhos-list at redhat.com

Subject: EXTERNAL: Re: Problems after Update.



Based on the instance data, there should be a Traceback in the log somewhere.  Check nova-compute.log.



--

Russell Bryant



On 03/21/2013 12:23 PM, Minton, Rich wrote:

> I found the source of a couple of my problems...

>

>

>

> The update wiped out my glance-api-paste.ini and

> glance-registry-paste.ini. Thankfully I had backups.

>

>

>

> I still have instances that are in shutoff/shutdown mode. I was able

> to get them back to "active" but then they went to "shutoff" soon

> after. Here is the "nova show" for one of the instances.

>

>

>

> +-------------------------------------+------------------------------------------------------------------------------------------+
> | Property                            | Value
> +-------------------------------------+------------------------------------------------------------------------------------------+
> | OS-DCF:diskConfig                   | MANUAL
> | OS-EXT-SRV-ATTR:host                | uvslp-diu01os.lmdit.us.lmco.com
> | OS-EXT-SRV-ATTR:hypervisor_hostname | uvslp-diu01os.lmdit.us.lmco.com
> | OS-EXT-SRV-ATTR:instance_name       | instance-000000ea
> | OS-EXT-STS:power_state              | 4
> | OS-EXT-STS:task_state               | None
> | OS-EXT-STS:vm_state                 | error
> | accessIPv4                          |
> | accessIPv6                          |
> | config_drive                        |
> | created                             | 2013-03-18T15:39:29Z
> | fault                               | {u'message': u'IOError', u'code': 500, u'created': u'2013-03-21T16:17:12Z'}
> | flavor                              | m1.medium (3)
> | hostId                              | e55588f13411a788271d5273f0dabfa2eb755e0323dab51039dab866
> | id                                  | a39787b8-2c66-4aaa-b154-42ec05c0e3c8
> | image                               | Red Hat Enterprise Linux 6.4, x86_64, Base Server (71c092f5-55de-4b3b-a9d9-4fcb25e15a89)
> | key_name                            | TestKeys
> | lmicc-access-nic network            | 10.10.16.11
> | metadata                            | {}
> | name                                | demo
> | security_groups                     | [{u'name': u'default'}]
> | status                              | ERROR
> | tenant_id                           | e4ba09fbf76c41a29218dc68cacf78d5
> | updated                             | 2013-03-21T16:17:12Z
> | user_id                             | f015ea92bf4848a891b73ae2fbf6c75b
> +-------------------------------------+------------------------------------------------------------------------------------------+
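
For reference, the OS-EXT-STS:power_state of 4 in the listing corresponds to SHUTDOWN in nova's power-state table. A small illustrative mapping (values follow nova.compute.power_state of this era; treat the table and the `describe_power_state` helper as illustrative, not part of nova):

```python
# Illustrative map of nova's numeric power_state codes to names.
POWER_STATES = {
    0x00: "NOSTATE",
    0x01: "RUNNING",
    0x03: "PAUSED",
    0x04: "SHUTDOWN",
    0x06: "CRASHED",
    0x07: "SUSPENDED",
}

def describe_power_state(code):
    """Return a readable name for a power_state code (e.g. 4 -> SHUTDOWN)."""
    return POWER_STATES.get(code, "UNKNOWN({})".format(code))

print(describe_power_state(4))  # the state shown in the listing above
```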

>

>

>

> *From:*rhos-list-bounces at redhat.com

> [mailto:rhos-list-bounces at redhat.com] *On Behalf Of *Minton, Rich

> *Sent:* Thursday, March 21, 2013 11:45 AM

> *To:* rhos-list at redhat.com

> *Subject:* EXTERNAL: [rhos-list] Problems after Update.

>

>

>

> Where to begin... Basically, I'm hosed.

>

>

>

> I performed an update to the latest versions of the OpenStack distro

> and rebooted my hosts. Then problems started...

>

> *         Can't launch new instances - the in-progress star comes up and

> stays there almost forever.

>

> *         Several of my instances went to an error state. When I tried

> to reset their state to "active" they went into shutdown/shutoff state.

> These are the instances on my controller/compute node. I typed "virsh

> list" and the list was empty.

>

> *         I tried to bring up a VNC Console on one of the running

> instances and after several minutes received this error in the browser:

>

> -------------------------------------------------------------------------------------------------------------------------------------------

>

> CommunicationError at

> /nova/instances/1f6cb3f6-c4d4-45ac-890b-bf2770a90c82/

>

>

>

> _Error communicating with http://10.10.12.245:9292 timed out_

>

>

>

> Request Method:    GET

>

> Request URL:

> http://10.10.12.245/dashboard/nova/instances/1f6cb3f6-c4d4-45ac-890b-bf2770a90c82/?tab=instance_details__vnc

>

> Django Version:        1.4.2

>

> Exception Type:        CommunicationError

>

> Exception Value:

>

>

>

> Error communicating with http://10.10.12.245:9292 timed out

>

>

>

> Exception Location:

> /usr/lib/python2.6/site-packages/glanceclient/common/http.py in

> _http_request, line 145

>

> Python Executable:                 /usr/bin/python

>

> Python Version:       2.6.6

>

> Python Path:

>

>

>

> ['/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../..',

>

> '/usr/lib64/python26.zip',

>

> '/usr/lib64/python2.6',

>

> '/usr/lib64/python2.6/plat-linux2',

>

> '/usr/lib64/python2.6/lib-tk',

>

> '/usr/lib64/python2.6/lib-old',

>

> '/usr/lib64/python2.6/lib-dynload',

>

> '/usr/lib64/python2.6/site-packages',

>

> '/usr/lib64/python2.6/site-packages/gtk-2.0',

>

> '/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg',

>

> '/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg',

>

> '/usr/lib/python2.6/site-packages',

>

> '/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info',

>

> '/usr/share/openstack-dashboard/openstack_dashboard']

>

>

>

> Server time:               Thu, 21 Mar 2013 15:26:38 +0000

>

> ------------------------------------------------------------------------------------------------------------------------------------
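
The dashboard error above is a plain connect timeout to the glance API. A minimal TCP reachability probe (host and port are taken from the error text; the timeout value and the `can_connect` helper are assumptions for illustration):

```python
import socket

def can_connect(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On the affected box you would probe the glance endpoint from the error:
# can_connect("10.10.12.245", 9292)
```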

>

> *         I tried deleting my iptables rules and rebooting, but that did

> not solve anything.

>

> *         I'm not seeing anything out of the ordinary in the nova logs.

>

>

>

> Any ideas on where I should start looking?

>

>

>

> Thanks for any help.

>

> Rick

>

>

>

> _Richard Minton_

>

> LMICC Systems Administrator

>

> 4000 Geerdes Blvd, 13D31

>

> King of Prussia, PA 19406

>

> Phone: 610-354-5482

>

>

>





_______________________________________________

rhos-list mailing list

rhos-list at redhat.com

https://www.redhat.com/mailman/listinfo/rhos-list


