From rdo-info at redhat.com Thu Aug 1 08:04:33 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Thu, 1 Aug 2013 08:04:33 +0000
Subject: [Rdo-list] [RDO] TakaakiSuzuki started a discussion.
Message-ID: <0000014038e82665-76c43b30-8ac0-4c9e-bee2-bc2a44de37e5-000000@email.amazonses.com>

TakaakiSuzuki started a discussion.

Nested KVM support

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/386/nested-kvm-support

Have a great day!

From rdo-info at redhat.com Thu Aug 1 08:06:22 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Thu, 1 Aug 2013 08:06:22 +0000
Subject: [Rdo-list] [RDO] Ian_Lawson started a discussion.
Message-ID: <0000014038e9cffc-a7a62bbe-a80a-4c5b-99a1-cfb4522ecbdb-000000@email.amazonses.com>

Ian_Lawson started a discussion.

Oddity with RDO on RHEL6.4 with Cinder

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/387/oddity-with-rdo-on-rhel6-4-with-cinder

Have a great day!

From mmagr at redhat.com Thu Aug 1 11:22:40 2013
From: mmagr at redhat.com (Martin Magr)
Date: Thu, 01 Aug 2013 13:22:40 +0200
Subject: [Rdo-list] [package announce] openstack-packstack
Message-ID: <51FA4500.3050102@redhat.com>

Greetings,

Packstack package has been updated in RDO Grizzly EPEL6 repo to openstack-packstack-2013.1.1-0.22.dev653.el6.

Regards,
Martin

%changelog
* Thu Aug 01 2013 Martin Mágr - 2013.1.1-0.22.dev653
- Enable qpidd on boot (#988803)

* Thu Jul 25 2013 Martin Mágr - 2013.1.1-0.21.dev651
- Switched to https://github.com/packstack/puppet-qpid (#977786)
- If allinone and quantum selected, install basic network (#986024)

From mmagr at redhat.com Thu Aug 1 13:01:29 2013
From: mmagr at redhat.com (Martin Magr)
Date: Thu, 01 Aug 2013 15:01:29 +0200
Subject: [Rdo-list] [package announce] openstack-packstack
Message-ID: <51FA5C29.9080804@redhat.com>

Greetings,

Packstack package has been updated in RDO Havana EPEL6 repo to openstack-packstack-2013.2.1-0.1.dev691.el6.

Regards,
Martin

%changelog
* Thu Aug 01 2013 Martin Mágr - 2013.2.1-0.1.dev691
- Added support for Cinder GlusterFS backend configuration (#919607)
- Added support for linuxbridge (#971770)
- Service names made more descriptive (#947381)
- Increased timeout of kernel update (#973217)
- Set debug=true for Nova to have some logs (#958152)
- kvm.modules is loaded only if it exists (#979041)
- Enable qpidd on boot (#988803)
- Switched to https://github.com/packstack/puppet-qpid (#977786)
- If allinone and quantum selected, install basic network (#986024)

* Mon Jul 15 2013 Pádraig Brady - 2013.2.1-0.1.dev642
- Initial Havana release

From rdo-info at redhat.com Thu Aug 1 14:34:36 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Thu, 1 Aug 2013 14:34:36 +0000
Subject: [Rdo-list] [RDO] rbowen started a discussion.
Message-ID: <000001403a4d41e0-a070a556-31b0-4d0a-997b-c2443c959867-000000@email.amazonses.com>

rbowen started a discussion.

Packstack updates (Aug 1, 2013)

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/388/packstack-updates-aug-1-2013

Have a great day!

From rdo-info at redhat.com Thu Aug 1 15:22:58 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Thu, 1 Aug 2013 15:22:58 +0000
Subject: [Rdo-list] [RDO] GreySquirrel started a discussion.
Message-ID: <000001403a798848-b520b38f-1358-4892-b5d1-68e4b25bb855-000000@email.amazonses.com>

GreySquirrel started a discussion.
CentOS 6.4 & openstack-packstack-2013.1.1-0.22.dev653.el6 Issue --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/389/centos-6-4-openstack-packstack-2013-1-1-0-22-dev653-el6-issue Have a great day! From rdo-info at redhat.com Thu Aug 1 22:33:58 2013 From: rdo-info at redhat.com (RDO Forum) Date: Thu, 1 Aug 2013 22:33:58 +0000 Subject: [Rdo-list] [RDO] stevenca started a discussion. Message-ID: <000001403c0422f5-ca531e4a-726f-40a2-944b-40f2846ace42-000000@email.amazonses.com> stevenca started a discussion. Cinder NFS Problem --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/390/cinder-nfs-problem Have a great day! From rdo-info at redhat.com Thu Aug 1 23:19:58 2013 From: rdo-info at redhat.com (RDO Forum) Date: Thu, 1 Aug 2013 23:19:58 +0000 Subject: [Rdo-list] [RDO] msloan started a discussion. Message-ID: <000001403c2e3f81-b5e856b6-d796-46d2-9c12-26087246dbd0-000000@email.amazonses.com> msloan started a discussion. packstack fails allinone install --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/391/packstack-fails-allinone-install Have a great day! From rdo-info at redhat.com Fri Aug 2 03:15:56 2013 From: rdo-info at redhat.com (RDO Forum) Date: Fri, 2 Aug 2013 03:15:56 +0000 Subject: [Rdo-list] [RDO] Rongze started a discussion. Message-ID: <000001403d0648fa-d9cdef2e-94fc-4786-9bb6-15d0d4dd7477-000000@email.amazonses.com> Rongze started a discussion. puppetlabs-concat not found --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/392/puppetlabs-concat-not-found Have a great day! From rdo-info at redhat.com Fri Aug 2 14:51:52 2013 From: rdo-info at redhat.com (RDO Forum) Date: Fri, 2 Aug 2013 14:51:52 +0000 Subject: [Rdo-list] [RDO] PT_C started a discussion. Message-ID: <000001403f836d3f-178fa262-a7b6-45c1-8d8d-6c8bc12ba2b5-000000@email.amazonses.com> PT_C started a discussion. Routing tables messed up --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/393/routing-tables-messed-up Have a great day! From rdo-info at redhat.com Fri Aug 2 15:29:37 2013 From: rdo-info at redhat.com (RDO Forum) Date: Fri, 2 Aug 2013 15:29:37 +0000 Subject: [Rdo-list] [RDO] trude started a discussion. Message-ID: <000001403fa5fbac-02d97f82-e85c-4b70-8e41-2546aaec3446-000000@email.amazonses.com> trude started a discussion. Need a little help with networking after adding a compute node --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/394/need-a-little-help-with-networking-after-adding-a-compute-node Have a great day! From rdo-info at redhat.com Fri Aug 2 15:33:14 2013 From: rdo-info at redhat.com (RDO Forum) Date: Fri, 2 Aug 2013 15:33:14 +0000 Subject: [Rdo-list] [RDO] sushma started a discussion. Message-ID: <000001403fa94ac2-25a171e8-d38c-47ad-95b6-b28aadd0a66d-000000@email.amazonses.com> sushma started a discussion. Do we need more than one Nic card for Multi node setup? --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/395/do-we-need-more-than-one-nic-card-for-multi-node-setup Have a great day! From rdo-info at redhat.com Fri Aug 2 16:27:33 2013 From: rdo-info at redhat.com (RDO Forum) Date: Fri, 2 Aug 2013 16:27:33 +0000 Subject: [Rdo-list] [RDO] PT_C started a discussion. Message-ID: <000001403fdb063d-f5496afb-7793-40d7-aa00-694dcac765dc-000000@email.amazonses.com> PT_C started a discussion. 
Server disconnected (code: 1006)

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/396/server-disconnected-code-1006

Have a great day!

From rdo-info at redhat.com Fri Aug 2 18:07:31 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Fri, 2 Aug 2013 18:07:31 +0000
Subject: [Rdo-list] [RDO] PT_C started a discussion.
Message-ID: <0000014040368bf4-cd6a64d1-fe53-48b8-8554-7eda7acc91a4-000000@email.amazonses.com>

PT_C started a discussion.

Apache web server routing issues

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/397/apache-web-server-routing-issues

Have a great day!

From pmyers at redhat.com Sun Aug 4 13:57:48 2013
From: pmyers at redhat.com (Perry Myers)
Date: Sun, 04 Aug 2013 09:57:48 -0400
Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent)
Message-ID: <51FE5DDC.4010104@redhat.com>

Hi,

I followed the instructions at:
http://openstack.redhat.com/Neutron-Quickstart
http://openstack.redhat.com/Running_an_instance_with_Neutron

I ran this on a RHEL 6.4 VM with latest updates from 6.4.z. I made sure to install the netns enabled kernel from RDO repos and reboot with that kernel before running packstack so that I didn't need to reboot the VM after the packstack install (and have br-ex disappear).

The packstack install went without incident. And I was able to follow the launch an instance instructions.

I noticed that the cirros VM took a long time to get to a login prompt on the VNC console. From looking at the console output it appears that the instance was waiting for a dhcp address.

Once the VNC session got me to a login prompt, I logged in (as the cirros user) and confirmed that eth0 did not have an ip address.

So, something networking related prevented the instance from getting an IP, which of course makes ssh'ing into the instance via the floating ip later in the instructions not work properly.

I tried ifup'ing eth0 and dhcp discovers were sent out but not responded to.

One thing is that on the host running OpenStack services (the VM I ran packstack on), I don't see dnsmasq running except for the default libvirt network:

> [admin at rdo-mgmt ~(keystone_demo)]$ ps -ef | grep dnsmas
> nobody 1968 1 0 08:59 ? 00:00:00 /usr/sbin/dnsmasq --strict-order --local=// --domain-needed --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --bind-interfaces --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override --dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile --addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts
So... that seems to be a problem :)

Just to confirm, I am running the right kernel:

> [root at rdo-mgmt log(keystone_demo)]# uname -a
> Linux rdo-mgmt 2.6.32-358.114.1.openstack.el6.x86_64 #1 SMP Wed Jul 3 02:11:25 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux

> [root at rdo-mgmt log(keystone_demo)]# rpm -q iproute kernel
> iproute-2.6.32-23.el6_4.netns.1.x86_64
> kernel-2.6.32-358.114.1.openstack.el6.x86_64

From quantum server.log:

> 2013-08-04 09:10:48 ERROR [keystoneclient.common.cms] Verify error: Error opening certificate file /var/lib/quantum/keystone-signing/signing_cert.pem
> 140222780139336:error:02001002:system library:fopen:No such file or directory:bss_file.c:126:fopen('/var/lib/quantum/keystone-signing/signing_cert.pem','r')
> 140222780139336:error:2006D080:BIO routines:BIO_new_file:no such file:bss_file.c:129:
>
> 2013-08-04 09:10:48 ERROR [keystoneclient.common.cms] Verify error: Error loading file /var/lib/quantum/keystone-signing/cacert.pem
> 140279285741384:error:02001002:system library:fopen:No such file or directory:bss_file.c:126:fopen('/var/lib/quantum/keystone-signing/cacert.pem','r')
> 140279285741384:error:2006D080:BIO routines:BIO_new_file:no such file:bss_file.c:129:
> 140279285741384:error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib:by_file.c:279:

From quantum dhcp-agent.log:

> 2013-08-04 09:08:05 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response.
> Traceback (most recent call last):
> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__
> data = self._dataqueue.get(timeout=self._timeout)
> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get
> return waiter.wait()
> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait
> return get_hub().switch()
> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch
> return self.greenlet.switch()
> Empty
> 2013-08-04 09:08:05 ERROR [quantum.agent.dhcp_agent] Failed reporting state!
> Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 131, in call_driver > getattr(driver, action)() > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 124, in enable > reuse_existing=True) > File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 554, in setup > namespace=namespace) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug > ns_dev.link.set_address(mac_address) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address > self._as_root('set', self.name, 'address', mac_address) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root > kwargs.get('use_root_namespace', False)) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root > namespace) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute > root_helper=root_helper) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute > raise RuntimeError(m) > RuntimeError: > Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'link', 'set', 'tap07d8cc77-fc', 'address', 'fa:16:3e:da:66:28'] > Exit code: 2 > Stdout: '' > Stderr: 'RTNETLINK answers: Device or resource busy\n' > 2013-08-04 09:32:36 INFO [quantum.agent.dhcp_agent] Synchronizing state > 2013-08-04 09:32:41 ERROR [quantum.agent.dhcp_agent] Unable to enable dhcp. > Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 131, in call_driver > getattr(driver, action)() > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 124, in enable > reuse_existing=True) > File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 554, in setup > namespace=namespace) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug > ns_dev.link.set_address(mac_address) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address > self._as_root('set', self.name, 'address', mac_address) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root > kwargs.get('use_root_namespace', False)) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root > namespace) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute > root_helper=root_helper) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute > raise RuntimeError(m) The RTNETLINK errors just repeat indefinitely >From openvswitch-agent.log: > 2013-08-04 09:08:29 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. > Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ > data = self._dataqueue.get(timeout=self._timeout) > File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get > return waiter.wait() > File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait > return get_hub().switch() > File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch > return self.greenlet.switch() > Empty > 2013-08-04 09:08:29 ERROR [quantum.plugins.openvswitch.agent.ovs_quantum_agent] Failed reporting state! 
> Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/quantum/plugins/openvswitch/agent/ovs_quantum_agent.py", line 201, in _report_state > self.agent_state) > File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state > topic=self.topic) > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call > return rpc.call(context, self._get_topic(topic), msg, timeout) > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call > return _get_impl().call(CONF, context, topic, msg, timeout) > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call > rpc_amqp.get_connection_pool(conf, Connection)) > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call > rv = list(rv) > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ > raise rpc_common.Timeout() > Timeout: Timeout while waiting on RPC response. Do we have a race condition wrt various Quantum agents connecting to the qpid bus that is just generating initial qpid connection error messages that can be safely ignored? If so, is there any way we can clean this up? >From l3-agent.log: > 2013-08-04 09:08:06 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. > Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ > data = self._dataqueue.get(timeout=self._timeout) > File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get > return waiter.wait() > File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait > return get_hub().switch() > File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch > return self.greenlet.switch() > Empty > 2013-08-04 09:08:06 ERROR [quantum.agent.l3_agent] Failed reporting state! > Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 723, in _report_state > self.agent_state) > File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state > topic=self.topic) > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call > return rpc.call(context, self._get_topic(topic), msg, timeout) > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call > return _get_impl().call(CONF, context, topic, msg, timeout) > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call > rpc_amqp.get_connection_pool(conf, Connection)) > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call > rv = list(rv) > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ > raise rpc_common.Timeout() > Timeout: Timeout while waiting on RPC response. > 2013-08-04 09:08:06 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 56.554131 sec > 2013-08-04 09:08:10 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. 
> Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ > data = self._dataqueue.get(timeout=self._timeout) > File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get > return waiter.wait() > File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait > return get_hub().switch() > File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch > return self.greenlet.switch() > Empty > 2013-08-04 09:08:10 ERROR [quantum.agent.l3_agent] Failed synchronizing routers > Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 637, in _sync_routers_task > context, router_id) > File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 77, in get_routers > topic=self.topic) > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call > return rpc.call(context, self._get_topic(topic), msg, timeout) > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call > return _get_impl().call(CONF, context, topic, msg, timeout) > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call > rpc_amqp.get_connection_pool(conf, Connection)) > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call > rv = list(rv) > File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ > raise rpc_common.Timeout() > Timeout: Timeout while waiting on RPC response. > 2013-08-04 09:08:10 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 20.022704 sec > 2013-08-04 09:11:33 ERROR [quantum.agent.l3_agent] Failed synchronizing routers > Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 638, in _sync_routers_task > self._process_routers(routers, all_routers=True) > File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 621, in _process_routers > self.process_router(ri) > File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 319, in process_router > self.external_gateway_added(ri, ex_gw_port, internal_cidrs) > File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 410, in external_gateway_added > prefix=EXTERNAL_DEV_PREFIX) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug > ns_dev.link.set_address(mac_address) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address > self._as_root('set', self.name, 'address', mac_address) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root > kwargs.get('use_root_namespace', False)) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root > namespace) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute > root_helper=root_helper) > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute > raise RuntimeError(m) > RuntimeError: > Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'link', 'set', 'qg-46ed452c-5e', 'address', 'fa:16:3e:e7:d8:30'] > Exit code: 2 > Stdout: '' > Stderr: 'RTNETLINK answers: Device or resource busy\n' > 2013-08-04 09:12:11 ERROR [quantum.agent.l3_agent] Failed synchronizing routers > 
Traceback (most recent call last):
> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 638, in _sync_routers_task
> self._process_routers(routers, all_routers=True)
> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 621, in _process_routers
> self.process_router(ri)
> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 319, in process_router
> self.external_gateway_added(ri, ex_gw_port, internal_cidrs)
> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 410, in external_gateway_added
> prefix=EXTERNAL_DEV_PREFIX)
> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug
> ns_dev.link.set_address(mac_address)
> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address
> self._as_root('set', self.name, 'address', mac_address)
> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root
> kwargs.get('use_root_namespace', False))
> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root
> namespace)
> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute
> root_helper=root_helper)
> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute
> raise RuntimeError(m)

Same qpid connection issue, which I'm assuming can just be ignored at this point. But also similar device busy errors with creating the namespace for the l2 agent.

It appears that the issue with both the l2 agent and the dhcp agent is that the namespace can't be created, which causes both of them to fail.

Anyone have any thoughts on what to look at next here?

Perry

From pmyers at redhat.com Sun Aug 4 14:56:10 2013
From: pmyers at redhat.com (Perry Myers)
Date: Sun, 04 Aug 2013 10:56:10 -0400
Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4
Message-ID: <51FE6B8A.3050501@redhat.com>

Used the Quickstart and just swapped out havana for grizzly, but otherwise followed the Quickstart verbatim.

A few things I've noticed...

RDO Grizzly has: openstack-packstack-2013.1.1-0.22.dev653.el6.src.rpm
RDO Havana has: openstack-packstack-2013.2.1-0.1.dev691.el6.src.rpm

It's hard to tell which version of packstack is 'newer' because the 2013.2.1 vs. 2013.1.1 makes the Havana one automatically > wrt NVR, even if the tarball behind the RPM is older. Is the devXXX number indicative of the tarball release? If this is the case, then havana does have a later version, which makes sense.

But... what I noticed is that running packstack from Havana RDO repos doesn't create the demo tenant or import the cirros image (which I thought should happen for all --allinone style installs).

Then I thought to check the nightly repos here:
http://repos.fedorapeople.org/repos/openstack/openstack-trunk/el6/x86_64/

Packstack isn't even in those repos, which means that folks can't use packstack to install nightly builds easily.

All that being said, once I manually imported the cirros image, and used the admin tenant, I was able to boot an instance.

But, I did notice that the Horizon UI acted a little differently. I didn't see dynamic refreshing of the instance state and the little progress bar anymore. I also can't seem to click on the "More" button to do things like launch the VNC Console or Associate a Floating IP. In various screens, the "Edit" or "Launch" buttons work fine.
But on the screens with a "More" drop down button, the drop down doesn't pop up and so I can't take any of those actions. (I tried this in both Chrome and Firefox on F19; both had the same issue.)

Of course, this was all with Nova Networking, not Neutron, since we don't have neutron in the RDO Havana nightly or milestone repos yet. It'll be nice to verify that Neutron allinone works for Havana as well, as soon as we get the packages ready from the renaming. (Of course, this also means we need the Puppet modules and Packstack changes for the renaming as well.)

So, checklist:

* packstack in Havana needs to create demo tenant and import cirros image just like in RDO Grizzly
* Horizon seems to have screen refresh/updates and "More" button issues
* Need Neutron packages for Havana so that we can use Neutron in Havana
* Need Packstack available so that RDO Nightly users can install the nightly builds

Cheers,

Perry

From kchamart at redhat.com Sun Aug 4 15:47:40 2013
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Sun, 04 Aug 2013 21:17:40 +0530
Subject: Re: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent)
In-Reply-To: <51FE5DDC.4010104@redhat.com>
References: <51FE5DDC.4010104@redhat.com>
Message-ID: <51FE779C.5020008@redhat.com>

On 08/04/2013 07:27 PM, Perry Myers wrote:
> Hi,
>
> I followed the instructions at:
> http://openstack.redhat.com/Neutron-Quickstart
> http://openstack.redhat.com/Running_an_instance_with_Neutron
>
> I ran this on a RHEL 6.4 VM with latest updates from 6.4.z. I made sure
> to install the netns enabled kernel from RDO repos and reboot with that
> kernel before running packstack so that I didn't need to reboot the VM
> after the packstack install (and have br-ex disappear)
>
> The packstack install went without incident. And I was able to follow
> the launch an instance instructions.
>
> I noticed that the cirros VM took a long time to get to a login prompt
> on the VNC console. From looking at the console output it appears that
> the instance was waiting for a dhcp address.

Cirros guests run a bunch of useful networking commands for debugging purposes (there's work to incorporate similar into Fedora images too). You can find the path to your Cirros console log:

http://kashyapc.wordpress.com/2013/04/06/finding-serial-console-log-of-a-nova-instance/

The Cirros console log gives insight into whether your guest is receiving DHCP leases.

> Once the VNC session got me to a login prompt, I logged in (as the
> cirros user) and confirmed that eth0 did not have an ip address.

In my setup, while debugging with Rhys Oxenham, I noticed we had to explicitly associate the IP address and route information (due to [*]). Assuming your private IP network is the 30.0.0.x series, maybe you can try this from VNC:

$ ifconfig eth0 30.0.0.7 netmask 255.255.255.0
$ route add default gw 30.0.0.1 eth0

[*] https://bugzilla.redhat.com/show_bug.cgi?id=983672 - I doubt this will affect the RHEL kernel you're running.

> So, something networking related prevented the instance from getting an
> IP which of course makes ssh'ing into the instance via the floating ip
> later in the instructions not work properly.
>
> I tried ifup'ing eth0 and dhcp discovers were sent out but not responded to.
>
> One thing is that on the host running OpenStack services (the VM I ran
> packstack on), I don't see dnsmasq running except for the default
> libvirt network:
>
>> [admin at rdo-mgmt ~(keystone_demo)]$ ps -ef | grep dnsmas
>> nobody 1968 1 0 08:59 ?
00:00:00 /usr/sbin/dnsmasq --strict-order --local=// --domain-needed --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --bind-interfaces --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override --dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile --addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts
>
> So... that seems to be a problem :)

Yes, you should have a dnsmasq instance running on the DHCP namespace:

From my setup, interfaces info inside the DHCP n/w namespace:

$ ip netns exec qdhcp-4a04382f-03bf-49a9-9d4a-35ab9ffc22ad ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ns-77ee7ea5-61: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:77:ee:87 brd ff:ff:ff:ff:ff:ff
    inet 30.0.0.3/24 brd 30.0.0.255 scope global ns-77ee7ea5-61
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe77:ee87/64 scope link
       valid_lft forever preferred_lft forever

For reference, dnsmasq instances running on the namespace interface (ns-77ee7ea5-61 in this case):
=======
$ ps -ef | grep dnsmasq
root 26057 30911 0 11:10 pts/0 00:00:00 grep --color=auto dnsmasq
nobody 29387 1 0 Aug02 ? 00:00:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=ns-77ee7ea5-61 --except-interface=lo --pid-file=/var/lib/quantum/dhcp/4a04382f-03bf-49a9-9d4a-35ab9ffc22ad/pid --dhcp-hostsfile=/var/lib/quantum/dhcp/4a04382f-03bf-49a9-9d4a-35ab9ffc22ad/host --dhcp-optsfile=/var/lib/quantum/dhcp/4a04382f-03bf-49a9-9d4a-35ab9ffc22ad/opts --dhcp-script=/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update --leasefile-ro --dhcp-range=set:tag0,30.0.0.0,static,120s --conf-file= --domain=openstacklocal
root 29388 29387 0 Aug02 ?
00:00:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=ns-77ee7ea5-61 --except-interface=lo --pid-file=/var/lib/quantum/dhcp/4a04382f-03bf-49a9-9d4a-35ab9ffc22ad/pid --dhcp-hostsfile=/var/lib/quantum/dhcp/4a04382f-03bf-49a9-9d4a-35ab9ffc22ad/host --dhcp-optsfile=/var/lib/quantum/dhcp/4a04382f-03bf-49a9-9d4a-35ab9ffc22ad/opts --dhcp-script=/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update --leasefile-ro --dhcp-range=set:tag0,30.0.0.0,static,120s --conf-file= --domain=openstacklocal ======= > > Just to confirm, I am running the right kernel: >> [root at rdo-mgmt log(keystone_demo)]# uname -a >> Linux rdo-mgmt 2.6.32-358.114.1.openstack.el6.x86_64 #1 SMP Wed Jul 3 02:11:25 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux > >> [root at rdo-mgmt log(keystone_demo)]# rpm -q iproute kernel >> iproute-2.6.32-23.el6_4.netns.1.x86_64 >> kernel-2.6.32-358.114.1.openstack.el6.x86_64 > >>From quantum server.log: >> 2013-08-04 09:10:48 ERROR [keystoneclient.common.cms] Verify error: Error opening certificate file /var/lib/quantum/keystone-signing/signing_cert.pem >> 140222780139336:error:02001002:system library:fopen:No such file or directory:bss_file.c:126:fopen('/var/lib/quantum/keystone-signing/signing_cert.pem','r') >> 140222780139336:error:2006D080:BIO routines:BIO_new_file:no such file:bss_file.c:129: >> >> 2013-08-04 09:10:48 ERROR [keystoneclient.common.cms] Verify error: Error loading file /var/lib/quantum/keystone-signing/cacert.pem >> 140279285741384:error:02001002:system library:fopen:No such file or directory:bss_file.c:126:fopen('/var/lib/quantum/keystone-signing/cacert.pem','r') >> 140279285741384:error:2006D080:BIO routines:BIO_new_file:no such file:bss_file.c:129: >> 140279285741384:error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib:by_file.c:279: Do you have the PEM cert file that system call 'fopen' is trying to locate? $ file /var/lib/quantum/keystone-signing/cacert.pem > >>From quantum dhcp-agent.log: > >> 2013-08-04 09:08:05 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. >> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ >> data = self._dataqueue.get(timeout=self._timeout) >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get >> return waiter.wait() >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait >> return get_hub().switch() >> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch >> return self.greenlet.switch() >> Empty >> 2013-08-04 09:08:05 ERROR [quantum.agent.dhcp_agent] Failed reporting state! 
>> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 702, in _report_state >> self.agent_state) >> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state >> topic=self.topic) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call >> return rpc.call(context, self._get_topic(topic), msg, timeout) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call >> return _get_impl().call(CONF, context, topic, msg, timeout) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call >> rpc_amqp.get_connection_pool(conf, Connection)) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call >> rv = list(rv) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ >> raise rpc_common.Timeout() >> Timeout: Timeout while waiting on RPC response. >> 2013-08-04 09:08:05 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 56.853869 sec >> 2013-08-04 09:08:06 INFO [quantum.agent.dhcp_agent] Synchronizing state >> 2013-08-04 09:32:34 ERROR [quantum.agent.dhcp_agent] Unable to enable dhcp. >> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 131, in call_driver >> getattr(driver, action)() >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 124, in enable >> reuse_existing=True) >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 554, in setup >> namespace=namespace) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug >> ns_dev.link.set_address(mac_address) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address >> self._as_root('set', self.name, 'address', mac_address) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root >> kwargs.get('use_root_namespace', False)) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root >> namespace) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute >> root_helper=root_helper) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute >> raise RuntimeError(m) >> RuntimeError: >> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'link', 'set', 'tap07d8cc77-fc', 'address', 'fa:16:3e:da:66:28'] >> Exit code: 2 >> Stdout: '' >> Stderr: 'RTNETLINK answers: Device or resource busy\n' >> 2013-08-04 09:32:36 INFO [quantum.agent.dhcp_agent] Synchronizing state >> 2013-08-04 09:32:41 ERROR [quantum.agent.dhcp_agent] Unable to enable dhcp. 
>> Traceback (most recent call last):
>> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 131, in call_driver
>> getattr(driver, action)()
>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 124, in enable
>> reuse_existing=True)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 554, in setup
>> namespace=namespace)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug
>> ns_dev.link.set_address(mac_address)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address
>> self._as_root('set', self.name, 'address', mac_address)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root
>> kwargs.get('use_root_namespace', False))
>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root
>> namespace)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute
>> root_helper=root_helper)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute
>> raise RuntimeError(m)
>
> The RTNETLINK errors just repeat indefinitely
>
> From openvswitch-agent.log:
>
>> 2013-08-04 09:08:29 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response.
>> Traceback (most recent call last):
>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__
>> data = self._dataqueue.get(timeout=self._timeout)
>> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get
>> return waiter.wait()
>> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait
>> return get_hub().switch()
>> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch
>> return self.greenlet.switch()
>> Empty
>> 2013-08-04 09:08:29 ERROR [quantum.plugins.openvswitch.agent.ovs_quantum_agent] Failed reporting state!
>> Traceback (most recent call last):
>> File "/usr/lib/python2.6/site-packages/quantum/plugins/openvswitch/agent/ovs_quantum_agent.py", line 201, in _report_state
>> self.agent_state)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state
>> topic=self.topic)
>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call
>> return rpc.call(context, self._get_topic(topic), msg, timeout)
>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call
>> return _get_impl().call(CONF, context, topic, msg, timeout)
>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call
>> rpc_amqp.get_connection_pool(conf, Connection))
>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call
>> rv = list(rv)
>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__
>> raise rpc_common.Timeout()
>> Timeout: Timeout while waiting on RPC response.
>
> Do we have a race condition wrt various Quantum agents connecting to the
> qpid bus that is just generating initial qpid connection error messages
> that can be safely ignored?

Yes - for now I think you can ignore. On my two-node F19 Grizzly w/ Neutron (Quantum) setup (hand-configured), I only see a couple of occurrences of the Timeout message. After

[...]
2013-08-02 08:33:56 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response
.
.
return _get_impl().call(CONF, context, topic, msg, timeout)
File "/usr/lib/python2.7/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call
rpc_amqp.get_connection_pool(conf, Connection))
File "/usr/lib/python2.7/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call
rv = list(rv)
File "/usr/lib/python2.7/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__
raise rpc_common.Timeout()
Timeout: Timeout while waiting on RPC response.
[...]

In my setup, I too noticed a couple of occurrences of it. But I had made an error: in my /etc/quantum/quantum.conf on the Compute node, it had:

qpid_hostname=localhost

I changed it to the explicit IP address (of the Controller node):

qpid_hostname=192.168.122.218

>
> If so, is there any way we can clean this up?
>
> From l3-agent.log:
>
>> 2013-08-04 09:08:06 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response.
>> Traceback (most recent call last):
>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__
>> data = self._dataqueue.get(timeout=self._timeout)
>> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get
>> return waiter.wait()
>> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait
>> return get_hub().switch()
>> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch
>> return self.greenlet.switch()
>> Empty
>> 2013-08-04 09:08:06 ERROR [quantum.agent.l3_agent] Failed reporting state!
>> Traceback (most recent call last):
>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 723, in _report_state
>> self.agent_state)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state
>> topic=self.topic)
>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call
>> return rpc.call(context, self._get_topic(topic), msg, timeout)
>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call
>> return _get_impl().call(CONF, context, topic, msg, timeout)
>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call
>> rpc_amqp.get_connection_pool(conf, Connection))
>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call
>> rv = list(rv)
>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__
>> raise rpc_common.Timeout()
>> Timeout: Timeout while waiting on RPC response.
>> 2013-08-04 09:08:06 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 56.554131 sec
>> 2013-08-04 09:08:10 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response.
>> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ >> data = self._dataqueue.get(timeout=self._timeout) >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get >> return waiter.wait() >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait >> return get_hub().switch() >> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch >> return self.greenlet.switch() >> Empty >> 2013-08-04 09:08:10 ERROR [quantum.agent.l3_agent] Failed synchronizing routers >> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 637, in _sync_routers_task >> context, router_id) >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 77, in get_routers >> topic=self.topic) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call >> return rpc.call(context, self._get_topic(topic), msg, timeout) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call >> return _get_impl().call(CONF, context, topic, msg, timeout) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call >> rpc_amqp.get_connection_pool(conf, Connection)) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call >> rv = list(rv) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ >> raise rpc_common.Timeout() >> Timeout: Timeout while waiting on RPC response. >> 2013-08-04 09:08:10 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 20.022704 sec >> 2013-08-04 09:11:33 ERROR [quantum.agent.l3_agent] Failed synchronizing routers >> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 638, in _sync_routers_task >> self._process_routers(routers, all_routers=True) >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 621, in _process_routers >> self.process_router(ri) >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 319, in process_router >> self.external_gateway_added(ri, ex_gw_port, internal_cidrs) >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 410, in external_gateway_added >> prefix=EXTERNAL_DEV_PREFIX) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug >> ns_dev.link.set_address(mac_address) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address >> self._as_root('set', self.name, 'address', mac_address) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root >> kwargs.get('use_root_namespace', False)) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root >> namespace) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute >> root_helper=root_helper) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute >> raise RuntimeError(m) >> RuntimeError: >> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'link', 'set', 'qg-46ed452c-5e', 'address', 'fa:16:3e:e7:d8:30'] >> Exit code: 2 >> Stdout: '' >> Stderr: 'RTNETLINK answers: Device or resource busy\n' >> 2013-08-04 09:12:11 ERROR 
[quantum.agent.l3_agent] Failed synchronizing routers
>> Traceback (most recent call last):
>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 638, in _sync_routers_task
>> self._process_routers(routers, all_routers=True)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 621, in _process_routers
>> self.process_router(ri)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 319, in process_router
>> self.external_gateway_added(ri, ex_gw_port, internal_cidrs)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 410, in external_gateway_added
>> prefix=EXTERNAL_DEV_PREFIX)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug
>> ns_dev.link.set_address(mac_address)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address
>> self._as_root('set', self.name, 'address', mac_address)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root
>> kwargs.get('use_root_namespace', False))
>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root
>> namespace)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute
>> root_helper=root_helper)
>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute
>> raise RuntimeError(m)
>
> Same qpid connection issue, which I'm assuming can just be ignored at
> this point. But also similar device busy errors with creating the
> namespace for the l2 agent.
>
> It appears that the issue with both the l2 agent and the dhcp agent is that
> the namespace can't be created, which causes both of them to fail.

I can't pinpoint the specific issue here; if you prefer, here are my configs:

http://kashyapc.fedorapeople.org/virt/openstack/two-node-OpenStack-f19-configs/controller-node-configs/quantum/

And that's the setup diagram I have (ignore the file name :) ):

http://kashyapc.fedorapeople.org/virt/openstack/namespaces-info-1.txt

(Also, I haven't denoted the Compute node in the ascii image. It's just running on a different VM.)

>
> Anyone have any thoughts on what to look at next here?
>
> Perry

--
/kashyap

From kchamart at redhat.com Mon Aug 5 06:10:21 2013
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Mon, 05 Aug 2013 11:40:21 +0530
Subject: [Rdo-list] rdo-release-havana RPM install fails with dependency on foreman-release
Message-ID: <51FF41CD.50109@redhat.com>

Heya (Pádraig?),

Installing the generic rdo-release-havana RPM pulls in the rdo-release-havana-3 package (while earlier it pulled in rdo-release-havana-2).

As a result of pulling in rdo-release-havana-3, it fails with a dependency on foreman-release:

$ sudo yum install -y \
> http://rdo.fedorapeople.org/openstack/openstack-havana/rdo-release-havana.rpm
rdo-release-havana.rpm | 7.6 kB 00:00:00
Examining /var/tmp/yum-root-C_xp3m/rdo-release-havana.rpm: rdo-release-havana-3.noarch
Marking /var/tmp/yum-root-C_xp3m/rdo-release-havana.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package rdo-release.noarch 0:havana-3 will be installed
--> Processing Dependency: foreman-release for package: rdo-release-havana-3.noarch
--> Finished Dependency Resolution
Error: Package: rdo-release-havana-3.noarch (/rdo-release-havana)
       Requires: foreman-release
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

Of course, explicitly installing 'foreman-release' will resolve the dependency issue. And for m-2 packages, we can use the absolute path to the URL:

$ yum install -y http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-2.noarch.rpm

Just thought of bringing this to notice.

--
/kashyap

From tgraf at redhat.com Mon Aug 5 08:27:21 2013
From: tgraf at redhat.com (Thomas Graf)
Date: Mon, 05 Aug 2013 10:27:21 +0200
Subject: Re: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent)
In-Reply-To: <51FE779C.5020008@redhat.com>
References: <51FE5DDC.4010104@redhat.com> <51FE779C.5020008@redhat.com>
Message-ID: <51FF61E9.4090601@redhat.com>

On 08/04/2013 05:47 PM, Kashyap Chamarthy wrote:
>>> 2013-08-04 09:08:05 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 56.853869 sec
>>> 2013-08-04 09:08:06 INFO [quantum.agent.dhcp_agent] Synchronizing state
>>> 2013-08-04 09:32:34 ERROR [quantum.agent.dhcp_agent] Unable to enable dhcp.
>>> Traceback (most recent call last): >>> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 131, in call_driver >>> getattr(driver, action)() >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 124, in enable >>> reuse_existing=True) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 554, in setup >>> namespace=namespace) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug >>> ns_dev.link.set_address(mac_address) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address >>> self._as_root('set', self.name, 'address', mac_address) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root >>> kwargs.get('use_root_namespace', False)) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root >>> namespace) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute >>> root_helper=root_helper) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute >>> raise RuntimeError(m) >>> RuntimeError: >>> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'link', 'set', 'tap07d8cc77-fc', 'address', 'fa:16:3e:da:66:28'] >>> Exit code: 2 >>> Stdout: '' >>> Stderr: 'RTNETLINK answers: Device or resource busy\n' Quantum attempts to change the MAC address while the link is up. The live MAC address change feature is not supported in the openstack kernel at this point. We can attempt a backport of the feature to the openstack kernel and enable it for tap and veth devices or we modify quantum to bring down the interface before changing the mac address and bring it up again afterwards. From kchamart at redhat.com Mon Aug 5 09:41:38 2013 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 05 Aug 2013 15:11:38 +0530 Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent) In-Reply-To: <51FE779C.5020008@redhat.com> References: <51FE5DDC.4010104@redhat.com> <51FE779C.5020008@redhat.com> Message-ID: <51FF7352.4020805@redhat.com> [...] >> >> >From quantum server.log: >>> 2013-08-04 09:10:48 ERROR [keystoneclient.common.cms] Verify error: Error opening certificate file /var/lib/quantum/keystone-signing/signing_cert.pem >>> 140222780139336:error:02001002:system library:fopen:No such file or directory:bss_file.c:126:fopen('/var/lib/quantum/keystone-signing/signing_cert.pem','r') >>> 140222780139336:error:2006D080:BIO routines:BIO_new_file:no such file:bss_file.c:129: >>> >>> 2013-08-04 09:10:48 ERROR [keystoneclient.common.cms] Verify error: Error loading file /var/lib/quantum/keystone-signing/cacert.pem >>> 140279285741384:error:02001002:system library:fopen:No such file or directory:bss_file.c:126:fopen('/var/lib/quantum/keystone-signing/cacert.pem','r') >>> 140279285741384:error:2006D080:BIO routines:BIO_new_file:no such file:bss_file.c:129: >>> 140279285741384:error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib:by_file.c:279: > > Do you have the PEM cert file that system call 'fopen' is trying to locate? > > $ file /var/lib/quantum/keystone-signing/cacert.pem This appears to be a known issue https://bugs.launchpad.net/python-keystoneclient/+bug/1189539 Says these errors are benign. 
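A quick way to sanity-check this on a live setup (using the paths from the log above, and assuming, per that bug report, that keystoneclient fetches the certs from Keystone on demand at the first PKI token validation):

$ ls -l /var/lib/quantum/keystone-signing/
$ openssl x509 -in /var/lib/quantum/keystone-signing/signing_cert.pem -noout -subject -dates

If the files never appear even after the services have handled authenticated requests, it's worth checking that quantum-server can actually reach the Keystone endpoint.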
--
/kashyap

From pbrady at redhat.com Mon Aug 5 09:48:59 2013
From: pbrady at redhat.com (Pádraig Brady)
Date: Mon, 05 Aug 2013 10:48:59 +0100
Subject: [Rdo-list] rdo-release-havana RPM install fails with dependency on foreman-release
In-Reply-To: <51FF41CD.50109@redhat.com>
References: <51FF41CD.50109@redhat.com>
Message-ID: <51FF750B.8020206@redhat.com>

On 08/05/2013 07:10 AM, Kashyap Chamarthy wrote:
> Heya (Pádraig?),
>
> Installing the generic rdo-release-havana RPM pulls in the rdo-release-havana-3 package
> (while earlier it pulled in rdo-release-havana-2)
>
> As a result of pulling in rdo-release-havana-3, it fails with a dependency on foreman-release:

Oops right, catch 22. It works for `yum install rdo-release`, but not with this initial install method when the repo is not in place at all.

I've reverted that change now, and will handle in a different manner later on.

thanks!
Pádraig.

From kchamart at redhat.com Mon Aug 5 10:08:14 2013
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Mon, 05 Aug 2013 15:38:14 +0530
Subject: Re: [Rdo-list] rdo-release-havana RPM install fails with dependency on foreman-release
In-Reply-To: <51FF750B.8020206@redhat.com>
References: <51FF41CD.50109@redhat.com> <51FF750B.8020206@redhat.com>
Message-ID: <51FF798E.9050907@redhat.com>

On 08/05/2013 03:18 PM, Pádraig Brady wrote:
> On 08/05/2013 07:10 AM, Kashyap Chamarthy wrote:
>> Heya (Pádraig?),
>>
>> Installing the generic rdo-release-havana RPM pulls in the rdo-release-havana-3 package
>> (while earlier it pulled in rdo-release-havana-2)
>>
>> As a result of pulling in rdo-release-havana-3, it fails with a dependency on foreman-release:
>
> Oops right, catch 22.
> It works for `yum install rdo-release`,
> but not with this initial install method
> when the repo is not in place at all.
>>
>> I've reverted that change now,
>
> Cool, thanks.
>
> Do you want to place a quick README noting that there are two milestone packages (m2, m3)?
>
> http://repos.fedorapeople.org/repos/openstack/openstack-havana/
>
> Or is that not really needed?

No need, I think. Changes here are usually uninteresting plumbing, already documented in `rpm -q --changelog`. Scripts/docs reference the generic links rather than specific milestones.

> Also, while I have your attention here: maybe we should advertise the 'fedora-easy-karma' tool,
> so that people who have been testing with updates-testing enabled can provide up-votes?
>
> Packages are languishing in Bodhi:
>
> https://admin.fedoraproject.org/updates/FEDORA-2013-14093/openstack-packstack-2013.2.1-0.1.dev691.fc19?_csrf_token=dbacbb6a8f3abee94c659d6805addfa02e0105b1

It would be good to provide feedback from RDO to Fedora. Note the caveat, though, that Fedora N+1 builds are used for the RDO Fedora N versions. I.e. the RDO Havana Fedora 19 repo currently contains openstack-packstack-2013.2.1-0.1.dev691.fc20, which, while currently identical to the fc19 build, may not be in general. This would also impact tools that automatically provide karma based on installed versions. Something to think about indeed.

thanks,
Pádraig.

From pmyers at redhat.com Mon Aug 5 11:52:08 2013
From: pmyers at redhat.com (Perry Myers)
Date: Mon, 05 Aug 2013 07:52:08 -0400
Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent)
In-Reply-To: <51FF61E9.4090601@redhat.com>
References: <51FE5DDC.4010104@redhat.com> <51FE779C.5020008@redhat.com> <51FF61E9.4090601@redhat.com>
Message-ID: <51FF91E8.6050205@redhat.com>

On 08/05/2013 04:27 AM, Thomas Graf wrote:
> On 08/04/2013 05:47 PM, Kashyap Chamarthy wrote:
>>>> 2013-08-04 09:08:05 WARNING [quantum.openstack.common.loopingcall]
>>>> task run outlasted interval by 56.853869 sec
>>>> 2013-08-04 09:08:06 INFO [quantum.agent.dhcp_agent]
>>>> Synchronizing state
>>>> 2013-08-04 09:32:34 ERROR [quantum.agent.dhcp_agent] Unable to
>>>> enable dhcp.
>>>> Traceback (most recent call last): >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line >>>> 131, in call_driver >>>> getattr(driver, action)() >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line >>>> 124, in enable >>>> reuse_existing=True) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line >>>> 554, in setup >>>> namespace=namespace) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", >>>> line 181, in plug >>>> ns_dev.link.set_address(mac_address) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", >>>> line 180, in set_address >>>> self._as_root('set', self.name, 'address', mac_address) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", >>>> line 167, in _as_root >>>> kwargs.get('use_root_namespace', False)) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", >>>> line 47, in _as_root >>>> namespace) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", >>>> line 58, in _execute >>>> root_helper=root_helper) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", >>>> line 61, in execute >>>> raise RuntimeError(m) >>>> RuntimeError: >>>> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', >>>> 'ip', 'link', 'set', 'tap07d8cc77-fc', 'address', 'fa:16:3e:da:66:28'] >>>> Exit code: 2 >>>> Stdout: '' >>>> Stderr: 'RTNETLINK answers: Device or resource busy\n' > > Quantum attempts to change the MAC address while the link is up. The > live MAC address change feature is not supported in the openstack > kernel at this point. > > We can attempt a backport of the feature to the openstack kernel and > enable it for tap and veth devices or we modify quantum to bring down > the interface before changing the mac address and bring it up again > afterwards. Thanks Thomas. Or perhaps we need a fix to Quantum itself to create the link with the proper MAC address to begin with rather than changing it in a second step? With the above error, I wonder if the Quantum Quickstart ever fully worked at all on either RHOS or RDO Grizzly? Terry, how did you work around the above issue when testing on RHOS? Perry From tgraf at redhat.com Mon Aug 5 11:59:37 2013 From: tgraf at redhat.com (Thomas Graf) Date: Mon, 05 Aug 2013 13:59:37 +0200 Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent) In-Reply-To: <51FF91E8.6050205@redhat.com> References: <51FE5DDC.4010104@redhat.com> <51FE779C.5020008@redhat.com> <51FF61E9.4090601@redhat.com> <51FF91E8.6050205@redhat.com> Message-ID: <51FF93A9.1070909@redhat.com> On 08/05/2013 01:52 PM, Perry Myers wrote: > On 08/05/2013 04:27 AM, Thomas Graf wrote: >> On 08/04/2013 05:47 PM, Kashyap Chamarthy wrote: >>>>> 2013-08-04 09:08:05 WARNING [quantum.openstack.common.loopingcall] >>>>> task run outlasted interval by 56.853869 sec >>>>> 2013-08-04 09:08:06 INFO [quantum.agent.dhcp_agent] >>>>> Synchronizing state >>>>> 2013-08-04 09:32:34 ERROR [quantum.agent.dhcp_agent] Unable to >>>>> enable dhcp. 
>>>>> Traceback (most recent call last): >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line >>>>> 131, in call_driver >>>>> getattr(driver, action)() >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line >>>>> 124, in enable >>>>> reuse_existing=True) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line >>>>> 554, in setup >>>>> namespace=namespace) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", >>>>> line 181, in plug >>>>> ns_dev.link.set_address(mac_address) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", >>>>> line 180, in set_address >>>>> self._as_root('set', self.name, 'address', mac_address) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", >>>>> line 167, in _as_root >>>>> kwargs.get('use_root_namespace', False)) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", >>>>> line 47, in _as_root >>>>> namespace) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", >>>>> line 58, in _execute >>>>> root_helper=root_helper) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", >>>>> line 61, in execute >>>>> raise RuntimeError(m) >>>>> RuntimeError: >>>>> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', >>>>> 'ip', 'link', 'set', 'tap07d8cc77-fc', 'address', 'fa:16:3e:da:66:28'] >>>>> Exit code: 2 >>>>> Stdout: '' >>>>> Stderr: 'RTNETLINK answers: Device or resource busy\n' >> >> Quantum attempts to change the MAC address while the link is up. The >> live MAC address change feature is not supported in the openstack >> kernel at this point. >> >> We can attempt a backport of the feature to the openstack kernel and >> enable it for tap and veth devices or we modify quantum to bring down >> the interface before changing the mac address and bring it up again >> afterwards. > > Thanks Thomas. Or perhaps we need a fix to Quantum itself to create the > link with the proper MAC address to begin with rather than changing it > in a second step? This would make sense from my POV. I doubt that it's desirable to have the wrong MAC address live at any point. > With the above error, I wonder if the Quantum Quickstart ever fully > worked at all on either RHOS or RDO Grizzly? > > Terry, how did you work around the above issue when testing on RHOS? From rdo-info at redhat.com Mon Aug 5 14:03:42 2013 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 5 Aug 2013 14:03:42 +0000 Subject: [Rdo-list] [RDO] moimael started a discussion. Message-ID: <000001404eca6845-1c7a7cab-e741-40c1-ace4-b4daa13b6331-000000@email.amazonses.com> moimael started a discussion. CentOS 6.4 grub does not show openstack kernel --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/398/centos-6-4-grub-does-not-show-openstack-kernel Have a great day! From rdo-info at redhat.com Mon Aug 5 15:24:02 2013 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 5 Aug 2013 15:24:02 +0000 Subject: [Rdo-list] [RDO] o1o1o11o1 started a discussion. Message-ID: <000001404f13f377-18be53ad-9dfb-4d31-b126-32cc0b3ee346-000000@email.amazonses.com> o1o1o11o1 started a discussion. RDO Fails to install on 6.4 with the following --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/399/rdo-fails-to-install-on-6-4-with-the-following Have a great day! 
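For reference, Thomas's second option above (bring the link down, change the MAC, bring it back up) corresponds to this manual sequence, with the device name and MAC taken from the traceback (the tap device name will differ on each install):

$ ip link set tap07d8cc77-fc down
$ ip link set tap07d8cc77-fc address fa:16:3e:da:66:28
$ ip link set tap07d8cc77-fc up

And a minimal, self-contained sketch of the same idea in Python; this is a hypothetical helper for illustration only, not the actual quantum fix, which would go through ip_lib's _as_root()/rootwrap rather than calling ip directly:

import subprocess

def set_address_safely(device, mac_address):
    # Work around kernels that reject a live MAC change with
    # 'RTNETLINK answers: Device or resource busy': take the link
    # down, set the address, then bring it back up.
    for args in (['down'], ['address', mac_address], ['up']):
        subprocess.check_call(['ip', 'link', 'set', device] + args)

set_address_safely('tap07d8cc77-fc', 'fa:16:3e:da:66:28')

The trade-off is the brief window with the link down, which is why Perry's suggestion of creating the link with the correct MAC in the first place would be the cleaner fix.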
From twilson at redhat.com Mon Aug 5 16:27:11 2013
From: twilson at redhat.com (Terry Wilson)
Date: Mon, 5 Aug 2013 12:27:11 -0400 (EDT)
Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent)
In-Reply-To: <51FF91E8.6050205@redhat.com>
References: <51FE5DDC.4010104@redhat.com> <51FE779C.5020008@redhat.com> <51FF61E9.4090601@redhat.com> <51FF91E8.6050205@redhat.com>
Message-ID: <1107455714.11703980.1375720031570.JavaMail.root@redhat.com>

----- Original Message -----
> On 08/05/2013 04:27 AM, Thomas Graf wrote:
> > On 08/04/2013 05:47 PM, Kashyap Chamarthy wrote:
> >>>> 2013-08-04 09:08:05 WARNING [quantum.openstack.common.loopingcall]
> >>>> task run outlasted interval by 56.853869 sec
> >>>> 2013-08-04 09:08:06 INFO [quantum.agent.dhcp_agent]
> >>>> Synchronizing state
> >>>> 2013-08-04 09:32:34 ERROR [quantum.agent.dhcp_agent] Unable to enable
> >>>> dhcp.
> >>>> Traceback (most recent call last):
> >>>> File
> >>>> "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line
> >>>> 131, in call_driver
> >>>> getattr(driver, action)()
> >>>> File
> >>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line
> >>>> 124, in enable
> >>>> reuse_existing=True)
> >>>> File
> >>>> "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line
> >>>> 554, in setup
> >>>> namespace=namespace)
> >>>> File
> >>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py",
> >>>> line 181, in plug
> >>>> ns_dev.link.set_address(mac_address)
> >>>> File
> >>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py",
> >>>> line 180, in set_address
> >>>> self._as_root('set', self.name, 'address', mac_address)
> >>>> File
> >>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py",
> >>>> line 167, in _as_root
> >>>> kwargs.get('use_root_namespace', False))
> >>>> File
> >>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py",
> >>>> line 47, in _as_root
> >>>> namespace)
> >>>> File
> >>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py",
> >>>> line 58, in _execute
> >>>> root_helper=root_helper)
> >>>> File
> >>>> "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py",
> >>>> line 61, in execute
> >>>> raise RuntimeError(m)
> >>>> RuntimeError:
> >>>> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf',
> >>>> 'ip', 'link', 'set', 'tap07d8cc77-fc', 'address', 'fa:16:3e:da:66:28']
> >>>> Exit code: 2
> >>>> Stdout: ''
> >>>> Stderr: 'RTNETLINK answers: Device or resource busy\n'
> >
> > Quantum attempts to change the MAC address while the link is up. The
> > live MAC address change feature is not supported in the openstack
> > kernel at this point.
> >
> > We can attempt a backport of the feature to the openstack kernel and
> > enable it for tap and veth devices or we modify quantum to bring down
> > the interface before changing the mac address and bring it up again
> > afterwards.
>
> Thanks Thomas. Or perhaps we need a fix to Quantum itself to create the
> link with the proper MAC address to begin with rather than changing it
> in a second step?
>
> With the above error, I wonder if the Quantum Quickstart ever fully
> worked at all on either RHOS or RDO Grizzly?
>
> Terry, how did you work around the above issue when testing on RHOS?

I didn't run into this issue when testing on RHOS or RDO. For me, on my test VMs, everything comes up properly on both systems (launching VMs, getting addresses, connecting via floating IP, etc.
just works)--although I haven't tested RHOS with the latest changes because the build was just done this morning. Terry From beagles at redhat.com Mon Aug 5 17:23:39 2013 From: beagles at redhat.com (Brent Eagles) Date: Mon, 05 Aug 2013 14:53:39 -0230 Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent) In-Reply-To: <51FE5DDC.4010104@redhat.com> References: <51FE5DDC.4010104@redhat.com> Message-ID: <51FFDF9B.5020005@redhat.com> On 08/04/2013 11:27 AM, Perry Myers wrote: > Hi, > > I followed the instructions at: > http://openstack.redhat.com/Neutron-Quickstart > http://openstack.redhat.com/Running_an_instance_with_Neutron > > I ran this on a RHEL 6.4 VM with latest updates from 6.4.z. I made sure > to install the netns enabled kernel from RDO repos and reboot with that > kernel before running packstack so that I didn't need to reboot the VM > after the packstack install (and have br-ex disappear) > > The packstack install went without incident. And I was able to follow > the launch an instance instructions. > > I noticed that the cirros VM took a long time to get to a login prompt > on the VNC console. From looking at the console output it appears that > the instance was waiting for a dhcp address. > > Once the VNC session got me to a login prompt, I logged in (as the > cirros user) and confirmed that eth0 did not have an ip address. > > So, something networking related prevented the instance from getting an > IP which of course makes ssh'ing into the instance via the floating ip > later in the instructions not work properly. > > I tried ifup'ing eth0 and dhcp discovers were sent out but not responded to. > > One thing is that on the host running OpenStack services (the VM I ran > packstack on), I don't see dnsmasq running except for the default > libvirt network: > >> [admin at rdo-mgmt ~(keystone_demo)]$ ps -ef | grep dnsmas >> nobody 1968 1 0 08:59 ? 00:00:00 /usr/sbin/dnsmasq --strict-order --local=// --domain-needed --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --bind-interfaces --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override --dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile --addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts > > So... 
that seems to be a problem :) > > Just to confirm, I am running the right kernel: >> [root at rdo-mgmt log(keystone_demo)]# uname -a >> Linux rdo-mgmt 2.6.32-358.114.1.openstack.el6.x86_64 #1 SMP Wed Jul 3 02:11:25 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux > >> [root at rdo-mgmt log(keystone_demo)]# rpm -q iproute kernel >> iproute-2.6.32-23.el6_4.netns.1.x86_64 >> kernel-2.6.32-358.114.1.openstack.el6.x86_64 > > From quantum server.log: >> 2013-08-04 09:10:48 ERROR [keystoneclient.common.cms] Verify error: Error opening certificate file /var/lib/quantum/keystone-signing/signing_cert.pem >> 140222780139336:error:02001002:system library:fopen:No such file or directory:bss_file.c:126:fopen('/var/lib/quantum/keystone-signing/signing_cert.pem','r') >> 140222780139336:error:2006D080:BIO routines:BIO_new_file:no such file:bss_file.c:129: >> >> 2013-08-04 09:10:48 ERROR [keystoneclient.common.cms] Verify error: Error loading file /var/lib/quantum/keystone-signing/cacert.pem >> 140279285741384:error:02001002:system library:fopen:No such file or directory:bss_file.c:126:fopen('/var/lib/quantum/keystone-signing/cacert.pem','r') >> 140279285741384:error:2006D080:BIO routines:BIO_new_file:no such file:bss_file.c:129: >> 140279285741384:error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib:by_file.c:279: > > From quantum dhcp-agent.log: > >> 2013-08-04 09:08:05 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. >> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ >> data = self._dataqueue.get(timeout=self._timeout) >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get >> return waiter.wait() >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait >> return get_hub().switch() >> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch >> return self.greenlet.switch() >> Empty >> 2013-08-04 09:08:05 ERROR [quantum.agent.dhcp_agent] Failed reporting state! >> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 702, in _report_state >> self.agent_state) >> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state >> topic=self.topic) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call >> return rpc.call(context, self._get_topic(topic), msg, timeout) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call >> return _get_impl().call(CONF, context, topic, msg, timeout) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call >> rpc_amqp.get_connection_pool(conf, Connection)) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call >> rv = list(rv) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ >> raise rpc_common.Timeout() >> Timeout: Timeout while waiting on RPC response. >> 2013-08-04 09:08:05 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 56.853869 sec >> 2013-08-04 09:08:06 INFO [quantum.agent.dhcp_agent] Synchronizing state >> 2013-08-04 09:32:34 ERROR [quantum.agent.dhcp_agent] Unable to enable dhcp. 
>> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 131, in call_driver >> getattr(driver, action)() >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 124, in enable >> reuse_existing=True) >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 554, in setup >> namespace=namespace) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug >> ns_dev.link.set_address(mac_address) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address >> self._as_root('set', self.name, 'address', mac_address) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root >> kwargs.get('use_root_namespace', False)) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root >> namespace) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute >> root_helper=root_helper) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute >> raise RuntimeError(m) >> RuntimeError: >> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'link', 'set', 'tap07d8cc77-fc', 'address', 'fa:16:3e:da:66:28'] >> Exit code: 2 >> Stdout: '' >> Stderr: 'RTNETLINK answers: Device or resource busy\n' >> 2013-08-04 09:32:36 INFO [quantum.agent.dhcp_agent] Synchronizing state >> 2013-08-04 09:32:41 ERROR [quantum.agent.dhcp_agent] Unable to enable dhcp. >> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 131, in call_driver >> getattr(driver, action)() >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 124, in enable >> reuse_existing=True) >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 554, in setup >> namespace=namespace) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug >> ns_dev.link.set_address(mac_address) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address >> self._as_root('set', self.name, 'address', mac_address) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root >> kwargs.get('use_root_namespace', False)) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root >> namespace) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute >> root_helper=root_helper) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute >> raise RuntimeError(m) > > The RTNETLINK errors just repeat indefinitely > > From openvswitch-agent.log: > >> 2013-08-04 09:08:29 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. 
>> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ >> data = self._dataqueue.get(timeout=self._timeout) >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get >> return waiter.wait() >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait >> return get_hub().switch() >> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch >> return self.greenlet.switch() >> Empty >> 2013-08-04 09:08:29 ERROR [quantum.plugins.openvswitch.agent.ovs_quantum_agent] Failed reporting state! >> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/plugins/openvswitch/agent/ovs_quantum_agent.py", line 201, in _report_state >> self.agent_state) >> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state >> topic=self.topic) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call >> return rpc.call(context, self._get_topic(topic), msg, timeout) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call >> return _get_impl().call(CONF, context, topic, msg, timeout) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call >> rpc_amqp.get_connection_pool(conf, Connection)) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call >> rv = list(rv) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ >> raise rpc_common.Timeout() >> Timeout: Timeout while waiting on RPC response. > > Do we have a race condition wrt various Quantum agents connecting to the > qpid bus that is just generating initial qpid connection error messages > that can be safely ignored? > > If so, is there any way we can clean this up? > > From l3-agent.log: > >> 2013-08-04 09:08:06 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. >> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ >> data = self._dataqueue.get(timeout=self._timeout) >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get >> return waiter.wait() >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait >> return get_hub().switch() >> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch >> return self.greenlet.switch() >> Empty >> 2013-08-04 09:08:06 ERROR [quantum.agent.l3_agent] Failed reporting state! 
>> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 723, in _report_state >> self.agent_state) >> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state >> topic=self.topic) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call >> return rpc.call(context, self._get_topic(topic), msg, timeout) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call >> return _get_impl().call(CONF, context, topic, msg, timeout) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call >> rpc_amqp.get_connection_pool(conf, Connection)) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call >> rv = list(rv) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ >> raise rpc_common.Timeout() >> Timeout: Timeout while waiting on RPC response. >> 2013-08-04 09:08:06 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 56.554131 sec >> 2013-08-04 09:08:10 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. >> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ >> data = self._dataqueue.get(timeout=self._timeout) >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get >> return waiter.wait() >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait >> return get_hub().switch() >> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch >> return self.greenlet.switch() >> Empty >> 2013-08-04 09:08:10 ERROR [quantum.agent.l3_agent] Failed synchronizing routers >> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 637, in _sync_routers_task >> context, router_id) >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 77, in get_routers >> topic=self.topic) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call >> return rpc.call(context, self._get_topic(topic), msg, timeout) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call >> return _get_impl().call(CONF, context, topic, msg, timeout) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call >> rpc_amqp.get_connection_pool(conf, Connection)) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call >> rv = list(rv) >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ >> raise rpc_common.Timeout() >> Timeout: Timeout while waiting on RPC response. 
>> 2013-08-04 09:08:10 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 20.022704 sec >> 2013-08-04 09:11:33 ERROR [quantum.agent.l3_agent] Failed synchronizing routers >> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 638, in _sync_routers_task >> self._process_routers(routers, all_routers=True) >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 621, in _process_routers >> self.process_router(ri) >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 319, in process_router >> self.external_gateway_added(ri, ex_gw_port, internal_cidrs) >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 410, in external_gateway_added >> prefix=EXTERNAL_DEV_PREFIX) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug >> ns_dev.link.set_address(mac_address) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address >> self._as_root('set', self.name, 'address', mac_address) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root >> kwargs.get('use_root_namespace', False)) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root >> namespace) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute >> root_helper=root_helper) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute >> raise RuntimeError(m) >> RuntimeError: >> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'link', 'set', 'qg-46ed452c-5e', 'address', 'fa:16:3e:e7:d8:30'] >> Exit code: 2 >> Stdout: '' >> Stderr: 'RTNETLINK answers: Device or resource busy\n' >> 2013-08-04 09:12:11 ERROR [quantum.agent.l3_agent] Failed synchronizing routers >> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 638, in _sync_routers_task >> self._process_routers(routers, all_routers=True) >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 621, in _process_routers >> self.process_router(ri) >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 319, in process_router >> self.external_gateway_added(ri, ex_gw_port, internal_cidrs) >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 410, in external_gateway_added >> prefix=EXTERNAL_DEV_PREFIX) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug >> ns_dev.link.set_address(mac_address) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address >> self._as_root('set', self.name, 'address', mac_address) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root >> kwargs.get('use_root_namespace', False)) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root >> namespace) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute >> root_helper=root_helper) >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute >> raise RuntimeError(m) > > Same qpid connection issue, which I'm assuming can just be ignored at > this point. 
But also similar device busy errors with creating the
> namespace for the l2 agent
>
> It appears that the issue with both the l2 agent and the dhcp agent is that
> the namespace can't be created, which causes both of them to fail.
>
> Anyone have any thoughts on what to look at next here?
>
> Perry

I ran into these issues as well. I noticed that ovs_use_veth was commented out in dhcp_agent.ini and l3_agent.ini. I uncommented them, set them to True, and restarted. The VM now has an IP address.

I noticed something else peculiar, though: the public network, the one set as the gateway for the router, has DHCP enabled. I'm not sure why we would do that.

Cheers,

Brent

From marun at redhat.com Mon Aug 5 17:31:33 2013
From: marun at redhat.com (Maru Newby)
Date: Mon, 5 Aug 2013 10:31:33 -0700
Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent)
In-Reply-To: <51FFDF9B.5020005@redhat.com>
References: <51FE5DDC.4010104@redhat.com> <51FFDF9B.5020005@redhat.com>
Message-ID: 

On Aug 5, 2013, at 10:23 AM, Brent Eagles wrote:

> On 08/04/2013 11:27 AM, Perry Myers wrote:
>> Hi,
>>
>> I followed the instructions at:
>> http://openstack.redhat.com/Neutron-Quickstart
>> http://openstack.redhat.com/Running_an_instance_with_Neutron
>>
>> I ran this on a RHEL 6.4 VM with latest updates from 6.4.z. I made sure
>> to install the netns enabled kernel from RDO repos and reboot with that
>> kernel before running packstack so that I didn't need to reboot the VM
>> after the packstack install (and have br-ex disappear)
>>
>> The packstack install went without incident. And I was able to follow
>> the launch an instance instructions.
>>
>> I noticed that the cirros VM took a long time to get to a login prompt
>> on the VNC console. From looking at the console output it appears that
>> the instance was waiting for a dhcp address.
>>
>> Once the VNC session got me to a login prompt, I logged in (as the
>> cirros user) and confirmed that eth0 did not have an ip address.
>>
>> So, something networking related prevented the instance from getting an
>> IP which of course makes ssh'ing into the instance via the floating ip
>> later in the instructions not work properly.
>>
>> I tried ifup'ing eth0 and dhcp discovers were sent out but not responded to.
>>
>> One thing is that on the host running OpenStack services (the VM I ran
>> packstack on), I don't see dnsmasq running except for the default
>> libvirt network:
>>
>>> [admin at rdo-mgmt ~(keystone_demo)]$ ps -ef | grep dnsmas
>>> nobody 1968 1 0 08:59 ? 00:00:00 /usr/sbin/dnsmasq --strict-order --local=// --domain-needed --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --bind-interfaces --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override --dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile --addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts
>>
>> So...
that seems to be a problem :) >> >> Just to confirm, I am running the right kernel: >>> [root at rdo-mgmt log(keystone_demo)]# uname -a >>> Linux rdo-mgmt 2.6.32-358.114.1.openstack.el6.x86_64 #1 SMP Wed Jul 3 02:11:25 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux >> >>> [root at rdo-mgmt log(keystone_demo)]# rpm -q iproute kernel >>> iproute-2.6.32-23.el6_4.netns.1.x86_64 >>> kernel-2.6.32-358.114.1.openstack.el6.x86_64 >> >> From quantum server.log: >>> 2013-08-04 09:10:48 ERROR [keystoneclient.common.cms] Verify error: Error opening certificate file /var/lib/quantum/keystone-signing/signing_cert.pem >>> 140222780139336:error:02001002:system library:fopen:No such file or directory:bss_file.c:126:fopen('/var/lib/quantum/keystone-signing/signing_cert.pem','r') >>> 140222780139336:error:2006D080:BIO routines:BIO_new_file:no such file:bss_file.c:129: >>> >>> 2013-08-04 09:10:48 ERROR [keystoneclient.common.cms] Verify error: Error loading file /var/lib/quantum/keystone-signing/cacert.pem >>> 140279285741384:error:02001002:system library:fopen:No such file or directory:bss_file.c:126:fopen('/var/lib/quantum/keystone-signing/cacert.pem','r') >>> 140279285741384:error:2006D080:BIO routines:BIO_new_file:no such file:bss_file.c:129: >>> 140279285741384:error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib:by_file.c:279: >> >> From quantum dhcp-agent.log: >> >>> 2013-08-04 09:08:05 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. >>> Traceback (most recent call last): >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ >>> data = self._dataqueue.get(timeout=self._timeout) >>> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get >>> return waiter.wait() >>> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait >>> return get_hub().switch() >>> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch >>> return self.greenlet.switch() >>> Empty >>> 2013-08-04 09:08:05 ERROR [quantum.agent.dhcp_agent] Failed reporting state! >>> Traceback (most recent call last): >>> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 702, in _report_state >>> self.agent_state) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state >>> topic=self.topic) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call >>> return rpc.call(context, self._get_topic(topic), msg, timeout) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call >>> return _get_impl().call(CONF, context, topic, msg, timeout) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call >>> rpc_amqp.get_connection_pool(conf, Connection)) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call >>> rv = list(rv) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ >>> raise rpc_common.Timeout() >>> Timeout: Timeout while waiting on RPC response. >>> 2013-08-04 09:08:05 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 56.853869 sec >>> 2013-08-04 09:08:06 INFO [quantum.agent.dhcp_agent] Synchronizing state >>> 2013-08-04 09:32:34 ERROR [quantum.agent.dhcp_agent] Unable to enable dhcp. 
>>> Traceback (most recent call last): >>> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 131, in call_driver >>> getattr(driver, action)() >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 124, in enable >>> reuse_existing=True) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 554, in setup >>> namespace=namespace) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug >>> ns_dev.link.set_address(mac_address) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address >>> self._as_root('set', self.name, 'address', mac_address) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root >>> kwargs.get('use_root_namespace', False)) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root >>> namespace) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute >>> root_helper=root_helper) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute >>> raise RuntimeError(m) >>> RuntimeError: >>> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'link', 'set', 'tap07d8cc77-fc', 'address', 'fa:16:3e:da:66:28'] >>> Exit code: 2 >>> Stdout: '' >>> Stderr: 'RTNETLINK answers: Device or resource busy\n' >>> 2013-08-04 09:32:36 INFO [quantum.agent.dhcp_agent] Synchronizing state >>> 2013-08-04 09:32:41 ERROR [quantum.agent.dhcp_agent] Unable to enable dhcp. >>> Traceback (most recent call last): >>> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 131, in call_driver >>> getattr(driver, action)() >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 124, in enable >>> reuse_existing=True) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 554, in setup >>> namespace=namespace) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug >>> ns_dev.link.set_address(mac_address) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address >>> self._as_root('set', self.name, 'address', mac_address) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root >>> kwargs.get('use_root_namespace', False)) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root >>> namespace) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute >>> root_helper=root_helper) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute >>> raise RuntimeError(m) >> >> The RTNETLINK errors just repeat indefinitely >> >> From openvswitch-agent.log: >> >>> 2013-08-04 09:08:29 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. 
>>> Traceback (most recent call last): >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ >>> data = self._dataqueue.get(timeout=self._timeout) >>> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get >>> return waiter.wait() >>> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait >>> return get_hub().switch() >>> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch >>> return self.greenlet.switch() >>> Empty >>> 2013-08-04 09:08:29 ERROR [quantum.plugins.openvswitch.agent.ovs_quantum_agent] Failed reporting state! >>> Traceback (most recent call last): >>> File "/usr/lib/python2.6/site-packages/quantum/plugins/openvswitch/agent/ovs_quantum_agent.py", line 201, in _report_state >>> self.agent_state) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state >>> topic=self.topic) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call >>> return rpc.call(context, self._get_topic(topic), msg, timeout) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call >>> return _get_impl().call(CONF, context, topic, msg, timeout) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call >>> rpc_amqp.get_connection_pool(conf, Connection)) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call >>> rv = list(rv) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ >>> raise rpc_common.Timeout() >>> Timeout: Timeout while waiting on RPC response. >> >> Do we have a race condition wrt various Quantum agents connecting to the >> qpid bus that is just generating initial qpid connection error messages >> that can be safely ignored? >> >> If so, is there any way we can clean this up? >> >> From l3-agent.log: >> >>> 2013-08-04 09:08:06 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. >>> Traceback (most recent call last): >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ >>> data = self._dataqueue.get(timeout=self._timeout) >>> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get >>> return waiter.wait() >>> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait >>> return get_hub().switch() >>> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch >>> return self.greenlet.switch() >>> Empty >>> 2013-08-04 09:08:06 ERROR [quantum.agent.l3_agent] Failed reporting state! 
>>> Traceback (most recent call last): >>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 723, in _report_state >>> self.agent_state) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state >>> topic=self.topic) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call >>> return rpc.call(context, self._get_topic(topic), msg, timeout) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call >>> return _get_impl().call(CONF, context, topic, msg, timeout) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call >>> rpc_amqp.get_connection_pool(conf, Connection)) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call >>> rv = list(rv) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ >>> raise rpc_common.Timeout() >>> Timeout: Timeout while waiting on RPC response. >>> 2013-08-04 09:08:06 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 56.554131 sec >>> 2013-08-04 09:08:10 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. >>> Traceback (most recent call last): >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ >>> data = self._dataqueue.get(timeout=self._timeout) >>> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get >>> return waiter.wait() >>> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait >>> return get_hub().switch() >>> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch >>> return self.greenlet.switch() >>> Empty >>> 2013-08-04 09:08:10 ERROR [quantum.agent.l3_agent] Failed synchronizing routers >>> Traceback (most recent call last): >>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 637, in _sync_routers_task >>> context, router_id) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 77, in get_routers >>> topic=self.topic) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call >>> return rpc.call(context, self._get_topic(topic), msg, timeout) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call >>> return _get_impl().call(CONF, context, topic, msg, timeout) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call >>> rpc_amqp.get_connection_pool(conf, Connection)) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call >>> rv = list(rv) >>> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ >>> raise rpc_common.Timeout() >>> Timeout: Timeout while waiting on RPC response. 
>>> 2013-08-04 09:08:10 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 20.022704 sec >>> 2013-08-04 09:11:33 ERROR [quantum.agent.l3_agent] Failed synchronizing routers >>> Traceback (most recent call last): >>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 638, in _sync_routers_task >>> self._process_routers(routers, all_routers=True) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 621, in _process_routers >>> self.process_router(ri) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 319, in process_router >>> self.external_gateway_added(ri, ex_gw_port, internal_cidrs) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 410, in external_gateway_added >>> prefix=EXTERNAL_DEV_PREFIX) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug >>> ns_dev.link.set_address(mac_address) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address >>> self._as_root('set', self.name, 'address', mac_address) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root >>> kwargs.get('use_root_namespace', False)) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root >>> namespace) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute >>> root_helper=root_helper) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute >>> raise RuntimeError(m) >>> RuntimeError: >>> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'link', 'set', 'qg-46ed452c-5e', 'address', 'fa:16:3e:e7:d8:30'] >>> Exit code: 2 >>> Stdout: '' >>> Stderr: 'RTNETLINK answers: Device or resource busy\n' >>> 2013-08-04 09:12:11 ERROR [quantum.agent.l3_agent] Failed synchronizing routers >>> Traceback (most recent call last): >>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 638, in _sync_routers_task >>> self._process_routers(routers, all_routers=True) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 621, in _process_routers >>> self.process_router(ri) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 319, in process_router >>> self.external_gateway_added(ri, ex_gw_port, internal_cidrs) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 410, in external_gateway_added >>> prefix=EXTERNAL_DEV_PREFIX) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug >>> ns_dev.link.set_address(mac_address) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address >>> self._as_root('set', self.name, 'address', mac_address) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root >>> kwargs.get('use_root_namespace', False)) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root >>> namespace) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute >>> root_helper=root_helper) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute >>> raise RuntimeError(m) >> >> Same qpid connection issue, which I'm assuming can just be ignored at >> this point. 
But also similar device busy errors with creating the >> namespace for the l2 agent >> >> It appears that the issue with both the l2 agent and the dhcp agent that >> the namespace can't be created, which causes both of them to fail. >> >> Anyone have any thoughts on what to look at next here? >> >> Perry > > I ran into these issues as well. I noticed that ovs_use_veth was commented out in dhcp_agent.ini and l3_agent.ini. I uncommented them and set them to True and restarted. The vm now has an IP address. > > I noticed something else peculiar though... the public network.. the one set as the gateway for the router has dhcp enabled. I'm not sure why we would do that. Good catch - an omission on my part. I'll update packstack accordingly and make sure there weren't any other deviations. m. > Cheers, > > Brent > From twilson at redhat.com Mon Aug 5 18:12:42 2013 From: twilson at redhat.com (Terry Wilson) Date: Mon, 5 Aug 2013 14:12:42 -0400 (EDT) Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent) In-Reply-To: <51FFDF9B.5020005@redhat.com> References: <51FE5DDC.4010104@redhat.com> <51FFDF9B.5020005@redhat.com> Message-ID: <37116974.11749438.1375726362036.JavaMail.root@redhat.com> ----- Original Message ----- > On 08/04/2013 11:27 AM, Perry Myers wrote: > > Hi, > > > > I followed the instructions at: > > http://openstack.redhat.com/Neutron-Quickstart > > http://openstack.redhat.com/Running_an_instance_with_Neutron > > > > I ran this on a RHEL 6.4 VM with latest updates from 6.4.z. I made sure > > to install the netns enabled kernel from RDO repos and reboot with that > > kernel before running packstack so that I didn't need to reboot the VM > > after the packstack install (and have br-ex disappear) > > > > The packstack install went without incident. And I was able to follow > > the launch an instance instructions. > > > > I noticed that the cirros VM took a long time to get to a login prompt > > on the VNC console. From looking at the console output it appears that > > the instance was waiting for a dhcp address. > > > > Once the VNC session got me to a login prompt, I logged in (as the > > cirros user) and confirmed that eth0 did not have an ip address. > > > > So, something networking related prevented the instance from getting an > > IP which of course makes ssh'ing into the instance via the floating ip > > later in the instructions not work properly. > > > > I tried ifup'ing eth0 and dhcp discovers were sent out but not responded > > to. > > > > One thing is that on the host running OpenStack services (the VM I ran > > packstack on), I don't see dnsmasq running except for the default > > libvirt network: > > > >> [admin at rdo-mgmt ~(keystone_demo)]$ ps -ef | grep dnsmas > >> nobody 1968 1 0 08:59 ? 00:00:00 /usr/sbin/dnsmasq > >> --strict-order --local=// --domain-needed > >> --pid-file=/var/run/libvirt/network/default.pid --conf-file= > >> --except-interface lo --bind-interfaces --listen-address 192.168.122.1 > >> --dhcp-range 192.168.122.2,192.168.122.254 > >> --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases > >> --dhcp-lease-max=253 --dhcp-no-override > >> --dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile > >> --addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts > > > > So... 
that seems to be a problem :) > > > > Just to confirm, I am running the right kernel: > >> [root at rdo-mgmt log(keystone_demo)]# uname -a > >> Linux rdo-mgmt 2.6.32-358.114.1.openstack.el6.x86_64 #1 SMP Wed Jul 3 > >> 02:11:25 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux > > > >> [root at rdo-mgmt log(keystone_demo)]# rpm -q iproute kernel > >> iproute-2.6.32-23.el6_4.netns.1.x86_64 > >> kernel-2.6.32-358.114.1.openstack.el6.x86_64 > > > > From quantum server.log: > >> 2013-08-04 09:10:48 ERROR [keystoneclient.common.cms] Verify error: > >> Error opening certificate file > >> /var/lib/quantum/keystone-signing/signing_cert.pem > >> 140222780139336:error:02001002:system library:fopen:No such file or > >> directory:bss_file.c:126:fopen('/var/lib/quantum/keystone-signing/signing_cert.pem','r') > >> 140222780139336:error:2006D080:BIO routines:BIO_new_file:no such > >> file:bss_file.c:129: > >> > >> 2013-08-04 09:10:48 ERROR [keystoneclient.common.cms] Verify error: > >> Error loading file /var/lib/quantum/keystone-signing/cacert.pem > >> 140279285741384:error:02001002:system library:fopen:No such file or > >> directory:bss_file.c:126:fopen('/var/lib/quantum/keystone-signing/cacert.pem','r') > >> 140279285741384:error:2006D080:BIO routines:BIO_new_file:no such > >> file:bss_file.c:129: > >> 140279285741384:error:0B084002:x509 certificate > >> routines:X509_load_cert_crl_file:system lib:by_file.c:279: > > > > From quantum dhcp-agent.log: > > > >> 2013-08-04 09:08:05 ERROR [quantum.openstack.common.rpc.amqp] Timed out > >> waiting for RPC response. > >> Traceback (most recent call last): > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > >> line 495, in __iter__ > >> data = self._dataqueue.get(timeout=self._timeout) > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in > >> get > >> return waiter.wait() > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in > >> wait > >> return get_hub().switch() > >> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, > >> in switch > >> return self.greenlet.switch() > >> Empty > >> 2013-08-04 09:08:05 ERROR [quantum.agent.dhcp_agent] Failed reporting > >> state! > >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", > >> line 702, in _report_state > >> self.agent_state) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, > >> in report_state > >> topic=self.topic) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", > >> line 80, in call > >> return rpc.call(context, self._get_topic(topic), msg, timeout) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", > >> line 140, in call > >> return _get_impl().call(CONF, context, topic, msg, timeout) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > >> line 611, in call > >> rpc_amqp.get_connection_pool(conf, Connection)) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > >> line 614, in call > >> rv = list(rv) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > >> line 500, in __iter__ > >> raise rpc_common.Timeout() > >> Timeout: Timeout while waiting on RPC response. 
> >> 2013-08-04 09:08:05 WARNING [quantum.openstack.common.loopingcall] task > >> run outlasted interval by 56.853869 sec > >> 2013-08-04 09:08:06 INFO [quantum.agent.dhcp_agent] Synchronizing > >> state > >> 2013-08-04 09:32:34 ERROR [quantum.agent.dhcp_agent] Unable to enable > >> dhcp. > >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", > >> line 131, in call_driver > >> getattr(driver, action)() > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", > >> line 124, in enable > >> reuse_existing=True) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", > >> line 554, in setup > >> namespace=namespace) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", > >> line 181, in plug > >> ns_dev.link.set_address(mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 180, in set_address > >> self._as_root('set', self.name, 'address', mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 167, in _as_root > >> kwargs.get('use_root_namespace', False)) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 47, in _as_root > >> namespace) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 58, in _execute > >> root_helper=root_helper) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", > >> line 61, in execute > >> raise RuntimeError(m) > >> RuntimeError: > >> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', > >> 'link', 'set', 'tap07d8cc77-fc', 'address', 'fa:16:3e:da:66:28'] > >> Exit code: 2 > >> Stdout: '' > >> Stderr: 'RTNETLINK answers: Device or resource busy\n' > >> 2013-08-04 09:32:36 INFO [quantum.agent.dhcp_agent] Synchronizing > >> state > >> 2013-08-04 09:32:41 ERROR [quantum.agent.dhcp_agent] Unable to enable > >> dhcp. > >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", > >> line 131, in call_driver > >> getattr(driver, action)() > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", > >> line 124, in enable > >> reuse_existing=True) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", > >> line 554, in setup > >> namespace=namespace) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", > >> line 181, in plug > >> ns_dev.link.set_address(mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 180, in set_address > >> self._as_root('set', self.name, 'address', mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 167, in _as_root > >> kwargs.get('use_root_namespace', False)) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 47, in _as_root > >> namespace) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 58, in _execute > >> root_helper=root_helper) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", > >> line 61, in execute > >> raise RuntimeError(m) > > > > The RTNETLINK errors just repeat indefinitely > > > > From openvswitch-agent.log: > > > >> 2013-08-04 09:08:29 ERROR [quantum.openstack.common.rpc.amqp] Timed out > >> waiting for RPC response. 
> >> Traceback (most recent call last): > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > >> line 495, in __iter__ > >> data = self._dataqueue.get(timeout=self._timeout) > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in > >> get > >> return waiter.wait() > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in > >> wait > >> return get_hub().switch() > >> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, > >> in switch > >> return self.greenlet.switch() > >> Empty > >> 2013-08-04 09:08:29 ERROR > >> [quantum.plugins.openvswitch.agent.ovs_quantum_agent] Failed reporting > >> state! > >> Traceback (most recent call last): > >> File > >> "/usr/lib/python2.6/site-packages/quantum/plugins/openvswitch/agent/ovs_quantum_agent.py", > >> line 201, in _report_state > >> self.agent_state) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, > >> in report_state > >> topic=self.topic) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", > >> line 80, in call > >> return rpc.call(context, self._get_topic(topic), msg, timeout) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", > >> line 140, in call > >> return _get_impl().call(CONF, context, topic, msg, timeout) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > >> line 611, in call > >> rpc_amqp.get_connection_pool(conf, Connection)) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > >> line 614, in call > >> rv = list(rv) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > >> line 500, in __iter__ > >> raise rpc_common.Timeout() > >> Timeout: Timeout while waiting on RPC response. > > > > Do we have a race condition wrt various Quantum agents connecting to the > > qpid bus that is just generating initial qpid connection error messages > > that can be safely ignored? > > > > If so, is there any way we can clean this up? > > > > From l3-agent.log: > > > >> 2013-08-04 09:08:06 ERROR [quantum.openstack.common.rpc.amqp] Timed out > >> waiting for RPC response. > >> Traceback (most recent call last): > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > >> line 495, in __iter__ > >> data = self._dataqueue.get(timeout=self._timeout) > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in > >> get > >> return waiter.wait() > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in > >> wait > >> return get_hub().switch() > >> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, > >> in switch > >> return self.greenlet.switch() > >> Empty > >> 2013-08-04 09:08:06 ERROR [quantum.agent.l3_agent] Failed reporting > >> state! 
> >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line > >> 723, in _report_state > >> self.agent_state) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, > >> in report_state > >> topic=self.topic) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", > >> line 80, in call > >> return rpc.call(context, self._get_topic(topic), msg, timeout) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", > >> line 140, in call > >> return _get_impl().call(CONF, context, topic, msg, timeout) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > >> line 611, in call > >> rpc_amqp.get_connection_pool(conf, Connection)) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > >> line 614, in call > >> rv = list(rv) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > >> line 500, in __iter__ > >> raise rpc_common.Timeout() > >> Timeout: Timeout while waiting on RPC response. > >> 2013-08-04 09:08:06 WARNING [quantum.openstack.common.loopingcall] task > >> run outlasted interval by 56.554131 sec > >> 2013-08-04 09:08:10 ERROR [quantum.openstack.common.rpc.amqp] Timed out > >> waiting for RPC response. > >> Traceback (most recent call last): > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > >> line 495, in __iter__ > >> data = self._dataqueue.get(timeout=self._timeout) > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in > >> get > >> return waiter.wait() > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in > >> wait > >> return get_hub().switch() > >> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, > >> in switch > >> return self.greenlet.switch() > >> Empty > >> 2013-08-04 09:08:10 ERROR [quantum.agent.l3_agent] Failed synchronizing > >> routers > >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line > >> 637, in _sync_routers_task > >> context, router_id) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line > >> 77, in get_routers > >> topic=self.topic) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", > >> line 80, in call > >> return rpc.call(context, self._get_topic(topic), msg, timeout) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", > >> line 140, in call > >> return _get_impl().call(CONF, context, topic, msg, timeout) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > >> line 611, in call > >> rpc_amqp.get_connection_pool(conf, Connection)) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > >> line 614, in call > >> rv = list(rv) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > >> line 500, in __iter__ > >> raise rpc_common.Timeout() > >> Timeout: Timeout while waiting on RPC response. 
> >> 2013-08-04 09:08:10 WARNING [quantum.openstack.common.loopingcall] task > >> run outlasted interval by 20.022704 sec > >> 2013-08-04 09:11:33 ERROR [quantum.agent.l3_agent] Failed synchronizing > >> routers > >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line > >> 638, in _sync_routers_task > >> self._process_routers(routers, all_routers=True) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line > >> 621, in _process_routers > >> self.process_router(ri) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line > >> 319, in process_router > >> self.external_gateway_added(ri, ex_gw_port, internal_cidrs) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line > >> 410, in external_gateway_added > >> prefix=EXTERNAL_DEV_PREFIX) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", > >> line 181, in plug > >> ns_dev.link.set_address(mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 180, in set_address > >> self._as_root('set', self.name, 'address', mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 167, in _as_root > >> kwargs.get('use_root_namespace', False)) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 47, in _as_root > >> namespace) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 58, in _execute > >> root_helper=root_helper) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", > >> line 61, in execute > >> raise RuntimeError(m) > >> RuntimeError: > >> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', > >> 'link', 'set', 'qg-46ed452c-5e', 'address', 'fa:16:3e:e7:d8:30'] > >> Exit code: 2 > >> Stdout: '' > >> Stderr: 'RTNETLINK answers: Device or resource busy\n' > >> 2013-08-04 09:12:11 ERROR [quantum.agent.l3_agent] Failed synchronizing > >> routers > >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line > >> 638, in _sync_routers_task > >> self._process_routers(routers, all_routers=True) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line > >> 621, in _process_routers > >> self.process_router(ri) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line > >> 319, in process_router > >> self.external_gateway_added(ri, ex_gw_port, internal_cidrs) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line > >> 410, in external_gateway_added > >> prefix=EXTERNAL_DEV_PREFIX) > >> File > >> "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", > >> line 181, in plug > >> ns_dev.link.set_address(mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 180, in set_address > >> self._as_root('set', self.name, 'address', mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 167, in _as_root > >> kwargs.get('use_root_namespace', False)) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 47, in _as_root > >> namespace) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", > >> line 58, in _execute > >> root_helper=root_helper) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", > >> line 61, in execute > >> raise 
RuntimeError(m)
> >
> > Same qpid connection issue, which I'm assuming can just be ignored at
> > this point. But also similar device busy errors with creating the
> > namespace for the l2 agent
> >
> > It appears that the issue with both the l2 agent and the dhcp agent is
> > that the namespace can't be created, which causes both of them to fail.
> >
> > Anyone have any thoughts on what to look at next here?
> >
> > Perry
>
> I ran into these issues as well. I noticed that ovs_use_veth was
> commented out in dhcp_agent.ini and l3_agent.ini. I uncommented them and
> set them to True and restarted. The VM now has an IP address.
>
> I noticed something else peculiar though... the public network... the one
> set as the gateway for the router has dhcp enabled. I'm not sure why we
> would do that.
>
> Cheers,
>
> Brent

I re-ran an installation with a yum updated/rebooted RDO VM. packstack
--allinone, then logged in as 'demo' to horizon and created a VM. Everything
worked. Apparently I have the magic test VM. I don't hit the "dhcp enabled
on public" issue because horizon passes the --nic net-id options to make sure
that the VM doesn't attach an interface to the public network, thereby
spawning the dhcp stuff that is enabled on it. None of my configs have
ovs_use_veth and it *still* works perfectly on my machine--which it seems
like shouldn't be possible. But it is. Yum repos installed:

[root at rhel-6 yum.repos.d(keystone_admin)]# cat `grep -l enabled=1 /etc/yum.repos.d/*`
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
baseurl=http://192.168.122.1/epel
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[openstack-grizzly]
name=OpenStack Grizzly Repository
baseurl=http://192.168.122.1/openstack-grizzly
#baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Grizzly
priority=98

[rhel-server]
name=Red Hat Enterprise Linux $releasever - $basearch - Server
baseurl=http://192.168.122.1/rhel-server
enabled=1
gpgcheck=0

[rhel-ha]
name=Red Hat Enterprise Linux $releasever - $basearch - HA
baseurl=http://192.168.122.1/rhel-ha
#baseurl=http://download.devel.redhat.com/composes/latest-RHEL6/6/Server/$basearch/os/HighAvailability
enabled=1
gpgcheck=0

[rhel-rs]
name=Red Hat Enterprise Linux $releasever - $basearch - RS
baseurl=http://192.168.122.1/rhel-rs
#baseurl=http://download.devel.redhat.com/composes/latest-RHEL6/6/Server/$basearch/os/ResilientStorage
enabled=1
gpgcheck=0

[rhel-z]
name=Red Hat Enterprise Linux $releasever - $basearch - Z-Stream
baseurl=http://192.168.122.1/rhel-z
#baseurl=http://download.lab.bos.redhat.com/rel-eng/repos/RHEL-6.4-Z/$basearch/
enabled=1
gpgcheck=0
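For anyone wanting to try Brent's workaround from the reply quoted above, it
amounts to roughly the following sketch, assuming the stock RDO Grizzly config
paths and EL6 service names, and that openstack-config from openstack-utils is
installed (otherwise, edit the two .ini files by hand):

  # set ovs_use_veth = True in both agent configs, then restart the agents
  openstack-config --set /etc/quantum/dhcp_agent.ini DEFAULT ovs_use_veth True
  openstack-config --set /etc/quantum/l3_agent.ini DEFAULT ovs_use_veth True
  service quantum-dhcp-agent restart
  service quantum-l3-agent restart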
From rdo-info at redhat.com Mon Aug 5 21:07:54 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Mon, 5 Aug 2013 21:07:54 +0000
Subject: [Rdo-list] [RDO] nopainkiller started a discussion.
Message-ID: <00000140504ec754-5faaf0c0-6197-4a5e-ae7f-22f8f7718843-000000@email.amazonses.com>

nopainkiller started a discussion.

packstack installation error on Fedora 19 arm

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/400/packstack-installation-error-on-fedora-19-arm

Have a great day!

From rdo-info at redhat.com Mon Aug 5 21:30:53 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Mon, 5 Aug 2013 21:30:53 +0000
Subject: [Rdo-list] [RDO] Charles started a discussion.
Message-ID: <000001405063d178-d1207fa6-2b7c-4fcc-895e-f85fff4be1cb-000000@email.amazonses.com>

Charles started a discussion.

Havana/Neutron quickstart on Fedora 19

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/401/havananeutron-quickstart-on-fedora-19

Have a great day!

From twilson at redhat.com Mon Aug 5 23:00:16 2013
From: twilson at redhat.com (Terry Wilson)
Date: Mon, 5 Aug 2013 19:00:16 -0400 (EDT)
Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent)
In-Reply-To: <37116974.11749438.1375726362036.JavaMail.root@redhat.com>
References: <51FE5DDC.4010104@redhat.com> <51FFDF9B.5020005@redhat.com>
	<37116974.11749438.1375726362036.JavaMail.root@redhat.com>
Message-ID: <323272882.11920561.1375743616337.JavaMail.root@redhat.com>

> ----- Original Message -----
> > On 08/04/2013 11:27 AM, Perry Myers wrote:
> > > [...]
> > >
> > > Anyone have any thoughts on what to look at next here?
> > >
> > > Perry
> >
> > I ran into these issues as well. I noticed that ovs_use_veth was
> > commented out in dhcp_agent.ini and l3_agent.ini. I uncommented them and
> > set them to True and restarted. The VM now has an IP address.
> >
> > [...]
>
> I re-ran an installation with a yum updated/rebooted RDO VM. packstack
> --allinone, then logged in as 'demo' to horizon and created a VM. Everything
> worked. [...] None of my configs have ovs_use_veth and it *still* works
> perfectly on my machine--which it seems like shouldn't be possible. But it
> is. Yum repos installed:
>
> [...]
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list

Ok, I have verified that on my RHEL system, configured as above,
ovs_use_veth=False:

[root at rhel-6 ~]# grep ovs_use_veth /var/log/quantum/*
/var/log/quantum/dhcp-agent.log:2013-07-31 13:46:15 DEBUG [quantum.openstack.common.service] ovs_use_veth = False
/var/log/quantum/l3-agent.log:2013-07-31 13:46:20 DEBUG [quantum.openstack.common.service] ovs_use_veth = False

That I can boot an image via horizon:

[root at rhel-6 ~(keystone_demo)]# nova list
+--------------------------------------+------+--------+------------------+
| ID                                   | Name | Status | Networks         |
+--------------------------------------+------+--------+------------------+
| 2066588b-bc12-48e9-bcfe-cfc6775a3222 | test | ACTIVE | private=10.0.0.2 |
+--------------------------------------+------+--------+------------------+

And that network namespaces *are* in use:

[root at rhel-6 ~]# ip netns
qrouter-35911b8e-9446-4b16-a523-870d77d1076b
qdhcp-ed7e1884-a429-4a7d-8cd1-9656164af96b

And that my VM is accessible via the IP that it has obtained:

[root at rhel-6 ~(keystone_demo)]# ip netns exec qrouter-35911b8e-9446-4b16-a523-870d77d1076b ssh cirros at 10.0.0.2
The authenticity of host '10.0.0.2 (10.0.0.2)' can't be established.
RSA key fingerprint is d4:7e:5d:9c:27:ad:8b:40:0d:e7:6a:dc:6a:71:37:b2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.2' (RSA) to the list of known hosts.
cirros at 10.0.0.2's password:
$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:87:C7:2F
          inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe87:c72f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:138 errors:0 dropped:0 overruns:0 frame:0
          TX packets:163 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:16836 (16.4 KiB)  TX bytes:18246 (17.8 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

I have also (once) run after installing NetworkManager, w/o ovs_use_veth, and
had the VM fail to get an IP; after enabling ovs_use_veth (with NetworkManager
still running) everything worked properly again. I've run w/o NM and w/o
ovs_use_veth=true *dozens* of times and had it work perfectly. So, apparently
it is very possible to use namespaces w/o ovs_use_veth. BUT, it may not be
possible to do so when NetworkManager is enabled (or that could just be a red
herring as I haven't repeated that experiment--also twice my VM froze when
trying to launch a VM on RHEL with NM installed...just an FYI).

Terry
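Given Terry's NetworkManager observation, one cheap experiment when trying to
reproduce this is to take NM out of the picture entirely and fall back to the
classic network service. On EL6 that would look something like the following
(standard EL6 commands; whether NM is actually the culprit is unconfirmed
above):

  chkconfig NetworkManager off
  service NetworkManager stop
  chkconfig network on
  service network restart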
I made sure > > to install the netns enabled kernel from RDO repos and reboot with that > > kernel before running packstack so that I didn't need to reboot the VM > > after the packstack install (and have br-ex disappear) > > > > The packstack install went without incident. And I was able to follow > > the launch an instance instructions. > > > > I noticed that the cirros VM took a long time to get to a login prompt > > on the VNC console. From looking at the console output it appears that > > the instance was waiting for a dhcp address. > > > > Once the VNC session got me to a login prompt, I logged in (as the > > cirros user) and confirmed that eth0 did not have an ip address. > > > > So, something networking related prevented the instance from getting an > > IP which of course makes ssh'ing into the instance via the floating ip > > later in the instructions not work properly. > > > > I tried ifup'ing eth0 and dhcp discovers were sent out but not responded to. > > > > One thing is that on the host running OpenStack services (the VM I ran > > packstack on), I don't see dnsmasq running except for the default > > libvirt network: > > > >> [admin at rdo-mgmt ~(keystone_demo)]$ ps -ef | grep dnsmas > >> nobody 1968 1 0 08:59 ? 00:00:00 /usr/sbin/dnsmasq --strict-order --local=// --domain-needed --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --bind-interfaces --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override --dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile --addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts > > > > So... that seems to be a problem :) > > > > Just to confirm, I am running the right kernel: > >> [root at rdo-mgmt log(keystone_demo)]# uname -a > >> Linux rdo-mgmt 2.6.32-358.114.1.openstack.el6.x86_64 #1 SMP Wed Jul 3 02:11:25 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux > > > >> [root at rdo-mgmt log(keystone_demo)]# rpm -q iproute kernel > >> iproute-2.6.32-23.el6_4.netns.1.x86_64 > >> kernel-2.6.32-358.114.1.openstack.el6.x86_64 > > > > From quantum server.log: > >> 2013-08-04 09:10:48 ERROR [keystoneclient.common.cms] Verify error: Error opening certificate file /var/lib/quantum/keystone-signing/signing_cert.pem > >> 140222780139336:error:02001002:system library:fopen:No such file or directory:bss_file.c:126:fopen('/var/lib/quantum/keystone-signing/signing_cert.pem','r') > >> 140222780139336:error:2006D080:BIO routines:BIO_new_file:no such file:bss_file.c:129: > >> > >> 2013-08-04 09:10:48 ERROR [keystoneclient.common.cms] Verify error: Error loading file /var/lib/quantum/keystone-signing/cacert.pem > >> 140279285741384:error:02001002:system library:fopen:No such file or directory:bss_file.c:126:fopen('/var/lib/quantum/keystone-signing/cacert.pem','r') > >> 140279285741384:error:2006D080:BIO routines:BIO_new_file:no such file:bss_file.c:129: > >> 140279285741384:error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib:by_file.c:279: > > > > From quantum dhcp-agent.log: > > > >> 2013-08-04 09:08:05 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. 
> >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ > >> data = self._dataqueue.get(timeout=self._timeout) > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get > >> return waiter.wait() > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait > >> return get_hub().switch() > >> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch > >> return self.greenlet.switch() > >> Empty > >> 2013-08-04 09:08:05 ERROR [quantum.agent.dhcp_agent] Failed reporting state! > >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 702, in _report_state > >> self.agent_state) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state > >> topic=self.topic) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call > >> return rpc.call(context, self._get_topic(topic), msg, timeout) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call > >> return _get_impl().call(CONF, context, topic, msg, timeout) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call > >> rpc_amqp.get_connection_pool(conf, Connection)) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call > >> rv = list(rv) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ > >> raise rpc_common.Timeout() > >> Timeout: Timeout while waiting on RPC response. > >> 2013-08-04 09:08:05 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 56.853869 sec > >> 2013-08-04 09:08:06 INFO [quantum.agent.dhcp_agent] Synchronizing state > >> 2013-08-04 09:32:34 ERROR [quantum.agent.dhcp_agent] Unable to enable dhcp. 
> >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 131, in call_driver > >> getattr(driver, action)() > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 124, in enable > >> reuse_existing=True) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 554, in setup > >> namespace=namespace) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug > >> ns_dev.link.set_address(mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address > >> self._as_root('set', self.name, 'address', mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root > >> kwargs.get('use_root_namespace', False)) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root > >> namespace) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute > >> root_helper=root_helper) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute > >> raise RuntimeError(m) > >> RuntimeError: > >> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'link', 'set', 'tap07d8cc77-fc', 'address', 'fa:16:3e:da:66:28'] > >> Exit code: 2 > >> Stdout: '' > >> Stderr: 'RTNETLINK answers: Device or resource busy\n' > >> 2013-08-04 09:32:36 INFO [quantum.agent.dhcp_agent] Synchronizing state > >> 2013-08-04 09:32:41 ERROR [quantum.agent.dhcp_agent] Unable to enable dhcp. > >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 131, in call_driver > >> getattr(driver, action)() > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 124, in enable > >> reuse_existing=True) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 554, in setup > >> namespace=namespace) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug > >> ns_dev.link.set_address(mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address > >> self._as_root('set', self.name, 'address', mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root > >> kwargs.get('use_root_namespace', False)) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root > >> namespace) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute > >> root_helper=root_helper) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute > >> raise RuntimeError(m) > > > > The RTNETLINK errors just repeat indefinitely > > > > From openvswitch-agent.log: > > > >> 2013-08-04 09:08:29 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. 
> >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ > >> data = self._dataqueue.get(timeout=self._timeout) > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get > >> return waiter.wait() > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait > >> return get_hub().switch() > >> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch > >> return self.greenlet.switch() > >> Empty > >> 2013-08-04 09:08:29 ERROR [quantum.plugins.openvswitch.agent.ovs_quantum_agent] Failed reporting state! > >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/plugins/openvswitch/agent/ovs_quantum_agent.py", line 201, in _report_state > >> self.agent_state) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state > >> topic=self.topic) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call > >> return rpc.call(context, self._get_topic(topic), msg, timeout) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call > >> return _get_impl().call(CONF, context, topic, msg, timeout) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call > >> rpc_amqp.get_connection_pool(conf, Connection)) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call > >> rv = list(rv) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ > >> raise rpc_common.Timeout() > >> Timeout: Timeout while waiting on RPC response. > > > > Do we have a race condition wrt various Quantum agents connecting to the > > qpid bus that is just generating initial qpid connection error messages > > that can be safely ignored? > > > > If so, is there any way we can clean this up? > > > > From l3-agent.log: > > > >> 2013-08-04 09:08:06 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. > >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ > >> data = self._dataqueue.get(timeout=self._timeout) > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get > >> return waiter.wait() > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait > >> return get_hub().switch() > >> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch > >> return self.greenlet.switch() > >> Empty > >> 2013-08-04 09:08:06 ERROR [quantum.agent.l3_agent] Failed reporting state! 
> >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 723, in _report_state > >> self.agent_state) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state > >> topic=self.topic) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call > >> return rpc.call(context, self._get_topic(topic), msg, timeout) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call > >> return _get_impl().call(CONF, context, topic, msg, timeout) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call > >> rpc_amqp.get_connection_pool(conf, Connection)) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call > >> rv = list(rv) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ > >> raise rpc_common.Timeout() > >> Timeout: Timeout while waiting on RPC response. > >> 2013-08-04 09:08:06 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 56.554131 sec > >> 2013-08-04 09:08:10 ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response. > >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__ > >> data = self._dataqueue.get(timeout=self._timeout) > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 298, in get > >> return waiter.wait() > >> File "/usr/lib/python2.6/site-packages/eventlet/queue.py", line 129, in wait > >> return get_hub().switch() > >> File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177, in switch > >> return self.greenlet.switch() > >> Empty > >> 2013-08-04 09:08:10 ERROR [quantum.agent.l3_agent] Failed synchronizing routers > >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 637, in _sync_routers_task > >> context, router_id) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 77, in get_routers > >> topic=self.topic) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call > >> return rpc.call(context, self._get_topic(topic), msg, timeout) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call > >> return _get_impl().call(CONF, context, topic, msg, timeout) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call > >> rpc_amqp.get_connection_pool(conf, Connection)) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call > >> rv = list(rv) > >> File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__ > >> raise rpc_common.Timeout() > >> Timeout: Timeout while waiting on RPC response. 
> >> 2013-08-04 09:08:10 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 20.022704 sec > >> 2013-08-04 09:11:33 ERROR [quantum.agent.l3_agent] Failed synchronizing routers > >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 638, in _sync_routers_task > >> self._process_routers(routers, all_routers=True) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 621, in _process_routers > >> self.process_router(ri) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 319, in process_router > >> self.external_gateway_added(ri, ex_gw_port, internal_cidrs) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 410, in external_gateway_added > >> prefix=EXTERNAL_DEV_PREFIX) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug > >> ns_dev.link.set_address(mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address > >> self._as_root('set', self.name, 'address', mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root > >> kwargs.get('use_root_namespace', False)) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root > >> namespace) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute > >> root_helper=root_helper) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute > >> raise RuntimeError(m) > >> RuntimeError: > >> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'link', 'set', 'qg-46ed452c-5e', 'address', 'fa:16:3e:e7:d8:30'] > >> Exit code: 2 > >> Stdout: '' > >> Stderr: 'RTNETLINK answers: Device or resource busy\n' > >> 2013-08-04 09:12:11 ERROR [quantum.agent.l3_agent] Failed synchronizing routers > >> Traceback (most recent call last): > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 638, in _sync_routers_task > >> self._process_routers(routers, all_routers=True) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 621, in _process_routers > >> self.process_router(ri) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 319, in process_router > >> self.external_gateway_added(ri, ex_gw_port, internal_cidrs) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/l3_agent.py", line 410, in external_gateway_added > >> prefix=EXTERNAL_DEV_PREFIX) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug > >> ns_dev.link.set_address(mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address > >> self._as_root('set', self.name, 'address', mac_address) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root > >> kwargs.get('use_root_namespace', False)) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root > >> namespace) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute > >> root_helper=root_helper) > >> File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute > >> raise RuntimeError(m) > > > > Same qpid connection issue, which I'm assuming can just be ignored at > > this point. 
> > But there are also similar device busy errors when creating the
> > namespace for the l2 agent.
> >
> > It appears that the issue with both the l2 agent and the dhcp agent is
> > that the namespace can't be created, which causes both of them to fail.
> >
> > Anyone have any thoughts on what to look at next here?
> >
> > Perry

> I ran into these issues as well. I noticed that ovs_use_veth was
> commented out in dhcp_agent.ini and l3_agent.ini. I uncommented them and
> set them to True and restarted. The VM now has an IP address.
> This seems to be the case on RDO. Meanwhile, in RHOS, this seems to be
> set by default in /usr/share/quantum/quantum-dist.conf.
>
> I noticed something else peculiar though... the public network, the one
> set as the gateway for the router, has dhcp enabled. I'm not sure why we
> would do that.
>
> Cheers,
>
> Brent
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list

From gilles at redhat.com  Tue Aug  6 01:12:00 2013
From: gilles at redhat.com (Gilles Dubreuil)
Date: Tue, 06 Aug 2013 11:12:00 +1000
Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent)
In-Reply-To: <1107455714.11703980.1375720031570.JavaMail.root@redhat.com>
References: <51FE5DDC.4010104@redhat.com> <51FE779C.5020008@redhat.com> <51FF61E9.4090601@redhat.com> <51FF91E8.6050205@redhat.com> <1107455714.11703980.1375720031570.JavaMail.root@redhat.com>
Message-ID: <1375751520.2126.49.camel@gil.surfgate.org>

On Mon, 2013-08-05 at 12:27 -0400, Terry Wilson wrote:
> > ----- Original Message -----
> > On 08/05/2013 04:27 AM, Thomas Graf wrote:
> > > On 08/04/2013 05:47 PM, Kashyap Chamarthy wrote:
> > >>>> 2013-08-04 09:08:05 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 56.853869 sec
> > >>>> 2013-08-04 09:08:06 INFO [quantum.agent.dhcp_agent] Synchronizing state
> > >>>> 2013-08-04 09:32:34 ERROR [quantum.agent.dhcp_agent] Unable to enable dhcp.
> > >>>> Traceback (most recent call last):
> > >>>>   File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 131, in call_driver
> > >>>>     getattr(driver, action)()
> > >>>>   File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 124, in enable
> > >>>>     reuse_existing=True)
> > >>>>   File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 554, in setup
> > >>>>     namespace=namespace)
> > >>>>   File "/usr/lib/python2.6/site-packages/quantum/agent/linux/interface.py", line 181, in plug
> > >>>>     ns_dev.link.set_address(mac_address)
> > >>>>   File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 180, in set_address
> > >>>>     self._as_root('set', self.name, 'address', mac_address)
> > >>>>   File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 167, in _as_root
> > >>>>     kwargs.get('use_root_namespace', False))
> > >>>>   File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 47, in _as_root
> > >>>>     namespace)
> > >>>>   File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib.py", line 58, in _execute
> > >>>>     root_helper=root_helper)
> > >>>>   File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py", line 61, in execute
> > >>>>     raise RuntimeError(m)
> > >>>> RuntimeError:
> > >>>> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'link', 'set', 'tap07d8cc77-fc', 'address', 'fa:16:3e:da:66:28']
> > >>>> Exit code: 2
> > >>>> Stdout: ''
> > >>>> Stderr: 'RTNETLINK answers: Device or resource busy\n'
> > >
> > > Quantum attempts to change the MAC address while the link is up. The
> > > live MAC address change feature is not supported in the openstack
> > > kernel at this point.
> > >
> > > We can attempt a backport of the feature to the openstack kernel and
> > > enable it for tap and veth devices, or we modify quantum to bring down
> > > the interface before changing the mac address and bring it up again
> > > afterwards.
> >
> > Thanks Thomas. Or perhaps we need a fix to Quantum itself to create the
> > link with the proper MAC address to begin with rather than changing it
> > in a second step?
> >
> > With the above error, I wonder if the Quantum Quickstart ever fully
> > worked at all on either RHOS or RDO Grizzly?
> >
> > Terry, how did you work around the above issue when testing on RHOS?
>
> I didn't run into this issue when testing on RHOS or RDO. For me, on my
> test VMs, everything comes up properly on both systems (launching VMs,
> getting addresses, connecting via floating IP, etc. just works),
> although I haven't tested RHOS with the latest changes because the build
> was just done this morning.
>
> Terry

Interestingly, we're having similar issues with the l2 agent and dhcp on
RDO, with ovs_use_veth set to True.

Meanwhile, our environment is not an all-in-one architecture: it uses a
dedicated controller node and a dedicated network node, plus two compute
nodes.

The instances sitting on the same host are talking to each other -
assuming the same tenant subnet/router, of course. But instances on
different hosts can't see each other. Also, none of the VMs are getting
DHCP.

The MAC address reset solution would be great, because when managing
remote servers this is a real blocker, especially when trying to push
the whole thing out automatically.
It'll force us to have a dedicated mgmt interface, which of course is
the recommended thing in production, but for now....

Gilles

From pmyers at redhat.com  Tue Aug  6 04:50:58 2013
From: pmyers at redhat.com (Perry Myers)
Date: Tue, 06 Aug 2013 00:50:58 -0400
Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent)
In-Reply-To: <51FE5DDC.4010104@redhat.com>
References: <51FE5DDC.4010104@redhat.com>
Message-ID: <520080B2.1010104@redhat.com>

On 08/04/2013 09:57 AM, Perry Myers wrote:
> Hi,
>
> I followed the instructions at:
> http://openstack.redhat.com/Neutron-Quickstart
> http://openstack.redhat.com/Running_an_instance_with_Neutron
>
> I ran this on a RHEL 6.4 VM with latest updates from 6.4.z. I made sure
> to install the netns enabled kernel from RDO repos and reboot with that
> kernel before running packstack so that I didn't need to reboot the VM
> after the packstack install (and have br-ex disappear)
>
> The packstack install went without incident. And I was able to follow
> the launch an instance instructions.

Ok, retried this but took advice from folks on this thread.

Since the l3 agent and dhcp agent configs in RDO are not right (they
comment out ovs_use_veth=True, and veths are required for the netns
support in RHEL kernels), marun summarized this nicely:

"if ovs_use_veth is set to false, a regular interface and an internal
ovs port will be used, and the regular interface will be moved to a
namespace during setup. if ovs_use_veth is set to true, a veth pair will
be used with one endpoint created in the namespace. it is a limitation
of rhel's netns implementation that requires the second approach, as
virtual interfaces can only be created in namespaces, not moved
post-creation."

By manually enabling ovs_use_veth=True for the l3 and dhcp agents, I was
able to get the cirros VM to get an IP address on launch.

What doesn't work now is pinging/sshing to the floating IP address from
the host (which is itself a VM).

Yes, I did open those ports in the default security group, and I also
made sure the instance was launched with the default security group.

But that being said, I wanted to check the logs to see if some of the
previous errors went away. The dhcp agent and l3 agent logs look clean
now (aside from the initial amqp connection errors).

My next test will be to run this exact same scenario but with
NetworkManager disabled.

Perry

From pmyers at redhat.com  Tue Aug  6 05:57:58 2013
From: pmyers at redhat.com (Perry Myers)
Date: Tue, 06 Aug 2013 01:57:58 -0400
Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent)
In-Reply-To: <520080B2.1010104@redhat.com>
References: <51FE5DDC.4010104@redhat.com> <520080B2.1010104@redhat.com>
Message-ID: <52009066.9090307@redhat.com>

On 08/06/2013 12:50 AM, Perry Myers wrote:
> On 08/04/2013 09:57 AM, Perry Myers wrote:
>> Hi,
>>
>> I followed the instructions at:
>> http://openstack.redhat.com/Neutron-Quickstart
>> http://openstack.redhat.com/Running_an_instance_with_Neutron
>>
>> I ran this on a RHEL 6.4 VM with latest updates from 6.4.z. I made sure
>> to install the netns enabled kernel from RDO repos and reboot with that
>> kernel before running packstack so that I didn't need to reboot the VM
>> after the packstack install (and have br-ex disappear)
>>
>> The packstack install went without incident. And I was able to follow
>> the launch an instance instructions.
>
> Ok, retried this but took advice from folks on this thread.
>
> Since the l3 agent and dhcp agent configs in RDO are not right (they
> comment out ovs_use_veth=True, and veths are required for the netns
> support in RHEL kernels), marun summarized this nicely:
>
> "if ovs_use_veth is set to false, a regular interface and an internal
> ovs port will be used, and the regular interface will be moved to a
> namespace during setup. if ovs_use_veth is set to true, a veth pair will
> be used with one endpoint created in the namespace. it is a limitation
> of rhel's netns implementation that requires the second approach, as
> virtual interfaces can only be created in namespaces, not moved
> post-creation."
>
> By manually enabling ovs_use_veth=True for the l3 and dhcp agents, I was
> able to get the cirros VM to get an IP address on launch.
>
> What doesn't work now is pinging/sshing to the floating IP address from
> the host (which is itself a VM).
>
> Yes, I did open those ports in the default security group, and I also
> made sure the instance was launched with the default security group.
>
> But that being said, I wanted to check the logs to see if some of the
> previous errors went away. The dhcp agent and l3 agent logs look clean
> now (aside from the initial amqp connection errors).
>
> My next test will be to run this exact same scenario but with
> NetworkManager disabled.

Ok, ran the exact steps above, but this time I started with a guest
where NetworkManager was completely removed, via:

  yum remove *NetworkManager*
  editing /etc/sysconfig/network-scripts/ifcfg-eth0 to set NM_CONTROLLED=no
  rebooting

I got the exact same results with and without NM on the system.
Mainly... the cirros VM could get a private IP from the dhcp agent
(10.0.0.3), but I can't access the VM via the floating IP.

Someone double check me, but here is what my default secgroup looks like:

> [admin at rdo-mgmt ~(keystone_demo)]$ nova secgroup-list-rules default
> +-------------+-----------+---------+-----------+--------------+
> | IP Protocol | From Port | To Port | IP Range  | Source Group |
> +-------------+-----------+---------+-----------+--------------+
> |             |           |         |           | default      |
> |             |           |         |           | default      |
> | icmp        | -1        | -1      | 0.0.0.0/0 |              |
> | tcp         | 22        | 22      | 0.0.0.0/0 |              |
> +-------------+-----------+---------+-----------+--------------+

And as you can see above, I'm using the demo tenant and networks created
for that tenant.

Also, I noticed that with NM enabled or disabled, I cannot access the
external network from the cirros VM.
Ok, so in case this info is useful:

> [admin at rdo-mgmt ~(keystone_demo)]$ sudo ovs-vsctl show
> 25588688-af82-4bc9-b053-8009d2718738
>     Bridge br-int
>         Port "tap1582253a-01"
>             Interface "tap1582253a-01"
>         Port "qr-1582253a-01"
>             tag: 1
>             Interface "qr-1582253a-01"
>                 type: internal
>         Port "tapdce36595-4c"
>             tag: 1
>             Interface "tapdce36595-4c"
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port "qvocd93d0e2-69"
>             tag: 1
>             Interface "qvocd93d0e2-69"
>         Port "tap99bc4804-f3"
>             tag: 2
>             Interface "tap99bc4804-f3"
>     Bridge br-ex
>         Port br-ex
>             Interface br-ex
>                 type: internal
>         Port "qg-1448e7df-47"
>             Interface "qg-1448e7df-47"
>                 type: internal
>         Port "tap1448e7df-47"
>             Interface "tap1448e7df-47"
>     ovs_version: "1.10.0"

The quantum logs look relatively benign, except I see this warning in
server.log:

> 2013-08-06 01:39:32 WARNING [quantum.db.agentschedulers_db] Fail scheduling network {'status': u'ACTIVE', 'subnets': [u'585ec59b-005d-4460-a094-5394be2bb3a1'], 'name': u'private', 'provider:physical_network': None, 'admin_state_up': True, 'tenant_id': u'd297494482aa44ebb30243f624f9d5fc', 'provider:network_type': u'local', 'router:external': False, 'shared': False, 'id': u'7878056e-b4eb-4d26-a711-95ced35e7f98', 'provider:segmentation_id': None}

Appreciate any pointers on what to check/look for next... :)

Perry

From rdo-info at redhat.com  Tue Aug  6 07:56:22 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Tue, 6 Aug 2013 07:56:22 +0000
Subject: [Rdo-list] [RDO] red_trela started a discussion.
Message-ID: <0000014052a07442-34af144d-4ad4-42e6-b73b-e2c3879e6597-000000@email.amazonses.com>

red_trela started a discussion.

Quantum (L3) IOError when logging

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/402/quantum-l3-ioerror-when-logging

Have a great day!

From rdo-info at redhat.com  Tue Aug  6 08:34:27 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Tue, 6 Aug 2013 08:34:27 +0000
Subject: [Rdo-list] [RDO] satoshi started a discussion.
Message-ID: <0000014052c354b4-b1288543-6c18-4449-9a32-9ab2eb4f570d-000000@email.amazonses.com>

satoshi started a discussion.

Available VM is only one issue

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/403/available-vm-is-only-one-issue

Have a great day!

From rdo-info at redhat.com  Tue Aug  6 09:09:22 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Tue, 6 Aug 2013 09:09:22 +0000
Subject: [Rdo-list] [RDO] red_trela started a discussion.
Message-ID: <0000014052e34c82-82f40128-2302-43a3-9b6f-230cd77f0d3d-000000@email.amazonses.com>

red_trela started a discussion.

"Timeout: Timeout while waiting on RPC response." with Qpid

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/404/timeout-timeout-while-waiting-on-rpc-response-with-qpid

Have a great day!
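For readers working through the netns thread above: the ovs_use_veth
change Brent and Perry describe boils down to two one-line config edits
plus an agent restart. A minimal sketch, assuming stock RDO Grizzly file
locations and EL6 service names (crudini is the same tool Pádraig
suggests further down this digest; openstack-config from openstack-utils
works identically):

  # Paths beyond dhcp_agent.ini/l3_agent.ini and the service names are
  # assumptions based on a default RDO Grizzly EL6 install.
  sudo crudini --set /etc/quantum/dhcp_agent.ini DEFAULT ovs_use_veth True
  sudo crudini --set /etc/quantum/l3_agent.ini DEFAULT ovs_use_veth True
  sudo service quantum-dhcp-agent restart
  sudo service quantum-l3-agent restart

For completeness, the two non-default rules visible in Perry's secgroup
listing can be recreated in a fresh tenant with:

  nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
  nova secgroup-add-rule default tcp 22 22 0.0.0.0/0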
From mrunge at redhat.com  Tue Aug  6 11:05:44 2013
From: mrunge at redhat.com (Matthias Runge)
Date: Tue, 06 Aug 2013 13:05:44 +0200
Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4
In-Reply-To: <51FE6B8A.3050501@redhat.com>
References: <51FE6B8A.3050501@redhat.com>
Message-ID: <5200D888.1010206@redhat.com>

On 04/08/13 16:56, Perry Myers wrote:
>
> So, checklist:
>
> * packstack in Havana needs to create demo tenant and import cirros
>   image just like in RDO Grizzly
> * Horizon seems to have screen refresh/updates and More button issues
> * Need Neutron packages for Havana so that we can use Neutron in Havana
> * Need Packstack available so that RDO Nightly users can install the
>   nightly builds

Perry,

sadly, I was unable to install via packstack due to several issues, so I
cannot reproduce your issue.

One thing I mentioned for Havana is that we require a newer version of
python-django-compressor:
http://kojipkgs.fedoraproject.org//packages/python-django-compressor/1.3/2.el6/noarch/python-django-compressor-1.3-2.el6.noarch.rpm

Does that change the behaviour of Horizon in your install?
yum localinstall ... && service httpd restart

Matthias

From rdo-info at redhat.com  Tue Aug  6 12:56:33 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Tue, 6 Aug 2013 12:56:33 +0000
Subject: [Rdo-list] [RDO] acalinciuc started a discussion.
Message-ID: <0000014053b34b0b-4f895983-c954-4581-b62e-5504a65ef970-000000@email.amazonses.com>

acalinciuc started a discussion.

Instances on compute node do not get IP

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/405/instances-on-compute-node-do-not-get-ip

Have a great day!

From pmyers at redhat.com  Tue Aug  6 13:52:09 2013
From: pmyers at redhat.com (Perry Myers)
Date: Tue, 06 Aug 2013 09:52:09 -0400
Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4
In-Reply-To: <5200D888.1010206@redhat.com>
References: <51FE6B8A.3050501@redhat.com> <5200D888.1010206@redhat.com>
Message-ID: <5200FF89.6050007@redhat.com>

On 08/06/2013 07:05 AM, Matthias Runge wrote:
> On 04/08/13 16:56, Perry Myers wrote:
>>
>> So, checklist:
>>
>> * packstack in Havana needs to create demo tenant and import cirros
>>   image just like in RDO Grizzly
>> * Horizon seems to have screen refresh/updates and More button issues
>> * Need Neutron packages for Havana so that we can use Neutron in Havana
>> * Need Packstack available so that RDO Nightly users can install the
>>   nightly builds
> Perry,
>
> sadly, I was unable to install via packstack due to several issues, so I
> cannot reproduce your issue.
>
> One thing I mentioned for Havana is that we require a newer version of
> python-django-compressor:
> http://kojipkgs.fedoraproject.org//packages/python-django-compressor/1.3/2.el6/noarch/python-django-compressor-1.3-2.el6.noarch.rpm
>
> Does that change the behaviour of Horizon in your install?
> yum localinstall ... && service httpd restart

I'll try this when I get a chance, but a few questions...

If this is required for Horizon, why is it not listed with an explicit
Requires in the spec file?

If it's required for Horizon, why is the package not installed right now
when I install RDO Havana milestone 2?

If the package just hasn't hit EPEL yet, we should be carrying it in RDO
repos explicitly until it does hit EPEL.

Pádraig, can you work with Matthias to resolve this?
Perry

From pmyers at redhat.com  Tue Aug  6 14:55:15 2013
From: pmyers at redhat.com (Perry Myers)
Date: Tue, 06 Aug 2013 10:55:15 -0400
Subject: [Rdo-list] high cpu usage for an idle allinone RDO Grizzly
Message-ID: <52010E53.4070703@redhat.com>

Installed RDO Grizzly on RHEL 6.4 running in a VM on top of Fedora 19
using kvm.

Even with no nested VMs running inside of the allinone node, and just
the core services running, I'm seeing CPU spikes every second. Basically
CPU utilization reported by top goes from 2% to 60% and back to 2% or so
about every second.

Anyone have any idea if this is expected for an idle system?

Perry

From rdo-info at redhat.com  Tue Aug  6 14:57:10 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Tue, 6 Aug 2013 14:57:10 +0000
Subject: [Rdo-list] [RDO] rbowen started a discussion.
Message-ID: <000001405421b5d2-675b5a60-293c-474e-9361-8a003a50b257-000000@email.amazonses.com>

rbowen started a discussion.

Anyone able to translate subtitles?

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/406/anyone-able-to-translate-subtitles

Have a great day!

From pbrady at redhat.com  Tue Aug  6 15:52:18 2013
From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=)
Date: Tue, 06 Aug 2013 16:52:18 +0100
Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4
In-Reply-To: <5200FF89.6050007@redhat.com>
References: <51FE6B8A.3050501@redhat.com> <5200D888.1010206@redhat.com> <5200FF89.6050007@redhat.com>
Message-ID: <52011BB2.9080309@redhat.com>

On 08/06/2013 02:52 PM, Perry Myers wrote:
> On 08/06/2013 07:05 AM, Matthias Runge wrote:
>> On 04/08/13 16:56, Perry Myers wrote:
>>>
>>> So, checklist:
>>>
>>> * packstack in Havana needs to create demo tenant and import cirros
>>>   image just like in RDO Grizzly
>>> * Horizon seems to have screen refresh/updates and More button issues
>>> * Need Neutron packages for Havana so that we can use Neutron in Havana
>>> * Need Packstack available so that RDO Nightly users can install the
>>>   nightly builds
>> Perry,
>>
>> sadly, I was unable to install via packstack due to several issues, so I
>> cannot reproduce your issue.
>>
>> One thing I mentioned for Havana is that we require a newer version of
>> python-django-compressor:
>> http://kojipkgs.fedoraproject.org//packages/python-django-compressor/1.3/2.el6/noarch/python-django-compressor-1.3-2.el6.noarch.rpm
>>
>> Does that change the behaviour of Horizon in your install?
>> yum localinstall ... && service httpd restart
>
> I'll try this when I get a chance, but a few questions...
>
> If this is required for Horizon, why is it not listed with an explicit
> Requires in the spec file?

It's best to avoid explicit versions if possible. It adds a maintenance
burden/cruft and might preclude using otherwise OK older versions with
backports on some systems.

> If it's required for Horizon, why is the package not installed right now
> when I install RDO Havana milestone 2?

I presume more testing is required. I see the updated el6-havana version
was just built today. I did a quick test. The package installed without
issue, and horizon seems to work fine with basic tests, so I've now
added this to the havana el6 repo (the fedora 19 version of this package
is already at 1.3 in the Fedora repos).

> If the package just hasn't hit EPEL yet, we should be carrying it in RDO
> repos explicitly until it does hit EPEL

I'm not sure that epel-6 can be updated due to explicit requires on
Django14.
Well, more accurately, it might benefit from an update, but it would
probably need to be a separate package for epel-6 and rdo-havana.

thanks,
Pádraig.

From rdo-info at redhat.com  Tue Aug  6 15:57:18 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Tue, 6 Aug 2013 15:57:18 +0000
Subject: [Rdo-list] [RDO] rbowen started a discussion.
Message-ID: <000001405458c5b1-57ca2907-8b1e-4be1-b6f3-6993ff29d7ad-000000@email.amazonses.com>

rbowen started a discussion.

August newsletter, in case you missed it

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/407/august-newsletter-in-case-you-missed-it

Have a great day!

From rdo-info at redhat.com  Wed Aug  7 03:38:03 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Wed, 7 Aug 2013 03:38:03 +0000
Subject: [Rdo-list] [RDO] zhyu started a discussion.
Message-ID: <0000014056da5234-1899ff93-3b20-4c3a-9907-ab658472fcf0-000000@email.amazonses.com>

zhyu started a discussion.

openstack-nova-scheduler dead, but pid file exists

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/408/openstack-nova-scheduler-deadbut-pid-file-exists

Have a great day!

From rdo-info at redhat.com  Wed Aug  7 03:43:25 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Wed, 7 Aug 2013 03:43:25 +0000
Subject: [Rdo-list] [RDO] zhyu started a discussion.
Message-ID: <0000014056df3d62-a0efb328-8a65-45f7-9a86-2c345d04e2e7-000000@email.amazonses.com>

zhyu started a discussion.

quantum dead, but pid file exists

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/409/quantum-deadbut-pid-file-exists

Have a great day!

From rdo-info at redhat.com  Wed Aug  7 03:58:21 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Wed, 7 Aug 2013 03:58:21 +0000
Subject: [Rdo-list] [RDO] miss_heptagone started a discussion.
Message-ID: <0000014056ece9ce-efbb040b-bf90-4d9a-843c-e86f8cfdb895-000000@email.amazonses.com>

miss_heptagone started a discussion.

how long is rdo/grizzly going to be supported?

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/410/how-long-is-rdogrizzly-going-to-be-supported

Have a great day!

From rdo-info at redhat.com  Wed Aug  7 07:12:37 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Wed, 7 Aug 2013 07:12:37 +0000
Subject: [Rdo-list] [RDO] leeuwenrjj started a discussion.
Message-ID: <00000140579ec555-9a8d2c34-7922-4810-bae1-bc12359f3c52-000000@email.amazonses.com>

leeuwenrjj started a discussion.

Swift and XFS inode size

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/411/swift-and-xfs-inode-size

Have a great day!

From mrunge at redhat.com  Wed Aug  7 07:46:34 2013
From: mrunge at redhat.com (Matthias Runge)
Date: Wed, 07 Aug 2013 09:46:34 +0200
Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4
In-Reply-To: <5200FF89.6050007@redhat.com>
References: <51FE6B8A.3050501@redhat.com> <5200D888.1010206@redhat.com> <5200FF89.6050007@redhat.com>
Message-ID: <5201FB5A.2040605@redhat.com>

On 06/08/13 15:52, Perry Myers wrote:
>
> I'll try this when I get a chance, but a few questions...
>
> If this is required for Horizon, why is it not listed with an explicit
> Requires in the spec file?
>
> If it's required for Horizon, why is the package not installed right now
> when I install RDO Havana milestone 2?

You got an older version from EPEL. Using explicit requires (including
version numbers) is discouraged.
So the question is: why wasn't the newer version included in the RDO
repo? I can't answer that; it was just forgotten. The same is also true
for, e.g., python-keystoneclient in an updated version.

> If the package just hasn't hit EPEL yet, we should be carrying it in RDO
> repos explicitly until it does hit EPEL

python-django-compressor-1.2 has been in EPEL for nearly 10 months now.
Still, the version from EPEL is too old; sadly, I cannot simply upgrade
it.

Matthias

From mrunge at redhat.com  Wed Aug  7 07:50:45 2013
From: mrunge at redhat.com (Matthias Runge)
Date: Wed, 07 Aug 2013 09:50:45 +0200
Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4
In-Reply-To: <52011BB2.9080309@redhat.com>
References: <51FE6B8A.3050501@redhat.com> <5200D888.1010206@redhat.com> <5200FF89.6050007@redhat.com> <52011BB2.9080309@redhat.com>
Message-ID: <5201FC55.3040209@redhat.com>

On 06/08/13 17:52, Pádraig Brady wrote:
>
> I'm not sure that epel-6 can be updated due to
> explicit requires on Django14. Well, more accurately,
> it might benefit from an update, but it would probably
> need to be a separate package for epel-6 and rdo-havana

imho, python-django-compressor-1.3 is not 100% backwards compatible with
python-django-compressor-1.2. Django14 is the only Django version in
EPEL right now.

Matthias

From rdo-info at redhat.com  Wed Aug  7 08:19:46 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Wed, 7 Aug 2013 08:19:46 +0000
Subject: [Rdo-list] [RDO] ppyy started a discussion.
Message-ID: <0000014057dc3ed0-ea7d26cd-753e-4280-865a-02021608dafc-000000@email.amazonses.com>

ppyy started a discussion.

qcow2 backing file corrupt?

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/412/qcow2-backing-file-corrupt

Have a great day!

From aortega at redhat.com  Wed Aug  7 08:40:15 2013
From: aortega at redhat.com (Alvaro Lopez Ortega)
Date: Wed, 7 Aug 2013 10:40:15 +0200
Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4
In-Reply-To: <51FE6B8A.3050501@redhat.com>
References: <51FE6B8A.3050501@redhat.com>
Message-ID: <90CF740A-B441-4FF7-86E4-9CD9777BD35C@redhat.com>

On Aug 4, 2013, at 4:56 PM, Perry Myers wrote:

> Then I thought to check the nightly repos here:
> http://repos.fedorapeople.org/repos/openstack/openstack-trunk/el6/x86_64/
>
> Packstack isn't even in those repos, which means that folks can't use
> packstack to install nightly builds easily.

This is something the CI team discussed last Monday. SmokeStack ought to
pack Packstack along with the rest of the components. Testing it may be
out of scope, but at least it should generate the PackStack RPM. It'd
avoid the problem you just described, so people could test the very
latest version of all the packages with ease.

It's a work item for this week, so we should get it sorted out within
the next few days.

All the best,
Alvaro
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pbrady at redhat.com  Wed Aug  7 10:25:44 2013
From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=)
Date: Wed, 07 Aug 2013 11:25:44 +0100
Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4
In-Reply-To: <90CF740A-B441-4FF7-86E4-9CD9777BD35C@redhat.com>
References: <51FE6B8A.3050501@redhat.com> <90CF740A-B441-4FF7-86E4-9CD9777BD35C@redhat.com>
Message-ID: <520220A8.2060508@redhat.com>

On 08/07/2013 09:40 AM, Alvaro Lopez Ortega wrote:
> On Aug 4, 2013, at 4:56 PM, Perry Myers wrote:
>
>> Then I thought to check the nightly repos here:
>> http://repos.fedorapeople.org/repos/openstack/openstack-trunk/el6/x86_64/
>>
>> Packstack isn't even in those repos, which means that folks can't use
>> packstack to install nightly builds easily.
>
> This is something the CI team discussed last Monday. SmokeStack ought to
> pack Packstack along with the rest of the components. Testing it may be
> out of scope, but at least it should generate the PackStack RPM. It'd
> avoid the problem you just described, so people could test the very
> latest version of all the packages with ease.
>
> It's a work item for this week, so we should get it sorted out within
> the next few days.
>
> All the best,
> Alvaro

What I had previously suggested was that the trunk packages were just
updates on top of the existing pre-release standard repo. So to test the
trunk repo you would currently enable, for example, the Havana milestone
repo _and_ the trunk repo, and in that way get all the ancillary
packages without the space and maintenance overhead of keeping two very
similar repos in sync.

thanks,
Pádraig.

From pmyers at redhat.com  Wed Aug  7 13:42:16 2013
From: pmyers at redhat.com (Perry Myers)
Date: Wed, 07 Aug 2013 09:42:16 -0400
Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4
In-Reply-To: <520220A8.2060508@redhat.com>
References: <51FE6B8A.3050501@redhat.com> <90CF740A-B441-4FF7-86E4-9CD9777BD35C@redhat.com> <520220A8.2060508@redhat.com>
Message-ID: <52024EB8.1000800@redhat.com>

On 08/07/2013 06:25 AM, Pádraig Brady wrote:
> On 08/07/2013 09:40 AM, Alvaro Lopez Ortega wrote:
>> On Aug 4, 2013, at 4:56 PM, Perry Myers wrote:
>>
>>> Then I thought to check the nightly repos here:
>>> http://repos.fedorapeople.org/repos/openstack/openstack-trunk/el6/x86_64/
>>>
>>> Packstack isn't even in those repos, which means that folks can't use
>>> packstack to install nightly builds easily.
>>
>> This is something the CI team discussed last Monday. SmokeStack ought to
>> pack Packstack along with the rest of the components. Testing it may be
>> out of scope, but at least it should generate the PackStack RPM. It'd
>> avoid the problem you just described, so people could test the very
>> latest version of all the packages with ease.
>>
>> It's a work item for this week, so we should get it sorted out within
>> the next few days.
>>
>> All the best,
>> Alvaro
>
> What I had previously suggested was that the trunk packages were just
> updates on top of the existing pre-release standard repo. So to test the
> trunk repo you would currently enable, for example, the Havana milestone
> repo _and_ the trunk repo, and in that way get all the ancillary
> packages without the space and maintenance overhead of keeping two very
> similar repos in sync.

+1, this is a good idea.

Do we need a release RPM, though, that gets the combination of nightly +
stable repos enabled simultaneously?
Perry

From rbowen at redhat.com  Tue Aug  6 15:24:21 2013
From: rbowen at redhat.com (Rich Bowen)
Date: Tue, 06 Aug 2013 11:24:21 -0400
Subject: [Rdo-list] [Rdo-newsletter] RDO Community Newsletter: August, 2013
Message-ID: <52011525.9010204@redhat.com>

*Thanks for being part of the RDO community!*

July was a busy month, and we have a lot to tell you about. If you want
to keep up with what's going on with RDO in the coming month, the best
way is to follow the RDO forum, at http://bit.ly/13YEwCe, or to follow
us on Twitter at @rdocommunity

If you'd like to manage your mailing list subscription, or invite other
people to join the list, you can do that at http://red.ht/11amemL

*OSCon and Flock*

In July, we were at the O'Reilly Open Source Convention in Portland,
Oregon, as part of the OpenStack pavilion. Thanks to those of you who
stopped by to talk with us. There was a lot of great OpenStack content
at OSCon, and our own Dave Neary led the 'OpenStack Distro Smackdown'
session (http://bit.ly/15FwwfK).

We also attended the OpenStack third birthday party, where we got to
celebrate this great journey we're on. One of the (many) things that
makes Open Source so cool is the ability to collaborate with our
competitors to make the world a better place, and OpenStack is a great
example of that. A big thank you and congratulations to the entire
OpenStack family.

If you missed OSCon, come to Flock (http://flocktofedora.org/), August
9-12 (this weekend!) in Charleston, South Carolina. Flock is a gathering
of Fedora developers, and we'll be there talking with some of the people
who make RDO happen. On Saturday, Kashyap Chamarthy will be leading the
OpenStack Test event, in which participants will set up and test the
latest OpenStack packages on Fedora. (Details at http://bit.ly/15KJGFV)

If you're attending, or hosting, any OpenStack meetups, we'd love to
hear about them and help you get the word out. Drop us a note on the RDO
forum, or send us a tweet.

*Videos*

In the last few weeks, we've put a few videos on YouTube, showing some
simple tasks with OpenStack and RDO. We started with a demo of
TryStack.org, the site where you can take OpenStack for a test run
without having to set it up yourself.
(http://www.youtube.com/watch?v=7cVL1NDWWyY). And, more recently, we
published the video we put together for OSCon, showing an installation
of OpenStack using RDO, all the way through spinning up a virtual
machine and ssh'ing into it. (http://youtu.be/ixJtmQNId1Y)

In the coming weeks and months, we'll be publishing more of these,
showing you how to do various other tasks with OpenStack. You'll find
those in our YouTube channel at
http://www.youtube.com/channel/UCWYIPZ4lm4P3_pzZ9Hx9awg

*Networking with RDO*

We've had a lot of discussion in the RDO forum about networking over the
last month. With Neutron support coming to RDO, but not quite mature
yet, some people are having trouble getting networking running. To
address this difficulty, we've added a networking resource in the wiki
(http://openstack.redhat.com/Networking) where we can share what works,
and what doesn't, to help you get things running smoothly.

Also, we've updated the QuickStart instructions
(http://openstack.redhat.com/Quickstart) to disable Neutron (formerly
known as Quantum) networking, so that you can have a better first-time
experience when installing RDO.
Remember that you can also generate an answer file, using the
'--gen-answer-file=ANSWER_FILE' argument, edit that file to set your
preferences, and then run packstack using this file as input, using the
'--answer-file=ANSWER_FILE' argument.

*Using Ceph for Block Storage*

There's been a lot of interest lately around Ceph (http://ceph.com/),
the distributed object store and filesystem, and using it with
OpenStack. Our friends at Inktank were kind enough to add some
documentation to the RDO wiki about using Ceph for block storage with
RDO - http://bit.ly/13K3m9G

This is a detailed, step-by-step how-to, showing the entire process of
installing RDO, configuring Ceph, and getting the two to talk to each
other.

*Troubleshooting*

Because RDO tracks the latest releases of OpenStack, and because of the
inevitable variation in deployment platforms, you may encounter some
problems during deployment. (The Red Hat Enterprise Linux OpenStack
Platform, which trails upstream by a few months, is more thoroughly
hardened and tested.)

As you're working through these problems, you may wish to have a look at
the troubleshooting page on our wiki -
http://openstack.redhat.com/Troubleshooting - where we're trying to
document common scenarios that people are encountering, and tips for
solving these problems. We welcome your participation in this process.
If you find the solution to some problem, please write it up so that
everyone can benefit.

*Other sources*

We recently came across this writeup of using RDO to deploy a multi-node
OpenStack cloud: http://www.cloudbase.it/rdo-multi-node/ . It's a very
deep dive into how everything fits together, as well as hands-on
configuration tips and examples. This is part one of a promised series,
so we encourage you to check back there.

If you come across helpful articles that you think will benefit the
entire community, please post them to the RDO forum, or send them
directly to me at rbowen at redhat.com

*In closing ...*

Thanks again for being part of the RDO community. We'd love to hear how
we can do better. Let us know on the forum
(http://openstack.redhat.com/forum/), or on Twitter (@rdocommunity), or
just drop us email at rbowen at redhat.com or on the RDO mailing list
(http://red.ht/12XFRiy)

Until next month ...

Rich and Dave, for the RDO community
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
_______________________________________________
Rdo-newsletter mailing list
Rdo-newsletter at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-newsletter

From sgordon at redhat.com  Wed Aug  7 13:48:39 2013
From: sgordon at redhat.com (Steve Gordon)
Date: Wed, 7 Aug 2013 09:48:39 -0400 (EDT)
Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4
In-Reply-To: <52024EB8.1000800@redhat.com>
References: <51FE6B8A.3050501@redhat.com> <90CF740A-B441-4FF7-86E4-9CD9777BD35C@redhat.com> <520220A8.2060508@redhat.com> <52024EB8.1000800@redhat.com>
Message-ID: <1821019258.12707458.1375883319055.JavaMail.root@redhat.com>

----- Original Message -----
> From: "Perry Myers"
> To: "Pádraig Brady"
> Cc: "Alvaro Lopez Ortega", "rdo-list" <rdo-list at redhat.com>, "Dan Prince"
> Sent: Wednesday, August 7, 2013 9:42:16 AM
> Subject: Re: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4
>
> On 08/07/2013 06:25 AM, Pádraig Brady wrote:
> > On 08/07/2013 09:40 AM, Alvaro Lopez Ortega wrote:
> >> On Aug 4, 2013, at 4:56 PM, Perry Myers wrote:
> >>
> >>> Then I thought to check the nightly repos here:
> >>> http://repos.fedorapeople.org/repos/openstack/openstack-trunk/el6/x86_64/
> >>>
> >>> Packstack isn't even in those repos, which means that folks can't use
> >>> packstack to install nightly builds easily.
> >>
> >> This is something the CI team discussed last Monday. SmokeStack ought to
> >> pack Packstack along with the rest of the components. Testing it may be out of
> >> scope, but at least it should generate the PackStack RPM. It'd avoid the
> >> problem you just described, so people could test the very latest version
> >> of all the packages with ease.
> >>
> >> It's a work item for this week, so we should get it sorted out within the
> >> next few days.
> >>
> >> All the best,
> >> Alvaro
> >
> > What I had previously suggested was that the trunk packages were just
> > updates on top of the existing pre-release standard repo. So to test the
> > trunk repo you would currently enable, for example, the Havana milestone
> > repo _and_ the trunk repo, and in that way get all the ancillary packages
> > without the space and maintenance overhead of keeping two very similar
> > repos in sync.
>
> +1, this is a good idea.
>
> Do we need a release RPM, though, that gets the combination of nightly +
> stable repos enabled simultaneously?
>
> Perry

Probably that and a page on the wiki, linked in from the QuickStart
(similar to how the "get fedora" page has a little note when alpha/beta
versions are available), making the information discoverable for users
wanting to try Havana packages.
Thanks,

Steve

From pbrady at redhat.com  Wed Aug  7 13:57:17 2013
From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=)
Date: Wed, 07 Aug 2013 14:57:17 +0100
Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4
In-Reply-To: <52024EB8.1000800@redhat.com>
References: <51FE6B8A.3050501@redhat.com> <90CF740A-B441-4FF7-86E4-9CD9777BD35C@redhat.com> <520220A8.2060508@redhat.com> <52024EB8.1000800@redhat.com>
Message-ID: <5202523D.9090405@redhat.com>

On 08/07/2013 02:42 PM, Perry Myers wrote:
> On 08/07/2013 06:25 AM, Pádraig Brady wrote:
>> On 08/07/2013 09:40 AM, Alvaro Lopez Ortega wrote:
>>> On Aug 4, 2013, at 4:56 PM, Perry Myers wrote:
>>>
>>>> Then I thought to check the nightly repos here:
>>>> http://repos.fedorapeople.org/repos/openstack/openstack-trunk/el6/x86_64/
>>>>
>>>> Packstack isn't even in those repos, which means that folks can't use
>>>> packstack to install nightly builds easily.
>>>
>>> This is something the CI team discussed last Monday. SmokeStack ought to
>>> pack Packstack along with the rest of the components. Testing it may be
>>> out of scope, but at least it should generate the PackStack RPM. It'd
>>> avoid the problem you just described, so people could test the very
>>> latest version of all the packages with ease.
>>>
>>> It's a work item for this week, so we should get it sorted out within
>>> the next few days.
>>>
>>> All the best,
>>> Alvaro
>>
>> What I had previously suggested was that the trunk packages were just
>> updates on top of the existing pre-release standard repo. So to test the
>> trunk repo you would currently enable, for example, the Havana milestone
>> repo _and_ the trunk repo, and in that way get all the ancillary
>> packages without the space and maintenance overhead of keeping two very
>> similar repos in sync.
>
> +1, this is a good idea.
>
> Do we need a release RPM, though, that gets the combination of nightly +
> stable repos enabled simultaneously?

Probably should include the nightly repo info in the Havana+
rdo-release.rpm, but disabled. Then it can be a documented step to do
this before packstack/foreman ...

sudo crudini --set /etc/yum.repos.d/rdo-release.repo rdo-nightly enabled 1

thanks,
Pádraig.

From rbowen at redhat.com  Wed Aug  7 14:01:05 2013
From: rbowen at redhat.com (Rich Bowen)
Date: Wed, 07 Aug 2013 10:01:05 -0400
Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4
In-Reply-To: <90CF740A-B441-4FF7-86E4-9CD9777BD35C@redhat.com>
References: <51FE6B8A.3050501@redhat.com> <90CF740A-B441-4FF7-86E4-9CD9777BD35C@redhat.com>
Message-ID: <52025321.5010407@redhat.com>

On 08/07/2013 04:40 AM, Alvaro Lopez Ortega wrote:
>
> This is something the CI team discussed last Monday. SmokeStack ought
> to pack Packstack along with the rest of the components. Testing it may
> be out of scope, but at least it should generate the PackStack RPM.
> It'd avoid the problem you just described, so people could test the
> very latest version of all the packages with ease.

Would this also reduce the three-step QuickStart to two steps? (i.e., no
need to install packstack as a separate step 2.)
--
Rich Bowen
OpenStack Community Liaison
http://openstack.redhat.com/

From pbrady at redhat.com  Wed Aug  7 14:05:55 2013
From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=)
Date: Wed, 07 Aug 2013 15:05:55 +0100
Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4
In-Reply-To: <52025321.5010407@redhat.com>
References: <51FE6B8A.3050501@redhat.com> <90CF740A-B441-4FF7-86E4-9CD9777BD35C@redhat.com> <52025321.5010407@redhat.com>
Message-ID: <52025443.6080703@redhat.com>

On 08/07/2013 03:01 PM, Rich Bowen wrote:
> On 08/07/2013 04:40 AM, Alvaro Lopez Ortega wrote:
>>
>> This is something the CI team discussed last Monday. SmokeStack ought
>> to pack Packstack along with the rest of the components. Testing it may
>> be out of scope, but at least it should generate the PackStack RPM.
>> It'd avoid the problem you just described, so people could test the
>> very latest version of all the packages with ease.
>
> Would this also reduce the three-step QuickStart to two steps? (i.e., no
> need to install packstack as a separate step 2.)

I don't think so, due to a catch-22. You need to install the repo rpm
without other deps first, to make the repo available to yum. Only then
can you install packages from that repo (that may depend on other
packages within that repo).

cheers,
Pádraig.

From rbowen at redhat.com  Wed Aug  7 14:43:36 2013
From: rbowen at redhat.com (Rich Bowen)
Date: Wed, 07 Aug 2013 10:43:36 -0400
Subject: [Rdo-list] Fedora 18 images?
Message-ID: <52025D18.1060507@redhat.com>

The link to the Fedora 18 images
(http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2) is now 404'ing.

What's the preferred URL that I should put on
http://openstack.redhat.com/Image_resources ?

--Rich

--
Rich Bowen
OpenStack Community Liaison
http://openstack.redhat.com/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pmyers at redhat.com  Wed Aug  7 14:51:30 2013
From: pmyers at redhat.com (Perry Myers)
Date: Wed, 07 Aug 2013 10:51:30 -0400
Subject: [Rdo-list] Fedora 18 images?
In-Reply-To: <52025D18.1060507@redhat.com>
References: <52025D18.1060507@redhat.com>
Message-ID: <52025EF2.2040203@redhat.com>

On 08/07/2013 10:43 AM, Rich Bowen wrote:
> The link to the Fedora 18 images
> (http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2) is now 404'ing.

That link is for the F19 image, which seems to work fine.

But swapping out F19 for F18 with:
http://cloud.fedoraproject.org/fedora-18.x86_64.qcow2
does appear to be broken.

Looks like Fedora pulled the F18 images?

Perry

From rbowen at redhat.com  Wed Aug  7 14:54:28 2013
From: rbowen at redhat.com (Rich Bowen)
Date: Wed, 07 Aug 2013 10:54:28 -0400
Subject: [Rdo-list] Fedora 18 images?
In-Reply-To: <52025EF2.2040203@redhat.com>
References: <52025D18.1060507@redhat.com> <52025EF2.2040203@redhat.com>
Message-ID: <52025FA4.2040709@redhat.com>

On 08/07/2013 10:51 AM, Perry Myers wrote:
> On 08/07/2013 10:43 AM, Rich Bowen wrote:
>> The link to the Fedora 18 images
>> (http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2) is now 404'ing.
> That link is for the F19 image, which seems to work fine.
>
> But swapping out F19 for F18 with:
> http://cloud.fedoraproject.org/fedora-18.x86_64.qcow2
> does appear to be broken.
>
> Looks like Fedora pulled the F18 images?
>

I'm sorry, that's the URL I meant - the 18 one on
http://openstack.redhat.com/Image_resources not the 19.

Anybody have 18 images somewhere else?
--
Rich Bowen
OpenStack Community Liaison
http://openstack.redhat.com/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From derekh at redhat.com  Wed Aug  7 15:00:04 2013
From: derekh at redhat.com (Derek Higgins)
Date: Wed, 07 Aug 2013 16:00:04 +0100
Subject: [Rdo-list] Fedora 18 images?
In-Reply-To: <52025FA4.2040709@redhat.com>
References: <52025D18.1060507@redhat.com> <52025EF2.2040203@redhat.com> <52025FA4.2040709@redhat.com>
Message-ID: <520260F4.2@redhat.com>

On 07/08/13 15:54, Rich Bowen wrote:
> On 08/07/2013 10:51 AM, Perry Myers wrote:
>> On 08/07/2013 10:43 AM, Rich Bowen wrote:
>>> The link to the Fedora 18 images
>>> (http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2) is now 404'ing.
>> That link is for the F19 image, which seems to work fine.
>>
>> But swapping out F19 for F18 with:
>> http://cloud.fedoraproject.org/fedora-18.x86_64.qcow2
>> does appear to be broken.
>>
>> Looks like Fedora pulled the F18 images?
>>
>
> I'm sorry, that's the URL I meant - the 18 one on
> http://openstack.redhat.com/Image_resources not the 19.
>
> Anybody have 18 images somewhere else?

Try here

http://mattdm.fedorapeople.org/cloud-images/Fedora18-Cloud-x86_64-latest.qcow2

>
> --
> Rich Bowen
> OpenStack Community Liaison
> http://openstack.redhat.com/
>
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>

From rbowen at redhat.com  Wed Aug  7 15:07:25 2013
From: rbowen at redhat.com (Rich Bowen)
Date: Wed, 07 Aug 2013 11:07:25 -0400
Subject: [Rdo-list] Fedora 18 images?
In-Reply-To: <520260F4.2@redhat.com>
References: <52025D18.1060507@redhat.com> <52025EF2.2040203@redhat.com> <52025FA4.2040709@redhat.com> <520260F4.2@redhat.com>
Message-ID: <520262AD.4050405@redhat.com>

On 08/07/2013 11:00 AM, Derek Higgins wrote:
> On 07/08/13 15:54, Rich Bowen wrote:
>> On 08/07/2013 10:51 AM, Perry Myers wrote:
>>> On 08/07/2013 10:43 AM, Rich Bowen wrote:
>>>> The link to the Fedora 18 images
>>>> (http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2) is now 404'ing.
>>> That link is for the F19 image, which seems to work fine.
>>>
>>> But swapping out F19 for F18 with:
>>> http://cloud.fedoraproject.org/fedora-18.x86_64.qcow2
>>> does appear to be broken.
>>>
>>> Looks like Fedora pulled the F18 images?
>>>
>> I'm sorry, that's the URL I meant - the 18 one on
>> http://openstack.redhat.com/Image_resources not the 19.
>>
>> Anybody have 18 images somewhere else?
> Try here
>
> http://mattdm.fedorapeople.org/cloud-images/Fedora18-Cloud-x86_64-latest.qcow2
>

Thanks.

--Rich

--
Rich Bowen
OpenStack Community Liaison
http://openstack.redhat.com/

From mattdm at fedoraproject.org  Wed Aug  7 21:08:24 2013
From: mattdm at fedoraproject.org (Matthew Miller)
Date: Wed, 7 Aug 2013 17:08:24 -0400
Subject: [Rdo-list] Fedora 18 images?
In-Reply-To: <52025EF2.2040203@redhat.com>
References: <52025D18.1060507@redhat.com> <52025EF2.2040203@redhat.com>
Message-ID: <20130807210824.GA21355@disco.bu.edu>

On Wed, Aug 07, 2013 at 10:51:30AM -0400, Perry Myers wrote:
> That link is for the F19 image, which seems to work fine.
> But swapping out F19 for F18 with:
> http://cloud.fedoraproject.org/fedora-18.x86_64.qcow2
> does appear to be broken.
> Looks like Fedora pulled the F18 images?

Nope -- we never _had_ F18 images.
If you really need them, you can get my unofficial images from
http://mattdm.fedorapeople.org/cloud-images/Fedora18-Cloud-x86_64-20130115.2.qcow2

But I really recommend F19 -- it's nicer in many ways!

--
Matthew Miller, Fedora Cloud Architect

From rdo-info at redhat.com  Wed Aug  7 23:04:34 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Wed, 7 Aug 2013 23:04:34 +0000
Subject: [Rdo-list] [RDO] Instances fail to run after quickstart on F19
Message-ID: <000001405b064b70-3cb5fe4b-ce6b-436d-b322-19ba4cb4f525-000000@email.amazonses.com>

bbreard started a discussion.

Instances fail to run after quickstart on F19

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/413/instances-fail-to-run-after-quickstart-on-f19

Have a great day!

From rdo-info at redhat.com  Thu Aug  8 01:44:43 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Thu, 8 Aug 2013 01:44:43 +0000
Subject: [Rdo-list] [RDO] Use RabbitMQ instead of QPID
Message-ID: <000001405b98ecda-ad9dbd87-4edc-4258-9c31-ee3249487d5d-000000@email.amazonses.com>

zhidong started a discussion.

Use RabbitMQ instead of QPID

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/414/use-rabbitmq-instead-of-qpid

Have a great day!

From rdo-info at redhat.com  Thu Aug  8 09:02:19 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Thu, 8 Aug 2013 09:02:19 +0000
Subject: [Rdo-list] [RDO] How to Install/Enable Quantum-lbaas?
Message-ID: <000001405d2990d2-342504e9-2654-42a5-8680-a3daf8fb1258-000000@email.amazonses.com>

iLikeIT started a discussion.

How to Install/Enable Quantum-lbaas?

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/415/how-to-installenable-quantum-lbaas

Have a great day!

From rdo-info at redhat.com  Thu Aug  8 11:31:34 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Thu, 8 Aug 2013 11:31:34 +0000
Subject: [Rdo-list] [RDO] CentOS 6.4 and veth MTU
Message-ID: <000001405db2336e-9c9fd1c1-2781-4411-831e-5b76d82651c4-000000@email.amazonses.com>

usimchoni started a discussion.

CentOS 6.4 and veth MTU

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/416/centos-6-4-and-veth-mtu

Have a great day!

From rdo-info at redhat.com  Thu Aug  8 14:56:45 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Thu, 8 Aug 2013 14:56:45 +0000
Subject: [Rdo-list] [RDO] Flock starting tomorrow
Message-ID: <000001405e6e0e7d-d6756645-4f61-483e-8b22-dd71738823ab-000000@email.amazonses.com>

rbowen started a discussion.

Flock starting tomorrow

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/417/flock-starting-tomorrow

Have a great day!

From rdo-info at redhat.com  Thu Aug  8 15:30:47 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Thu, 8 Aug 2013 15:30:47 +0000
Subject: [Rdo-list] [RDO] best way to get openstack running in a production environment
Message-ID: <000001405e8d35ea-4a3415d7-8a0d-414c-9e7a-26d5f5bb4a7d-000000@email.amazonses.com>

miss_heptagone started a discussion.

best way to get openstack running in a production environment

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/418/best-way-to-get-openstack-running-in-a-production-environment

Have a great day!
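Since the image URLs in the Fedora-images thread above keep moving, it
may help to note what actually registering one of them looks like. A
sketch against the Grizzly-era glance CLI; the image name and the
sourced keystonerc file are assumptions, and the URL is the F19 one
quoted above:

  # Register a cloud image directly from a URL (glance fetches it).
  # The keystonerc path is an assumption; adjust to your install.
  source ~/keystonerc_admin
  glance image-create --name "fedora-19" \
    --disk-format qcow2 --container-format bare --is-public true \
    --copy-from http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2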
From rdo-info at redhat.com  Fri Aug  9 06:59:00 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Fri, 9 Aug 2013 06:59:00 +0000
Subject: [Rdo-list] [RDO] Install/Enable Baremetal OS deploment
Message-ID: <0000014061df038d-1e20fdec-9d46-46dd-8dcc-5ec2acda43fb-000000@email.amazonses.com>

iLikeIT started a discussion.

Install/Enable Baremetal OS deploment

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/419/installenable-baremetal-os-deploment

Have a great day!

From rdo-info at redhat.com  Fri Aug  9 10:00:11 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Fri, 9 Aug 2013 10:00:11 +0000
Subject: [Rdo-list] [RDO] Setting up ssh keys... ERROR
Message-ID: <000001406284e283-57334f61-b978-4a7a-9d81-a55c621644aa-000000@email.amazonses.com>

mloobo started a discussion.

Setting up ssh keys... ERROR

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/420/setting-up-ssh-keys-error

Have a great day!

From beagles at redhat.com  Fri Aug  9 14:22:56 2013
From: beagles at redhat.com (Brent Eagles)
Date: Fri, 09 Aug 2013 11:52:56 -0230
Subject: [Rdo-list] Revisited: Neutron Quickstart, ovs_use_veth and RHEL 6.4.
Message-ID: <5204FB40.4070501@redhat.com>

Hi Thomas,

You may recall a thread started by Perry from earlier this month with
the title "Trying out Neutron Quickstart running into issues with netns
(l2 agent and dhcp agent)", where the DHCP and L3 agents were
complaining of RTNETLINK resource-busy type errors when trying to set
the MAC addr. The workaround was to enable ovs_use_veth. I had been
stating that this was a required configuration for some time without
knowing the correct reason why, but Maru explained it to us later on, so
everything is great in that respect.

However... (there is always a "however", isn't there?)

One of our team was *not* having this issue. In fact, everything has
been working for him for some time and he has never touched that
configuration item. It turns out he was getting updates from the RHEL
6.5 composes build. After we discovered the differences, I downloaded
the kernel and firmware rpms and installed them on a 6.4 VM, and I also
could get away without using ovs_use_veth.

We are doing some additional testing to see just how much works with
this kind of setup (devstack, tempest, etc.). So far, it seems that
whatever changed between the openstack kernel and the one in the 6.5
composes addresses the issues that ovs_use_veth works around.

Hopefully this info is of some use to you! I'm going to copy and
preserve the current state of my VM, so let me know if there is anything
specific you want me to try with this configuration.

Cheers,

Brent

From rdo-info at redhat.com  Fri Aug  9 18:11:21 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Fri, 9 Aug 2013 18:11:21 +0000
Subject: [Rdo-list] [RDO] REMOTE_USER
Message-ID: <000001406446929d-1c74a79a-0997-46ee-bad8-f147e928b1ab-000000@email.amazonses.com>

kfox1111 started a discussion.

REMOTE_USER

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/421/remote_user

Have a great day!
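Thomas's diagnosis earlier in this digest (the kernel rejecting a MAC
change while the link is up) and Brent's kernel comparison above can be
sanity-checked without OpenStack in the picture, by replaying the exact
operation the agents run. A sketch, assuming the dummy module is
available; whether the busy error actually appears depends on the kernel
and device type, which is precisely the variable Brent isolated:

  # Create a throwaway interface and try the MAC change with the link
  # up, then with it down. On affected kernel/device combinations the
  # first attempt fails with "RTNETLINK answers: Device or resource busy".
  sudo ip link add mactest type dummy
  sudo ip link set mactest up
  sudo ip link set mactest address fa:16:3e:00:00:01 || echo "live MAC change refused"
  sudo ip link set mactest down
  sudo ip link set mactest address fa:16:3e:00:00:01 && echo "MAC change OK with link down"
  sudo ip link del mactest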
From red at fedoraproject.org Sun Aug 11 22:33:02 2013 From: red at fedoraproject.org (Sandro "red" Mathys) Date: Mon, 12 Aug 2013 00:33:02 +0200 Subject: [Rdo-list] Fedora release mixup in Havana Repo Message-ID: The Fedora repo for the Havana-2 packages is mixed up a bit. While the repo is called fedora-19, the packages in it are almost all built for F20 (i.e. they have fc20 in their dist tag). That's a bit worrisome, as things could get different enough between an F19 and an F20 build host that things start misbehaving or otherwise failing. Is there a chance to actually build F19 packages on F19 build hosts? Also, since F20 packages are apparently being built, I wonder why no fedora-20 repo is created. The sooner such a repo exists, the sooner we can test the packages and make RDO work on F20. That could help avoid the situation as it stands with F19+RDO, where lots of things are broken pretty badly and need workarounds. I know technically one can just use the fedora-19 repo with F20, but a separate fedora-20 repo would make things easier, and it will become necessary once there are real F19 packages in the fedora-19 repo anyway. At the same time, it should be noted that Packstack is currently not working on un-branched F20, i.e. Rawhide. Three Puppet modules each need a fix; the fixes have been sent upstream now. The rather small issue is that the operatingsystemrelease fact does not return an Integer as expected but the string "Rawhide" on such systems. -- Sandro

From rdo-info at redhat.com Mon Aug 12 00:29:47 2013 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 12 Aug 2013 00:29:47 +0000 Subject: [Rdo-list] [RDO] Initialize Cinder to use Physical LVM Partition Message-ID: <000001406fedc1c1-97d43818-bd7e-4702-917d-2c184c734a27-000000@email.amazonses.com> kj4ohh started a discussion. Initialize Cinder to use Physical LVM Partition --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/429/initialize-cinder-to-use-physical-lvm-partition Have a great day!
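To make the Rawhide pitfall Sandro mentions above concrete: the operatingsystemrelease fact comes from facter, and modules that compare it numerically break whenever it is not a number. A minimal illustration (the comparison in the comments is illustrative of the failure mode, not the actual upstream module code):

    # a branched Fedora reports a number, Rawhide reports a string
    facter operatingsystemrelease    # on an F19 host -> 19
    facter operatingsystemrelease    # on Rawhide     -> Rawhide
    # so module logic along the lines of "if $::operatingsystemrelease >= 19"
    # fails on Rawhide until it gains a guard for the non-numeric case, which
    # is what the three upstream fixes add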
From rdo-info at redhat.com Mon Aug 12 07:30:21 2013 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 12 Aug 2013 07:30:21 +0000 Subject: [Rdo-list] [RDO] Docs should be updated Message-ID: <00000140716ecce8-1f764117-8368-4944-89dd-4c369344eab7-000000@email.amazonses.com> rongze_zhu started a discussion. Docs should be updated --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/435/docs-should-be-updated Have a great day!

From apevec at redhat.com Mon Aug 12 10:22:10 2013 From: apevec at redhat.com (Alan Pevec) Date: Mon, 12 Aug 2013 06:22:10 -0400 (EDT) Subject: [Rdo-list] Fedora release mixup in Havana Repo In-Reply-To: References: Message-ID: <545942329.482908.1376302930527.JavaMail.root@redhat.com> > The Fedora repo for the Havana-2 packages is mixed up a bit. While the repo is called fedora-19, the packages in it are almost all built for F20 (i.e. they have fc20 in their dist tag). That's a bit worrisome, as things could get different enough between an F19 and an F20 build host that things start misbehaving or otherwise failing. Is there a chance to actually build F19 packages on F19 build hosts? It worked between f18/f19 for Grizzly; do you see anything actually getting different between f19/f20? > Also, since F20 packages are apparently being built, I wonder why no fedora-20 repo is created. For f20 you can just use the Rawhide repos for now, and the f20 repos once it branches. For Grizzly, we had a fedora-19 symlink at some point but removed it to encourage usage of Rawhide instead. > At the same time, it should be noted that Packstack is currently not working on un-branched F20, i.e. Rawhide. Three Puppet modules each need a fix; the fixes have been sent upstream now. The rather small issue is that the operatingsystemrelease fact does not return an Integer as expected but the string "Rawhide" on such systems. CCing Martin, the Packstack maintainer, to include those fixes in Rawhide builds.
Cheers, Alan

From red at fedoraproject.org Mon Aug 12 14:03:00 2013 From: red at fedoraproject.org (Sandro "red" Mathys) Date: Mon, 12 Aug 2013 16:03:00 +0200 Subject: [Rdo-list] Fedora release mixup in Havana Repo In-Reply-To: <545942329.482908.1376302930527.JavaMail.root@redhat.com> References: <545942329.482908.1376302930527.JavaMail.root@redhat.com> Message-ID: On Mon, Aug 12, 2013 at 12:22 PM, Alan Pevec wrote: > It worked between f18/f19 for Grizzly; do you see anything actually getting different between f19/f20? Not right now, but I wouldn't necessarily want to wait until we run into unpredictable issues, as they tend to cost a lot of people a lot of agony. Just saying. Mostly working with EL6 myself, so I probably won't be that person and therefore don't care too much. ;) > For f20 you can just use the Rawhide repos for now, and the f20 repos once it branches. For Grizzly, we had a fedora-19 symlink at some point but removed it to encourage usage of Rawhide instead. Oh, I didn't notice they were built into Rawhide as well. My bad. Encouraging works better when people know about it, though. Maybe put a README in a fedora-20 folder or so? Just in case someone else is as stupid as I was... :) > CCing Martin, the Packstack maintainer, to include those fixes in Rawhide builds. Packstack folks are informed already: https://bugzilla.redhat.com/show_bug.cgi?id=995872 -- Sandro

From rdo-info at redhat.com Mon Aug 12 22:39:28 2013 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 12 Aug 2013 22:39:28 +0000 Subject: [Rdo-list] [RDO] Live migration Network Issues Message-ID: <0000014074af208a-cb0bfe2c-4bca-4d24-9787-c7676db5dbda-000000@email.amazonses.com> lrr started a discussion. Live migration Network Issues --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/436/live-migration-network-issues Have a great day!

From rdo-info at redhat.com Tue Aug 13 05:51:24 2013 From: rdo-info at redhat.com (RDO Forum) Date: Tue, 13 Aug 2013 05:51:24 +0000 Subject: [Rdo-list] [RDO] Launching windows instance in openstack Message-ID: <00000140763a9043-dbf2a50a-94b4-4249-ae5d-6a3db6a6115f-000000@email.amazonses.com> anandts started a discussion. Launching windows instance in openstack --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/437/launching-windows-instance-in-openstack Have a great day!
From rdo-info at redhat.com Tue Aug 13 13:41:26 2013 From: rdo-info at redhat.com (RDO Forum) Date: Tue, 13 Aug 2013 13:41:26 +0000 Subject: [Rdo-list] [RDO] qpidd on havana Message-ID: <0000014077e8df4a-1c70f3e4-7676-4701-b826-86c751fd68d8-000000@email.amazonses.com> Charles started a discussion. qpidd on havana --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/440/qpidd-on-havana Have a great day!

From rdo-info at redhat.com Tue Aug 13 13:48:30 2013 From: rdo-info at redhat.com (RDO Forum) Date: Tue, 13 Aug 2013 13:48:30 +0000 Subject: [Rdo-list] [RDO] two node install [compute and storage] Message-ID: <0000014077ef5edb-e063897f-1065-434f-a3ad-8824b4be7ba8-000000@email.amazonses.com> Vijai started a discussion. two node install [compute and storage] --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/441/two-node-install-compute-and-storage Have a great day!

From mmagr at redhat.com Tue Aug 13 18:01:31 2013 From: mmagr at redhat.com (Martin Magr) Date: Tue, 13 Aug 2013 20:01:31 +0200 Subject: [Rdo-list] [package announce] openstack-packstack updated Message-ID: <520A747B.6090500@redhat.com> Greetings, Packstack packages have been updated in the RDO Grizzly and Havana repos to openstack-packstack-2013.1.1-0.24.dev660 (grizzly) and openstack-packstack-2013.2.1-0.2.dev702 (havana).

%changelog
* Tue Aug 13 2013 Martin Mágr - 2013.1.1-0.24.dev660
- ovs_use_veth=True is no longer required
- Allow tempest repo uri and revision configuration
- Update inifile module to support empty values

%changelog
* Tue Aug 13 2013 Martin Mágr - 2013.2.1-0.2.dev702
- ovs_use_veth=True is no longer required
- Remove libvirt's default network (i.e. virbr0) to avoid confusion
- Rename Quantum to Neutron
- Added support for configuration of Cinder NFS backend driver (#916301)
- Removed CONFIG_QUANTUM_USE_NAMESPACES option

Regards, Martin -- Martin Mágr Openstack Red Hat Czech

From rdo-info at redhat.com Tue Aug 13 18:12:25 2013 From: rdo-info at redhat.com (RDO Forum) Date: Tue, 13 Aug 2013 18:12:25 +0000 Subject: [Rdo-list] [RDO] [package announce] openstack-packstack updated Message-ID: <0000014078e0fd2b-4b4ffc6e-3415-4939-960f-d9804c077830-000000@email.amazonses.com> rbowen started a discussion.
[package announce] openstack-packstack updated --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/442/package-announce-openstack-packstack-updated Have a great day! From rdo-info at redhat.com Tue Aug 13 18:59:16 2013 From: rdo-info at redhat.com (RDO Forum) Date: Tue, 13 Aug 2013 18:59:16 +0000 Subject: [Rdo-list] [RDO] GRE tenant networks Message-ID: <00000140790be03d-70f5baf3-a3f5-401f-9548-9081652e49f7-000000@email.amazonses.com> rkukura started a discussion. GRE tenant networks --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/443/gre-tenant-networks Have a great day! From rkukura at redhat.com Tue Aug 13 19:01:32 2013 From: rkukura at redhat.com (Robert Kukura) Date: Tue, 13 Aug 2013 15:01:32 -0400 Subject: [Rdo-list] GRE tenant networks Message-ID: <520A828C.4030308@redhat.com> Support for GRE tenant networks is now available in RDO! See http://openstack.redhat.com/Using_GRE_Tenant_Networks for details. Feedback is welcome (on this thread or at http://openstack.redhat.com/forum/discussion/443/gre-tenant-networks). -Bob From pmyers at redhat.com Wed Aug 14 00:09:51 2013 From: pmyers at redhat.com (Perry Myers) Date: Tue, 13 Aug 2013 20:09:51 -0400 Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent) In-Reply-To: <51FE5DDC.4010104@redhat.com> References: <51FE5DDC.4010104@redhat.com> Message-ID: <520ACACF.5080109@redhat.com> On 08/04/2013 09:57 AM, Perry Myers wrote: > Hi, > > I followed the instructions at: > http://openstack.redhat.com/Neutron-Quickstart > http://openstack.redhat.com/Running_an_instance_with_Neutron Ok, I wanted to close the loop on this thread with some things I found. It appears that the issues I was seeing had to do specifically with the 114.openstack kernel in the RDO Grizzly repos. This kernel had issues with being able to change MAC addresses with an interface that is in the up state. A new kernel was uploaded today 114.openstack.gre.2 and rkukura sent an announcement out about it, as it enables gre tunnel support. A side effect of this new kernel is that it fixed the bugs with changing MAC addresses on interfaces that are up. So, with this new kernel, I'm now able to successfully get RDO Grizzly on RHEL 6.4 with the 114.openstack.gre.2 kernel running. I can boot a Cirros VM, get an ip address, get external connectivity from the VM and also ssh/ping it via the floating IP. So... all in all, success! Now, the next step is doing a multi host setup. This is where the new support for gre tunnels will be handy, since moving from 1 to many nodes requires either VLAN support or tunnels, and since I don't have an intelligent switch, gre tunnels it is. Cheers, Perry From pmyers at redhat.com Wed Aug 14 00:10:48 2013 From: pmyers at redhat.com (Perry Myers) Date: Tue, 13 Aug 2013 20:10:48 -0400 Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent) In-Reply-To: <520ACACF.5080109@redhat.com> References: <51FE5DDC.4010104@redhat.com> <520ACACF.5080109@redhat.com> Message-ID: <520ACB08.3010502@redhat.com> On 08/13/2013 08:09 PM, Perry Myers wrote: > On 08/04/2013 09:57 AM, Perry Myers wrote: >> Hi, >> >> I followed the instructions at: >> http://openstack.redhat.com/Neutron-Quickstart >> http://openstack.redhat.com/Running_an_instance_with_Neutron > > Ok, I wanted to close the loop on this thread with some things I found. 
> > It appears that the issues I was seeing had to do specifically with the > 114.openstack kernel in the RDO Grizzly repos. This kernel had issues > with being able to change MAC addresses with an interface that is in the > up state. > > A new kernel was uploaded today 114.openstack.gre.2 and rkukura sent an > announcement out about it, as it enables gre tunnel support. > > A side effect of this new kernel is that it fixed the bugs with changing > MAC addresses on interfaces that are up. > > So, with this new kernel, I'm now able to successfully get RDO Grizzly > on RHEL 6.4 with the 114.openstack.gre.2 kernel running. I can boot a > Cirros VM, get an ip address, get external connectivity from the VM and > also ssh/ping it via the floating IP. > > So... all in all, success! > > Now, the next step is doing a multi host setup. This is where the new > support for gre tunnels will be handy, since moving from 1 to many nodes > requires either VLAN support or tunnels, and since I don't have an > intelligent switch, gre tunnels it is. Oh, one thing to note... I did all of the above on hosts (VMs) where NetworkManager was removed. I should try on hosts where NM is enabled to see if I still have success... From pmyers at redhat.com Wed Aug 14 01:20:29 2013 From: pmyers at redhat.com (Perry Myers) Date: Tue, 13 Aug 2013 21:20:29 -0400 Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent) In-Reply-To: <520ACB08.3010502@redhat.com> References: <51FE5DDC.4010104@redhat.com> <520ACACF.5080109@redhat.com> <520ACB08.3010502@redhat.com> Message-ID: <520ADB5D.7000203@redhat.com> On 08/13/2013 08:10 PM, Perry Myers wrote: > On 08/13/2013 08:09 PM, Perry Myers wrote: >> On 08/04/2013 09:57 AM, Perry Myers wrote: >>> Hi, >>> >>> I followed the instructions at: >>> http://openstack.redhat.com/Neutron-Quickstart >>> http://openstack.redhat.com/Running_an_instance_with_Neutron >> >> Ok, I wanted to close the loop on this thread with some things I found. >> >> It appears that the issues I was seeing had to do specifically with the >> 114.openstack kernel in the RDO Grizzly repos. This kernel had issues >> with being able to change MAC addresses with an interface that is in the >> up state. >> >> A new kernel was uploaded today 114.openstack.gre.2 and rkukura sent an >> announcement out about it, as it enables gre tunnel support. >> >> A side effect of this new kernel is that it fixed the bugs with changing >> MAC addresses on interfaces that are up. >> >> So, with this new kernel, I'm now able to successfully get RDO Grizzly >> on RHEL 6.4 with the 114.openstack.gre.2 kernel running. I can boot a >> Cirros VM, get an ip address, get external connectivity from the VM and >> also ssh/ping it via the floating IP. >> >> So... all in all, success! >> >> Now, the next step is doing a multi host setup. This is where the new >> support for gre tunnels will be handy, since moving from 1 to many nodes >> requires either VLAN support or tunnels, and since I don't have an >> intelligent switch, gre tunnels it is. > > Oh, one thing to note... I did all of the above on hosts (VMs) where > NetworkManager was removed. I should try on hosts where NM is enabled > to see if I still have success... Ok, semi-encouraging results. Here's what I did. 
Test environment:
* RDO Grizzly
* 114.openstack.gre.2 kernel on RHEL 6.4.z
* single VM for an allinone install with eth0 as the NIC on the VM

Tests run were:
(a) booting cirros instance
(b) check for dhcp lease
(c) outbound connectivity from guest
(d) ping/ssh to FIP

* NM enabled and eth0 on the host with NM_CONTROLLED=yes: a+ b+ c+ d-
* Rerun of the base test above, to double check that I could repeat the failure: a+ b+ c+ d+

So... inconsistent results. I'd like to see what other folks find, and if anyone else has issues with NM being enabled. As of right now, I don't have enough data to say whether it is problematic or not. Perry

From pmyers at redhat.com Wed Aug 14 02:43:40 2013 From: pmyers at redhat.com (Perry Myers) Date: Tue, 13 Aug 2013 22:43:40 -0400 Subject: [Rdo-list] RDO Havana + Neutron allinone success :) Message-ID: <520AEEDC.1030208@redhat.com> Just tried:
* RDO Havana (not the nightlies; I used the stable repos)
* RHEL 6.4.z + 114.openstack.gre.2 kernel (in the RDO Havana repo)
* allinone install on a VM running on top of a Fedora host
* Neutron enabled!
And was able to successfully boot a cirros instance and run the usual tests (outbound connectivity, FIP access via ssh/ping). I ran this with the base image where my host network interfaces were managed by NetworkManager, and I didn't notice any ill effects. The _one_ caveat is that there were selinux denials, probably having to do with the quantum/neutron renaming. Terry Wilson found this earlier when he ran some tests, so I was able to run setenforce Permissive before running the packstack install to avoid those issues (Terry filed bugs 996773 and 996776 for these issues). Next up, to try out Bob's GRE instructions to see if I can get a multi-node setup running with tunneling. Cheers, Perry

From pmyers at redhat.com Wed Aug 14 02:46:54 2013 From: pmyers at redhat.com (Perry Myers) Date: Tue, 13 Aug 2013 22:46:54 -0400 Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4 In-Reply-To: <52024EB8.1000800@redhat.com> References: <51FE6B8A.3050501@redhat.com> <90CF740A-B441-4FF7-86E4-9CD9777BD35C@redhat.com> <520220A8.2060508@redhat.com> <52024EB8.1000800@redhat.com> Message-ID: <520AEF9E.9040705@redhat.com> On 08/07/2013 09:42 AM, Perry Myers wrote: > On 08/07/2013 06:25 AM, Pádraig Brady wrote: >> On 08/07/2013 09:40 AM, Alvaro Lopez Ortega wrote: >>> On Aug 4, 2013, at 4:56 PM, Perry Myers wrote: >>>> Then I thought to check the nightly repos here: http://repos.fedorapeople.org/repos/openstack/openstack-trunk/el6/x86_64/ Packstack isn't even in those repos, which means that folks can't use packstack to install nightly builds easily. >>> This is something the CI team discussed last Monday. SmokeStack ought to pack Packstack along with the rest of the components. Testing it may be out of scope, but at least it should generate the PackStack RPM. It'd avoid the problem you just described so people could test the very latest version of all the packages with ease. It's a work item for this week, so we should get it sorted out within the next few days. All the best, Alvaro >> What I had previously suggested was that the trunk packages were just updates on top of the existing pre-release standard repo. So to test the trunk repo you would currently enable for example the Havana milestone repo _and_ the trunk repo, and in that way get all the ancillary packages without the space and maintenance overhead of keeping two very similar repos in sync.
> +1, this is a good idea. > Do we need a release RPM, though, that makes it easy to get the combination of nightly + stable repos enabled simultaneously?

I chatted with apevec about this and we came to the conclusion that a release rpm isn't needed... what we do need to do is take this repo file: http://repos.fedorapeople.org/repos/openstack/openstack-trunk/el6-openstack-trunk.repo

Right now it just has:

# Place this file in your /etc/yum.repos.d/ directory
[el6-openstack-trunk]
name=Openstack Upstream Repository for EL6
baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-trunk/el6/x86_64
enabled=1
skip_if_unavailable=1
gpgcheck=0
priority=98

But we need to add to that repo file an additional repo:

[openstack-havana]
name=OpenStack Havana Repository for EPEL 6
baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-havana/epel-6
enabled=1
skip_if_unavailable=0
gpgcheck=0
priority=98

That way when you grab the nightly repo file, you would get packstack from the openstack-havana repo and other base packages like kernel, etc. But then the nightly openstack-specific RPMs would come from the nightly repo. Alan, can you make this change tomorrow? Perry

From apevec at redhat.com Wed Aug 14 09:31:08 2013 From: apevec at redhat.com (Alan Pevec) Date: Wed, 14 Aug 2013 05:31:08 -0400 (EDT) Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4 In-Reply-To: <520AEF9E.9040705@redhat.com> References: <51FE6B8A.3050501@redhat.com> <90CF740A-B441-4FF7-86E4-9CD9777BD35C@redhat.com> <520220A8.2060508@redhat.com> <52024EB8.1000800@redhat.com> <520AEF9E.9040705@redhat.com> Message-ID: <911742753.2143970.1376472668770.JavaMail.root@redhat.com> > Alan, can you make this change tomorrow? I did that already yesterday, but it looks like FPO has aggressive caching - you might need shift-ctrl-R to get a fresh copy. Cheers, Alan

From pmyers at redhat.com Wed Aug 14 12:03:10 2013 From: pmyers at redhat.com (Perry Myers) Date: Wed, 14 Aug 2013 08:03:10 -0400 Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4 In-Reply-To: <911742753.2143970.1376472668770.JavaMail.root@redhat.com> References: <51FE6B8A.3050501@redhat.com> <90CF740A-B441-4FF7-86E4-9CD9777BD35C@redhat.com> <520220A8.2060508@redhat.com> <52024EB8.1000800@redhat.com> <520AEF9E.9040705@redhat.com> <911742753.2143970.1376472668770.JavaMail.root@redhat.com> Message-ID: <520B71FE.3040800@redhat.com> On 08/14/2013 05:31 AM, Alan Pevec wrote: >> Alan, can you make this change tomorrow? > I did that already yesterday, but it looks like FPO has aggressive caching - you might need shift-ctrl-R to get a fresh copy. Indeed. Thanks, it looks right now! Though, should openstack-havana have priority=98?
Perry

From apevec at redhat.com Wed Aug 14 12:49:16 2013 From: apevec at redhat.com (Alan Pevec) Date: Wed, 14 Aug 2013 08:49:16 -0400 (EDT) Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4 In-Reply-To: <520B71FE.3040800@redhat.com> References: <51FE6B8A.3050501@redhat.com> <90CF740A-B441-4FF7-86E4-9CD9777BD35C@redhat.com> <520220A8.2060508@redhat.com> <52024EB8.1000800@redhat.com> <520AEF9E.9040705@redhat.com> <911742753.2143970.1376472668770.JavaMail.root@redhat.com> <520B71FE.3040800@redhat.com> Message-ID: <1703327461.2243337.1376484556800.JavaMail.root@redhat.com> > Though, should openstack-havana have priority=98? No, we should get rid of yum-prio in RDO; it should win with higher NVRs instead. The only reason to keep it could be the python-django 1.4 issue we had, but I think the horizon spec was fixed in the meantime, Matthias? Cheers, Alan

From mrunge at redhat.com Wed Aug 14 17:02:50 2013 From: mrunge at redhat.com (Matthias Runge) Date: Wed, 14 Aug 2013 19:02:50 +0200 Subject: [Rdo-list] experiences trying out RDO Havana (H2 milestone) on RHEL 6.4 In-Reply-To: <1703327461.2243337.1376484556800.JavaMail.root@redhat.com> References: <51FE6B8A.3050501@redhat.com> <90CF740A-B441-4FF7-86E4-9CD9777BD35C@redhat.com> <520220A8.2060508@redhat.com> <52024EB8.1000800@redhat.com> <520AEF9E.9040705@redhat.com> <911742753.2143970.1376472668770.JavaMail.root@redhat.com> <520B71FE.3040800@redhat.com> <1703327461.2243337.1376484556800.JavaMail.root@redhat.com> Message-ID: <520BB83A.8060706@redhat.com> On 14/08/13 14:49, Alan Pevec wrote: > No, we should get rid of yum-prio in RDO; it should win with higher NVRs instead. The only reason to keep it could be the python-django 1.4 issue we had, but I think the horizon spec was fixed in the meantime, Matthias? Better: I deprecated and retired Django, so there's now only one left (Django14). Matthias

From rdo-info at redhat.com Wed Aug 14 17:07:52 2013 From: rdo-info at redhat.com (RDO Forum) Date: Wed, 14 Aug 2013 17:07:52 +0000 Subject: [Rdo-list] [RDO] Live Migration Problem Message-ID: <000001407dcc4182-86737ff8-d0d3-44dd-9962-ce3dac729ebe-000000@email.amazonses.com> lrr started a discussion. Live Migration Problem --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/445/live-migration-problem Have a great day!

From rdo-info at redhat.com Wed Aug 14 19:23:47 2013 From: rdo-info at redhat.com (RDO Forum) Date: Wed, 14 Aug 2013 19:23:47 +0000 Subject: [Rdo-list] [RDO] Fedora 19 Install Issue HTTPD + Nagios.conf Message-ID: <000001407e48ae2b-600be9ce-81a7-4283-ae85-c7b5fa7ca4ed-000000@email.amazonses.com> rbrady started a discussion. Fedora 19 Install Issue HTTPD + Nagios.conf --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/446/fedora-19-install-issue-httpd-nagios-conf Have a great day!
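For readers following along at home, picking up the combined repo file discussed in the thread above amounts to the following (a sketch, run as root; the yum clean step is a precaution against the aggressive fedorapeople.org caching Alan mentions):

    # fetch the repo file, which now carries both the trunk and havana stanzas
    curl -o /etc/yum.repos.d/el6-openstack-trunk.repo \
        http://repos.fedorapeople.org/repos/openstack/openstack-trunk/el6-openstack-trunk.repo
    # discard any stale cached metadata before installing
    yum clean expire-cache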
From rdo-info at redhat.com Wed Aug 14 19:40:10 2013 From: rdo-info at redhat.com (RDO Forum) Date: Wed, 14 Aug 2013 19:40:10 +0000 Subject: [Rdo-list] [RDO] F19 + Havana packstack error Message-ID: <000001407e57afc0-a529cbb2-f527-4b3b-8a71-9440de40a70b-000000@email.amazonses.com> vch started a discussion. F19 + Havana packstack error --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/448/f19-havana-packstack-error Have a great day!

From rdo-info at redhat.com Thu Aug 15 02:43:22 2013 From: rdo-info at redhat.com (RDO Forum) Date: Thu, 15 Aug 2013 02:43:22 +0000 Subject: [Rdo-list] [RDO] TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection timed out, timeout 30 Message-ID: <000001407fdb2121-3147f59a-5ea3-4062-95a3-ce63facc556f-000000@email.amazonses.com> LiChen started a discussion. TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection timed out, timeout 30 --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/449/timeouterror-queuepool-limit-of-size-5-overflow-10-reached-connection-timed-out-timeout-30 Have a great day!

From rdo-info at redhat.com Thu Aug 15 04:04:38 2013 From: rdo-info at redhat.com (RDO Forum) Date: Thu, 15 Aug 2013 04:04:38 +0000 Subject: [Rdo-list] [RDO] no leases left Message-ID: <0000014080258944-e469c078-3209-4798-84f6-24b5a7990e4f-000000@email.amazonses.com> LiChen started a discussion. no leases left --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/451/no-leases-left Have a great day!

From rdo-info at redhat.com Thu Aug 15 06:15:46 2013 From: rdo-info at redhat.com (RDO Forum) Date: Thu, 15 Aug 2013 06:15:46 +0000 Subject: [Rdo-list] [RDO] Error when attach volume Message-ID: <00000140809d97ef-8dc40840-9b3c-4119-b76b-107bb2784724-000000@email.amazonses.com> LiChen started a discussion. Error when attach volume --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/452/error-when-attach-volume Have a great day!

From rdo-info at redhat.com Thu Aug 15 11:50:31 2013 From: rdo-info at redhat.com (RDO Forum) Date: Thu, 15 Aug 2013 11:50:31 +0000 Subject: [Rdo-list] [RDO] Instance Fails To Run Fedora 19 Message-ID: <0000014081d010cd-d1b184f4-b6a1-423a-a769-9086b4aac99a-000000@email.amazonses.com> rbrady started a discussion. Instance Fails To Run Fedora 19 --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/453/instance-fails-to-run-fedora-19 Have a great day!
From pmyers at redhat.com Thu Aug 15 15:00:17 2013 From: pmyers at redhat.com (Perry Myers) Date: Thu, 15 Aug 2013 11:00:17 -0400 Subject: [Rdo-list] iproute yum update issues in RDO Grizzly Message-ID: <520CED01.6010805@redhat.com> Just noticed this morning (thanks to jmartin for pointing it out). We had two versions of iproute in: http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/

iproute-2.6.32-23.el6ost.netns.2.x86_64.rpm
iproute-2.6.32-23.el_6.netns.1.x86_64.rpm

The .2 package is newer, but unfortunately since el_6 is > el6ost in yum dependency resolution, folks were getting error messages like: "package iproute-2.6.32-23.el6_4.netns.1.x86_64 (which is newer than iproute-2.6.32-23.el6ost.netns.2.x86_64) is already installed" We have since removed the el_6 package from the repo, so going forward there should be no problems. However, if you have an existing system and need to update to the .2 iproute, you'll need to force remove the old one and install the new via something like:

$ sudo rpm -e --nodeps iproute; yum install -y iproute

Sorry about the mixup. Cheers! Perry

From rdo-info at redhat.com Thu Aug 15 22:04:48 2013 From: rdo-info at redhat.com (RDO Forum) Date: Thu, 15 Aug 2013 22:04:48 +0000 Subject: [Rdo-list] [RDO] F19/Havana (single instance) nova errors Message-ID: <0000014084027717-10503524-f3a5-4d98-ab85-b4d5ba0ec277-000000@email.amazonses.com> vch started a discussion. F19/Havana (single instance) nova errors --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/454/f19havana-single-instance-nova-errors Have a great day!

From rdo-info at redhat.com Fri Aug 16 00:25:16 2013 From: rdo-info at redhat.com (RDO Forum) Date: Fri, 16 Aug 2013 00:25:16 +0000 Subject: [Rdo-list] [RDO] embedded vnc console only works in full screen mode Message-ID: <0000014084830fcb-18cc6911-7cd4-4559-a7cd-0abaed5403cc-000000@email.amazonses.com> marafa started a discussion. embedded vnc console only works in full screen mode --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/455/embedded-vnc-console-only-works-in-full-screen-mode Have a great day!

From rdo-info at redhat.com Fri Aug 16 09:51:44 2013 From: rdo-info at redhat.com (RDO Forum) Date: Fri, 16 Aug 2013 09:51:44 +0000 Subject: [Rdo-list] [RDO] Class diagram for Keystone and Quantum? Message-ID: <000001408689ac26-90b04925-e197-4165-bdd1-4ae35aa4d1c2-000000@email.amazonses.com> mloobo started a discussion. Class diagram for Keystone and Quantum? --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/457/class-diagram-for-keystone-and-quantum Have a great day!
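The surprising ordering Perry describes falls out of rpm's release-tag comparison: tags are compared segment by segment, and a numeric segment always sorts above an alphabetic one, so "4" beats "ost" and the el6_4 build wins despite carrying the older patch set. You can confirm the ordering with rpmdev-vercmp from rpmdevtools (a sketch; the two-argument version-release form is shown):

    # compares 23.el6_4.netns.1 against 23.el6ost.netns.2 segment by segment;
    # at the fourth segment the numeric "4" outranks the alphabetic "ost"
    rpmdev-vercmp 2.6.32-23.el6_4.netns.1 2.6.32-23.el6ost.netns.2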
From lhh at redhat.com Fri Aug 16 15:08:46 2013 From: lhh at redhat.com (Lon Hohberger) Date: Fri, 16 Aug 2013 11:08:46 -0400 Subject: [Rdo-list] iproute yum update issues in RDO Grizzly In-Reply-To: <520CED01.6010805@redhat.com> References: <520CED01.6010805@redhat.com> Message-ID: <520E407E.70709@redhat.com> On 08/15/2013 11:00 AM, Perry Myers wrote: > We had two versions of iproute in: > http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/ > iproute-2.6.32-23.el6ost.netns.2.x86_64.rpm > iproute-2.6.32-23.el_6.netns.1.x86_64.rpm [...] > However, if you have an existing system and need to update to the .2 > iproute, you'll need to force remove the old one and install the new Just to add a detail to what Perry pointed out here - the two packages are code-equivalent; the .1 package was a test build; the .2 package is the same code with some patches renamed, nothing more. For existing installations, there is no pressing need to update to the .2 package from the .1 package. -- Lon

From rdo-info at redhat.com Fri Aug 16 15:20:21 2013 From: rdo-info at redhat.com (RDO Forum) Date: Fri, 16 Aug 2013 15:20:21 +0000 Subject: [Rdo-list] [RDO] Google Hangout in September? Message-ID: <0000014087b689bb-c41713c6-df43-4275-99b2-fd5b1dc84815-000000@email.amazonses.com> rbowen started a discussion. Google Hangout in September? --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/458/google-hangout-in-september Have a great day!
From rdo-info at redhat.com Sat Aug 17 12:14:36 2013 From: rdo-info at redhat.com (RDO Forum) Date: Sat, 17 Aug 2013 12:14:36 +0000 Subject: [Rdo-list] [RDO] VM network bandwidth is terrible on RHEL Message-ID: <000001408c32d6d4-50e15e4a-b1fb-4f0f-954d-2512cc36c4d8-000000@email.amazonses.com> chenww started a discussion. VM network bandwidth is terrible on RHEL --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/463/vm-network-bandwidth-is-terrible-on-rhel Have a great day!

From rdo-info at redhat.com Sat Aug 17 15:35:45 2013 From: rdo-info at redhat.com (RDO Forum) Date: Sat, 17 Aug 2013 15:35:45 +0000 Subject: [Rdo-list] [RDO] FreeBSD Instance Can't See Volume Attached To It Message-ID: <000001408ceafee0-285e5dfb-ec7d-4518-8f49-b9cfe45ffec3-000000@email.amazonses.com> kj4ohh started a discussion. FreeBSD Instance Can't See Volume Attached To It --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/465/freebsd-instance-cant-see-volume-attached-to-it Have a great day!

From rdo-info at redhat.com Mon Aug 19 07:20:09 2013 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 19 Aug 2013 07:20:09 +0000 Subject: [Rdo-list] [RDO] Nova-compute shows down Message-ID: <000001409571f82e-db3a97b6-1e0e-4f6a-9e27-7cd788157c10-000000@email.amazonses.com> anandts started a discussion. Nova-compute shows down --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/468/nova-compute-shows-down Have a great day!
From rdo-info at redhat.com Mon Aug 19 12:43:09 2013 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 19 Aug 2013 12:43:09 +0000 Subject: [Rdo-list] [RDO] Users without domain Message-ID: <000001409699b1ae-28d8c250-74a7-4c25-aed5-fcad0899324a-000000@email.amazonses.com> mloobo started a discussion. Users without domain --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/469/users-without-domain Have a great day!

From rdo-info at redhat.com Mon Aug 19 13:44:52 2013 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 19 Aug 2013 13:44:52 +0000 Subject: [Rdo-list] [RDO] linuxbridge dropping packets... Message-ID: <0000014096d2317d-3e9f7282-3e2e-4529-a0d2-711ccc5e9e5c-000000@email.amazonses.com> Prashanth started a discussion. linuxbridge dropping packets... --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/470/linuxbridge-dropping-packets- Have a great day!

From red at fedoraproject.org Mon Aug 19 15:53:54 2013 From: red at fedoraproject.org (Sandro "red" Mathys) Date: Mon, 19 Aug 2013 17:53:54 +0200 Subject: [Rdo-list] Howto: Install OpenStack Havana-2 from RDO on Fedora 19 and avoid the pitfalls Message-ID: Most people here have probably already seen this (sorry for the spam!) but for those who don't watch Planet Fedora or Planet OpenStack (you should!), I thought I'd share my blog post on how to successfully install OpenStack Havana Milestone 2 from RDO on Fedora 19. http://www.blog.sandro-mathys.ch/2013/08/install-rdo-havana-2-on-fedora-19-and.html -- Sandro

From rdo-info at redhat.com Mon Aug 19 17:26:04 2013 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 19 Aug 2013 17:26:04 +0000 Subject: [Rdo-list] [RDO] Heat template files moved? Message-ID: <00000140979cb429-7709c96d-a811-441c-a850-c376b98117d8-000000@email.amazonses.com> rbowen started a discussion. Heat template files moved? --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/472/heat-template-files-moved Have a great day!

From rdo-info at redhat.com Mon Aug 19 18:35:03 2013 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 19 Aug 2013 18:35:03 +0000 Subject: [Rdo-list] [RDO] Expanding Disk Space Message-ID: <0000014097dbdaac-e027e91c-9a7a-4f7b-b793-9a40317e194c-000000@email.amazonses.com> danield started a discussion. Expanding Disk Space --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/474/expanding-disk-space Have a great day!
From rdo-info at redhat.com Mon Aug 19 20:00:23 2013 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 19 Aug 2013 20:00:23 +0000 Subject: [Rdo-list] [RDO] Document translation: Request for help Message-ID: <000001409829fedc-db7447db-1bfd-4c6d-8d1e-1ad03d99a56a-000000@email.amazonses.com> rbowen started a discussion. Document translation: Request for help --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/475/document-translation-request-for-help Have a great day!

From rdo-info at redhat.com Mon Aug 19 20:19:27 2013 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 19 Aug 2013 20:19:27 +0000 Subject: [Rdo-list] [RDO] Cinder Quota Update Question Message-ID: <00000140983b73c5-5c524242-d7c3-43d9-bc1f-4e8d0040842f-000000@email.amazonses.com> kj4ohh started a discussion. Cinder Quota Update Question --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/476/cinder-quota-update-question Have a great day!

From mmagr at redhat.com Tue Aug 20 10:01:49 2013 From: mmagr at redhat.com (Martin Magr) Date: Tue, 20 Aug 2013 12:01:49 +0200 Subject: [Rdo-list] [package announce] openstack-packstack Message-ID: <52133E8D.2050802@redhat.com> Greetings, the Packstack package has been updated in the RDO Grizzly EPEL6 repo to openstack-packstack-2013.1.1-0.27.dev672.el6. Changelog:
* Mon Aug 19 2013 Martin Mágr - 2013.1.1-0.27.dev672
- Added net.bridge.bridge-nf-call*=1 for --allinone installation (#997941)
- Added global option --exclude-servers=EXCLUDE_SERVERS (#996782)
Regards, Martin

From rdo-info at redhat.com Tue Aug 20 16:33:43 2013 From: rdo-info at redhat.com (RDO Forum) Date: Tue, 20 Aug 2013 16:33:43 +0000 Subject: [Rdo-list] [RDO] Packstack updated in EPEL6 Message-ID: <000001409c932351-93090292-9f46-446e-8e7d-8e0317269dcf-000000@email.amazonses.com> rbowen started a discussion. Packstack updated in EPEL6 --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/477/packstack-updated-in-epel6 Have a great day!

From rdo-info at redhat.com Tue Aug 20 16:36:33 2013 From: rdo-info at redhat.com (RDO Forum) Date: Tue, 20 Aug 2013 16:36:33 +0000 Subject: [Rdo-list] [RDO] Fedora 19 / Havana Swift "Error: Unable to list containers" [SOLVED] Message-ID: <000001409c95ba2e-bdac8158-fb39-42e7-8009-c2ad39e9f6a5-000000@email.amazonses.com> rbrady started a discussion. Fedora 19 / Havana Swift "Error: Unable to list containers" [SOLVED] --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/478/fedora-19-havana-swift-error-unable-to-list-containers-solved Have a great day!

From rbowen at redhat.com Wed Aug 21 14:32:35 2013 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 21 Aug 2013 10:32:35 -0400 Subject: [Rdo-list] RDO forum moderators Message-ID: <5214CF83.1050200@redhat.com> If you're regularly active on the RDO forum, and are willing to click an extra button now and then, we could use your help in spam prevention. If you're willing to be a moderator, please let me know, and I'll put you in the group. The more the merrier.
Thanks. -- Rich Bowen OpenStack Community Liaison http://openstack.redhat.com/

From rdo-info at redhat.com Wed Aug 21 23:04:29 2013 From: rdo-info at redhat.com (RDO Forum) Date: Wed, 21 Aug 2013 23:04:29 +0000 Subject: [Rdo-list] [RDO] Need assistance with network configuration Message-ID: <00000140a31f4074-73302ba8-8d82-4541-9161-e8ba2bbc634d-000000@email.amazonses.com> ryacketta started a discussion. Need assistance with network configuration --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/480/need-assistance-with-network-configuration Have a great day!
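Returning to the Packstack update Martin announced above: a hedged sketch of how the two new behaviors would be exercised (the host addresses are hypothetical, and the comma-separated list format for --exclude-servers is an assumption based on the changelog entry):

    # re-run against an existing answer file, leaving already-deployed
    # hosts alone (addresses are hypothetical)
    packstack --answer-file=packstack-answers.txt --exclude-servers=192.0.2.10,192.0.2.11
    # an --allinone run now also sets the bridge sysctls; read them back to verify
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables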
From rdo-info at redhat.com Thu Aug 22 18:45:06 2013 From: rdo-info at redhat.com (RDO Forum) Date: Thu, 22 Aug 2013 18:45:06 +0000 Subject: [Rdo-list] [RDO] OpenStack Summit, Hong Kong Message-ID: <00000140a7582509-4d2343e2-ad37-49cc-8c60-a6618d776fd5-000000@email.amazonses.com> rbowen started a discussion. OpenStack Summit, Hong Kong --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/487/openstack-summit-hong-kong Have a great day!

From rdo-info at redhat.com Fri Aug 23 08:34:22 2013 From: rdo-info at redhat.com (RDO Forum) Date: Fri, 23 Aug 2013 08:34:22 +0000 Subject: [Rdo-list] [RDO] GlusterFS backed Cinder Message-ID: <00000140aa4f5d38-287a17a0-dd47-4e67-969e-7ee05953fe5a-000000@email.amazonses.com> moimael started a discussion. GlusterFS backed Cinder --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/490/glusterfs-backed-cinder Have a great day!

From rbowen at redhat.com Fri Aug 23 13:28:53 2013 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 23 Aug 2013 09:28:53 -0400 Subject: [Rdo-list] Spam in the forum Message-ID: <52176395.1060905@redhat.com> By the way, yes, we're pursuing the issue of spam in the forum. I'm trying to get a few additional plugins working to stem the flood. And we greatly appreciate the folks that have stepped up as moderators to nuke the garbage as it appears. --Rich -- Rich Bowen OpenStack Community Liaison http://openstack.redhat.com/

From rdo-info at redhat.com Fri Aug 23 21:44:15 2013 From: rdo-info at redhat.com (RDO Forum) Date: Fri, 23 Aug 2013 21:44:15 +0000 Subject: [Rdo-list] [RDO] Multi Node Setup, second compute lost net after vm launch Message-ID: <00000140ad228547-85b9d6e2-a8da-46ad-b2c2-a14cf5d9c8cf-000000@email.amazonses.com> ryacketta started a discussion. Multi Node Setup, second compute lost net after vm launch --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/491/multi-node-setup-second-compute-lost-net-after-vm-launch Have a great day!

From rdo-info at redhat.com Sat Aug 24 10:52:50 2013 From: rdo-info at redhat.com (RDO Forum) Date: Sat, 24 Aug 2013 10:52:50 +0000 Subject: [Rdo-list] [RDO] openstack logo in dashboard - lets make a change for packstack Message-ID: <00000140aff47dec-efdc94f6-814b-4300-a9e2-dabb8fe6e70c-000000@email.amazonses.com> marafa started a discussion. openstack logo in dashboard - lets make a change for packstack --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/492/openstack-logo-in-dashboard-lets-make-a-change-for-packstack Have a great day!
From rdo-info at redhat.com Sat Aug 24 15:30:13 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Sat, 24 Aug 2013 15:30:13 +0000
Subject: [Rdo-list] [RDO] nova image-list ERROR: Unauthorized (HTTP 401)
Message-ID: <00000140b0f27123-7080940b-e39d-4e2a-8509-57671ad1da99-000000@email.amazonses.com>

wuyohee started a discussion.

nova image-list ERROR: Unauthorized (HTTP 401)

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/494/nova-image-list-error-unauthorized-http-401

Have a great day!

From rdo-info at redhat.com Sun Aug 25 00:12:21 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Sun, 25 Aug 2013 00:12:21 +0000
Subject: [Rdo-list] [RDO] ram overallocation
Message-ID: <00000140b2d0770a-7aea732b-01a4-429e-ac68-ff952ab92920-000000@email.amazonses.com>

marafa started a discussion.

ram overallocation

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/496/ram-overallocation

Have a great day!

From rdo-info at redhat.com Sun Aug 25 13:09:50 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Sun, 25 Aug 2013 13:09:50 +0000
Subject: [Rdo-list] [RDO] Can't connect to nova-metada api on fresh debian image
Message-ID: <00000140b59846a6-fd49b46b-a37a-41bf-b0f2-b3d129b7a7f8-000000@email.amazonses.com>

holms started a discussion.

Can't connect to nova-metada api on fresh debian image

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/497/cant-connect-to-nova-metada-api-on-fresh-debian-image

Have a great day!

From rdo-info at redhat.com Sun Aug 25 13:12:08 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Sun, 25 Aug 2013 13:12:08 +0000
Subject: [Rdo-list] [RDO] What's it the root password of provided centos openstack image?
Message-ID: <00000140b59a6181-1dee3045-b179-4b7b-a0e9-2d0e206a03d7-000000@email.amazonses.com>

holms started a discussion.

What's it the root password of provided centos openstack image?

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/498/whats-it-the-root-password-of-provided-centos-openstack-image

Have a great day!

From rdo-info at redhat.com Sun Aug 25 14:33:49 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Sun, 25 Aug 2013 14:33:49 +0000
Subject: [Rdo-list] [RDO] How to move all stored images and instances to another drive?
Message-ID: <00000140b5e52b4f-509201d7-705d-47a4-ad2e-e4b12282e735-000000@email.amazonses.com>

holms started a discussion.

How to move all stored images and instances to another drive?
---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/499/how-to-move-all-stored-images-and-instances-to-another-drive

Have a great day!

From rdo-info at redhat.com Sun Aug 25 14:49:12 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Sun, 25 Aug 2013 14:49:12 +0000
Subject: [Rdo-list] [RDO] RDO install issues on F19 (eth1 & django14)
Message-ID: <00000140b5f34115-0695b308-8a40-488c-a004-243ba81ebd69-000000@email.amazonses.com>

mattf started a discussion.

RDO install issues on F19 (eth1 & django14)

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/500/rdo-install-issues-on-f19-eth1-django14

Have a great day!

From rdo-info at redhat.com Mon Aug 26 12:51:58 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Mon, 26 Aug 2013 12:51:58 +0000
Subject: [Rdo-list] [RDO] RDO run issues on F19 (auth errors after ~12 hours)
Message-ID: <00000140baae4766-43928cfe-a74b-4d8c-afce-4c9f3f64260f-000000@email.amazonses.com>

mattf started a discussion.

RDO run issues on F19 (auth errors after ~12 hours)

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/501/rdo-run-issues-on-f19-auth-errors-after-12-hours

Have a great day!

From rdo-info at redhat.com Mon Aug 26 14:13:20 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Mon, 26 Aug 2013 14:13:20 +0000
Subject: [Rdo-list] [RDO] Compute node HA with Heat and GlusterFS
Message-ID: <00000140baf8c4f5-e5b8a2c3-a68d-46eb-a1a6-0367ab2fda56-000000@email.amazonses.com>

moimael started a discussion.

Compute node HA with Heat and GlusterFS

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/502/compute-node-ha-with-heat-and-glusterfs

Have a great day!
From pmyers at redhat.com Tue Aug 27 11:58:46 2013
From: pmyers at redhat.com (Perry Myers)
Date: Tue, 27 Aug 2013 07:58:46 -0400
Subject: [Rdo-list] [rhos-list] Remote Cinder access
Message-ID: <521C9476.1050307@redhat.com>

On 08/27/2013 07:29 AM, Lutz Christoph wrote:
> Hi!
>
> I'm in the last tests for a three node RDO setup, and I found that with

Since this is RDO related, I'm going to move this over to the community
oriented list :)

> the current default setup, qemu-kvm can't access a volume:
>
> qemu-kvm: -drive
> file=/dev/disk/by-path/ip-192.168.104.61:3260-iscsi-iqn.2010-10.org.openstack:volume-229b80d0-ad10-4a3b-b022-d632de368001-lun-1,if=none,id=drive-virtio-disk0,format=raw,serial=229b80d0-ad10-4a3b-b022-d632de368001,cache=none:
> could not open disk image
> /dev/disk/by-path/ip-192.168.104.61:3260-iscsi-iqn.2010-10.org.openstack:volume-229b80d0-ad10-4a3b-b022-d632de368001-lun-1:
> Permission denied

SELinux issue perhaps? Whenever I see a permission denied, that's always
the first thing I check.

Try:

# getenforce

and

# audit2why -a

If it's not that, then maybe Eric (cc'd) from the Cinder team can help.

> The device looks just like any other disk device:
>
> lrwxrwxrwx. 1 root root 9 Aug 27 10:40
> /dev/disk/by-path/ip-192.168.104.61:3260-iscsi-iqn.2010-10.org.openstack:volume-229b80d0-ad10-4a3b-b022-d632de368001-lun-1
> -> ../../sdj
> brw-rw----. 1 root disk 8, 144 Aug 27 10:40 /dev/sdj
>
> qemu is running under the "nova" user (it is running as "qemu" on an
> all-in-one server). When I added the "disk" group to the "nova" user,
> the problem went away.

Hm, this seems to indicate that it might not be an SELinux issue, but
still run the above commands just to be sure. Never hurts to check that :)

> Doing the same on the all-in-one machine did not have this problem, but
> then access is directly to the LV, not via iSCSI, and the user is
> different, though it does not have the "disk" group attached.
>
> Now, I'm wondering if adding the "disk" group is the right thing to do,
> considering that the all-in-one does not need this, or whether there is
> a more elegant solution.
>
> Best regards / Mit freundlichen Grüßen
> Lutz Christoph
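For readers hitting the same "Permission denied", here is a minimal
sketch of the checks suggested above plus the group-membership change
from the thread, run as root on the compute node. The device path is the
one from Lutz's report; the usermod line is the workaround that worked
for him, not necessarily the recommended long-term fix:

  # getenforce                        # "Enforcing" means SELinux could be in play
  # audit2why -a                      # explains any recorded AVC denials
  # ls -l /dev/sdj                    # iSCSI-attached volume: root:disk, mode 0660
  # id nova                           # is the nova user in the "disk" group?
  # usermod -a -G disk nova           # Lutz's workaround
  # service openstack-nova-compute restart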
From lchristoph at arago.de Tue Aug 27 12:25:30 2013
From: lchristoph at arago.de (Lutz Christoph)
Date: Tue, 27 Aug 2013 12:25:30 +0000
Subject: [Rdo-list] [rhos-list] Remote Cinder access
In-Reply-To: <521C9476.1050307@redhat.com>
References: <521C9476.1050307@redhat.com>
Message-ID: <2ffaf6f9cd9f40c08c869bb5713c279d@AMSPR07MB145.eurprd07.prod.outlook.com>

Hello!

I wasn't aware of the rdo list until now. I just subscribed.

No, it isn't SELinux. I had already fixed all the SELinux problems the
packstack setup had. There is no "denied" when the permission problem
occurs. It's a good old Linux permission problem, just as you surmised.

So I hope Eric Harney will have something. Thanks so far!

Best regards / Mit freundlichen Grüßen
Lutz Christoph

-- 
Lutz Christoph
arago Institut für komplexes Datenmanagement AG
Eschersheimer Landstraße 526 - 532
60433 Frankfurt am Main

eMail: lchristoph at arago.de - www: http://www.arago.de
Tel: 0172/6301004 Mobil: 0172/6301004

--
Bank details: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343
Management board: Hans-Christian Boos, Martin Friedrich
Chairman of the supervisory board: Dr. Bernhard Walther
Registered office: Kronberg im Taunus - HRB 5731 - Register court: Königstein i.Ts
VAT ID DE 178572359 - Tax number 2603 003 228 43435

From rdo-info at redhat.com Tue Aug 27 14:17:56 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Tue, 27 Aug 2013 14:17:56 +0000
Subject: [Rdo-list] [RDO] Quickstart problems on CentOS 6.4
Message-ID: <00000140c02358ed-b248bbc6-b0b6-4421-8409-a4fbd02d4dd0-000000@email.amazonses.com>

fale started a discussion.

Quickstart problems on CentOS 6.4

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/507/quickstart-problems-on-centos-6-4

Have a great day!

From rdo-info at redhat.com Wed Aug 28 04:43:58 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Wed, 28 Aug 2013 04:43:58 +0000
Subject: [Rdo-list] [RDO] RDO quantum CentOs6.4
Message-ID: <00000140c33c351f-0e7034ce-cb40-4d60-bb69-a613f5669b38-000000@email.amazonses.com>

yungho started a discussion.
RDO quantum CentOs6.4

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/508/rdo-quantum-centos6-4

Have a great day!

From rdo-info at redhat.com Wed Aug 28 12:35:40 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Wed, 28 Aug 2013 12:35:40 +0000
Subject: [Rdo-list] [RDO] Keystone user-list work but role-list dont
Message-ID: <00000140c4ec10d4-21a1cccc-e997-497f-92f8-facd68b632b9-000000@email.amazonses.com>

oslampa started a discussion.

Keystone user-list work but role-list dont

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/510/keystone-user-list-work-but-role-list-dont

Have a great day!

From rdo-info at redhat.com Thu Aug 29 15:14:56 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Thu, 29 Aug 2013 15:14:56 +0000
Subject: [Rdo-list] [RDO] About Virtual Machine can not get an IP address
Message-ID: <00000140caa43e30-8e809450-93f9-4c74-b7ef-c3a8169ccfdb-000000@email.amazonses.com>

yungho started a discussion.

About Virtual Machine can not get an IP address

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/511/about-virtual-machine-can-not-get-an-ip-address

Have a great day!

From pbrady at redhat.com Thu Aug 29 18:51:56 2013
From: pbrady at redhat.com (Pádraig Brady)
Date: Thu, 29 Aug 2013 19:51:56 +0100
Subject: [Rdo-list] [package announce] Stable Grizzly 2013.1.3 update
Message-ID: <521F984C.2060708@redhat.com>

The RDO Grizzly repositories were updated with the latest stable
2013.1.3 update. Details of the changes can be drilled down to from:

https://launchpad.net/nova/grizzly/2013.1.3
https://launchpad.net/glance/grizzly/2013.1.3
https://launchpad.net/horizon/grizzly/2013.1.3
https://launchpad.net/keystone/grizzly/2013.1.3
https://launchpad.net/cinder/grizzly/2013.1.3
https://launchpad.net/quantum/grizzly/2013.1.3
https://launchpad.net/ceilometer/grizzly/2013.1.3
https://launchpad.net/heat/grizzly/2013.1.3

thanks,
Pádraig.

From pbrady at redhat.com Thu Aug 29 19:15:03 2013
From: pbrady at redhat.com (Pádraig Brady)
Date: Thu, 29 Aug 2013 20:15:03 +0100
Subject: [Rdo-list] [package announce] 2013.1.3-2 nova security updates
Message-ID: <521F9DB7.7060406@redhat.com>

In addition to the security fixes recently included in the 2013.1.3
upstream stable release, as detailed at
https://launchpad.net/nova/grizzly/2013.1.3, there were a couple more
related fixes which are also included in the RDO repos:

https://access.redhat.com/security/cve/CVE-2013-4261
openstack-nova-compute console-log DoS

https://access.redhat.com/security/cve/CVE-2013-4278
Enforce flavor access during instance boot

thanks,
Pádraig.
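As a hedged illustration of picking up these fixes on an RDO Grizzly
EPEL6 node, something like the following should work; the package glob
is an assumption, and the exact versions pulled in depend on when the
repositories are refreshed:

  # yum clean expire-cache
  # yum update "openstack-*"
  # rpm -q openstack-nova-compute    # should now report a 2013.1.3-based build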
From pbrady at redhat.com Thu Aug 29 19:28:18 2013
From: pbrady at redhat.com (Pádraig Brady)
Date: Thu, 29 Aug 2013 20:28:18 +0100
Subject: [Rdo-list] [package announce] intention to retire the RDO Fedora Folsom repo
Message-ID: <521FA0D2.3070800@redhat.com>

To ease the transition between various versions of Fedora and OpenStack,
RDO provides alternative versions of OpenStack to those in the standard
Fedora repositories. For example, OpenStack Folsom packages were provided
for Fedora 17 at:
http://repos.fedorapeople.org/repos/openstack/openstack-folsom/fedora-17/
while the standard Fedora 17 repositories contain OpenStack Essex.

Given that OpenStack Folsom has been released on Fedora 18 (and OpenStack
Grizzly has been released on Fedora 19), we'll no longer maintain the
Folsom packages for Fedora 17 and so will remove this repository in the
next week. Users wanting to stick with Folsom on Fedora will need to use
Fedora 18, where security updates etc. are still maintained.

Correspondingly, the Grizzly repo for Fedora 18 will be retired when
OpenStack Havana is released to the next version of Fedora. (An
announcement will be made at that time too.)

thanks,
Pádraig.

From rdo-info at redhat.com Thu Aug 29 20:13:09 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Thu, 29 Aug 2013 20:13:09 +0000
Subject: [Rdo-list] [RDO] RDO Install completes "successfully" but does nothing
Message-ID: <00000140cbb54562-3c7fe6a5-6dd5-4594-b512-1ef73dc5f77e-000000@email.amazonses.com>

rgoldstone started a discussion.

RDO Install completes "successfully" but does nothing

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/512/rdo-install-completes-successfully-but-does-nothing

Have a great day!

From mattdm at fedoraproject.org Thu Aug 29 22:02:33 2013
From: mattdm at fedoraproject.org (Matthew Miller)
Date: Thu, 29 Aug 2013 18:02:33 -0400
Subject: [Rdo-list] [package announce] intention to retire the RDO Fedora Folsom repo
In-Reply-To: <521FA0D2.3070800@redhat.com>
References: <521FA0D2.3070800@redhat.com>
Message-ID: <20130829220233.GA3942@disco.bu.edu>

On Thu, Aug 29, 2013 at 08:28:18PM +0100, Pádraig Brady wrote:
> Given that OpenStack Folsom has been released on Fedora 18
> (and OpenStack Grizzly has been released on Fedora 19),
> we'll no longer maintain the Folsom packages for Fedora 17
> and so will remove this repository in the next week.
> Users wanting to stick with Folsom on Fedora will need
> to use Fedora 18 where security updates etc. are still maintained.

If this change impacts you, and you'd like to provide feedback on it,
please tell me. I'm the Cloud Person for Fedora, and I'm very concerned
with making users' lives better here. I expect that many people who are
using OpenStack on Fedora are also following the leading edge of
OpenStack -- but I do want to hear from you!

-- 
Matthew Miller -- Fedora Cloud Architect

From rdo-info at redhat.com Thu Aug 29 22:05:03 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Thu, 29 Aug 2013 22:05:03 +0000
Subject: [Rdo-list] [RDO] packstack + foreman
Message-ID: <00000140cc1bb91d-2f25d765-e5da-4f78-a6e7-1fff8dd1ef8e-000000@email.amazonses.com>

marafa started a discussion.

packstack + foreman

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/513/packstack-foreman

Have a great day!
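Following up on the Folsom repo retirement announced above, a sketch of
locating and disabling the retired repo on a Fedora 17 host. The repo
file name and repo id vary by install, so treat both as assumptions and
check what the grep actually returns:

  # grep -rl "openstack-folsom/fedora-17" /etc/yum.repos.d/
  # yum-config-manager --disable openstack-folsom   # substitute the real repo id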
From rdo-info at redhat.com Fri Aug 30 07:56:40 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Fri, 30 Aug 2013 07:56:40 +0000
Subject: [Rdo-list] [RDO] No DHCP response from the Quantum Network node
Message-ID: <00000140ce395e5b-23a2ddad-62c6-4c42-8af5-b9af9af75a0f-000000@email.amazonses.com>

CBpretechCon started a discussion.

No DHCP response from the Quantum Network node

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/514/no-dhcp-response-from-the-quantum-network-node

Have a great day!

From dneary at redhat.com Fri Aug 30 09:21:56 2013
From: dneary at redhat.com (Dave Neary)
Date: Fri, 30 Aug 2013 11:21:56 +0200
Subject: [Rdo-list] Trying out Neutron Quickstart running into issues with netns (l2 agent and dhcp agent)
In-Reply-To: <51FFDF9B.5020005@redhat.com>
References: <51FE5DDC.4010104@redhat.com> <51FFDF9B.5020005@redhat.com>
Message-ID: <52206434.5090905@redhat.com>

Hi,

On 08/05/2013 07:23 PM, Brent Eagles wrote:
> I ran into these issues as well. I noticed that ovs_use_veth was
> commented out in dhcp_agent.ini and l3_agent.ini. I uncommented them and
> set them to True and restarted. The vm now has an IP address.
>
> I noticed something else peculiar though... the public network, the one
> set as the gateway for the router, has dhcp enabled. I'm not sure why we
> would do that.

That bothered me too. Adding the public network in a way that worked for
me required the following when doing subnet-create (a worked sketch
appears at the end of this digest):

* --enable_dhcp False
* Matching subnet/netmask exactly with my LAN
* Using --allocation_pool to limit floating IP allocation to IP
  addresses not already in use on the LAN

Cheers,
Dave.

-- 
Dave Neary - Community Action and Impact
Open Source and Standards, Red Hat - http://community.redhat.com
Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13

From rdo-info at redhat.com Fri Aug 30 17:06:23 2013
From: rdo-info at redhat.com (RDO Forum)
Date: Fri, 30 Aug 2013 17:06:23 +0000
Subject: [Rdo-list] [RDO] Which Linux should we use to install RDO ?
Message-ID: <00000140d030a36d-54edf070-110e-472c-937f-6afbf7d84ef1-000000@email.amazonses.com>

iamopen started a discussion.

Which Linux should we use to install RDO ?

---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/516/which-linux-should-we-use-to-install-rdo-

Have a great day!

From pmyers at redhat.com Fri Aug 30 19:55:33 2013
From: pmyers at redhat.com (Perry Myers)
Date: Fri, 30 Aug 2013 15:55:33 -0400
Subject: [Rdo-list] [rhos-list] Kernel/userspace tools security updates
Message-ID: <5220F8B5.9080803@redhat.com>

Moving thread to rdo-list and adding a few folks to cc.

On 08/30/2013 02:20 PM, Andrey Korolyov wrote:
> Hello,
>
> Is there an existing milestone for getting security updates for RDO
> packages in predictable times? For example, if such a practice already
> exists, what would the timeline be for RHSA-2013-1173?
>
> _______________________________________________
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list
>
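Returning to Dave's Neutron quickstart notes above, here is a sketch of
creating an external network the way he describes, using the Grizzly-era
quantum CLI. All addresses are placeholders for a hypothetical
192.168.1.0/24 LAN whose gateway is .1 and where .200-.220 is known to
be free; match them to your own network, as Dave notes:

  # quantum net-create public --router:external=True
  # quantum subnet-create public 192.168.1.0/24 --name public_subnet \
        --enable_dhcp False \
        --gateway 192.168.1.1 \
        --allocation-pool start=192.168.1.200,end=192.168.1.220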