From rbowen at redhat.com  Mon Aug 4 15:15:00 2014
From: rbowen at redhat.com (Rich Bowen)
Date: Mon, 04 Aug 2014 11:15:00 -0400
Subject: [Rdo-list] Vote for the OpenStack Summit schedule
Message-ID: <53DFA374.8070205@redhat.com>

There are three days left for you to influence which presentations will be
at the OpenStack Summit in Paris in November.

To vote, go to https://www.openstack.org/vote-paris/ and dive right in.
(You'll need an openstack.org account.)

A *huge* number of talks have been submitted, so it can take a long time
to get through them all. You can search for a particular topic you're
interested in to narrow the list a little.

If you're particularly interested in OpenStack on Red Hat Enterprise
Linux, CentOS, and Fedora, you can find presentations by Red Hat engineers
listed at
http://redhatstackblog.redhat.com/2014/07/31/session-voting-now-open-for-openstack-summit-paris/

Remember, the vote closes on Wednesday, August 6, so don't wait!

-- 
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://openstack.redhat.com/

From lists+rdo at blundellonline.com  Mon Aug 4 21:46:05 2014
From: lists+rdo at blundellonline.com (David Blundell)
Date: Mon, 4 Aug 2014 22:46:05 +0100
Subject: [Rdo-list] Delayed EPEL-7 RDO updates?
Message-ID:

I have been using RDO on RHEL6 with no problems and wanted to try it on
RHEL7. After running into issues with Cinder on RHEL7 that turned out to
be bugs that were closed in the last few months, I noticed that some of
the RDO packages for RHEL7 are much older than the equivalent RHEL6
packages.
Comparing openstack-cinder in the following two repos:

https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-7/
https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-6

The latest in epel-7 is openstack-cinder-2014.1-2.el7.noarch.rpm from
23-Apr-2014. The latest in epel-6 is
openstack-cinder-2014.1.1-3.el6.noarch.rpm from 30-Jul-2014.

Have the 2014.1.1 updates been suspended for Cinder in the EPEL-7 repo,
or am I looking in the wrong place?

Thanks,

David

From ihrachys at redhat.com  Tue Aug 5 08:06:35 2014
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Tue, 05 Aug 2014 10:06:35 +0200
Subject: [Rdo-list] Delayed EPEL-7 RDO updates?
In-Reply-To:
References:
Message-ID: <53E0908B.8070801@redhat.com>

On 04/08/14 23:46, David Blundell wrote:
> I have been using RDO on RHEL6 with no problems and wanted to try
> it on RHEL7. After running into issues with Cinder on RHEL7 that
> turned out to be bugs that were closed in the last few months I
> noticed that some of the RDO packages for RHEL7 are much older than
> the RHEL6 equivalent packages.
>
> Comparing openstack-cinder in the following two repos:
> https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-7/
> https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-6
>
> The latest in epel-7 is openstack-cinder-2014.1-2.el7.noarch.rpm
> from 23-Apr-2014. The latest in epel-6 is
> openstack-cinder-2014.1.1-3.el6.noarch.rpm from 30-Jul-2014.
>
> Have the 2014.1.1 updates been suspended for cinder in the EPEL-7
> repo or am I looking in the wrong place?

That's not the case for other components, e.g. Neutron, so I think the
Cinder packaging team missed the update somehow. Waiting for their
comments.
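As a side note, the two builds quoted above can be ordered the way RPM orders them: the version and the release are compared separately, numeric segment by numeric segment, so 2014.1.1-3 sorts after 2014.1-2. A minimal sketch of that comparison (simplified; real rpmvercmp also handles epochs, alphabetic segments, and tildes, so use `rpmdev-vercmp` for anything serious):

```python
# Simplified RPM-style comparison of the two openstack-cinder builds
# quoted above.  RPM compares version and release separately, numeric
# segment by segment; a longer version with an equal prefix
# ("2014.1.1" vs "2014.1") counts as newer.  Sketch only -- real
# rpmvercmp also handles epochs, letters, and tildes.

def evr_key(evr):
    """Turn a 'version-release' string into a sortable key."""
    version, release = evr.rsplit("-", 1)
    to_segments = lambda s: [int(part) for part in s.split(".")]
    return (to_segments(version), to_segments(release))

el7 = "2014.1-2"    # openstack-cinder in the epel-7 repo
el6 = "2014.1.1-3"  # openstack-cinder in the epel-6 repo

print(evr_key(el6) > evr_key(el7))  # the el6 build is newer
```

The same check against real repo metadata is what `rpmdev-vercmp 2014.1.1-3 2014.1-2` would report.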
> Thanks,
>
> David
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list

From rbowen at redhat.com  Tue Aug 5 15:48:53 2014
From: rbowen at redhat.com (Rich Bowen)
Date: Tue, 05 Aug 2014 11:48:53 -0400
Subject: [Rdo-list] [Rdo-newsletter] August RDO Community Newsletter
Message-ID: <53E0FCE5.7010804@redhat.com>

Thanks again for being part of the RDO community! Here's some of what's
going on in the community.

VOTE for the OpenStack Summit schedule

There is just one day left for you to influence which presentations will
be at the OpenStack Summit in Paris in November.

To vote, go to https://www.openstack.org/vote-paris/ and dive right in.
(You'll need an openstack.org account.)

A *huge* number of talks have been submitted, so it can take a long time
to get through them all. You can search for a particular topic you're
interested in to narrow the list a little.

If you're particularly interested in OpenStack on Red Hat Enterprise
Linux, CentOS, and Fedora, you can find presentations by Red Hat engineers
listed at
http://redhatstackblog.redhat.com/2014/07/31/session-voting-now-open-for-openstack-summit-paris/

Remember, the vote closes on Wednesday, August 6, so don't wait!

Flock starts tomorrow!

Flock, the Fedora Contributor Conference, starts tomorrow, August 6, in
Prague, Czech Republic.
It's a gathering of everyone who cares about Fedora, for technical
sessions, planning for the future, and community networking.

On Saturday (August 9), Jakub Ruzicka will be leading a two-hour workshop
on configuring a two-node KVM-based OpenStack setup using VMs. If you're
going to be there, this is a great opportunity to get a deeper
understanding of OpenStack, Neutron networking, and virtualization in
general. You can see more information about the session at
http://sched.co/1kI1BWf and more information about the Flock conference
at http://flocktofedora.org/

Juno mid-cycle meetups

Over the last few weeks, several of the OpenStack projects held mid-cycle
meetups to discuss how progress is going toward Juno, and what needs to
happen between now and October. We've had updates from several of these
projects, and there are more on the way. Here are a few of those reports:

* Juno Preview for OpenStack Compute (Nova), by Russell Bryant:
http://blog.russellbryant.net/2014/07/07/juno-preview-for-openstack-compute-nova/

* The mid-cycle state of Ceilometer, with Eoghan Glynn:
http://community.redhat.com/blog/2014/07/upstream-podcast-episode-10-rich-bowen-with-eoghan-glynn-on-openstack-juno/

* OpenStack Orchestration (Heat), with Zane Bitter:
https://www.youtube.com/watch?v=DwuZHMkFzFs&list=UUQ74G2gKXdpwZkXEsclzcrA#t=1343

* Image (Glance) and Messaging/Notification (Marconi) Juno previews, with
Flavio Percoco: http://blog.flaper87.com/post/juno-preview-glance-marconi/

* Security, with Nathan Kinder:
http://redhatstackblog.redhat.com/2014/08/05/juno-updates-security/

To catch other mid-cycle previews as they come in, follow us on Twitter
at @rdocommunity and watch for updates on
http://openstack.redhat.com/Juno_previews

OSCON

Two weeks ago was the always-fun O'Reilly Open Source Convention, OSCON,
in Portland, Oregon. RDO had a demo stand, and a steady stream of people
came by to talk with us and get the TryStack.org T-shirt and other RDO
swag.
If you were at OSCON, thanks for stopping by and chatting. We'd love to
hear from you about your conference experience, what you learned about
OpenStack, and what you'd like to see us do differently next year.

On Tuesday night, there was a party to celebrate OpenStack's fourth
birthday - OpenStack was launched four years ago at OSCON. And there was
a lot of great content at the conference, about OpenStack and every other
Open Source project you care about. This year, all of the content from
the event will be available on YouTube, so that if you missed anything,
or if you want to see it again, you can. When it's ready, all of that
video will be available at
http://www.oscon.com/oscon2014/public/content/video

I've posted a few photos from the booth on the G+ group at
http://tm3.org/rdogplus

LinuxCon

In just a few weeks, RDO will be at LinuxCon in Chicago -
http://events.linuxfoundation.org/events/linuxcon-north-america - Drop by
the Red Hat booth to talk to us about RDO, and see a demo of RDO in
action.

And, if you're in Europe, LinuxCon Dusseldorf is coming very soon -
http://events.linuxfoundation.org/events/linuxcon-europe - in the middle
of October. So save the date, and we hope to see you there.

Hangouts

In July, Eoghan Glynn gave us an overview of what's planned for
Ceilometer in the Juno cycle in the RDO hangout. If you missed that, you
can still watch it at
https://plus.google.com/events/c6e8vjjn8klrf78ruhkr95j4tas

If you have any followup questions, stop by #rdo or #openstack-ceilometer
on the Freenode IRC network, where there's always someone who can help
you out.

Stay tuned to @rdocommunity on Twitter to hear about our plans for a
hangout in August, or see the Hangouts page on the RDO website, at
http://openstack.redhat.com/Hangouts

In closing ...

I wanted to mention a few articles that have caught my attention over
the last few weeks.
* RDO on VMware NSX -
http://jreypo.wordpress.com/2014/06/23/deploying-openstack-with-kvm-and-vmware-nsx-part-4-deploy-openstack-rdo-with-neutron-integrated-with-nsx/

* Using ManageIQ on OpenStack -
http://openstack.redhat.com/Using_ManageIQ_on_OpenStack

* Making your first contribution to OpenStack -
http://www.jpichon.net/blog/2014/08/training-europython-2014/

* How do companies use OpenStack? -
http://maffulli.net/2014/08/04/how-do-companies-do-openstack/

Once again, you can keep up to date in a variety of ways:

* Follow us on Twitter - http://twitter.com/rdocommunity
* Google+ - http://tm3.org/rdogplus
* rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list
* This newsletter - http://www.redhat.com/mailman/listinfo/rdo-newsletter
* RDO Q&A - http://ask.openstack.org/

Thanks again for being part of the RDO community!

-- 
Rich Bowen, OpenStack Community Liaison
rbowen at redhat.com
http://openstack.redhat.com

_______________________________________________
Rdo-newsletter mailing list
Rdo-newsletter at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-newsletter

From xzhao at bnl.gov  Tue Aug 5 18:24:53 2014
From: xzhao at bnl.gov (Zhao, Xin)
Date: Tue, 05 Aug 2014 14:24:53 -0400
Subject: [Rdo-list] nova list returned "unauthorized" error
Message-ID: <53E12175.6060002@bnl.gov>

Hello,

I am installing Icehouse from RDO on a 3-node testbed: one controller,
one network node, and one compute node. I am using the RDO release on a
RHEL 6.5 system.

After sourcing the keystone_admin file, the "nova list" command fails,
and the nova/api.log file shows the following messages:

# nova list
ERROR: Unauthorized (HTTP 401)

# tail /var/log/nova/api.log
2014-08-05 11:41:21.941 5888 WARNING keystoneclient.middleware.auth_token
[-] Configuring admin URI using auth fragments.
This is deprecated, use 'identity_uri' instead.
2014-08-05 11:41:22.192 5888 WARNING keystoneclient.middleware.auth_token
[-] Configuring admin URI using auth fragments. This is deprecated, use
'identity_uri' instead.
2014-08-05 14:18:39.408 5932 WARNING keystoneclient.middleware.auth_token
[-] Unexpected response from keystone service: {u'error': {u'message':
u'The request you have made requires authentication.', u'code': 401,
u'title': u'Unauthorized'}}
2014-08-05 14:18:39.409 5932 WARNING keystoneclient.middleware.auth_token
[-] Authorization failed for token

With the same admin username/password, the keystone/glance commands work
fine.

I have the following section in the nova.conf file, which looks fine to
me:

[DEFAULT]
...
auth_strategy=keystone
...
[keystone_authtoken]
auth_host=10.255.2.134
auth_port=35357
auth_protocol=http
auth_uri=http://10.255.2.134:5000
admin_user=compute
admin_password=computepassword
admin_tenant_name=services

Any idea where it goes wrong?

Thanks a lot,
Xin

From kchamart at redhat.com  Wed Aug 6 11:50:53 2014
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Wed, 6 Aug 2014 17:20:53 +0530
Subject: [Rdo-list] nova list returned "unauthorized" error
In-Reply-To: <53E12175.6060002@bnl.gov>
References: <53E12175.6060002@bnl.gov>
Message-ID: <20140806115053.GC18413@tesla.pnq.redhat.com>

On Tue, Aug 05, 2014 at 02:24:53PM -0400, Zhao, Xin wrote:
> Hello,
> I am installing icehouse from RDO, on a 3-nodes testbed. One
> controller, one network and one compute node.
> I am using RDO release on RHEL6.5 system.

You might want to specify the exact versions of the openstack-nova and
openstack-keystone packages too; that might be useful for others who
want to reproduce the issue you're seeing.

> After sourcing the keystone_admin file, the "nova list" command fails,
> the nova/api.log file shows the following messages:
>
> # nova list
> ERROR: Unauthorized (HTTP 401)
>
> # tail /var/log/nova/api.log

You can also try

    $ nova --debug list

to see the `curl` request/response. (And you might want to try them
manually.)
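Replaying the request manually can also be done against Keystone directly: the auth_token middleware obtains its admin token by POSTing the service user's credentials to the v2.0 tokens API. A sketch that builds that request from the [keystone_authtoken] values quoted in this thread (it only constructs the payload and the equivalent curl command; actually sending it needs network access to the controller, where a 401 reply would confirm the credentials themselves are wrong):

```python
# Build the Keystone v2.0 token request that the auth_token middleware
# makes with the [keystone_authtoken] credentials from nova.conf.
# The values below are the ones quoted in this thread.
import json

auth_uri = "http://10.255.2.134:5000"  # auth_uri from nova.conf
payload = {
    "auth": {
        "tenantName": "services",           # admin_tenant_name
        "passwordCredentials": {
            "username": "compute",          # admin_user
            "password": "computepassword",  # admin_password
        },
    }
}
body = json.dumps(payload)

# The equivalent manual check from a shell; a 401 response here means
# the credentials (not nova) are the problem:
curl_cmd = ("curl -s -d '%s' -H 'Content-Type: application/json' "
            "%s/v2.0/tokens" % (body, auth_uri))
print(curl_cmd)
```

A 200 response to that POST returns a token; comparing the credentials that work here against every place nova reads them (nova.conf and api-paste.ini) narrows the 401 down quickly.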
> 2014-08-05 11:41:21.941 5888 WARNING keystoneclient.middleware.auth_token
> [-] Configuring admin URI using auth fragments. This is deprecated, use
> 'identity_uri' instead.
> 2014-08-05 11:41:22.192 5888 WARNING keystoneclient.middleware.auth_token
> [-] Configuring admin URI using auth fragments. This is deprecated, use
> 'identity_uri' instead.
> 2014-08-05 14:18:39.408 5932 WARNING keystoneclient.middleware.auth_token
> [-] Unexpected response from keystone service: {u'error': {u'message':
> u'The request you have made requires authentication.', u'code': 401,
> u'title': u'Unauthorized'}}
> 2014-08-05 14:18:39.409 5932 WARNING keystoneclient.middleware.auth_token
> [-] Authorization failed for token

Does adding

    debug = True
    verbose = True

in /etc/nova/nova.conf (and restarting the Nova services) and rerunning
the `nova` commands give any more useful ERRORs, instead of WARNINGs?

> With the same admin username/password, the keystone/glance commands
> work fine.
>
> I have the following section in nova.conf file, which looks fine to me:
>
> [DEFAULT]
> ...
> auth_strategy=keystone
> ...
> [keystone_authtoken]
> auth_host=10.255.2.134
> auth_port=35357
> auth_protocol=http
> auth_uri=http://10.255.2.134:5000
> admin_user=compute
> admin_password=computepassword
> admin_tenant_name=services

Looks sane to me. FWIW, in my attempt at Icehouse on a 2-node (one
controller, one compute) Fedora 20 setup (I realize you're using RHEL
6.5 there), I used these Nova configs[1].
[1] https://kashyapc.fedorapeople.org/virt/openstack/rdo/IceHouse-Nova-Neutron-ML2-GRE-OVS.txt

-- 
/kashyap

From xzhao at bnl.gov  Wed Aug 6 15:31:05 2014
From: xzhao at bnl.gov (Zhao, Xin)
Date: Wed, 06 Aug 2014 11:31:05 -0400
Subject: [Rdo-list] nova list returned "unauthorized" error
In-Reply-To: <20140806115053.GC18413@tesla.pnq.redhat.com>
References: <53E12175.6060002@bnl.gov> <20140806115053.GC18413@tesla.pnq.redhat.com>
Message-ID: <53E24A39.5070806@bnl.gov>

Hi Kashyap,

It turns out I had the wrong admin username/password pair in
nova/api-paste.ini; after changing that, the nova commands work now.

Thanks,
Xin

From brandor5 at gmail.com  Wed Aug 6 21:44:09 2014
From: brandor5 at gmail.com (Brandon Sawyers)
Date: Wed, 6 Aug 2014 17:44:09 -0400
Subject: [Rdo-list] neutron server failing to start
Message-ID:

Hello everyone:

I'm attempting to build up an OpenStack install using the RDO packages,
after playing around with packstack. Up until now everything has been
going smoothly.

After installing the dashboard, I attempted to log in. My password was
accepted, but the "something's wrong" error page popped up. The httpd
logs showed:

[error] ConnectionFailed: Connection to neutron failed: Maximum attempts
reached

I tried running `neutron net-list` and received the same message.
service neutron-server shows:

    neutron dead but pid file exists

I started the server and it reported okay, but I was still receiving the
same errors. I looked at the neutron server logs and found the following:

2014-08-06 17:40:47.866 22869 INFO neutron.common.config [-] Logging enabled!
2014-08-06 17:40:47.872 22869 INFO neutron.common.config [-] Config paste file: /usr/share/neutron/api-paste.ini
2014-08-06 17:40:47.931 22869 INFO neutron.manager [-] Loading core plugin: neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
2014-08-06 17:40:48.044 22869 WARNING neutron.openstack.common.db.sqlalchemy.session [-] This application has not enabled MySQL traditional mode, which means silent data corruption may occur. Please encourage the application developers to enable this mode.
2014-08-06 17:40:48.067 22869 INFO neutron.plugins.openvswitch.ovs_neutron_plugin [-] Network VLAN ranges: {}
2014-08-06 17:40:48.157 22869 INFO neutron.plugins.openvswitch.ovs_neutron_plugin [-] Tunnel ID ranges: [(1, 1000)]
2014-08-06 17:40:48.190 22869 ERROR neutron.common.config [-] Unable to load neutron from configuration file /usr/share/neutron/api-paste.ini.
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config Traceback (most recent call last):
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/common/config.py", line 170, in load_paste_app
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     app = deploy.loadapp("config:%s" % config_path, name=app_name)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     return loadobj(APP, uri, name=name, **kw)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 272, in loadobj
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     return context.create()
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 710, in create
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     return self.object_type.invoke(self)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     **context.local_conf)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/util.py", line 56, in fix_call
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     val = callable(*args, **kw)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/urlmap.py", line 25, in urlmap_factory
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     app = loader.get_app(app_name, global_conf=global_conf)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 350, in get_app
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     name=name, global_conf=global_conf).create()
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 710, in create
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     return self.object_type.invoke(self)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     **context.local_conf)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/util.py", line 56, in fix_call
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     val = callable(*args, **kw)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/auth.py", line 69, in pipeline_factory
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     app = loader.get_app(pipeline[-1])
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 350, in get_app
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     name=name, global_conf=global_conf).create()
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 710, in create
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     return self.object_type.invoke(self)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 146, in invoke
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     return fix_call(context.object, context.global_conf, **context.local_conf)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/util.py", line 56, in fix_call
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     val = callable(*args, **kw)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/api/v2/router.py", line 71, in factory
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     return cls(**local_config)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/api/v2/router.py", line 75, in __init__
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     plugin = manager.NeutronManager.get_plugin()
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 211, in get_plugin
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     return cls.get_instance().plugin
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 206, in get_instance
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     cls._create_instance()
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", line 249, in inner
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     return f(*args, **kwargs)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     self.gen.throw(type, value, traceback)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", line 212, in lock
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     yield sem
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", line 249, in inner
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     return f(*args, **kwargs)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 200, in _create_instance
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     cls._instance = cls()
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 112, in __init__
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     plugin_provider)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 140, in _get_plugin_instance
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     return plugin_class()
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/ovs_neutron_plugin.py", line 325, in __init__
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     self.setup_rpc()
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/ovs_neutron_plugin.py", line 337, in setup_rpc
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     self.conn = rpc.create_connection(new=True)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/__init__.py", line 89, in create_connection
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     return _get_impl().create_connection(CONF, new=new)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/__init__.py", line 274, in _get_impl
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     _RPCIMPL = importutils.import_module(impl)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/importutils.py", line 57, in import_module
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config     __import__(import_str)
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config ImportError: No module named rabbit
2014-08-06 17:40:48.190 22869 TRACE neutron.common.config
2014-08-06 17:40:48.193 22869 ERROR neutron.service [-] Error occurred: trying old api-paste.ini.
2014-08-06 17:40:48.193 22869 TRACE neutron.service Traceback (most recent call last):
2014-08-06 17:40:48.193 22869 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/neutron/service.py", line 106, in serve_wsgi
2014-08-06 17:40:48.193 22869 TRACE neutron.service     service.start()
2014-08-06 17:40:48.193 22869 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/neutron/service.py", line 75, in start
2014-08-06 17:40:48.193 22869 TRACE neutron.service     self.wsgi_app = _run_wsgi(self.app_name)
2014-08-06 17:40:48.193 22869 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/neutron/service.py", line 175, in _run_wsgi
2014-08-06 17:40:48.193 22869 TRACE neutron.service     app = config.load_paste_app(app_name)
2014-08-06 17:40:48.193 22869 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/neutron/common/config.py", line 177, in load_paste_app
2014-08-06 17:40:48.193 22869 TRACE neutron.service     raise RuntimeError(msg)
2014-08-06 17:40:48.193 22869 TRACE neutron.service RuntimeError: Unable to load neutron from configuration file /usr/share/neutron/api-paste.ini.
2014-08-06 17:40:48.193 22869 TRACE neutron.service
2014-08-06 17:40:48.194 22869 INFO neutron.common.config [-] Logging enabled!
2014-08-06 17:40:48.202 22869 INFO neutron.common.config [-] Config paste file: /usr/share/neutron/api-paste.ini
2014-08-06 17:40:48.202 22869 ERROR neutron.common.config [-] Unable to load quantum from configuration file /usr/share/neutron/api-paste.ini.
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config Traceback (most recent call last):
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/neutron/common/config.py", line 170, in load_paste_app
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config     app = deploy.loadapp("config:%s" % config_path, name=app_name)
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config     return loadobj(APP, uri, name=name, **kw)
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 271, in loadobj
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config     global_conf=global_conf)
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config     global_conf=global_conf)
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 320, in _loadconfig
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config     return loader.get_context(object_type, name, global_conf)
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 408, in get_context
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config     object_type, name=name)
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 587, in find_config_section
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config     self.filename))
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config LookupError: No section 'quantum' (prefixed by 'app' or 'application' or 'composite' or 'composit' or 'pipeline' or 'filter-app') found in config /usr/share/neutron/api-paste.ini
2014-08-06 17:40:48.202 22869 TRACE neutron.common.config
2014-08-06 17:40:48.203 22869 ERROR neutron.service [-] Unrecoverable error: please check log for details.
2014-08-06 17:40:48.203 22869 TRACE neutron.service Traceback (most recent call last):
2014-08-06 17:40:48.203 22869 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/neutron/service.py", line 110, in serve_wsgi
2014-08-06 17:40:48.203 22869 TRACE neutron.service     service.start()
2014-08-06 17:40:48.203 22869 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/neutron/service.py", line 75, in start
2014-08-06 17:40:48.203 22869 TRACE neutron.service     self.wsgi_app = _run_wsgi(self.app_name)
2014-08-06 17:40:48.203 22869 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/neutron/service.py", line 175, in _run_wsgi
2014-08-06 17:40:48.203 22869 TRACE neutron.service     app = config.load_paste_app(app_name)
2014-08-06 17:40:48.203 22869 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/neutron/common/config.py", line 177, in load_paste_app
2014-08-06 17:40:48.203 22869 TRACE neutron.service     raise RuntimeError(msg)
2014-08-06 17:40:48.203 22869 TRACE neutron.service RuntimeError: Unable to load quantum from configuration file /usr/share/neutron/api-paste.ini.
2014-08-06 17:40:48.203 22869 TRACE neutron.service

I googled for these errors and found
http://lists.openstack.org/pipermail/openstack/2013-November/003464.html
as the only similar (to my eyes) result.
However the answer there was that python-keystoneclient wasn't
installed. I have checked my controller and network nodes, and they both
have it installed.

Does anyone have any idea what's going on? I'm not doing anything crazy
config-wise, just following the Icehouse install guide.

Thanks!

From akalambu at cisco.com  Wed Aug 6 22:00:17 2014
From: akalambu at cisco.com (Ajay Kalambur (akalambu))
Date: Wed, 6 Aug 2014 22:00:17 +0000
Subject: [Rdo-list] neutron server failing to start
In-Reply-To:
Message-ID:

Can you check if it works from inside the controller node and from
Horizon?

Ajay

From: Brandon Sawyers
Date: Wednesday, August 6, 2014 at 2:44 PM
To: "rdo-list at redhat.com"
Subject: [Rdo-list] neutron server failing to start

Hello everyone:

I'm attempting to build up an openstack install using the rdo packages
after playing around with packstack. Up until now everything has been
going smoothly. After installing the dashboard I attempted to login. My
password was accepted but I had the "somethings wrong" error page pop
up. httpd logs showed:

[error] ConnectionFailed: Connection to neutron failed: Maximum attempts
reached

I tried running neutron net-list and received the same message. service
neutron-server shows: neutron dead but pid file exists

I started the server and it showed okay. However I was still receiving
the same errors. I looked at the logs for neutron server and found the
following:

2014-08-06 17:40:47.866 22869 INFO neutron.common.config [-] Logging enabled!
[The remainder of the quoted message, repeating the traceback above, has
been trimmed.]
return f(*args, **kwargs) 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__ 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config self.gen.throw(type, value, traceback) 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", line 212, in lock 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config yield sem 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", line 249, in inner 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config return f(*args, **kwargs) 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 200, in _create_instance 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config cls._instance = cls() 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 112, in __init__ 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config plugin_provider) 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 140, in _get_plugin_instance 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config return plugin_class() 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/ovs_neutron_plugin.py", line 325, in __init__ 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config self.setup_rpc() 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/ovs_neutron_plugin.py", line 337, in setup_rpc 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config self.conn = rpc.create_connection(new=True) 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File 
"/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/__init__.py", line 89, in create_connection 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config return _get_impl().create_connection(CONF, new=new) 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/__init__.py", line 274, in _get_impl 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config _RPCIMPL = importutils.import_module(impl) 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/neutron/openstack/common/importutils.py", line 57, in import_module 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config __import__(import_str) 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config ImportError: No module named rabbit 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config 2014-08-06 17:40:48.193 22869 ERROR neutron.service [-] Error occurred: trying old api-paste.ini. 2014-08-06 17:40:48.193 22869 TRACE neutron.service Traceback (most recent call last): 2014-08-06 17:40:48.193 22869 TRACE neutron.service File "/usr/lib/python2.6/site-packages/neutron/service.py", line 106, in serve_wsgi 2014-08-06 17:40:48.193 22869 TRACE neutron.service service.start() 2014-08-06 17:40:48.193 22869 TRACE neutron.service File "/usr/lib/python2.6/site-packages/neutron/service.py", line 75, in start 2014-08-06 17:40:48.193 22869 TRACE neutron.service self.wsgi_app = _run_wsgi(self.app_name) 2014-08-06 17:40:48.193 22869 TRACE neutron.service File "/usr/lib/python2.6/site-packages/neutron/service.py", line 175, in _run_wsgi 2014-08-06 17:40:48.193 22869 TRACE neutron.service app = config.load_paste_app(app_name) 2014-08-06 17:40:48.193 22869 TRACE neutron.service File "/usr/lib/python2.6/site-packages/neutron/common/config.py", line 177, in load_paste_app 2014-08-06 17:40:48.193 22869 TRACE neutron.service raise RuntimeError(msg) 2014-08-06 17:40:48.193 22869 TRACE neutron.service 
RuntimeError: Unable to load neutron from configuration file /usr/share/neutron/api-paste.ini. 2014-08-06 17:40:48.193 22869 TRACE neutron.service 2014-08-06 17:40:48.194 22869 INFO neutron.common.config [-] Logging enabled! 2014-08-06 17:40:48.202 22869 INFO neutron.common.config [-] Config paste file: /usr/share/neutron/api-paste.ini 2014-08-06 17:40:48.202 22869 ERROR neutron.common.config [-] Unable to load quantum from configuration file /usr/share/neutron/api-paste.ini. 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config Traceback (most recent call last): 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/neutron/common/config.py", line 170, in load_paste_app 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config app = deploy.loadapp("config:%s" % config_path, name=app_name) 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 247, in loadapp 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config return loadobj(APP, uri, name=name, **kw) 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 271, in loadobj 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config global_conf=global_conf) 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config global_conf=global_conf) 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 320, in _loadconfig 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config return loader.get_context(object_type, name, global_conf) 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 408, in get_context 2014-08-06 17:40:48.202 22869 
TRACE neutron.common.config object_type, name=name) 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 587, in find_config_section 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config self.filename)) 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config LookupError: No section 'quantum' (prefixed by 'app' or 'application' or 'composite' or 'composit' or 'pipeline' or 'filter-app') found in config /usr/share/neutron/api-paste.ini 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config 2014-08-06 17:40:48.203 22869 ERROR neutron.service [-] Unrecoverable error: please check log for details. 2014-08-06 17:40:48.203 22869 TRACE neutron.service Traceback (most recent call last): 2014-08-06 17:40:48.203 22869 TRACE neutron.service File "/usr/lib/python2.6/site-packages/neutron/service.py", line 110, in serve_wsgi 2014-08-06 17:40:48.203 22869 TRACE neutron.service service.start() 2014-08-06 17:40:48.203 22869 TRACE neutron.service File "/usr/lib/python2.6/site-packages/neutron/service.py", line 75, in start 2014-08-06 17:40:48.203 22869 TRACE neutron.service self.wsgi_app = _run_wsgi(self.app_name) 2014-08-06 17:40:48.203 22869 TRACE neutron.service File "/usr/lib/python2.6/site-packages/neutron/service.py", line 175, in _run_wsgi 2014-08-06 17:40:48.203 22869 TRACE neutron.service app = config.load_paste_app(app_name) 2014-08-06 17:40:48.203 22869 TRACE neutron.service File "/usr/lib/python2.6/site-packages/neutron/common/config.py", line 177, in load_paste_app 2014-08-06 17:40:48.203 22869 TRACE neutron.service raise RuntimeError(msg) 2014-08-06 17:40:48.203 22869 TRACE neutron.service RuntimeError: Unable to load quantum from configuration file /usr/share/neutron/api-paste.ini. 2014-08-06 17:40:48.203 22869 TRACE neutron.service I googled for these errors and found http://lists.openstack.org/pipermail/openstack/2013-November/003464.html as the only similar (to my eyes) result. 
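[Editor's note] The decisive line in the quoted traceback is "ImportError: No module named rabbit": paste.deploy loads the OVS core plugin, the plugin calls rpc.create_connection(), and _get_impl() imports whatever module path the rpc_backend option names. A minimal sketch of that last step (not neutron's actual code; the driver path mentioned in the comment is the standard Icehouse-era one, stated here as an assumption since neutron itself cannot be imported in a sketch):

```python
# Sketch of how the Icehouse-era oslo-incubator rpc layer resolves
# rpc_backend, to show why a bare value like "rabbit" blows up.
import importlib

def load_rpc_backend(rpc_backend):
    """Import the module named by rpc_backend, as _get_impl() does."""
    # The legacy layer imports the configured value verbatim, so a short
    # name such as "rabbit" is treated as a top-level module and fails.
    # The RabbitMQ driver's full path on Icehouse is believed to be
    # "neutron.openstack.common.rpc.impl_kombu".
    return importlib.import_module(rpc_backend)

# "rabbit" is not a top-level Python module, so this reproduces the
# same class of failure seen in the log:
try:
    load_rpc_backend("rabbit")
except ImportError as exc:
    print("failed as in the log: %s" % exc)
```

Under that reading, the later "Unable to load quantum" / LookupError messages are just the fallback attempt against the old 'quantum' app name after the first paste pipeline failed, not a second independent problem.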
However the answer there was that python-keystoneclient wasn't installed. I have checked my controller and network nodes and they both have it installed. Does anyone have any idea what's going on? I'm not doing anything crazy config wise, just following the icehouse install guide. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From akalambu at cisco.com Wed Aug 6 22:05:57 2014 From: akalambu at cisco.com (Ajay Kalambur (akalambu)) Date: Wed, 6 Aug 2014 22:05:57 +0000 Subject: [Rdo-list] neutron server failing to start In-Reply-To: Message-ID: Ignore my last message. One case where I saw this error: by default, API access in RDO seems restricted to the controller and control nodes, and I had to open up API access through iptables rules. But it looks like you can't even get to Horizon, so it must be something else. Ajay From: akalambu > Date: Wednesday, August 6, 2014 at 3:00 PM To: Brandon Sawyers >, "rdo-list at redhat.com" > Subject: Re: [Rdo-list] neutron server failing to start Can you check if it works from inside the controller node and Horizon Ajay From: Brandon Sawyers > Date: Wednesday, August 6, 2014 at 2:44 PM To: "rdo-list at redhat.com" > Subject: [Rdo-list] neutron server failing to start Hello everyone: I'm attempting to build up an openstack install using the rdo packages after playing around with packstack. Up until now everything has been going smoothly. After installing the dashboard I attempted to login. My password was accepted but I had the "somethings wrong" error page pop up. httpd logs showed: [error] ConnectionFailed: Connection to neutron failed: Maximum attempts reached I tried running neutron net-list and received the same message. service neutron-server shows: neutron dead but pid file exists I started the server and it showed okay. However I was still receiving the same errors. I looked at the logs for neutron server and found the following: [traceback snipped; identical to the one quoted earlier in the thread] I googled for these errors and found http://lists.openstack.org/pipermail/openstack/2013-November/003464.html as the only similar (to my eyes) result. However the answer there was that python-keystoneclient wasn't installed. I have checked my controller and network nodes and they both have it installed. Does anyone have any idea what's going on? I'm not doing anything crazy config wise, just following the icehouse install guide. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brandor5 at gmail.com Wed Aug 6 22:11:22 2014 From: brandor5 at gmail.com (Brandon Sawyers) Date: Wed, 6 Aug 2014 18:11:22 -0400 Subject: [Rdo-list] neutron server failing to start In-Reply-To: References: Message-ID: Thanks for the suggestion. :) I decided to check the status of the services on network node as well. They also aren't starting, but they give a different error: I have found some more info though. 2014-08-06 17:57:05.200 1716 INFO neutron.common.config [-] Logging enabled! 2014-08-06 17:57:05.220 1716 CRITICAL neutron [req-092fa0ad-4c03-4e68-8817-6f6757509bd3 None] No module named rabbit 2014-08-06 17:57:05.220 1716 TRACE neutron Traceback (most recent call last): 2014-08-06 17:57:05.220 1716 TRACE neutron File "/usr/bin/neutron-openvswitch-agent", line 10, in 2014-08-06 17:57:05.220 1716 TRACE neutron sys.exit(main()) 2014-08-06 17:57:05.220 1716 TRACE neutron File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1394, in main 2014-08-06 17:57:05.220 1716 TRACE neutron agent = OVSNeutronAgent(**agent_config) 2014-08-06 17:57:05.220 1716 TRACE neutron File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 195, in __init__ 2014-08-06 17:57:05.220 1716 TRACE neutron self.setup_rpc() 2014-08-06 17:57:05.220 1716 TRACE neutron File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 269, in setup_rpc 2014-08-06 17:57:05.220 1716 TRACE neutron consumers) 2014-08-06 17:57:05.220 1716 TRACE neutron File "/usr/lib/python2.6/site-packages/neutron/agent/rpc.py", line 43, in create_consumers 2014-08-06 17:57:05.220 1716 TRACE neutron connection = rpc.create_connection(new=True) 2014-08-06 17:57:05.220 1716 TRACE neutron File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/__init__.py", line 89, in create_connection 2014-08-06 17:57:05.220 1716 TRACE neutron return _get_impl().create_connection(CONF, 
new=new) 2014-08-06 17:57:05.220 1716 TRACE neutron File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/__init__.py", line 274, in _get_impl 2014-08-06 17:57:05.220 1716 TRACE neutron _RPCIMPL = importutils.import_module(impl) 2014-08-06 17:57:05.220 1716 TRACE neutron File "/usr/lib/python2.6/site-packages/neutron/openstack/common/importutils.py", line 57, in import_module 2014-08-06 17:57:05.220 1716 TRACE neutron __import__(import_str) 2014-08-06 17:57:05.220 1716 TRACE neutron ImportError: No module named rabbit 2014-08-06 17:57:05.220 1716 TRACE neutron I am using rabbitmq instead of qpid, could that be the problem? I was under the impression that rabbitmq was going to be the default going forward and icehouse had full support for it. On Wed, Aug 6, 2014 at 6:05 PM, Ajay Kalambur (akalambu) wrote: > Ignore it one case where I saw this error is by default API access in > RDO seems restricted to access from controller and control nodes and I had > to open up API access through ip table rules > Looks like u cant even get to horizon so must be something else > > Ajay > > > From: akalambu > Date: Wednesday, August 6, 2014 at 3:00 PM > To: Brandon Sawyers , "rdo-list at redhat.com" < > rdo-list at redhat.com> > Subject: Re: [Rdo-list] neutron server failing to start > > Can you check if it works from inside the controller node and Horizon > Ajay > > > From: Brandon Sawyers > Date: Wednesday, August 6, 2014 at 2:44 PM > To: "rdo-list at redhat.com" > Subject: [Rdo-list] neutron server failing to start > > Hello everyone: > > I'm attempting to build up an openstack install using the rdo packages > after playing around with packstack. > > Up until now everything has been going smoothly. After installing the > dashboard I attempted to login. My password was accepted but I had the > "somethings wrong" error page pop up. 
> > httpd logs showed: > > [error] ConnectionFailed: Connection to neutron failed: Maximum attempts > reached > > I tried running neutron net-list and received the same message. > > service neutron-server shows: > neutron dead but pid file exists > > I started the server and it showed okay. However I was still receiving > the same errors. > > I looked at the logs for neutron server and found the following: > > 2014-08-06 17:40:47.866 22869 INFO neutron.common.config [-] Logging > enabled! > 2014-08-06 17:40:47.872 22869 INFO neutron.common.config [-] Config paste > file: /usr/share/neutron/api-paste.ini > 2014-08-06 17:40:47.931 22869 INFO neutron.manager [-] Loading core > plugin: neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 > 2014-08-06 17:40:48.044 22869 WARNING > neutron.openstack.common.db.sqlalchemy.session [-] This application has not > enabled MySQL traditional mode, which means silent data corruption may > occur. Please encourage the application developers to enable this mode. > 2014-08-06 17:40:48.067 22869 INFO > neutron.plugins.openvswitch.ovs_neutron_plugin [-] Network VLAN ranges: {} > 2014-08-06 17:40:48.157 22869 INFO > neutron.plugins.openvswitch.ovs_neutron_plugin [-] Tunnel ID ranges: [(1, > 1000)] > 2014-08-06 17:40:48.190 22869 ERROR neutron.common.config [-] Unable to > load neutron from configuration file /usr/share/neutron/api-paste.ini. 
> 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config Traceback (most > recent call last): > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/common/config.py", line 170, in > load_paste_app > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config app = > deploy.loadapp("config:%s" % config_path, name=app_name) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 247, in > loadapp > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config return > loadobj(APP, uri, name=name, **kw) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 272, in > loadobj > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config return > context.create() > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 710, in > create > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config return > self.object_type.invoke(self) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 144, in > invoke > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config > **context.local_conf) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/util.py", line 56, in > fix_call > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config val = > callable(*args, **kw) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/urlmap.py", line 25, in > urlmap_factory > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config app = > loader.get_app(app_name, global_conf=global_conf) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 350, in 
> get_app > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config name=name, > global_conf=global_conf).create() > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 710, in > create > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config return > self.object_type.invoke(self) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 144, in > invoke > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config > **context.local_conf) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/util.py", line 56, in > fix_call > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config val = > callable(*args, **kw) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/auth.py", line 69, in > pipeline_factory > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config app = > loader.get_app(pipeline[-1]) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 350, in > get_app > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config name=name, > global_conf=global_conf).create() > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 710, in > create > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config return > self.object_type.invoke(self) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 146, in > invoke > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config return > fix_call(context.object, context.global_conf, **context.local_conf) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > 
"/usr/lib/python2.6/site-packages/paste/deploy/util.py", line 56, in > fix_call > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config val = > callable(*args, **kw) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/api/v2/router.py", line 71, in > factory > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config return > cls(**local_config) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/api/v2/router.py", line 75, in > __init__ > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config plugin = > manager.NeutronManager.get_plugin() > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/manager.py", line 211, in > get_plugin > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config return > cls.get_instance().plugin > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/manager.py", line 206, in > get_instance > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config > cls._create_instance() > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", > line 249, in inner > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config return > f(*args, **kwargs) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__ > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config > self.gen.throw(type, value, traceback) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", > line 212, in lock > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config yield sem > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", > 
line 249, in inner > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config return > f(*args, **kwargs) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/manager.py", line 200, in > _create_instance > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config > cls._instance = cls() > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/manager.py", line 112, in __init__ > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config > plugin_provider) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/manager.py", line 140, in > _get_plugin_instance > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config return > plugin_class() > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/ovs_neutron_plugin.py", > line 325, in __init__ > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config > self.setup_rpc() > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/ovs_neutron_plugin.py", > line 337, in setup_rpc > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config self.conn = > rpc.create_connection(new=True) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/__init__.py", > line 89, in create_connection > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config return > _get_impl().create_connection(CONF, new=new) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/__init__.py", > line 274, in _get_impl > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config _RPCIMPL = > importutils.import_module(impl) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config File > 
"/usr/lib/python2.6/site-packages/neutron/openstack/common/importutils.py", > line 57, in import_module > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config > __import__(import_str) > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config ImportError: No > module named rabbit > 2014-08-06 17:40:48.190 22869 TRACE neutron.common.config > 2014-08-06 17:40:48.193 22869 ERROR neutron.service [-] Error occurred: > trying old api-paste.ini. > 2014-08-06 17:40:48.193 22869 TRACE neutron.service Traceback (most recent > call last): > 2014-08-06 17:40:48.193 22869 TRACE neutron.service File > "/usr/lib/python2.6/site-packages/neutron/service.py", line 106, in > serve_wsgi > 2014-08-06 17:40:48.193 22869 TRACE neutron.service service.start() > 2014-08-06 17:40:48.193 22869 TRACE neutron.service File > "/usr/lib/python2.6/site-packages/neutron/service.py", line 75, in start > 2014-08-06 17:40:48.193 22869 TRACE neutron.service self.wsgi_app = > _run_wsgi(self.app_name) > 2014-08-06 17:40:48.193 22869 TRACE neutron.service File > "/usr/lib/python2.6/site-packages/neutron/service.py", line 175, in > _run_wsgi > 2014-08-06 17:40:48.193 22869 TRACE neutron.service app = > config.load_paste_app(app_name) > 2014-08-06 17:40:48.193 22869 TRACE neutron.service File > "/usr/lib/python2.6/site-packages/neutron/common/config.py", line 177, in > load_paste_app > 2014-08-06 17:40:48.193 22869 TRACE neutron.service raise > RuntimeError(msg) > 2014-08-06 17:40:48.193 22869 TRACE neutron.service RuntimeError: Unable > to load neutron from configuration file /usr/share/neutron/api-paste.ini. > 2014-08-06 17:40:48.193 22869 TRACE neutron.service > 2014-08-06 17:40:48.194 22869 INFO neutron.common.config [-] Logging > enabled! 
> 2014-08-06 17:40:48.202 22869 INFO neutron.common.config [-] Config paste > file: /usr/share/neutron/api-paste.ini > 2014-08-06 17:40:48.202 22869 ERROR neutron.common.config [-] Unable to > load quantum from configuration file /usr/share/neutron/api-paste.ini. > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config Traceback (most > recent call last): > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/neutron/common/config.py", line 170, in > load_paste_app > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config app = > deploy.loadapp("config:%s" % config_path, name=app_name) > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 247, in > loadapp > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config return > loadobj(APP, uri, name=name, **kw) > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 271, in > loadobj > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config > global_conf=global_conf) > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 296, in > loadcontext > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config > global_conf=global_conf) > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 320, in > _loadconfig > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config return > loader.get_context(object_type, name, global_conf) > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config File > "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 408, in > get_context > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config object_type, > name=name) > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config File > 
"/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 587, in > find_config_section > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config > self.filename)) > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config LookupError: No > section 'quantum' (prefixed by 'app' or 'application' or 'composite' or > 'composit' or 'pipeline' or 'filter-app') found in config > /usr/share/neutron/api-paste.ini > 2014-08-06 17:40:48.202 22869 TRACE neutron.common.config > 2014-08-06 17:40:48.203 22869 ERROR neutron.service [-] Unrecoverable > error: please check log for details. > 2014-08-06 17:40:48.203 22869 TRACE neutron.service Traceback (most recent > call last): > 2014-08-06 17:40:48.203 22869 TRACE neutron.service File > "/usr/lib/python2.6/site-packages/neutron/service.py", line 110, in > serve_wsgi > 2014-08-06 17:40:48.203 22869 TRACE neutron.service service.start() > 2014-08-06 17:40:48.203 22869 TRACE neutron.service File > "/usr/lib/python2.6/site-packages/neutron/service.py", line 75, in start > 2014-08-06 17:40:48.203 22869 TRACE neutron.service self.wsgi_app = > _run_wsgi(self.app_name) > 2014-08-06 17:40:48.203 22869 TRACE neutron.service File > "/usr/lib/python2.6/site-packages/neutron/service.py", line 175, in > _run_wsgi > 2014-08-06 17:40:48.203 22869 TRACE neutron.service app = > config.load_paste_app(app_name) > 2014-08-06 17:40:48.203 22869 TRACE neutron.service File > "/usr/lib/python2.6/site-packages/neutron/common/config.py", line 177, in > load_paste_app > 2014-08-06 17:40:48.203 22869 TRACE neutron.service raise > RuntimeError(msg) > 2014-08-06 17:40:48.203 22869 TRACE neutron.service RuntimeError: Unable > to load quantum from configuration file /usr/share/neutron/api-paste.ini. > 2014-08-06 17:40:48.203 22869 TRACE neutron.service > > I googled for these errors and found > http://lists.openstack.org/pipermail/openstack/2013-November/003464.html > as the only similar (to my eyes) result. 
However the answer there was that > python-keystoneclient wasn't installed. I have checked my controller and > network nodes and they both have it installed. > > Does anyone have any idea what's going on? I'm not doing anything crazy > config-wise, just following the Icehouse install guide. > > Thanks! > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at redhat.com Thu Aug 7 00:21:55 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 6 Aug 2014 20:21:55 -0400 Subject: [Rdo-list] neutron server failing to start In-Reply-To: References: Message-ID: <20140807002155.GC21765@redhat.com> On Wed, Aug 06, 2014 at 06:11:22PM -0400, Brandon Sawyers wrote: > [req-092fa0ad-4c03-4e68-8817-6f6757509bd3 None] No module named rabbit What is "rpc_backend" in /etc/nova/nova.conf? There is no Python module named "rabbit"; support for rabbitmq comes from the "kombu" module, so rpc_backend should look something like: rpc_backend=nova.openstack.common.rpc.impl_kombu -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From brandor5 at gmail.com Thu Aug 7 01:24:56 2014 From: brandor5 at gmail.com (Brandon Sawyers) Date: Wed, 6 Aug 2014 21:24:56 -0400 Subject: [Rdo-list] neutron server failing to start In-Reply-To: <20140807002155.GC21765@redhat.com> References: <20140807002155.GC21765@redhat.com> Message-ID: Lars, This was the problem! Thanks for the pointer. It turns out that I had all of my configs set like so: `rpc_backend = rabbit` My brain must have just zoned out when I set that part up! Thanks to everyone who chimed in. Much appreciated.
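In config form, the fix Lars describes above looks something like the following for an Icehouse Neutron node. This is only a sketch: the hostname and password are placeholders, and the long kombu driver path is the pre-oslo.messaging form that Icehouse-era Neutron still expects.

```ini
# /etc/neutron/neutron.conf (Icehouse): Neutron had not yet migrated to
# oslo.messaging, so the short "rabbit" alias cannot be imported and the
# full kombu driver path is required.
[DEFAULT]
rpc_backend = neutron.openstack.common.rpc.impl_kombu
# Placeholder broker settings; adjust to your deployment.
rabbit_host = controller
rabbit_password = RABBIT_PASS
```

Nova on Icehouse already uses oslo.messaging, where the short `rpc_backend = rabbit` alias is understood, so the same shorthand that fails for Neutron can be valid in /etc/nova/nova.conf.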
:) On Wed, Aug 6, 2014 at 8:21 PM, Lars Kellogg-Stedman wrote: > On Wed, Aug 06, 2014 at 06:11:22PM -0400, Brandon Sawyers wrote: > > [req-092fa0ad-4c03-4e68-8817-6f6757509bd3 None] No module named rabbit > > What is "rpc_backend" in /etc/nova/nova.conf? There is no Python > module named "rabbit"; support for rabbitmq comes from the "kombu" > module, so rpc_backend should look something like: > > rpc_backend=nova.openstack.common.rpc.impl_kombu > > -- > Lars Kellogg-Stedman | larsks @ irc > Cloud Engineering / OpenStack | " " @ twitter > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xzhao at bnl.gov Thu Aug 7 14:09:08 2014 From: xzhao at bnl.gov (Zhao, Xin) Date: Thu, 07 Aug 2014 10:09:08 -0400 Subject: [Rdo-list] neutron server failing to start In-Reply-To: References: <20140807002155.GC21765@redhat.com> Message-ID: <53E38884.3020802@bnl.gov> Actually the doc mentions you can set rpc_backend=rabbit, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Installation_and_Configuration_Guide/sect-Install_a_Compute_Node.html#sect-Configure_the_Compute_Service (section 8.4.5.3). Since the doc is for RHEL7, maybe that works only on RHEL7 packages ... Xin On 8/6/2014 9:24 PM, Brandon Sawyers wrote: > Lars, > > This was the problem! Thanks for the pointer. > > It turns out that I had all of configs set like so: > > `rpc_backend = rabbit` > > My brain must have just zoned out when I set that part up! > > Thanks to everyone who chimed in. Much appreciated. :) > > > On Wed, Aug 6, 2014 at 8:21 PM, Lars Kellogg-Stedman > wrote: > > On Wed, Aug 06, 2014 at 06:11:22PM -0400, Brandon Sawyers wrote: > > [req-092fa0ad-4c03-4e68-8817-6f6757509bd3 None] No module named > rabbit > > What is "rpc_backend" in /etc/nova/nova.conf? 
There is no Python > module named "rabbit"; support for rabbitmq comes from the "kombu" > module, so rpc_backend should look something like: > > rpc_backend=nova.openstack.common.rpc.impl_kombu > > -- > Lars Kellogg-Stedman > | > larsks @ irc > Cloud Engineering / OpenStack | " " @ twitter > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From madko77 at gmail.com Fri Aug 8 08:21:18 2014 From: madko77 at gmail.com (Madko) Date: Fri, 8 Aug 2014 10:21:18 +0200 Subject: [Rdo-list] gpg key for foreman repository Message-ID: Hi, today I just tried to update my openstack rdo setup. yum update fails with the following message: The GPG keys listed for the "Foreman stable" repository are already installed but they are not correct for this package. Check that the correct key URLs are configured for this repository. is it normal?? best regards, -- Edouard Bourguignon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Fri Aug 8 08:46:25 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Fri, 08 Aug 2014 10:46:25 +0200 Subject: [Rdo-list] neutron server failing to start In-Reply-To: <53E38884.3020802@bnl.gov> References: <20140807002155.GC21765@redhat.com> <53E38884.3020802@bnl.gov> Message-ID: <53E48E61.5070201@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 07/08/14 16:09, Zhao, Xin wrote: > > Actually the doc mentions you can set rpc_backend=rabbit, see > https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Installation_and_Configuration_Guide/sect-Install_a_Compute_Node.html#sect-Configure_the_Compute_Service > > (section 8.4.5.3). Since the doc is for RHEL7, maybe that works only on > RHEL7 packages ... 
This is because Nova managed to migrate to oslo.messaging in Icehouse while Neutron didn't, and 'rabbit' is known to oslo.messaging only. So you still need to specify kombu for Neutron till Juno-based release. /Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBCgAGBQJT5I5gAAoJEC5aWaUY1u57H3cH/jTEUy91Hbe31qAzJMwMGvwe yf7C6UeoJeNEeflH1AuQdFyzvf7TmbF3k8kr+hfIZkuqgq7bMAhft0TVbLUwZjrC 7rMfQU5GzmtBd/YCyhum07Z7OBfL4Tl4zZR0cFiVBQ9pu+PE21YTImlr5vYptr79 ydpXBSO430PiMbZ9IJEbV8qzLXKh9Jd7Xq/NZnKQS3fZn0SV5eVpr1Mk9lAvXJ9U O0QEyEM+vqXCxF7Tl+V1KW00Ph7osh92729zfK2hSR/k6Ovdhgu/+z5AifOV2AC0 +rhg+OzPkx6Wqt8vzmJw1X5IHEgTdYN/Uqiy7amY1wzWjNaRpl/XXPtaG9mDsYM= =3Vs3 -----END PGP SIGNATURE----- From elias.moreno.tec at gmail.com Fri Aug 8 22:31:25 2014 From: elias.moreno.tec at gmail.com (=?UTF-8?B?RWzDrWFzIERhdmlk?=) Date: Fri, 8 Aug 2014 18:01:25 -0430 Subject: [Rdo-list] Cinder with glusterfs and split brain question Message-ID: Hello, I would like to understand something that happened to me recently. I have cinder configured with glusterfs and I had a volume that I couldn't attach, after watching logs and the like I noticed that it was a split brain situation but anyway, this is a replicated distributed volume and I assume that in the case one brick failed cinder would be able to work out using brick's replica. Doesn't this work with split brain conditions? I mean, if it wasn't for the split brain issue, cinder would've been able to use the replica when the other brick failed? Thanks in advance! -------------- next part -------------- An HTML attachment was scrubbed... URL: From dram.wang at gmail.com Sat Aug 9 08:03:34 2014 From: dram.wang at gmail.com (Xin Wang) Date: Sat, 9 Aug 2014 16:03:34 +0800 Subject: [Rdo-list] Updated packstack with CentOS 7 support? Message-ID: When installing RDO on CentOS 7, I encountered several problems. 
It seems that a fix was committed to the icehouse branch of packstack a month ago [1]. As this problem is critical to CentOS 7, I'm wondering when the packstack package will be updated. [1] https://github.com/stackforge/packstack/commit/0040c6344790751e2d06678b1b0b4c6f5adc4d37 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbrady at redhat.com Mon Aug 11 10:09:54 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Mon, 11 Aug 2014 11:09:54 +0100 Subject: [Rdo-list] gpg key for foreman repository In-Reply-To: References: Message-ID: <53E89672.1040803@redhat.com> On 08/08/2014 09:21 AM, Madko wrote: > Hi, > > today I just tried to update my openstack rdo setup. yum update fails with the following message: > > The GPG keys listed for the "Foreman stable" repository are already installed but they are not correct for this package. > Check that the correct key URLs are configured for this repository. > > is it normal?? There was a security breach on the foreman.org servers, so they re-signed all their packages as a precaution. The new keys are already available through RDO, and can be updated by just doing: yum install rdo-release thanks, Pádraig. From lars at redhat.com Mon Aug 11 14:41:07 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Mon, 11 Aug 2014 10:41:07 -0400 Subject: [Rdo-list] Missing mongodb dependency in RDO repository Message-ID: <20140811144107.GA5317@redhat.com> Hello all, It looks like the mongodb package has been retired from EPEL 6 (and EPEL 5): https://lists.fedoraproject.org/pipermail/devel/2014-August/201542.html It may have been removed due to the fact that there were open security bugs against the package and no active maintainer, although at the moment I am not able to find documentation to confirm this. For the time being, this means that it is not possible to install ceilometer under CentOS 6.5.
For now, you should set: CONFIG_CEILOMETER_INSTALL=n in your answers file, or run packstack as: packstack --os-ceilometer-install=n ...other options... If we can find a new maintainer for the package it will return to the EPEL repositories. Cheers, -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From Yaniv.Kaul at emc.com Mon Aug 11 20:00:08 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Mon, 11 Aug 2014 16:00:08 -0400 Subject: [Rdo-list] [QA] Tempest - is volume testing actually testing anything with block storage? Message-ID: <648473255763364B961A02AC3BE1060D03C3E114A2@MX19A.corp.emc.com> (Some of you may know me from my previous work at Red Hat - good to see some familiar faces!) I'm working on testing a Cinder driver for OpenStack - IceHouse, Havana, and Juno (regretfully, in that order). I've quickly found out Tempest is not really testing much for real (example: if I don't configure iSCSI, I still pass all tests but 5 of them!), so I'm doing some 'manual' tests. I've discovered a few issues; not sure where to file them upstream (Glance/Cinder/libvirt/etc.), so I'll send them over the mailing list (or upstream, but I was hoping for a low-volume mailing list). 1. Is there any test actually booting or running the VMs from the block storage? I have it configured correctly (I think), but none of the relevant tempest.api.volume* tests actually do much with it. Volumes are created, mapped, snapshotted, removed, etc., but nothing is really written to them... 2. Is there a way to test multi-backend for real? Looks like in Tempest there's a single 'storage_protocol' entry? 3. Is there a way to configure Nova and friends to use the block storage as much as possible? - Can I somehow get rid of the 'base' ?
Don't need it if I can have a base as volumes on the block storage. - I've found out that Glance does not really support Cinder (the 'raise NotImplementedError' under add() gave it away). Unless I misunderstood something, not sure why it's documented everywhere. (I've 'hacked' around it by creating /var/lib/glance/image_block, referring Glance to use it. That in turn is a mount on a multipathed LUN. It works, but Glance is regretfully copying files in 4K chunks for some reason. This is horrible performance-wise). - Conversion to raw is using only the first path in multipath (again, horrible performance-wise. By default Nova is regretfully configured this way too - unless use_multipath_for_image_xfer is set to True) I'm using the forked Tempest from https://github.com/redhat-openstack/tempest , on CentOS 6.5 (could not install IceHouse on CentOS 7, known issue I believe), with IceHouse (hoping to see RDO Juno packages soon!). TIA, Y. From xzhao at bnl.gov Mon Aug 11 21:21:32 2014 From: xzhao at bnl.gov (Zhao, Xin) Date: Mon, 11 Aug 2014 17:21:32 -0400 Subject: [Rdo-list] instance can't connect to neutron Message-ID: <53E933DC.7070709@bnl.gov> Hello, I am setting up a 3-node Icehouse testbed on RHEL 6.5 using RDO; the testbed has one controller node, one network node and one compute node. I use the ML2 plugin, with the OVS mechanism and VLAN type. When I start an instance, it fails.
On the compute node nova.log file, there is the following error messages: 2014-08-11 17:05:44.234 25860 WARNING nova.compute.manager [-] [instance: fdaba1ab-728b-4352-89b1-57f302496a07] Instance failed network setup (attempt 1 of 3) 2014-08-11 17:05:45.240 25860 WARNING nova.compute.manager [-] [instance: fdaba1ab-728b-4352-89b1-57f302496a07] Instance failed network setup (attempt 2 of 3) 2014-08-11 17:05:47.254 25860 ERROR nova.compute.manager [-] Instance failed network setup after 3 attempt(s) 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager Traceback (most recent call last): 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1504, in _allocate_network_async 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager dhcp_options=dhcp_options) 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 259, in allocate_for_instance 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager net_ids) 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 128, in _get_available_networks 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager nets = neutron.list_networks(**search_opts).get('networks', []) 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 111, in with_params 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager ret = self.function(instance, *args, **kwargs) 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 333, in list_networks 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager **_params) 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 1250, in list 2014-08-11 
17:05:47.254 25860 TRACE nova.compute.manager for r in self._pagination(collection, path, **params): 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 1263, in _pagination 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager res = self.get(path, params=params) 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 1236, in get 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager headers=headers, params=params) 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 1228, in retry_request 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager raise exceptions.ConnectionFailed(reason=_("Maximum attempts reached")) 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager ConnectionFailed: Connection to neutron failed: Maximum attempts reached 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager 2014-08-11 17:05:49.069 25860 WARNING nova.virt.disk.vfs.guestfs [req-a534fed9-ebea-4a61-8064-ff3d3db2e6ea f6bd9769708b4fbe971a616143c6959f eea3753cf3ce471ba60c434e7382750c] Failed to close augeas aug_close: do_aug_close: you must call 'aug-init' first to initialize Augeas 2014-08-11 17:05:49.222 25860 ERROR nova.compute.manager [req-a534fed9-ebea-4a61-8064-ff3d3db2e6ea f6bd9769708b4fbe971a616143c6959f eea3753cf3ce471ba60c434e7382750c] [instance: fdaba1ab-728b-4352-89b1-57f302496a07] Instance failed to spawn On the controller node and network node, I don't see much errors from the neutron services log files. I can connect to the (standalone) DB from the network node, using the username/password inside the neutron.conf file. 
Here are the relevant RPMs on the compute node: openstack-utils-2014.1-3.el6.noarch openstack-neutron-openvswitch-2014.1.1-8.el6.noarch openstack-neutron-ml2-2014.1.1-8.el6.noarch openstack-nova-compute-2014.1.1-3.el6.noarch openstack-neutron-2014.1.1-8.el6.noarch openstack-nova-common-2014.1.1-3.el6.noarch openstack-selinux-0.1.3-2.el6ost.noarch python-neutronclient-2.3.4-1.el6.noarch python-neutron-2014.1.1-8.el6.noarch Any idea what went wrong? Thanks a lot, Xin From pbrady at redhat.com Tue Aug 12 12:00:49 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Tue, 12 Aug 2014 13:00:49 +0100 Subject: [Rdo-list] Missing mongodb dependency in RDO repository In-Reply-To: <20140811144107.GA5317@redhat.com> References: <20140811144107.GA5317@redhat.com> Message-ID: <53EA01F1.6000600@redhat.com> On 08/11/2014 03:41 PM, Lars Kellogg-Stedman wrote: > Hello all, > > It looks like the mongodb package has been retired from EPEL 6 (and > EPEL 5): > > https://lists.fedoraproject.org/pipermail/devel/2014-August/201542.html > > It may have been removed due to the fact that there were open > security bugs against the package and no active maintainer, although > at the moment I am not able to find documentation to confirm this. > > For the time being, this means that it is not possible to install > ceilometer under CentOS 6.5. For now, you should set: > > CONFIG_CEILOMETER_INSTALL=n > > In your answers file, or run packstack as: > > packstack --os-ceilometer-install=n ...other options... > > If we can find a new maintainer for the package it will return to the > EPEL repositories. mongodb was removed in error from epel. This is being reinstated, though will take a little time due to manual steps required. In the meantime I've temporarily added mongodb to the epel6 RDO repos to avoid this issue. epel7 was not impacted. thanks, Pádraig.
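The ceilometer workaround quoted in this thread boils down to a one-line change in the packstack answers file. A sketch; the answers file is assumed to have been generated earlier by packstack, and the surrounding options are left unchanged:

```ini
# packstack answers file: skip ceilometer (and with it the mongodb
# dependency) until the package returns to EPEL 6.
CONFIG_CEILOMETER_INSTALL=n
```

Equivalently, pass --os-ceilometer-install=n on the packstack command line, as Lars notes above.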
From kchamart at redhat.com Tue Aug 12 16:21:20 2014
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Tue, 12 Aug 2014 21:51:20 +0530
Subject: [Rdo-list] Missing mongodb dependency in RDO repository
In-Reply-To: <53EA01F1.6000600@redhat.com>
References: <20140811144107.GA5317@redhat.com> <53EA01F1.6000600@redhat.com>
Message-ID: <20140812162120.GE32105@tesla.redhat.com>

On Tue, Aug 12, 2014 at 01:00:49PM +0100, Pádraig Brady wrote:
> On 08/11/2014 03:41 PM, Lars Kellogg-Stedman wrote:
> > Hello all,
> >
> > It looks like the mongodb package has been retired from EPEL 6 (and
> > EPEL 5):
> >
> > https://lists.fedoraproject.org/pipermail/devel/2014-August/201542.html
> >
> > It may have been removed due to the fact that there were open
> > security bugs against the package and no active maintainer, although
> > at the moment I am not able to find documentation to confirm this.

Yeah. brandor5 notified us of this on #rdo; here's a relevant Fedora hosted ticket (although it doesn't explicitly spell out mongodb -- probably it ended up being a dependency of a dependency):

https://fedorahosted.org/rel-eng/ticket/5963

(If you've already seen this, disregard me. :-) )

> > For the time being, this means that it is not possible to install
> > ceilometer under CentOS 6.5. For now, you should set:
> >
> > CONFIG_CEILOMETER_INSTALL=n
> >
> > In your answers file, or run packstack as:
> >
> > packstack --os-ceilometer-install=n ...other options...
> >
> > If we can find a new maintainer for the package it will return to the
> > EPEL repositories.
>
> mongodb was removed in error from epel.
>
> This is being reinstated, though will take a little time
> due to manual steps required.
Yep, it seems to have taken effect, looking here: "Fedora EPEL 6 Approved" https://admin.fedoraproject.org/pkgdb/package/mongodb/ -- /kashyap From kchamart at redhat.com Tue Aug 12 16:47:19 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 12 Aug 2014 22:17:19 +0530 Subject: [Rdo-list] [QA] Tempest - is volume testing actually testing anything with block storage? In-Reply-To: <648473255763364B961A02AC3BE1060D03C3E114A2@MX19A.corp.emc.com> References: <648473255763364B961A02AC3BE1060D03C3E114A2@MX19A.corp.emc.com> Message-ID: <20140812164719.GF32105@tesla.redhat.com> On Mon, Aug 11, 2014 at 04:00:08PM -0400, Kaul, Yaniv wrote: [Can you convince your mailer to wrap very long lines?] > (Some of you may know me from my previous work at Red Hat - good to > see some familiar faces!) Hi Yaniv, (No direct answers to the Cinder/Tempest woes you present below, that I snipped.) [. . .] > (hoping to see RDO Juno packages soon!). If you have any test environments based on Fedora, it already has Juno Milestone-2 packages, for testing only. $ yum update openstack-$component --enablerepo= You should see package versions like that, which indicate Juno M-2 packages: openstack-nova-2014.2-0.1.b2.fc22 openstack-cinder-2014.2-0.1.b2.fc22 (Most likely you're looking for CentOS-7 packages, which are in the works from what I hear.) -- /kashyap From kchamart at redhat.com Tue Aug 12 17:02:52 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 12 Aug 2014 22:32:52 +0530 Subject: [Rdo-list] [QA] Tempest - is volume testing actually testing anything with block storage? In-Reply-To: <20140812164719.GF32105@tesla.redhat.com> References: <648473255763364B961A02AC3BE1060D03C3E114A2@MX19A.corp.emc.com> <20140812164719.GF32105@tesla.redhat.com> Message-ID: <20140812170252.GG32105@tesla.redhat.com> On Tue, Aug 12, 2014 at 10:17:19PM +0530, Kashyap Chamarthy wrote: > On Mon, Aug 11, 2014 at 04:00:08PM -0400, Kaul, Yaniv wrote: [. . .] 
> If you have any test environments based on Fedora, it already has Juno > Milestone-2 packages, for testing only. > > $ yum update openstack-$component --enablerepo= Sorry, typo, that was supposed to read: $ yum update openstack-$component --enablerepo=rawhide > You should see package versions like that, which indicate Juno M-2 > packages: > > openstack-nova-2014.2-0.1.b2.fc22 > openstack-cinder-2014.2-0.1.b2.fc22 > > (Most likely you're looking for CentOS-7 packages, which are in the > works from what I hear.) -- /kashyap From rich.minton at lmco.com Tue Aug 12 19:33:56 2014 From: rich.minton at lmco.com (Minton, Rich) Date: Tue, 12 Aug 2014 19:33:56 +0000 Subject: [Rdo-list] Cinder Problem with Circular directory structure. Message-ID: I created an Icehouse cluster that uses Cinder for object storage (using the NFS driver). I use an Isilon NL cluster and NFS mount to that. I'm getting these errors concerning "Circular Directory Structure" and I can't create volumes because of it. From the commands it looks like "snapshots" are supposed to be ignored but apparently they are not. Any suggestions would be greatly appreciated. Thank you, Richard Text from /var/log/cinder/volume.log: 2014-08-12 15:24:12.035 9485 AUDIT cinder.service [-] Starting cinder-volume node (version 2014.1.1) 2014-08-12 15:24:12.036 9485 INFO cinder.volume.manager [req-36376eb6-d919-4451-9c83-a2bae72c85c5 - - - - -] Starting volume driver NfsDriver (1.1.0) 2014-08-12 15:24:12.329 9485 INFO cinder.volume.manager [req-36376eb6-d919-4451-9c83-a2bae72c85c5 - - - - -] Updating volume status 2014-08-12 15:24:12.334 9485 INFO cinder.brick.remotefs.remotefs [req-36376eb6-d919-4451-9c83-a2bae72c85c5 - - - - -] Already mounted: /var/lib/cinder/mnt/ade98462e07fef2a453d3328ce54ac03 2014-08-12 15:24:12.428 9485 ERROR cinder.openstack.common.threadgroup [-] Unexpected error while running command. 
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf du -sb --apparent-size --exclude *snapshot* /var/lib/cinder/mnt/ade98462e07fef2a453d3328ce54ac03 Exit code: 1 Stdout: '26\t/var/lib/cinder/mnt/ade98462e07fef2a453d3328ce54ac03\n' Stderr: "/usr/bin/du: WARNING: Circular directory structure.\nThis almost certainly means that you have a corrupted file system.\nNOTIFY YOUR SYSTEM MANAGER.\nThe following directory is part of the cycle:\n `/var/lib/cinder/mnt/ade98462e07fef2a453d3328ce54ac03/.snapshot'\n\n" 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup Traceback (most recent call last): 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/cinder/openstack/common/threadgroup.py", line 125, in wait 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup x.wait() 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/cinder/openstack/common/threadgroup.py", line 47, in wait 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup return self.thread.wait() 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 168, in wait 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup return self._exit_event.wait() 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/eventlet/event.py", line 116, in wait 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup return hubs.get_hub().switch() 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 187, in switch 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup return self.greenlet.switch() 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File 
"/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 194, in main 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup result = function(*args, **kwargs) 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/cinder/openstack/common/service.py", line 486, in run_service 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup service.start() 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/cinder/service.py", line 103, in start 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup self.manager.init_host() 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 308, in init_host 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup self.publish_service_capabilities(ctxt) 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 1106, in publish_service_capabilities 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup self._report_driver_status(context) 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 1095, in _report_driver_status 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup volume_stats = self.driver.get_volume_stats(refresh=True) 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/nfs.py", line 340, in get_volume_stats 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup self._update_volume_stats() 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/nfs.py", line 359, in _update_volume_stats 
2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup capacity, free, used = self._get_capacity_info(share) 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/nfs.py", line 567, in _get_capacity_info 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup '*snapshot*', mount_point, run_as_root=True) 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 136, in execute 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup return processutils.execute(*cmd, **kwargs) 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py", line 173, in execute 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup cmd=' '.join(cmd)) 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup ProcessExecutionError: Unexpected error while running command. 
2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf du -sb --apparent-size --exclude *snapshot* /var/lib/cinder/mnt/ade98462e07fef2a453d3328ce54ac03 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup Exit code: 1 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup Stdout: '26\t/var/lib/cinder/mnt/ade98462e07fef2a453d3328ce54ac03\n' 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup Stderr: "/usr/bin/du: WARNING: Circular directory structure.\nThis almost certainly means that you have a corrupted file system.\nNOTIFY YOUR SYSTEM MANAGER.\nThe following directory is part of the cycle:\n `/var/lib/cinder/mnt/ade98462e07fef2a453d3328ce54ac03/.snapshot'\n\n" 2014-08-12 15:24:12.428 9485 TRACE cinder.openstack.common.threadgroup 2014-08-12 15:24:12.439 2662 INFO cinder.openstack.common.service [-] Child 9485 exited with status 0 2014-08-12 15:24:12.439 2662 INFO cinder.openstack.common.service [-] Forking too fast, sleeping Richard Minton Lockheed Martin - D&IS LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Thu Aug 14 12:45:07 2014 From: whayutin at redhat.com (whayutin) Date: Thu, 14 Aug 2014 08:45:07 -0400 Subject: [Rdo-list] FYI.. Packstack on Fedora20: [heat-dbsync]/returns: ImportError: No module named M2Crypto Message-ID: <1408020307.3028.3.camel@localhost.localdomain> https://bugzilla.redhat.com/show_bug.cgi?id=1128301 Thanks From whayutin at redhat.com Thu Aug 14 18:16:54 2014 From: whayutin at redhat.com (whayutin) Date: Thu, 14 Aug 2014 14:16:54 -0400 Subject: [Rdo-list] Execution of '/usr/sbin/usermod -G ceilometer, nobody, nova ceilometer' returned 6: usermod: group 'nova' does not exist Message-ID: <1408040214.3028.5.camel@localhost.localdomain> FYI.. 
Issue now exists in RDO https://bugzilla.redhat.com/show_bug.cgi?id=1055661 From Yaniv.Kaul at emc.com Mon Aug 18 14:12:45 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Mon, 18 Aug 2014 10:12:45 -0400 Subject: [Rdo-list] [QA] Tempest - is volume testing actually testing anything with block storage? In-Reply-To: <20140812164719.GF32105@tesla.redhat.com> References: <648473255763364B961A02AC3BE1060D03C3E114A2@MX19A.corp.emc.com> <20140812164719.GF32105@tesla.redhat.com> Message-ID: <648473255763364B961A02AC3BE1060D03C3E11ACD@MX19A.corp.emc.com> > -----Original Message----- > From: Kashyap Chamarthy [mailto:kchamart at redhat.com] > Sent: Tuesday, August 12, 2014 7:47 PM > To: Kaul, Yaniv > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] [QA] Tempest - is volume testing actually testing anything > with block storage? > > On Mon, Aug 11, 2014 at 04:00:08PM -0400, Kaul, Yaniv wrote: > > [Can you convince your mailer to wrap very long lines?] It's not the mailer, it's me. > > > (Some of you may know me from my previous work at Red Hat - good to > > see some familiar faces!) > > Hi Yaniv, > > (No direct answers to the Cinder/Tempest woes you present below, that I > snipped.) > > [. . .] > > > (hoping to see RDO Juno packages soon!). > > If you have any test environments based on Fedora, it already has Juno > Milestone-2 packages, for testing only. I really don't like to mix running targets (Fedora and OpenStack) while testing. - On my F19, I could not find anything that was not 2013.1. - I failed to upgrade it to F20. 'fedup' failed after reboot to upgrade it. I'm quite fed up with it. - I could not find packstack @ https://repos.fedorapeople.org/repos/openstack/openstack-trunk/fedora/ - perhaps it's not the right location. So, no go. I'll wait. Patience is a virtue. Or I'll move to other distributions, we'll see. Thanks, Y. 
> > $ yum update openstack-$component --enablerepo= > > You should see package versions like that, which indicate Juno M-2 > packages: > > openstack-nova-2014.2-0.1.b2.fc22 > openstack-cinder-2014.2-0.1.b2.fc22 > > (Most likely you're looking for CentOS-7 packages, which are in the works from > what I hear.) > > -- > /kashyap From rdo-info at redhat.com Mon Aug 18 18:46:23 2014 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 18 Aug 2014 18:46:23 +0000 Subject: [Rdo-list] [RDO] Blog Roundup, August 11-17, 2014 Message-ID: <00000147ea710f0d-ae6fba93-1f4f-499a-b497-5b185b99c38c-000000@email.amazonses.com> rbowen started a discussion. Blog Roundup, August 11-17, 2014 --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/978/blog-roundup-august-11-17-2014 Have a great day! From rbowen at redhat.com Tue Aug 19 13:08:24 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 19 Aug 2014 09:08:24 -0400 Subject: [Rdo-list] Open CentOS 7 issues Message-ID: <53F34C48.3000306@redhat.com> What's the status of getting the fix for open CentOS7 issues in RDO? https://bugzilla.redhat.com/show_bug.cgi?id=1117035 has been (as far as I understand) fixed upstream for a couple of weeks. And the other stuff (enumerated in Gael's comments at https://bugzilla.redhat.com/show_bug.cgi?id=1117035#c4 ) is in various states of readiness. Is there a chance of getting something pushed out soonish that addresses some or all of these issues? -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From kchamart at redhat.com Tue Aug 19 18:59:14 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 20 Aug 2014 00:29:14 +0530 Subject: [Rdo-list] [QA] Tempest - is volume testing actually testing anything with block storage? 
In-Reply-To: <648473255763364B961A02AC3BE1060D03C3E11ACD@MX19A.corp.emc.com>
References: <648473255763364B961A02AC3BE1060D03C3E114A2@MX19A.corp.emc.com> <20140812164719.GF32105@tesla.redhat.com> <648473255763364B961A02AC3BE1060D03C3E11ACD@MX19A.corp.emc.com>
Message-ID: <20140819185914.GB23413@tesla.redhat.com>

On Mon, Aug 18, 2014 at 10:12:45AM -0400, Kaul, Yaniv wrote:

[. . .]

> I really don't like to mix running targets (Fedora and OpenStack)
> while testing.

Fair enough.

> - On my F19, I could not find anything that was not 2013.1.
> - I failed to upgrade it to F20. 'fedup' failed after reboot to
> upgrade it. I'm quite fed up with it.

Hmm, as an alternative to fedup, I used the below method. FWIW, as I write this, I just updated 3 machines successfully from f19 -> f20 with these instructions:

  $ yum update yum -y; yum clean all; \
      yum --releasever=20 distro-sync --nogpgcheck -y

Followed by:

  $ package-cleanup --problems ; package-cleanup --orphans ; \
      package-cleanup --dupes ; package-cleanup --leaves
  $ reboot

> - I could not find packstack @
> https://repos.fedorapeople.org/repos/openstack/openstack-trunk/fedora/
> - perhaps it's not the right location.

From the above URL, seems like you're looking for the latest packstack for Fedora. If so, they're always built first in Koji, so a consistent way to check (that's what I've been doing):

For f20:

  $ koji latest-build f20 openstack-packstack

Or, for the very latest packstack:

  $ koji latest-build openstack-packstack

To download it (the package N-V-R is obtained from the above step) from the CLI:

  $ koji download-build --arch=noarch \
      openstack-packstack-2014.1.1-0.28.dev1238.fc22

And, to find latest builds for a specific package *across* current releases:

  $ bodhi -L openstack-packstack

Not sure if you're aware: RDO packages for Fedora follow the Fedora distro release schedule, i.e. once N+2 is released (Icehouse), we EOL N (Grizzly).
So, to map Fedora and OpenStack releases:

- Fedora-19 == Grizzly (2013.1) -- is now EOL
- Fedora-20 == Havana (2013.2) -- is currently "supported"
- Fedora-21 == IceHouse (2014.1) -- is currently "supported"

If you hit any issues, please post here, we'll see what we can do.

--
/kashyap

From xzhao at bnl.gov Wed Aug 20 16:04:44 2014
From: xzhao at bnl.gov (Xin Zhao)
Date: Wed, 20 Aug 2014 09:04:44 -0700
Subject: [Rdo-list] instance can't connect to neutron
In-Reply-To: <53E933DC.7070709@bnl.gov>
References: <53E933DC.7070709@bnl.gov>
Message-ID: <53F4C71C.4020406@bnl.gov>

Revisiting this issue, I see there is a bug report https://bugs.launchpad.net/nova/+bug/1251784, which is supposed to be fixed. Is the fix not in the RDO rpms, or do I misconfigure anything? Any suggestions where to debug it?

Thanks,
Xin

On 8/11/2014 2:21 PM, Zhao, Xin wrote:
> Hello,
>
> I am setting up a 3-node icehouse testbed on RHEL6.5, using RDO, the
> testbed has one controller node, one network node and one compute
> node. I use ML2 plugin, with OVS
> mechanism and VLAN type.
>
> When I start an instance, it fails.
On the compute node nova.log file, > there is the following error messages: > > > 2014-08-11 17:05:44.234 25860 WARNING nova.compute.manager [-] > [instance: fdaba1ab-728b-4352-89b1-57f302496a07] Instance failed > network setup (attempt 1 of 3) > 2014-08-11 17:05:45.240 25860 WARNING nova.compute.manager [-] > [instance: fdaba1ab-728b-4352-89b1-57f302496a07] Instance failed > network setup (attempt 2 of 3) > 2014-08-11 17:05:47.254 25860 ERROR nova.compute.manager [-] Instance > failed network setup after 3 attempt(s) > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager Traceback > (most recent call last): > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File > "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1504, > in _allocate_network_async > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager > dhcp_options=dhcp_options) > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File > "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line > 259, in allocate_for_instance > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager net_ids) > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File > "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line > 128, in _get_available_networks > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager nets = > neutron.list_networks(**search_opts).get('networks', []) > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File > "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line > 111, in with_params > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager ret = > self.function(instance, *args, **kwargs) > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File > "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line > 333, in list_networks > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager **_params) > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File > 
"/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line > 1250, in list > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager for r in > self._pagination(collection, path, **params): > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File > "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line > 1263, in _pagination > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager res = > self.get(path, params=params) > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File > "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line > 1236, in get > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager > headers=headers, params=params) > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File > "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line > 1228, in retry_request > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager raise > exceptions.ConnectionFailed(reason=_("Maximum attempts reached")) > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager > ConnectionFailed: Connection to neutron failed: Maximum attempts reached > 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager > 2014-08-11 17:05:49.069 25860 WARNING nova.virt.disk.vfs.guestfs > [req-a534fed9-ebea-4a61-8064-ff3d3db2e6ea > f6bd9769708b4fbe971a616143c6959f eea3753cf3ce471ba60c434e7382750c] > Failed to close augeas aug_close: do_aug_close: you must call > 'aug-init' first to initialize Augeas > 2014-08-11 17:05:49.222 25860 ERROR nova.compute.manager > [req-a534fed9-ebea-4a61-8064-ff3d3db2e6ea > f6bd9769708b4fbe971a616143c6959f eea3753cf3ce471ba60c434e7382750c] > [instance: fdaba1ab-728b-4352-89b1-57f302496a07] Instance failed to spawn > > On the controller node and network node, I don't see much errors from > the neutron services log files. I can connect to the (standalone) DB > from the network node, using the username/password inside the > neutron.conf file. 
> > Here are the relevant rpms on the compute node: > > openstack-utils-2014.1-3.el6.noarch > openstack-neutron-openvswitch-2014.1.1-8.el6.noarch > openstack-neutron-ml2-2014.1.1-8.el6.noarch > openstack-nova-compute-2014.1.1-3.el6.noarch > openstack-neutron-2014.1.1-8.el6.noarch > openstack-nova-common-2014.1.1-3.el6.noarch > openstack-selinux-0.1.3-2.el6ost.noarch > python-neutronclient-2.3.4-1.el6.noarch > python-neutron-2014.1.1-8.el6.noarch > > Any idea what went wrong? > > Thanks a lot, > Xin > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From thomas.oulevey at cern.ch Thu Aug 21 09:20:20 2014 From: thomas.oulevey at cern.ch (Thomas Oulevey) Date: Thu, 21 Aug 2014 11:20:20 +0200 Subject: [Rdo-list] cloud-init and CentOS 7 Message-ID: <53F5B9D4.6020905@cern.ch> Hi Folks, FYI, I sent upstream a patch for cloud-init to detect correctly the el7 clones (enable systemd). https://bugs.launchpad.net/cloud-init/+bug/1341508 Hopefully it can make it to EPEL7, soon. cheers, -- Thomas. From gfidente at redhat.com Thu Aug 21 09:20:51 2014 From: gfidente at redhat.com (Giulio Fidente) Date: Thu, 21 Aug 2014 11:20:51 +0200 Subject: [Rdo-list] [QA] Tempest - is volume testing actually testing anything with block storage? In-Reply-To: <648473255763364B961A02AC3BE1060D03C3E114A2@MX19A.corp.emc.com> References: <648473255763364B961A02AC3BE1060D03C3E114A2@MX19A.corp.emc.com> Message-ID: <53F5B9F3.6010206@redhat.com> On 08/11/2014 10:00 PM, Kaul, Yaniv wrote: > (Some of you may know me from my previous work at Red Hat - good to see some familiar faces!) yep, hi there :) > I'm working on testing a Cinder driver for OpenStack - IceHouse, Havana, and Juno (regretfully, in that order). > I've quickly found out Tempest is not really testing much for real (example: if I don't configure iSCSI, I still pass all tests but 5 of them!), so I'm doing some 'manual' tests. 
Indeed, only Nova nodes connect to the volumes via iSCSI (and only for certain Cinder backends), not the Cinder nodes, so the majority of the tests for Cinder succeed regardless of the iSCSI setup.

> 1. Is there any test actually booting or running the VMs from the
> block storage? I have it configured correctly (I think), but none of
> the relevant tempest.api.volume* tests actually do much with it.
> Volumes are created, mapped, snapshotted, removed, etc, but nothing
> is really written to it...

tempest.api.volume tests exercise the Cinder API only. There are tests stressing the Nova volume API (eg. attach) in:

tempest.api.compute.volumes

Some of these would require a proper iSCSI setup, for instance.

There are tests doing actual writes into the volumes too (eg. boot from volume, snapshot, clone, reboot) in:

tempest/scenario/test_stamp_pattern.py
tempest/scenario/test_volume_boot_pattern.py
tempest/scenario/test_snapshot_pattern.py

Lastly, with Tempest you can also run any of the API tests (eg. tempest.api.*) in a loop with some parallelization; this is handy for testing a driver, as operations may hang if the backend is not behaving as expected.

> 2. Is there a way to test multi-backend for real? Looks like in
> Tempest there's a single 'storage_protocol' entry?

Not that I know of.

--
Giulio Fidente
GPG KEY: 08D733BA

From ihrachys at redhat.com Thu Aug 21 11:22:57 2014
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Thu, 21 Aug 2014 13:22:57 +0200
Subject: [Rdo-list] instance can't connect to neutron
In-Reply-To: <53F4C71C.4020406@bnl.gov>
References: <53E933DC.7070709@bnl.gov> <53F4C71C.4020406@bnl.gov>
Message-ID: <53F5D691.5070901@redhat.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

That bug is unrelated here: the error message is generic whenever Nova fails to connect to Neutron. Have you checked that your Controller is able to reach the Network node?
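One quick way to check reachability is a plain TCP probe against neutron-server, whose API listens on port 9696 by default. A minimal sketch, assuming a stock port and placeholder host names (neither is taken from Xin's actual configuration):

```python
# Sketch: TCP reachability check toward neutron-server (default API
# port 9696). Host names are placeholders; substitute your nodes.
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return True
    except (socket.error, socket.timeout):
        return False

# e.g. can_connect('controller.example.org', 9696)
```

A success here only proves the socket opens; auth or endpoint misconfiguration can still produce the same "Maximum attempts reached" trace.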
On 20/08/14 18:04, Xin Zhao wrote: > Revisiting this issue, I see there is a bug report > https://bugs.launchpad.net/nova/+bug/1251784, which is supposed to > be fixed. Is the fix not in the RDO rpms, or do I misconfigure > anything? Any suggestions where to debug it? > > Thanks, Xin > > On 8/11/2014 2:21 PM, Zhao, Xin wrote: >> Hello, >> >> I am setting up a 3-node icehouse testbed on RHEL6.5, using RDO, >> the testbed has one controller node, one network node and one >> compute node. I use ML2 plugin, with OVS mechanism and VLAN >> type. >> >> When I start an instance, it fails. On the compute node nova.log >> file, there is the following error messages: >> >> >> 2014-08-11 17:05:44.234 25860 WARNING nova.compute.manager [-] >> [instance: fdaba1ab-728b-4352-89b1-57f302496a07] Instance failed >> network setup (attempt 1 of 3) 2014-08-11 17:05:45.240 25860 >> WARNING nova.compute.manager [-] [instance: >> fdaba1ab-728b-4352-89b1-57f302496a07] Instance failed network >> setup (attempt 2 of 3) 2014-08-11 17:05:47.254 25860 ERROR >> nova.compute.manager [-] Instance failed network setup after 3 >> attempt(s) 2014-08-11 17:05:47.254 25860 TRACE >> nova.compute.manager Traceback (most recent call last): >> 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File >> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line >> 1504, in _allocate_network_async 2014-08-11 17:05:47.254 25860 >> TRACE nova.compute.manager dhcp_options=dhcp_options) 2014-08-11 >> 17:05:47.254 25860 TRACE nova.compute.manager File >> "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", >> line 259, in allocate_for_instance 2014-08-11 17:05:47.254 25860 >> TRACE nova.compute.manager net_ids) 2014-08-11 17:05:47.254 25860 >> TRACE nova.compute.manager File >> "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", >> line 128, in _get_available_networks 2014-08-11 17:05:47.254 >> 25860 TRACE nova.compute.manager nets = >> 
neutron.list_networks(**search_opts).get('networks', []) >> 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File >> "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", >> line 111, in with_params 2014-08-11 17:05:47.254 25860 TRACE >> nova.compute.manager ret = self.function(instance, *args, >> **kwargs) 2014-08-11 17:05:47.254 25860 TRACE >> nova.compute.manager File >> "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", >> line 333, in list_networks 2014-08-11 17:05:47.254 25860 TRACE >> nova.compute.manager **_params) 2014-08-11 17:05:47.254 25860 >> TRACE nova.compute.manager File >> "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", >> line 1250, in list 2014-08-11 17:05:47.254 25860 TRACE >> nova.compute.manager for r in self._pagination(collection, >> path, **params): 2014-08-11 17:05:47.254 25860 TRACE >> nova.compute.manager File >> "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", >> line 1263, in _pagination 2014-08-11 17:05:47.254 25860 TRACE >> nova.compute.manager res = self.get(path, params=params) >> 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File >> "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", >> line 1236, in get 2014-08-11 17:05:47.254 25860 TRACE >> nova.compute.manager headers=headers, params=params) 2014-08-11 >> 17:05:47.254 25860 TRACE nova.compute.manager File >> "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", >> line 1228, in retry_request 2014-08-11 17:05:47.254 25860 TRACE >> nova.compute.manager raise >> exceptions.ConnectionFailed(reason=_("Maximum attempts >> reached")) 2014-08-11 17:05:47.254 25860 TRACE >> nova.compute.manager ConnectionFailed: Connection to neutron >> failed: Maximum attempts reached 2014-08-11 17:05:47.254 25860 >> TRACE nova.compute.manager 2014-08-11 17:05:49.069 25860 WARNING >> nova.virt.disk.vfs.guestfs >> [req-a534fed9-ebea-4a61-8064-ff3d3db2e6ea >> f6bd9769708b4fbe971a616143c6959f >> 
eea3753cf3ce471ba60c434e7382750c] Failed to close augeas >> aug_close: do_aug_close: you must call 'aug-init' first to >> initialize Augeas 2014-08-11 17:05:49.222 25860 ERROR >> nova.compute.manager [req-a534fed9-ebea-4a61-8064-ff3d3db2e6ea >> f6bd9769708b4fbe971a616143c6959f >> eea3753cf3ce471ba60c434e7382750c] [instance: >> fdaba1ab-728b-4352-89b1-57f302496a07] Instance failed to spawn >> >> On the controller node and network node, I don't see much errors >> from the neutron services log files. I can connect to the >> (standalone) DB from the network node, using the >> username/password inside the neutron.conf file. >> >> Here are the relevant rpms on the compute node: >> >> openstack-utils-2014.1-3.el6.noarch >> openstack-neutron-openvswitch-2014.1.1-8.el6.noarch >> openstack-neutron-ml2-2014.1.1-8.el6.noarch >> openstack-nova-compute-2014.1.1-3.el6.noarch >> openstack-neutron-2014.1.1-8.el6.noarch >> openstack-nova-common-2014.1.1-3.el6.noarch >> openstack-selinux-0.1.3-2.el6ost.noarch >> python-neutronclient-2.3.4-1.el6.noarch >> python-neutron-2014.1.1-8.el6.noarch >> >> Any idea what went wrong? 
>> >> Thanks a lot, Xin >> >> >> _______________________________________________ Rdo-list mailing >> list Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list > > _______________________________________________ Rdo-list mailing > list Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJT9daRAAoJEC5aWaUY1u5720IIAMPAuKx9V6qFQjpu6gebeLQe x8epuN1GLUcO/i3PzH3sCOJfbR4jcD1KkqMlNsHDXSQN4zSb6XU78JShLok0iAeC Dj0dJR0eGcuqd4SWJH7toXayM4t3tcBFLV8dSICyYCBQSxmM1C8f/iNVd7cjNTsK Jtm2qr6Pa3lDF8q0/rSqr428a1AzJ1Jp/kDJbe4yATCNdtrzAHjia2DB1wPZOrad TxH4z6VGyOYDZJNoE965Gxu6Bdm79fJloUkIygWEiLMSS4CJJ7MJrhs0OG9TwCJt TcQm9ds1EF4wYEK+m4MCR8UL49PnzLePqoE7odRFNcB38HURH2opkofNiO+qdlE= =uVLT -----END PGP SIGNATURE----- From xzhao at bnl.gov Thu Aug 21 21:52:36 2014 From: xzhao at bnl.gov (Xin Zhao) Date: Thu, 21 Aug 2014 14:52:36 -0700 Subject: [Rdo-list] instance can't connect to neutron In-Reply-To: <53F5D691.5070901@redhat.com> References: <53E933DC.7070709@bnl.gov> <53F4C71C.4020406@bnl.gov> <53F5D691.5070901@redhat.com> Message-ID: <53F66A24.90904@bnl.gov> On 8/21/2014 4:22 AM, Ihar Hrachyshka wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > The issue is irrelevant, the error message is generic when Nova fails > to connect to Neutron. Have you checked that your Controller is able > to reach Network node? Hi Ihar, Which port on the network node is controller supposed to connect to ? I have the neutron-server daemon running on the controller node, the L3/dhcp/OVS-agent daemons running on the network node. Thanks, Xin > > On 20/08/14 18:04, Xin Zhao wrote: >> Revisiting this issue, I see there is a bug report >> https://bugs.launchpad.net/nova/+bug/1251784, which is supposed to >> be fixed. Is the fix not in the RDO rpms, or do I misconfigure >> anything? Any suggestions where to debug it? 
>> >> Thanks, Xin >> >> On 8/11/2014 2:21 PM, Zhao, Xin wrote: >>> Hello, >>> >>> I am setting up a 3-node icehouse testbed on RHEL6.5, using RDO, >>> the testbed has one controller node, one network node and one >>> compute node. I use ML2 plugin, with OVS mechanism and VLAN >>> type. >>> >>> When I start an instance, it fails. On the compute node nova.log >>> file, there is the following error messages: >>> >>> >>> 2014-08-11 17:05:44.234 25860 WARNING nova.compute.manager [-] >>> [instance: fdaba1ab-728b-4352-89b1-57f302496a07] Instance failed >>> network setup (attempt 1 of 3) 2014-08-11 17:05:45.240 25860 >>> WARNING nova.compute.manager [-] [instance: >>> fdaba1ab-728b-4352-89b1-57f302496a07] Instance failed network >>> setup (attempt 2 of 3) 2014-08-11 17:05:47.254 25860 ERROR >>> nova.compute.manager [-] Instance failed network setup after 3 >>> attempt(s) 2014-08-11 17:05:47.254 25860 TRACE >>> nova.compute.manager Traceback (most recent call last): >>> 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File >>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line >>> 1504, in _allocate_network_async 2014-08-11 17:05:47.254 25860 >>> TRACE nova.compute.manager dhcp_options=dhcp_options) 2014-08-11 >>> 17:05:47.254 25860 TRACE nova.compute.manager File >>> "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", >>> line 259, in allocate_for_instance 2014-08-11 17:05:47.254 25860 >>> TRACE nova.compute.manager net_ids) 2014-08-11 17:05:47.254 25860 >>> TRACE nova.compute.manager File >>> "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", >>> line 128, in _get_available_networks 2014-08-11 17:05:47.254 >>> 25860 TRACE nova.compute.manager nets = >>> neutron.list_networks(**search_opts).get('networks', []) >>> 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File >>> "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", >>> line 111, in with_params 2014-08-11 17:05:47.254 25860 TRACE >>> 
nova.compute.manager ret = self.function(instance, *args, >>> **kwargs) 2014-08-11 17:05:47.254 25860 TRACE >>> nova.compute.manager File >>> "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", >>> line 333, in list_networks 2014-08-11 17:05:47.254 25860 TRACE >>> nova.compute.manager **_params) 2014-08-11 17:05:47.254 25860 >>> TRACE nova.compute.manager File >>> "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", >>> line 1250, in list 2014-08-11 17:05:47.254 25860 TRACE >>> nova.compute.manager for r in self._pagination(collection, >>> path, **params): 2014-08-11 17:05:47.254 25860 TRACE >>> nova.compute.manager File >>> "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", >>> line 1263, in _pagination 2014-08-11 17:05:47.254 25860 TRACE >>> nova.compute.manager res = self.get(path, params=params) >>> 2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File >>> "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", >>> line 1236, in get 2014-08-11 17:05:47.254 25860 TRACE >>> nova.compute.manager headers=headers, params=params) 2014-08-11 >>> 17:05:47.254 25860 TRACE nova.compute.manager File >>> "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", >>> line 1228, in retry_request 2014-08-11 17:05:47.254 25860 TRACE >>> nova.compute.manager raise >>> exceptions.ConnectionFailed(reason=_("Maximum attempts >>> reached")) 2014-08-11 17:05:47.254 25860 TRACE >>> nova.compute.manager ConnectionFailed: Connection to neutron >>> failed: Maximum attempts reached 2014-08-11 17:05:47.254 25860 >>> TRACE nova.compute.manager 2014-08-11 17:05:49.069 25860 WARNING >>> nova.virt.disk.vfs.guestfs >>> [req-a534fed9-ebea-4a61-8064-ff3d3db2e6ea >>> f6bd9769708b4fbe971a616143c6959f >>> eea3753cf3ce471ba60c434e7382750c] Failed to close augeas >>> aug_close: do_aug_close: you must call 'aug-init' first to >>> initialize Augeas 2014-08-11 17:05:49.222 25860 ERROR >>> nova.compute.manager 
[req-a534fed9-ebea-4a61-8064-ff3d3db2e6ea >>> f6bd9769708b4fbe971a616143c6959f >>> eea3753cf3ce471ba60c434e7382750c] [instance: >>> fdaba1ab-728b-4352-89b1-57f302496a07] Instance failed to spawn >>> >>> On the controller node and network node, I don't see much errors >>> from the neutron services log files. I can connect to the >>> (standalone) DB from the network node, using the >>> username/password inside the neutron.conf file. >>> >>> Here are the relevant rpms on the compute node: >>> >>> openstack-utils-2014.1-3.el6.noarch >>> openstack-neutron-openvswitch-2014.1.1-8.el6.noarch >>> openstack-neutron-ml2-2014.1.1-8.el6.noarch >>> openstack-nova-compute-2014.1.1-3.el6.noarch >>> openstack-neutron-2014.1.1-8.el6.noarch >>> openstack-nova-common-2014.1.1-3.el6.noarch >>> openstack-selinux-0.1.3-2.el6ost.noarch >>> python-neutronclient-2.3.4-1.el6.noarch >>> python-neutron-2014.1.1-8.el6.noarch >>> >>> Any idea what went wrong? >>> >>> Thanks a lot, Xin >>> >>> >>> _______________________________________________ Rdo-list mailing >>> list Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >> _______________________________________________ Rdo-list mailing >> list Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list > -----BEGIN PGP SIGNATURE----- > Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > > iQEcBAEBCgAGBQJT9daRAAoJEC5aWaUY1u5720IIAMPAuKx9V6qFQjpu6gebeLQe > x8epuN1GLUcO/i3PzH3sCOJfbR4jcD1KkqMlNsHDXSQN4zSb6XU78JShLok0iAeC > Dj0dJR0eGcuqd4SWJH7toXayM4t3tcBFLV8dSICyYCBQSxmM1C8f/iNVd7cjNTsK > Jtm2qr6Pa3lDF8q0/rSqr428a1AzJ1Jp/kDJbe4yATCNdtrzAHjia2DB1wPZOrad > TxH4z6VGyOYDZJNoE965Gxu6Bdm79fJloUkIygWEiLMSS4CJJ7MJrhs0OG9TwCJt > TcQm9ds1EF4wYEK+m4MCR8UL49PnzLePqoE7odRFNcB38HURH2opkofNiO+qdlE= > =uVLT > -----END PGP SIGNATURE----- > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From t10tennn at gmail.com Fri Aug 22 16:36:27 
2014 From: t10tennn at gmail.com (10 minus) Date: Fri, 22 Aug 2014 18:36:27 +0200 Subject: [Rdo-list] icehouse with ML2 : VMs not able to get DHCP on Centos 6.5 Message-ID: Hi, My setup: Controller+Network node -- 2 nics ( internal+vm, external) 2x Compute -- 2 nics (internal+vm, external) I have used packstack to set the environment up. The VMs on the compute node are unable to contact the controller node; tcpdump shows me that the packets never make it to the controller node. On the compute node: --snip-- tcpdump -i br-vm | grep -i dhcp 17:25:52.476521 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:3e:ca:c2 (oui Unknown), length 281 17:27:52.598709 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:3e:ca:c2 (oui Unknown), length 281 --snip-- On the controller node the above packets never arrive. Logs from /var/log/neutron/openvswitch-agent.log on the compute node: --snip-- 2014-08-22 17:17:38.793 29698 INFO neutron.agent.securitygroups_rpc [req-faf30bbb-de0c-4f41-8fcb-cf9f09cfd141 None] Security group member updated [u'292c5a84-5c31-4158-858d-8261a6ea9680'] 2014-08-22 17:18:08.231 29698 WARNING neutron.agent.linux.ovs_lib [-] Found failed openvswitch port: [u'int-br-ex', [u'map', []], -1] 2014-08-22 17:18:08.348 29698 INFO neutron.agent.securitygroups_rpc [-] Preparing filters for devices set([u'739aff99-7472-4e5c-921b-095005830f61']) 2014-08-22 17:18:08.391 29698 INFO neutron.openstack.common.rpc.common [-] Connected to AMQP server on 10.5.0.31:5672 2014-08-22 17:18:09.162 29698 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Port 739aff99-7472-4e5c-921b-095005830f61 updated. 
Details: {u'admin_state_up': True, u'network_id': u'16e331e2-3502-4d72-8a91-8931bb90263c', u'segmentation_id': 100, u'physical_network': u'tvlan', u'device': u'739aff99-7472-4e5c-921b-095005830f61', u'port_id': u'739aff99-7472-4e5c-921b-095005830f61', u'network_type': u'vlan'} 2014-08-22 17:18:09.162 29698 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Assigning 1 as local vlan for net-id=16e331e2-3502-4d72-8a91-8931bb90263c 2014-08-22 17:18:09.639 29698 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Configuration for device 739aff99-7472-4e5c-921b-095005830f61 completed. --snip-- logs for /var/log/neutron/server.log on controller : --snip-- 2014-08-22 17:18:01.996 3131 INFO neutron.wsgi [req-6dbeeb06-b98c-4567-b4c5-1003932ea426 None] (3131) accepted ('10.5.0.31', 58207) 2014-08-22 17:18:02.052 3131 INFO neutron.wsgi [req-53df877f-59bd-48d9-a8c0-ec799ce86677 None] 10.5.0.31 - - [22/Aug/2014 17:18:02] "GET //v2.0/subnets.json HTTP/1.1" 200 1424 0.055183 2014-08-22 17:18:11.554 3131 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 10.5.0.31 2014-08-22 17:18:11.657 3131 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 10.5.0.31 2014-08-22 17:18:11.827 3131 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 10.5.0.31 2014-08-22 17:18:12.048 3131 INFO neutron.notifiers.nova [-] Nova event response: {u'status': u'completed', u'tag': u'739aff99-7472-4e5c-921b-095005830f61', u'name': u'network-vif-plugged', u'server_uuid': u'aaf0838b-d668-457a-b564-b9aa626ea78a', u'code': 200} 2014-08-22 17:18:15.656 3131 INFO neutron.wsgi [-] (3131) accepted ('10.5.0.31', 58217) . . 
2014-08-22 17:18:29.494 3131 INFO neutron.wsgi [req-a8c3197a-9ac8-4721-b2c2-7e120c1b2b68 None] 10.5.0.33 - - [22/Aug/2014 17:18:29] "GET /v2.0/ports.json?network_id=16e331e2-3502-4d72-8a91-8931bb90263c&device_owner=network%3Adhcp HTTP/1.1" 200 941 0.020400 2014-08-22 17:19:30.697 3131 INFO neutron.wsgi [-] (3131) accepted ('10.5.0.33', 33439) 2014-08-22 17:19:30.945 3131 INFO neutron.wsgi [req-8bc82373-aa5b-425f-b258-6a75022ece9f None] (3131) accepted ('10.5.0.33', 33442) 2014-08-22 17:19:30.963 3131 INFO neutron.wsgi [req-fc706978-e642-4057-8fda-9ee53bfddf91 None] 10.5.0.33 - - [22/Aug/2014 17:19:30] "GET /v2.0/subnets.json?id=7667013a-af5f-4171-9797-9dd788fe8461 HTTP/1.1" 200 628 0.017350 2014-08-22 17:19:30.965 3131 INFO neutron.wsgi [req-fc706978-e642-4057-8fda-9ee53bfddf91 None] (3131) accepted ('10.5.0.33', 33443) 2014-08-22 17:19:30.986 3131 INFO neutron.wsgi [req-7f86ccd7-3463-4057-be48-1c4deb475238 None] 10.5.0.33 - - [22/Aug/2014 17:19:30] "GET /v2.0/ports.json?network_id=16e331e2-3502-4d72-8a91-8931bb90263c&device_owner=network%3Adhcp HTTP/1.1" 200 941 0.020030 2014-08-22 17:20:32.204 3131 INFO neutron.wsgi [-] (3131) accepted ('10.5.0.33', 33444) --snip-- my plugin.ini on compute node --snip-- [ml2] type_drivers = vlan tenant_network_types = vlan mechanism_drivers =openvswitch [ml2_type_flat] [ml2_type_vlan] network_vlan_ranges = tvlan:100:110 [ml2_type_gre] [ml2_type_vxlan] [securitygroup] enable_security_group = True firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver [ovs] bridge_mappings = tvlan:br-vm network_vlan_ranges = tvlan:100:110 tenant_network_type = vlan enable_tunneling = False integration_bridge = br-int local_ip = 172.16.0.33 --snip-- If I define a fixed ip address I'm unable to query the router on controller node. 
--snip-- tcpdump -i br-vm | grep 172.16.100.254 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on br-vm, link-type EN10MB (Ethernet), capture size 65535 bytes 18:00:27.637994 ARP, Request who-has 172.16.100.254 tell 172.16.100.5, length 28 18:00:28.638008 ARP, Request who-has 172.16.100.254 tell 172.16.100.5, length 28 18:00:29.640179 ARP, Request who-has 172.16.100.254 tell 172.16.100.5, length 28 18:00:30.638030 ARP, Request who-has 172.16.100.254 tell 172.16.100.5, length 28 18:00:31.638033 ARP, Request who-has 172.16.100.254 tell 172.16.100.5, length 28 18:00:32.640302 ARP, Request who-has 172.16.100.254 tell 172.16.100.5, length 28 18:00:33.638048 ARP, Request who-has 172.16.100.254 tell 172.16.100.5, length 28 18:00:34.638055 ARP, Request who-has 172.16.100.254 tell 172.16.100.5, length 28 --snip-- What baffles me I'm unable to see the vlan info # My neutron config for computing-node neutron agent-show 9fa4620b-27e0-4308-a4ef-0bd29bc813f4 +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | agent_type | Open vSwitch agent | | alive | True | | binary | neutron-openvswitch-agent | | configurations | { | | | "tunnel_types": [], | | | "tunneling_ip": "172.16.0.33", | | | "bridge_mappings": { | | | "tvlan": "br-vm" | | | }, | | | "l2_population": false, | | | "devices": 1 | | | } | | created_at | 2014-08-21 08:59:59 | | description | | | heartbeat_timestamp | 2014-08-22 16:08:10 | | host | cc03.t10.de | | id | 9fa4620b-27e0-4308-a4ef-0bd29bc813f4 | | started_at | 2014-08-22 15:09:40 | | topic | N/A | +---------------------+--------------------------------------+ # Controller config neutron agent-show 8f947289-c8bc-40d6-8ebf-b5a29a5f83bc +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | 
agent_type | Open vSwitch agent | | alive | True | | binary | neutron-openvswitch-agent | | configurations | { | | | "tunnel_types": [], | | | "tunneling_ip": "", | | | "bridge_mappings": { | | | "physnet1": "br-ex", | | | "tvlan": "br-vm" | | | }, | | | "l2_population": false, | | | "devices": 4 | | | } | | created_at | 2014-08-20 15:49:14 | | description | | | heartbeat_timestamp | 2014-08-22 16:21:39 | | host | cc01.t10.de | | id | 8f947289-c8bc-40d6-8ebf-b5a29a5f83bc | | started_at | 2014-08-21 11:26:17 | | topic | N/A | +---------------------+--------------------------------------+ Any pointers to fix the issue? -------------- next part -------------- An HTML attachment was scrubbed... URL: From t10tennn at gmail.com Mon Aug 25 06:38:51 2014 From: t10tennn at gmail.com (10 minus) Date: Mon, 25 Aug 2014 08:38:51 +0200 Subject: [Rdo-list] Icehouse : Foreman + Staypuft on Centos 6.5 Message-ID: Hi, Has anybody got Staypuft to work? I get an error "missing base_hostgroup" when I click on New Deployment. The error is the same regardless of which version I use, from "ruby193-rubygem-staypuft-0.1.2" through "ruby193-rubygem-staypuft-0.1.20". Cheers, -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdo-info at redhat.com Mon Aug 25 16:14:39 2014 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 25 Aug 2014 16:14:39 +0000 Subject: [Rdo-list] [RDO] RDO Blog Roundup, week of August 18 Message-ID: <000001480df2a65e-8f4a4498-d86e-4e24-8b0d-831cb23d0df0-000000@email.amazonses.com> rbowen started a discussion. RDO Blog Roundup, week of August 18 --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/979/rdo-blog-roundup-week-of-august-18 Have a great day! 
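On the ML2/VLAN DHCP thread above: a common cause of DHCP requests showing up on the compute node's br-vm but never reaching the network node is that the nodes disagree on their provider-bridge wiring, or that the physical NIC was never added as a port of the provider bridge. A minimal sketch of the ML2/OVS sections that must agree on every node, using the names from the posted configs (tvlan, br-vm) — this is an illustrative assumption drawn from the thread, not a verified fix:

```ini
# /etc/neutron/plugin.ini (ML2 with the OVS agent) -- same on
# controller/network node and every compute node.
[ml2_type_vlan]
# Identical provider label and VLAN range everywhere.
network_vlan_ranges = tvlan:100:110

[ovs]
# The label (tvlan) must map to an OVS bridge that exists on *this* node.
bridge_mappings = tvlan:br-vm
```

In addition, the physical interface carrying the VLANs must be a port of that bridge on each node (e.g. `ovs-vsctl add-port br-vm eth1` — interface name assumed), and any switch between the nodes must trunk VLANs 100-110; if either is missing, the tagged DHCP broadcasts die on the wire exactly as the tcpdump output shows.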
From rbowen at rcbowen.com Mon Aug 25 19:20:46 2014 From: rbowen at rcbowen.com (Rich Bowen) Date: Mon, 25 Aug 2014 15:20:46 -0400 Subject: [Rdo-list] Deploying with Heat - Hangout - September 5 Message-ID: <53FB8C8E.60802@rcbowen.com> Next week, Friday, September 5, 10 am Eastern US time, Lars Kellogg-Stedman will be presenting a Google Hangout on the subject of deploying with Heat. This will be streamed live on YouTube at https://plus.google.com/events/c9u4sjn7ksb8jrmma7vd25aok94 and if the time is not convenient, you will be able to watch it at that same URL after the fact. Come to the #rdo-hangout channel on Freenode IRC for questions and discussion during and after the event, or come to #rdo at any time for RDO-related discussion. --Rich -- Rich Bowen - rbowen at rcbowen.com - @rbowen http://apachecon.com/ - @apachecon From rbowen at redhat.com Tue Aug 26 16:52:40 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 26 Aug 2014 12:52:40 -0400 Subject: [Rdo-list] Meetups in the coming week Message-ID: <53FCBB58.5030909@redhat.com> The following are the meetups I'm aware of in the coming week where RDO enthusiasts will be gathering. 
If you know of others, please do add them to http://openstack.redhat.com/Events * DevOps At Red Hat, August 26, Ra'anana - http://www.meetup.com/Open-Source-Israel/events/195812602/ * OpenStack Networking (Neutron) - 2014 Update, August 28, Google Hangout - http://www.meetup.com/OpenStack-Online-Meetup/events/201860872/ * Introduction to RDO and packstack, August 28, New Zealand OpenStack User Group, Auckland - http://www.meetup.com/New-Zealand-OpenStack-User-Group/events/199656102/ * OpenStack Swift Hackathon, August 28, Valley Forge Tech - http://www.meetup.com/ValleyForgeTech/events/199592552/ * Openstack Amsterdam September Meetup & Openstack 101, September 3, Openstack & Ceph User Group, Amsterdam - http://www.meetup.com/Openstack-Amsterdam/events/202482492/ * Deploying things with Heat, September 5th, Google Hangout - https://plus.google.com/events/c9u4sjn7ksb8jrmma7vd25aok94 If you attend any of these meetups, please take pictures, and send me some. If you blog about the events, please send me that, too. Thanks! --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rbowen at redhat.com Tue Aug 26 20:29:02 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 26 Aug 2014 16:29:02 -0400 Subject: [Rdo-list] Fwd: [openstack-community] Official Paris Summit Schedule is Live In-Reply-To: References: Message-ID: <53FCEE0E.8010808@redhat.com> FYI, the OpenStack Summit Schedule is now available! -------- Original Message -------- Subject: [openstack-community] Official Paris Summit Schedule is Live Date: Tue, 26 Aug 2014 22:25:14 +0200 From: Shari Mahrdt To: community at lists.openstack.org , marketing at lists.openstack.org *The official OpenStack Summit Schedule is available here.* We received an incredible 1,100+ submissions for the Paris Summit, and had to make some tough decisions for the schedule. 
The final sessions were chosen last week and everyone who submitted a proposal was notified on Friday - August 22, 2014. All accepted and alternate speakers received free codes to register to the Summit. Email notifications were sent from events at openstack.org or speakermanager at fntech.com. Please let us know if there is anyone who has submitted a session but hasn't received a notification from one of these emails. There is also the opportunity to present a Tech Talk in the #vbrownbag room. The TechTalks offer a forum for community members to give ten minute presentations. They have a small in-person audience and will be video recorded and published to YouTube. To participate, just fill out the submission form here. Please remember that the *last day to purchase Summit passes at the Early Bird rate is this Thursday - August 27, 2014.* We look forward to seeing you all in Paris! Cheers, Shari Shari Mahrdt OpenStack Marketing shari at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Community mailing list Community at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/community From jonas.hagberg at scilifelab.se Wed Aug 27 08:11:56 2014 From: jonas.hagberg at scilifelab.se (Jonas Hagberg) Date: Wed, 27 Aug 2014 10:11:56 +0200 Subject: [Rdo-list] Foreman quickstack Neutron with VLAN Message-ID: Hi, Is there any guide to configure a neutron network node to support OVS and VLAN with two physical interfaces? I tried changing some parameters in foreman. I would also like to use the Mellanox plugin and SR-IOV in ethernet mode. https://wiki.openstack.org/wiki/Mellanox-Neutron-Icehouse-Redhat#Network_Node But I do not get things running. 
in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini I now have [OVS] vxlan_udp_port=4789 network_vlan_ranges=physnet1:1000:2999 tenant_network_type=vlan enable_tunneling=False integration_bridge=br-int bridge_mappings=physnet1:br-eth0,public:br-ex Eth0 is my internal Mellanox interface. I have run the script ./bridge-create.sh br-eth0 eth0. My ifconfig looks like this: ifconfig br-eth0 Link encap:Ethernet HWaddr 24:BE:05:9A:2B:71 inet addr:10.10.10.101 Bcast:10.255.255.255 Mask:255.0.0.0 inet6 addr: fe80::7431:b4ff:fe5f:4dbd/64 Scope:Link UP BROADCAST RUNNING MTU:1500 Metric:1 RX packets:3 errors:0 dropped:0 overruns:0 frame:0 TX packets:17 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:180 (180.0 b) TX bytes:930 (930.0 b) br-eth0,public Link encap:Ethernet HWaddr E6:72:FA:A1:EB:45 inet6 addr: fe80::e08d:1eff:feb4:f07d/64 Scope:Link UP BROADCAST RUNNING MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:468 (468.0 b) br-int Link encap:Ethernet HWaddr 32:16:54:BA:73:4C inet6 addr: fe80::1c1b:21ff:fe0e:48f9/64 Scope:Link UP BROADCAST RUNNING MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:468 (468.0 b) eth0 Link encap:Ethernet HWaddr 24:BE:05:9A:2B:71 inet6 addr: fe80::26be:5ff:fe9a:2b71/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1 RX packets:1731 errors:0 dropped:0 overruns:0 frame:0 TX packets:12 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:110784 (108.1 KiB) TX bytes:926 (926.0 b) eth2 Link encap:Ethernet HWaddr 9C:B6:54:08:94:FC inet addr:172.25.8.101 Bcast:172.25.11.255 Mask:255.255.252.0 inet6 addr: fe80::9eb6:54ff:fe08:94fc/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:53594 errors:0 dropped:0 
overruns:0 frame:0 TX packets:21334 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:52384874 (49.9 MiB) TX bytes:8096475 (7.7 MiB) Memory:f7d00000-f7e00000 eth3 Link encap:Ethernet HWaddr 9C:B6:54:08:94:FD UP BROADCAST MULTICAST MTU:9000 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) Memory:f7b00000-f7c00000 int-br-eth0 Link encap:Ethernet HWaddr B2:F3:16:36:98:6A inet6 addr: fe80::b0f3:16ff:fe36:986a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:6 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:468 (468.0 b) TX bytes:468 (468.0 b) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) phy-br-eth0 Link encap:Ethernet HWaddr A6:85:EA:FF:E1:86 inet6 addr: fe80::a485:eaff:feff:e186/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:6 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:468 (468.0 b) TX bytes:468 (468.0 b) So lots of strange bridges, but no br-ex. I created that by hand and can get the neutron-openvswitch-agent service to run. I am just starting to learn Neutron, so I may not fully understand what I am doing. Some help and kind guidance would be wonderful. 
cheers -- Jonas Hagberg BILS - Bioinformatics Infrastructure for Life Sciences - http://bils.se e-mail: jonas.hagberg at bils.se, jonas.hagberg at scilifelab.se phone: +46-(0)70 6683869 address: SciLifeLab, Box 1031, 171 21 Solna, Sweden -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail-lists at karan.org Wed Aug 27 10:40:46 2014 From: mail-lists at karan.org (Karanbir Singh) Date: Wed, 27 Aug 2014 11:40:46 +0100 Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image Message-ID: <53FDB5AE.1050907@karan.org> hi I've just pushed a GenericCloud image, that will become the gold standard to build all variants and environment-specific images from. Requesting people to help test this image : http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 ( ~ 922 MB) or http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz ( 261 MB) Sha256's; 3c049c21c19fb194cefdddbac2e4eb6a82664c043c7f2c7261bbeb32ec64023f CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 4a16ca316d075b30e8fdc36946ebfd76c44b6882747a6e0c0e2a47a8885323b1 CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz please note: these images contain unsigned content ( cloud-init and cloud-utils-* ), and are therefore unsuitable for use beyond validation on your environment. 
regards, -- Karanbir Singh +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh GnuPG Key : http://www.karan.org/publickey.asc From bderzhavets at hotmail.com Wed Aug 27 12:14:08 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 27 Aug 2014 08:14:08 -0400 Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image In-Reply-To: <53FDB5AE.1050907@karan.org> References: <53FDB5AE.1050907@karan.org> Message-ID: Instance deployed with floating IP 192.168.1.180 and is up and running
[boris at icehouse1 Downloads]$ ping 192.168.1.180
PING 192.168.1.180 (192.168.1.180) 56(84) bytes of data.
64 bytes from 192.168.1.180: icmp_seq=1 ttl=63 time=1.17 ms
64 bytes from 192.168.1.180: icmp_seq=2 ttl=63 time=0.247 ms
64 bytes from 192.168.1.180: icmp_seq=3 ttl=63 time=0.280 ms
64 bytes from 192.168.1.180: icmp_seq=4 ttl=63 time=0.250 ms
^C
[boris at icehouse1 Downloads]$ ssh -i oskey25.pem cloud-user at 192.168.1.180
The authenticity of host '192.168.1.180 (192.168.1.180)' can't be established.
ECDSA key fingerprint is 56:dc:7e:14:ee:8d:4c:bb:09:d1:da:7b:fd:a7:8b:60.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.180' (ECDSA) to the list of known hosts.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
The attempt to SSH with the ssh keypair used when launching fails. That seems to be the same issue I experienced trying to reproduce http://openstack.redhat.com/Creating_CentOS_and_Fedora_images_ready_for_Openstack Boris. > Date: Wed, 27 Aug 2014 11:40:46 +0100 > From: mail-lists at karan.org > To: Rdo-list at redhat.com > Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image > > hi > > I've just pushed a GenericCloud image, that will become the gold > standard to build all varients and environ specific images from. 
> Requesting people to help test this image : > > http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 > ( ~ 922 MB) > or > http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz > ( 261 MB) > > Sha256's; > > 3c049c21c19fb194cefdddbac2e4eb6a82664c043c7f2c7261bbeb32ec64023f > CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 > 4a16ca316d075b30e8fdc36946ebfd76c44b6882747a6e0c0e2a47a8885323b1 > CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz > > please note: these images contain unsigned content ( cloud-init and > cloud-utils-* ), and are therefore unsuiteable for use beyond validation > on your environment. > > regards, > > -- > Karanbir Singh > +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh > GnuPG Key : http://www.karan.org/publickey.asc > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail-lists at karan.org Wed Aug 27 12:38:46 2014 From: mail-lists at karan.org (Karanbir Singh) Date: Wed, 27 Aug 2014 13:38:46 +0100 Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image In-Reply-To: References: <53FDB5AE.1050907@karan.org> Message-ID: <53FDD156.8080409@karan.org> On 08/27/2014 01:14 PM, Boris Derzhavets wrote: > [boris at icehouse1 Downloads]$ ssh -i oskey25.pem cloud-user at 192.168.1.180 > > The authenticity of host '192.168.1.180 (192.168.1.180)' can't be > established. > ECDSA key fingerprint is 56:dc:7e:14:ee:8d:4c:bb:09:d1:da:7b:fd:a7:8b:60. > Are you sure you want to continue connecting (yes/no)? yes > Warning: Permanently added '192.168.1.180' (ECDSA) to the list of known > hosts. > Permission denied (publickey,gssapi-keyex,gssapi-with-mic). > > Attempt to SSH with ssh-keypair been used when launching fails. 
> That seems to be the same issue I experienced trying to reproduce > http://openstack.redhat.com/Creating_CentOS_and_Fedora_images_ready_for_Openstack > what happens when you try ssh -l centos -i oskey25.pem 192.168.1.180 the issue then is - why are you expecting for the login to be cloud-user and not 'centos'. - KB -- Karanbir Singh +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh GnuPG Key : http://www.karan.org/publickey.asc From kfiresmith at gmail.com Wed Aug 27 12:51:44 2014 From: kfiresmith at gmail.com (Kodiak Firesmith) Date: Wed, 27 Aug 2014 08:51:44 -0400 Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image In-Reply-To: <53FDD156.8080409@karan.org> References: <53FDB5AE.1050907@karan.org> <53FDD156.8080409@karan.org> Message-ID: Coming from the Redhat world I also assumed it would be 'cloud-user'... On Wed, Aug 27, 2014 at 8:38 AM, Karanbir Singh wrote: > On 08/27/2014 01:14 PM, Boris Derzhavets wrote: > >> [boris at icehouse1 Downloads]$ ssh -i oskey25.pem cloud-user at 192.168.1.180 >> >> The authenticity of host '192.168.1.180 (192.168.1.180)' can't be >> established. >> ECDSA key fingerprint is 56:dc:7e:14:ee:8d:4c:bb:09:d1:da:7b:fd:a7:8b:60. >> Are you sure you want to continue connecting (yes/no)? yes >> Warning: Permanently added '192.168.1.180' (ECDSA) to the list of known >> hosts. >> Permission denied (publickey,gssapi-keyex,gssapi-with-mic). >> >> Attempt to SSH with ssh-keypair been used when launching fails. >> That seems to be the same issue I experienced trying to reproduce >> http://openstack.redhat.com/Creating_CentOS_and_Fedora_images_ready_for_Openstack >> > > what happens when you try ssh -l centos -i oskey25.pem 192.168.1.180 > > the issue then is - why are you expecting for the login to be cloud-user > and not 'centos'. 
> > - KB
> >
> --
> Karanbir Singh
> +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
> GnuPG Key : http://www.karan.org/publickey.asc
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list

From jose.castro.leon at cern.ch Wed Aug 27 12:57:34 2014
From: jose.castro.leon at cern.ch (Jose Castro Leon)
Date: Wed, 27 Aug 2014 12:57:34 +0000
Subject: [Rdo-list] Openstack Horizon el6 status
Message-ID: <248A2D277CB6E34992A0902A839C1E5D0101948519@CERNXCHG42.cern.ch>

Hi,

We are following the horizon releases from the RDO repository and from the
github repository as well. We have just realised that the package for
icehouse-2 has not been released, and that the branch that was used to
track the redhat patches for el6 has been removed as well.

Could you please tell me the timeline for this package? Is there any other
repository with the el6 patches?

Kind regards,
Jose Castro Leon
CERN IT-OIS
tel: +41.22.76.74272 mob: +41.76.48.79222 fax: +41.22.76.67955
Office: 31-R-021 CH-1211 Geneve 23
email: jose.castro.leon at cern.ch

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mail-lists at karan.org Wed Aug 27 13:08:47 2014
From: mail-lists at karan.org (Karanbir Singh)
Date: Wed, 27 Aug 2014 14:08:47 +0100
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image
In-Reply-To:
References: <53FDB5AE.1050907@karan.org> <53FDD156.8080409@karan.org>
Message-ID: <53FDD85F.7010201@karan.org>

On 08/27/2014 01:51 PM, Kodiak Firesmith wrote:
> Coming from the Redhat world I also assumed it would be 'cloud-user'...
>

we had a chat about this on the centos-devel list a while back, and then
we did some face2face questions at various events; the idea of going
with 'centos' seemed the most popular. Also, aren't the fedora images all
set up to default the user to 'fedora'?

the next thing then would be how to best communicate this...
the AMI's, Cloud setups in hpcloud and brightbox are all defaulting to
'centos' as well going forward.. places like google-compute don't have a
default login; it's whatever the user sets up when they start the
instance up.

- KB

--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc

From kfiresmith at gmail.com Wed Aug 27 13:13:36 2014
From: kfiresmith at gmail.com (Kodiak Firesmith)
Date: Wed, 27 Aug 2014 09:13:36 -0400
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image
In-Reply-To: <53FDD85F.7010201@karan.org>
References: <53FDB5AE.1050907@karan.org> <53FDD156.8080409@karan.org> <53FDD85F.7010201@karan.org>
Message-ID:

So long as there's an info page on the website (maybe a readme.txt in the
download dirs), and maybe it could be mentioned in the MOTD (like Cirros),
that'd probably cover all bases. But that's just like, this newb's opinion
on it after searching around for login names on a couple cloud images
recently...

- Kodiak

On Wed, Aug 27, 2014 at 9:08 AM, Karanbir Singh wrote:
> On 08/27/2014 01:51 PM, Kodiak Firesmith wrote:
>> Coming from the Redhat world I also assumed it would be 'cloud-user'...
>>
>
> we had a chat about this on the centos-devel list a while back, and then
> we did some face2face questions at various events; the idea of going
> with 'centos' seemed the most popular. Also, aren't the fedora images all
> set up to default the user to 'fedora'?
>
> the next thing then would be how to best communicate this... the AMI's,
> Cloud setups in hpcloud and brightbox are all defaulting to 'centos' as
> well going forward.. places like google-compute don't have a default
> login; it's whatever the user sets up when they start the instance up.
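Until the default login is documented, one way to settle the question for any unfamiliar image is simply to probe the usual candidates; a sketch, using the IP and key file from Boris's report above (BatchMode stops ssh from falling back to password prompts):

```shell
# Try each conventional cloud-image login until one accepts the keypair.
for user in centos cloud-user fedora ec2-user root; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 -o StrictHostKeyChecking=no \
         -i oskey25.pem "${user}@192.168.1.180" true 2>/dev/null; then
    echo "default login is: ${user}"
    break
  fi
done
```

Each failed name costs one rejected key exchange, so the loop finishes in a few seconds either way.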
> > - KB
> >
> > --
> > Karanbir Singh
> > +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
> > GnuPG Key : http://www.karan.org/publickey.asc
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list

From bderzhavets at hotmail.com Wed Aug 27 14:04:38 2014
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Wed, 27 Aug 2014 10:04:38 -0400
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image (GRE Issue)
In-Reply-To: <53FDD156.8080409@karan.org>
References: <53FDB5AE.1050907@karan.org> <53FDD156.8080409@karan.org>
Message-ID:

My system is a Two Node Cluster (Neutron ML2&OVS&GRE) configured as follows:

[root at dfw02 neutron(keystone_admin)]$ cat dhcp_agent.ini | grep -v ^# | grep -v ^$
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = TRUE
external_network_bridge = br-ex
ovs_use_veth = True
use_namespaces = True
dnsmasq_config_file = /etc/neutron/dnsmasq.conf

[root at dfw02 neutron(keystone_admin)]$ cat dnsmasq.conf
log-facility = /var/log/neutron/dnsmasq.log
log-dhcp
dhcp-option=26,1454

This forces the MTU of any newly created VM (Ubuntu, F20) to 1454. The
deployed CentOS 7 VM had MTU 1500, which was verified by launching it
without an ssh keypair, with a post-creation script assigning a password
to user "centos". I believe the current image would have problems on GRE
systems.

Boris

> Date: Wed, 27 Aug 2014 13:38:46 +0100
> From: mail-lists at karan.org
> To: bderzhavets at hotmail.com; rdo-list at redhat.com
> Subject: Re: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image
>
> On 08/27/2014 01:14 PM, Boris Derzhavets wrote:
>
> > [boris at icehouse1 Downloads]$ ssh -i oskey25.pem cloud-user at 192.168.1.180
> >
> > The authenticity of host '192.168.1.180 (192.168.1.180)' can't be established.
> > ECDSA key fingerprint is 56:dc:7e:14:ee:8d:4c:bb:09:d1:da:7b:fd:a7:8b:60.
> > Are you sure you want to continue connecting (yes/no)? yes
> > Warning: Permanently added '192.168.1.180' (ECDSA) to the list of known hosts.
> > Permission denied (publickey,gssapi-keyex,gssapi-with-mic). > > > > Attempt to SSH with ssh-keypair been used when launching fails. > > That seems to be the same issue I experienced trying to reproduce > > http://openstack.redhat.com/Creating_CentOS_and_Fedora_images_ready_for_Openstack > > > > what happens when you try ssh -l centos -i oskey25.pem 192.168.1.180 > > the issue then is - why are you expecting for the login to be cloud-user > and not 'centos'. > > - KB > > -- > Karanbir Singh > +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh > GnuPG Key : http://www.karan.org/publickey.asc -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Wed Aug 27 15:17:08 2014 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 27 Aug 2014 11:17:08 -0400 Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image In-Reply-To: <53FDB5AE.1050907@karan.org> References: <53FDB5AE.1050907@karan.org> Message-ID: <53FDF674.6010108@redhat.com> On 08/27/2014 06:40 AM, Karanbir Singh wrote: > hi > > I've just pushed a GenericCloud image, that will become the gold > standard to build all varients and environ specific images from. > Requesting people to help test this image : > > http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 > ( ~ 922 MB) > or > http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz > ( 261 MB) > > Sha256's; > > 3c049c21c19fb194cefdddbac2e4eb6a82664c043c7f2c7261bbeb32ec64023f > CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 > 4a16ca316d075b30e8fdc36946ebfd76c44b6882747a6e0c0e2a47a8885323b1 > CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz > > please note: these images contain unsigned content ( cloud-init and > cloud-utils-* ), and are therefore unsuiteable for use beyond validation > on your environment. 
>

I've added these to
http://openstack.redhat.com/Image_resources#Downloading_Pre-Built_Images_for_OpenStack
... please feel free to add a disclaimer there if you think it's
appropriate/necessary.

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://openstack.redhat.com/

From mail-lists at karan.org Wed Aug 27 15:34:16 2014
From: mail-lists at karan.org (Karanbir Singh)
Date: Wed, 27 Aug 2014 16:34:16 +0100
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image
In-Reply-To: <53FDF674.6010108@redhat.com>
References: <53FDB5AE.1050907@karan.org> <53FDF674.6010108@redhat.com>
Message-ID: <53FDFA78.2040201@karan.org>

On 08/27/2014 04:17 PM, Rich Bowen wrote:
>
> I've added these to
> http://openstack.redhat.com/Image_resources#Downloading_Pre-Built_Images_for_OpenStack
> ... please feel free to add a disclaimer there if you think it's
> appropriate/necessary.

Thanks,

we should have updated images soon with all the outstanding issues
resolved ( or documented )

I'm unsure how to address the issue that Boris brought up though - since
I don't have a suitable install to test in. I might need to set up some
intermediary images.

--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc

From rbowen at redhat.com Wed Aug 27 15:37:59 2014
From: rbowen at redhat.com (Rich Bowen)
Date: Wed, 27 Aug 2014 11:37:59 -0400
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image
In-Reply-To: <53FDFA78.2040201@karan.org>
References: <53FDB5AE.1050907@karan.org> <53FDF674.6010108@redhat.com> <53FDFA78.2040201@karan.org>
Message-ID: <53FDFB57.2030005@redhat.com>

On 08/27/2014 11:34 AM, Karanbir Singh wrote:
> On 08/27/2014 04:17 PM, Rich Bowen wrote:
>> >
>> > I've added these to
>> > http://openstack.redhat.com/Image_resources#Downloading_Pre-Built_Images_for_OpenStack
>> > ... please feel free to add a disclaimer there if you think it's
>> > appropriate/necessary.
> Thanks, > > we should have updated images soon with all the outstanding issues > resolved ( or documented ) Once you're producing images on a regular basis, it would be nice to have a "-latest" symlink so that we don't need to update the images page every week. --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From mail-lists at karan.org Wed Aug 27 16:18:08 2014 From: mail-lists at karan.org (Karanbir Singh) Date: Wed, 27 Aug 2014 17:18:08 +0100 Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image In-Reply-To: <53FDFB57.2030005@redhat.com> References: <53FDB5AE.1050907@karan.org> <53FDF674.6010108@redhat.com> <53FDFA78.2040201@karan.org> <53FDFB57.2030005@redhat.com> Message-ID: <53FE04C0.1000909@karan.org> On 08/27/2014 04:37 PM, Rich Bowen wrote: > > On 08/27/2014 11:34 AM, Karanbir Singh wrote: >> On 08/27/2014 04:17 PM, Rich Bowen wrote: >>> > >>> >I've added these to >>> >http://openstack.redhat.com/Image_resources#Downloading_Pre-Built_Images_for_OpenStack >>> >>> >... please feel free to add a disclaimer there if you think it's >>> >appropriate/necessary. >> Thanks, >> >> we should have updated images soon with all the outstanding issues >> resolved ( or documented ) > > Once you're producing images on a regular basis, it would be nice to > have a "-latest" symlink so that we don't need to update the images page > every week. right, that exists - the same name minus the datestamp is a symlink ( but dont use those yet! ). Is there value in having -latest in there ? I just truncated the date, so its always the same. If having -latest better communicates the state of the image, then we can add that in. 
-- Karanbir Singh +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh GnuPG Key : http://www.karan.org/publickey.asc From rbowen at redhat.com Wed Aug 27 16:48:42 2014 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 27 Aug 2014 12:48:42 -0400 Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image In-Reply-To: <53FE04C0.1000909@karan.org> References: <53FDB5AE.1050907@karan.org> <53FDF674.6010108@redhat.com> <53FDFA78.2040201@karan.org> <53FDFB57.2030005@redhat.com> <53FE04C0.1000909@karan.org> Message-ID: <53FE0BEA.2030800@redhat.com> On 08/27/2014 12:18 PM, Karanbir Singh wrote: >> >Once you're producing images on a regular basis, it would be nice to >> >have a "-latest" symlink so that we don't need to update the images page >> >every week. > right, that exists - the same name minus the datestamp is a symlink ( > but dont use those yet! ). > > Is there value in having -latest in there ? I just truncated the date, > so its always the same. If having -latest better communicates the state > of the image, then we can add that in. Nope, I don't really care what the file name is, as long as it doesn't change from week to week. :-) Thanks! 
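The layout being discussed, a dated file plus a stable name that never changes, is just a symlink refresh on the publishing side; a sketch, using the file names from this thread:

```shell
# Publish the dated build (stand-in for the real upload)...
touch CentOS-7-x86_64-GenericCloud-20140826_02.qcow2

# ...then repoint the stable, datestamp-free name at it.
ln -sfn CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 \
        CentOS-7-x86_64-GenericCloud.qcow2

# The stable name now resolves to the newest build.
readlink CentOS-7-x86_64-GenericCloud.qcow2
```

`ln -sfn` replaces an existing symlink in place, so a later build only needs the same two commands with a new datestamp and the images wiki page never has to change.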
--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://openstack.redhat.com/

From ak at cloudssky.com Wed Aug 27 17:21:48 2014
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Wed, 27 Aug 2014 19:21:48 +0200
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image
In-Reply-To: <53FE0BEA.2030800@redhat.com>
References: <53FDB5AE.1050907@karan.org> <53FDF674.6010108@redhat.com> <53FDFA78.2040201@karan.org> <53FDFB57.2030005@redhat.com> <53FE04C0.1000909@karan.org> <53FE0BEA.2030800@redhat.com>
Message-ID:

added the image to glance on havana with:

glance image-create --name "CentOS 7 Generic Cloud 20140826" --container-format bare --disk-format qcow2 --file CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 --is-public True

and fired an instance on horizon; the instance is up and I can access it
through the console, but can't ping it.

any ideas?

Thx!

On Wed, Aug 27, 2014 at 6:48 PM, Rich Bowen wrote:
>
> On 08/27/2014 12:18 PM, Karanbir Singh wrote:
>
>>> >Once you're producing images on a regular basis, it would be nice to
>>> >have a "-latest" symlink so that we don't need to update the images page
>>> >every week.
>>>
>> right, that exists - the same name minus the datestamp is a symlink (
>> but dont use those yet! ).
>>
>> Is there value in having -latest in there ? I just truncated the date,
>> so its always the same. If having -latest better communicates the state
>> of the image, then we can add that in.
>
> Nope, I don't really care what the file name is, as long as it doesn't
> change from week to week. :-)
>
> Thanks!
>
> --
> Rich Bowen - rbowen at redhat.com
> OpenStack Community Liaison
> http://openstack.redhat.com/
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ak at cloudssky.com Wed Aug 27 17:37:29 2014
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Wed, 27 Aug 2014 19:37:29 +0200
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image
In-Reply-To:
References: <53FDB5AE.1050907@karan.org> <53FDF674.6010108@redhat.com> <53FDFA78.2040201@karan.org> <53FDFB57.2030005@redhat.com> <53FE04C0.1000909@karan.org> <53FE0BEA.2030800@redhat.com>
Message-ID:

some more info, nova list gives me:

+--------------------------------------+---------+--------+------------+-------------+------------------+
| ID                                   | Name    | Status | Task State | Power State | Networks         |
+--------------------------------------+---------+--------+------------+-------------+------------------+
| 948961f3-ea67-4645-a502-5b395692dcb7 | CentOS7 | ACTIVE | -          | Running     | csgnet=10.0.0.19 |
...

I'm using RDO Havana with VLAN and it works like a charm for CentOS 6.5,
Atomic, Ubuntu and CoreOS.

Question: how can I dig deeper to find what's not working? Was the image
tested on Icehouse?

Thx!

On Wed, Aug 27, 2014 at 7:21 PM, Arash Kaffamanesh wrote:
> added the image to glance on havana with:
>
> glance image-create --name "CentOS 7 Generic Cloud 20140826"
> --container-format bare --disk-format qcow2 --file
> CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 --is-public True
>
> and fired an instance on horizon; the instance is up and I can access it
> through the console, but can't ping it.
>
> any ideas?
>
> Thx!
>
> On Wed, Aug 27, 2014 at 6:48 PM, Rich Bowen wrote:
>>
>> On 08/27/2014 12:18 PM, Karanbir Singh wrote:
>>
>>>> >Once you're producing images on a regular basis, it would be nice to
>>>> >have a "-latest" symlink so that we don't need to update the images
>>>> page
>>>> >every week.
>>>>
>>> right, that exists - the same name minus the datestamp is a symlink (
>>> but dont use those yet! ).
>>>
>>> Is there value in having -latest in there ?
I just truncated the date, >>> so its always the same. If having -latest better communicates the state >>> of the image, then we can add that in. >>> >> >> Nope, I don't really care what the file name is, as long as it doesn't >> change from week to week. :-) >> >> Thanks! >> >> >> >> -- >> Rich Bowen - rbowen at redhat.com >> OpenStack Community Liaison >> http://openstack.redhat.com/ >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kfiresmith at gmail.com Wed Aug 27 17:48:55 2014 From: kfiresmith at gmail.com (Kodiak Firesmith) Date: Wed, 27 Aug 2014 13:48:55 -0400 Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image In-Reply-To: References: <53FDB5AE.1050907@karan.org> <53FDF674.6010108@redhat.com> <53FDFA78.2040201@karan.org> <53FDFB57.2030005@redhat.com> <53FE04C0.1000909@karan.org> <53FE0BEA.2030800@redhat.com> Message-ID: You might see cloud-init doing it's network thing (or failing to) and/or dhcp failing (or not failing) via the console logs. You might also take a peek at the dnsmasq logs on the neutron node. On Wed, Aug 27, 2014 at 1:37 PM, Arash Kaffamanesh wrote: > some more info, nova list gives me: > > +--------------------------------------+-------------+--------+------------+-------------+-----------------------------------+ > | ID | Name | Status | Task State | > Power State | Networks | > +--------------------------------------+-------------+--------+------------+-------------+-----------------------------------+ > | 948961f3-ea67-4645-a502-5b395692dcb7 | CentOS7 | ACTIVE | - | > Running | csgnet=10.0.0.19 | > ... > > I'm using RDO Havana with VLAN and it works like a charm for CentOS 6.5, > Atomic, Ubuntu and CoreOS. > > Question: how can I dig deeper to find what's not working? Was the image > tested on Icehouse? > > Thx! 
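Kodiak's two suggestions above translate into a couple of commands; a sketch, taking the instance name and fixed IP from Arash's `nova list` output, and using the dnsmasq log path that Boris's dnsmasq.conf sets (other installs may log elsewhere):

```shell
# On the controller: did the guest hit DHCP or cloud-init trouble at boot?
nova console-log CentOS7 | grep -iE 'dhcp|cloud-init|eth0'

# On the neutron node: did dnsmasq ever answer this instance's requests?
grep '10.0.0.19' /var/log/neutron/dnsmasq.log
```

If the console log shows DHCP timing out and dnsmasq never logged the instance's address, the problem is in the network plumbing between the compute and network nodes rather than in the image.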
> > > On Wed, Aug 27, 2014 at 7:21 PM, Arash Kaffamanesh wrote: >> >> added the image to glance on havana with: >> >> glance image-create --name "CentOS 7 Generic Cloud 20140826" >> --container-format bare --disk-format qcow2 --file >> CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 --is-public True >> >> and fired an instance on horion, the instance is up, can access it through >> the console, but can't ping it. >> >> any ideas? >> >> Thx! >> >> >> On Wed, Aug 27, 2014 at 6:48 PM, Rich Bowen wrote: >>> >>> >>> On 08/27/2014 12:18 PM, Karanbir Singh wrote: >>>>> >>>>> >Once you're producing images on a regular basis, it would be nice to >>>>> >have a "-latest" symlink so that we don't need to update the images >>>>> > page >>>>> >every week. >>>> >>>> right, that exists - the same name minus the datestamp is a symlink ( >>>> but dont use those yet! ). >>>> >>>> Is there value in having -latest in there ? I just truncated the date, >>>> so its always the same. If having -latest better communicates the state >>>> of the image, then we can add that in. >>> >>> >>> Nope, I don't really care what the file name is, as long as it doesn't >>> change from week to week. :-) >>> >>> Thanks! 
>>> >>> >>> >>> -- >>> Rich Bowen - rbowen at redhat.com >>> OpenStack Community Liaison >>> http://openstack.redhat.com/ >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >> >> > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > From Tim.Bell at cern.ch Wed Aug 27 17:54:31 2014 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 27 Aug 2014 17:54:31 +0000 Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image In-Reply-To: References: <53FDB5AE.1050907@karan.org> <53FDF674.6010108@redhat.com> <53FDFA78.2040201@karan.org> <53FDFB57.2030005@redhat.com> <53FE04C0.1000909@karan.org> <53FE0BEA.2030800@redhat.com> Message-ID: Image works well on CERN OpenStack (KVM), cloud-init set the hostname and did the disk extension. yum fastest mirror found the CERN repo. Tim On 27 Aug 2014, at 19:21, Arash Kaffamanesh > wrote: added the image to glance on havana with: glance image-create --name "CentOS 7 Generic Cloud 20140826" --container-format bare --disk-format qcow2 --file CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 --is-public True and fired an instance on horion, the instance is up, can access it through the console, but can't ping it. any ideas? Thx! On Wed, Aug 27, 2014 at 6:48 PM, Rich Bowen > wrote: On 08/27/2014 12:18 PM, Karanbir Singh wrote: >Once you're producing images on a regular basis, it would be nice to >have a "-latest" symlink so that we don't need to update the images page >every week. right, that exists - the same name minus the datestamp is a symlink ( but dont use those yet! ). Is there value in having -latest in there ? I just truncated the date, so its always the same. If having -latest better communicates the state of the image, then we can add that in. 
Nope, I don't really care what the file name is, as long as it doesn't
change from week to week. :-)

Thanks!

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://openstack.redhat.com/

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ak at cloudssky.com Wed Aug 27 17:57:55 2014
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Wed, 27 Aug 2014 19:57:55 +0200
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image
In-Reply-To:
References: <53FDB5AE.1050907@karan.org> <53FDF674.6010108@redhat.com> <53FDFA78.2040201@karan.org> <53FDFB57.2030005@redhat.com> <53FE04C0.1000909@karan.org> <53FE0BEA.2030800@redhat.com>
Message-ID:

in the console log I can see:

localhost login: cloud-init[802]: 2014-08-27 17:22:32,811 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [5/120s]: request error [[Errno 101] Network is unreachable]
cloud-init[802]: 2014-08-27 17:22:34,815 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [7/120s]: request error [[Errno 101] Network is unreachable]

Mit freundlichen Grüßen,
Arash Kaffamanesh

Meetup: OpenStack X Meetup
____________________________
Arash Kaffamanesh
Clouds Sky GmbH
Home: Im Mediapark 4C
Startplatz: Im Mediapark 5
50760 Köln
T.: +49 221 379 90 680
M.: +49 177 880 77 34
www.cloudssky.com
____________________________

On Wed, Aug 27, 2014 at 7:54 PM, Tim Bell wrote:
>
> Image works well on CERN OpenStack (KVM), cloud-init set the hostname and
> did the disk extension. yum fastest mirror found the CERN repo.
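"Network is unreachable" in that log means the guest had no route at all when cloud-init ran, i.e. DHCP never completed; the metadata service is the symptom, not the cause. A minimal check from the VM console, assuming eth0 is the instance NIC:

```shell
# No address or no default route here points at DHCP/neutron, not metadata.
ip addr show eth0
ip route

# Only once a default route exists is the metadata service worth testing.
curl -sf http://169.254.169.254/latest/meta-data/instance-id; echo
```

If `ip addr` shows only a link-local address, the fix lives on the neutron side, not in the image.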
> > Tim > > On 27 Aug 2014, at 19:21, Arash Kaffamanesh wrote: > > added the image to glance on havana with: > > glance image-create --name "CentOS 7 Generic Cloud 20140826" > --container-format bare --disk-format qcow2 --file > CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 --is-public True > > and fired an instance on horion, the instance is up, can access it > through the console, but can't ping it. > > any ideas? > > Thx! > > > On Wed, Aug 27, 2014 at 6:48 PM, Rich Bowen wrote: > >> >> On 08/27/2014 12:18 PM, Karanbir Singh wrote: >> >>> >Once you're producing images on a regular basis, it would be nice to >>>> >have a "-latest" symlink so that we don't need to update the images >>>> page >>>> >every week. >>>> >>> right, that exists - the same name minus the datestamp is a symlink ( >>> but dont use those yet! ). >>> >>> Is there value in having -latest in there ? I just truncated the date, >>> so its always the same. If having -latest better communicates the state >>> of the image, then we can add that in. >>> >> >> Nope, I don't really care what the file name is, as long as it doesn't >> change from week to week. :-) >> >> Thanks! >> >> >> >> -- >> Rich Bowen - rbowen at redhat.com >> OpenStack Community Liaison >> http://openstack.redhat.com/ >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kfiresmith at gmail.com Wed Aug 27 18:02:32 2014 From: kfiresmith at gmail.com (Kodiak Firesmith) Date: Wed, 27 Aug 2014 14:02:32 -0400 Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image In-Reply-To: References: <53FDB5AE.1050907@karan.org> <53FDF674.6010108@redhat.com> <53FDFA78.2040201@karan.org> <53FDFB57.2030005@redhat.com> <53FE04C0.1000909@karan.org> <53FE0BEA.2030800@redhat.com> Message-ID: Something earlier in the instance spawn is going awry. I'd look earlier on the console for dhcp issues, and/or check the dnsmasq logs on the Neutron host. I'd spin up another iteration of this instance to rule out a MAC specific issue (which admittedly is unlikely). - Kodiak On Wed, Aug 27, 2014 at 1:57 PM, Arash Kaffamanesh wrote: > in the console log I can see: > > localhost login: cloud-init[802]: 2014-08-27 17:22:32,811 - > url_helper.py[WARNING]: Calling > 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [5/120s]: > request error [[Errno 101] Network is unreachable] > cloud-init[802]: 2014-08-27 17:22:34,815 - url_helper.py[WARNING]: Calling > 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [7/120s]: > request error [[Errno 101] Network is unreachable] > > > Mit freundlichen Gr??en, > Arash Kaffamanesh > > Meetup: OpenStack X Meetup > > Like | Follow > > ____________________________ > > Arash Kaffamanesh > Clouds Sky GmbH > > Home: Im Mediapark 4C > Startplatz: Im Mediapark 5 > > 50760 K?ln > > T.: +49 221 379 90 680 > M.: +49 177 880 77 34 > www.cloudssky.com > ____________________________ > > > > On Wed, Aug 27, 2014 at 7:54 PM, Tim Bell wrote: >> >> >> Image works well on CERN OpenStack (KVM), cloud-init set the hostname and >> did the disk extension. yum fastest mirror found the CERN repo. 
>> >> Tim >> >> On 27 Aug 2014, at 19:21, Arash Kaffamanesh wrote: >> >> added the image to glance on havana with: >> >> glance image-create --name "CentOS 7 Generic Cloud 20140826" >> --container-format bare --disk-format qcow2 --file >> CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 --is-public True >> >> and fired an instance on horion, the instance is up, can access it through >> the console, but can't ping it. >> >> any ideas? >> >> Thx! >> >> >> On Wed, Aug 27, 2014 at 6:48 PM, Rich Bowen wrote: >>> >>> >>> On 08/27/2014 12:18 PM, Karanbir Singh wrote: >>>>> >>>>> >Once you're producing images on a regular basis, it would be nice to >>>>> >have a "-latest" symlink so that we don't need to update the images >>>>> > page >>>>> >every week. >>>> >>>> right, that exists - the same name minus the datestamp is a symlink ( >>>> but dont use those yet! ). >>>> >>>> Is there value in having -latest in there ? I just truncated the date, >>>> so its always the same. If having -latest better communicates the state >>>> of the image, then we can add that in. >>> >>> >>> Nope, I don't really care what the file name is, as long as it doesn't >>> change from week to week. :-) >>> >>> Thanks! 
>>> >>> >>> >>> -- >>> Rich Bowen - rbowen at redhat.com >>> OpenStack Community Liaison >>> http://openstack.redhat.com/ >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > From bderzhavets at hotmail.com Wed Aug 27 18:15:04 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 27 Aug 2014 14:15:04 -0400 Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image (metadata access) In-Reply-To: References: <53FDB5AE.1050907@karan.org> <53FDF674.6010108@redhat.com>,<53FDFA78.2040201@karan.org> <53FDFB57.2030005@redhat.com>,<53FE04C0.1000909@karan.org> <53FE0BEA.2030800@redhat.com>, , , Message-ID: Installed on several systems IceHouse ML2&OVS&GRE(VXLAN), no problems with metadataaccess:- [centos at centos07rsx ~]$ hostnamecentos07rsx.novalocal [centos at centos07rsx ~]$ uname -aLinux centos07rsx.novalocal 3.10.0-123.6.3.el7.x86_64 #1 SMP Wed Aug 6 21:12:36 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux [centos at centos07rsx ~]$ curl http://169.254.169.254/latest/meta-data/instance-idi-00000031 [centos at centos07rsx ~]$ curl http://169.254.169.254/latest/meta-data/local-ipv440.0.0.37 [centos at centos07rsx ~]$ ifconfigeth0: flags=4163 mtu 1500 inet 40.0.0.37 netmask 255.255.255.0 broadcast 40.0.0.255 inet6 fe80::f816:3eff:febc:bf28 prefixlen 64 scopeid 0x20 ether fa:16:3e:bc:bf:28 txqueuelen 1000 (Ethernet) RX packets 3734 bytes 4310517 (4.1 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2136 bytes 252156 (246.2 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73 mtu 65536 inet 
127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10 loop txqueuelen 0 (Local Loopback) RX packets 12 bytes 976 (976.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 12 bytes 976 (976.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 Boris. Date: Wed, 27 Aug 2014 19:57:55 +0200 From: ak at cloudssky.com To: rdo-list at redhat.com Subject: Re: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image in the console log I can see: localhost login: cloud-init[802]: 2014-08-27 17:22:32,811 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [5/120s]: request error [[Errno 101] Network is unreachable] cloud-init[802]: 2014-08-27 17:22:34,815 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [7/120s]: request error [[Errno 101] Network is unreachable] Mit freundlichen Gr??en, Arash Kaffamanesh Meetup: OpenStack X Meetup Like | Follow ____________________________ Arash Kaffamanesh Clouds Sky GmbH Home: Im Mediapark 4C Startplatz: Im Mediapark 5 50760 K?ln T.: +49 221 379 90 680 M.: +49 177 880 77 34 www.cloudssky.com ____________________________ On Wed, Aug 27, 2014 at 7:54 PM, Tim Bell wrote: Image works well on CERN OpenStack (KVM), cloud-init set the hostname and did the disk extension. yum fastest mirror found the CERN repo. Tim On 27 Aug 2014, at 19:21, Arash Kaffamanesh wrote: added the image to glance on havana with: glance image-create --name "CentOS 7 Generic Cloud 20140826" --container-format bare --disk-format qcow2 --file CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 --is-public True and fired an instance on horion, the instance is up, can access it through the console, but can't ping it. any ideas? Thx! 
On Wed, Aug 27, 2014 at 6:48 PM, Rich Bowen wrote: On 08/27/2014 12:18 PM, Karanbir Singh wrote: >Once you're producing images on a regular basis, it would be nice to >have a "-latest" symlink so that we don't need to update the images page >every week. right, that exists - the same name minus the datestamp is a symlink ( but dont use those yet! ). Is there value in having -latest in there ? I just truncated the date, so its always the same. If having -latest better communicates the state of the image, then we can add that in. Nope, I don't really care what the file name is, as long as it doesn't change from week to week. :-) Thanks! -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Wed Aug 27 18:16:48 2014 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 27 Aug 2014 14:16:48 -0400 Subject: [Rdo-list] M3 test day Message-ID: <53FE2090.2010101@redhat.com> We've been asked by the Fedora Cloud folks if they can help us do a test day for the M3 packages. M3 release date is September 4 - https://wiki.openstack.org/wiki/Juno_Release_Schedule - so could we tentatively look at the week of the 22nd to do a test day? 
--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://openstack.redhat.com/

From bderzhavets at hotmail.com Wed Aug 27 18:34:35 2014
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Wed, 27 Aug 2014 14:34:35 -0400
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image (Access via floating IPs)
In-Reply-To: References: <53FDB5AE.1050907@karan.org> <53FDF674.6010108@redhat.com>,<53FDFA78.2040201@karan.org> <53FDFB57.2030005@redhat.com>,<53FE04C0.1000909@karan.org> <53FE0BEA.2030800@redhat.com>,
Message-ID:

On IceHouse ML2&OVS&VXLAN systems there are no problems with pinging floating IPs and SSH'ing into CentOS 7 VMs. On IceHouse ML2&OVS&GRE systems the CentOS 7 VM's MTU isn't set to 1454, so I activated the instance with no ssh keypair and a post-install script assigning a password to "centos". Then log in as "centos" on the console and run `ifconfig eth0 mtu 1454 up`; afterwards the usual SSH login to the floating IP works fine.

Boris.
Intel-based hardware: Q9550 CPUs, 8 GB RAM boxes (home lab).

Date: Wed, 27 Aug 2014 19:21:48 +0200
From: ak at cloudssky.com
To: rbowen at redhat.com
CC: rdo-list at redhat.com
Subject: Re: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image

added the image to glance on havana with:

glance image-create --name "CentOS 7 Generic Cloud 20140826" --container-format bare --disk-format qcow2 --file CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 --is-public True

and fired an instance on horizon; the instance is up and I can access it through the console, but can't ping it. Any ideas? Thx!

On Wed, Aug 27, 2014 at 6:48 PM, Rich Bowen wrote:

On 08/27/2014 12:18 PM, Karanbir Singh wrote:
> Once you're producing images on a regular basis, it would be nice to
> have a "-latest" symlink so that we don't need to update the images page
> every week.

right, that exists - the same name minus the datestamp is a symlink ( but don't use those yet! ). Is there value in having -latest in there ? I just truncated the date, so it's always the same.
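[A common persistent alternative to the per-guest `ifconfig` fix Boris describes is to have neutron's dnsmasq push the smaller MTU to instances via DHCP option 26 (interface MTU). A sketch, assuming the stock RDO file locations:

```ini
# /etc/neutron/dhcp_agent.ini - point the DHCP agent at an extra dnsmasq config file
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# /etc/neutron/dnsmasq-neutron.conf - push MTU 1454 to every DHCP client
dhcp-option-force=26,1454
```

After editing, restart the neutron DHCP agent so newly booted instances pick up the option.]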
If having -latest better communicates the state of the image, then we can add that in.

Nope, I don't really care what the file name is, as long as it doesn't change from week to week. :-)

Thanks!

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://openstack.redhat.com/

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rbowen at redhat.com Wed Aug 27 18:56:43 2014
From: rbowen at redhat.com (Rich Bowen)
Date: Wed, 27 Aug 2014 14:56:43 -0400
Subject: [Rdo-list] Fwd: [rhos-dev] FYI: Early bird pricing ends tomorrow for Paris Summit
In-Reply-To: <53FDDCEF.6040708@redhat.com>
References: <53FDDCEF.6040708@redhat.com>
Message-ID: <53FE29EB.3020501@redhat.com>

FYI - early bird pricing for the OpenStack Summit ends 5pm CDT *tomorrow*.

https://www.openstack.org/summit/openstack-paris-summit-2014/

-------- Original Message --------
Subject: [rhos-dev] FYI: Early bird pricing ends tomorrow for Paris Summit
Date: Wed, 27 Aug 2014 09:28:15 -0400
From: Dave Neary
To: rhos-pgm , rh-openstack-dev

Hi all,

For those unaware, pricing for tickets to the OpenStack Summit in Paris will increase after 5pm CDT tomorrow, August 28. If you plan to attend, are paying for a ticket, and have not bought it yet, you might want to get on it now, before the prices increase tomorrow evening.

https://www.openstack.org/summit/openstack-paris-summit-2014/

Cheers,
Dave.

--
Dave Neary - Community Action and Impact
Open Source and Standards, Red Hat - http://community.redhat.com
Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From whayutin at redhat.com Wed Aug 27 20:54:03 2014
From: whayutin at redhat.com (whayutin)
Date: Wed, 27 Aug 2014 16:54:03 -0400
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image
In-Reply-To: <53FDB5AE.1050907@karan.org>
References: <53FDB5AE.1050907@karan.org>
Message-ID: <1409172843.3051.21.camel@localhost.localdomain>

On Wed, 2014-08-27 at 11:40 +0100, Karanbir Singh wrote:
> hi
>
> I've just pushed a GenericCloud image, that will become the gold
> standard to build all variants and environment-specific images from.
> Requesting people to help test this image :
>
> http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2
> ( ~ 922 MB)
> or
> http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz
> ( 261 MB)
>
> Sha256's:
>
> 3c049c21c19fb194cefdddbac2e4eb6a82664c043c7f2c7261bbeb32ec64023f
> CentOS-7-x86_64-GenericCloud-20140826_02.qcow2
> 4a16ca316d075b30e8fdc36946ebfd76c44b6882747a6e0c0e2a47a8885323b1
> CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz
>
> please note: these images contain unsigned content ( cloud-init and
> cloud-utils-* ), and are therefore unsuitable for use beyond validation
> on your environment.
>
> regards,
>

Hey Karanbir,
Most everything seems to be working with the image; thank you for posting it!! Hopefully the CentOS release string can be fixed - I believe it should return 7.0, e.g.

[centos at packstack ~]$ cat /etc/redhat-release
CentOS Linux release 7.0.1406 (Core)

The rdo public ci has successfully completed a packstack allinone run on CentOS-7.0 [1] w/ a few workarounds. I'll start to set up a foreman-based install soon.

Thanks!!
[1] https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-icehouse-production-centos-70-aio-packstack-neutron-gre-rabbitmq/20/console
https://prod-rdojenkins.rhcloud.com/

Workarounds used:
http://openstack.redhat.com/Workarounds
https://github.com/redhat-openstack/khaleesi/blob/master/workarounds/workarounds-pre-run-packstack.yml

From kchamart at redhat.com Thu Aug 28 03:27:47 2014
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Thu, 28 Aug 2014 08:57:47 +0530
Subject: [Rdo-list] M3 test day
In-Reply-To: <53FE2090.2010101@redhat.com>
References: <53FE2090.2010101@redhat.com>
Message-ID: <20140828032747.GB7583@tesla.redhat.com>

On Wed, Aug 27, 2014 at 02:16:48PM -0400, Rich Bowen wrote:
> We've been asked by the Fedora Cloud folks if they can help us do a test day
> for the M3 packages. M3 release date is September 4 -
> https://wiki.openstack.org/wiki/Juno_Release_Schedule - so could we
> tentatively look at the week of the 22nd to do a test day?

Sounds good to me. Just that we'd need to ensure packagers are aware of this date too.

A side note: As I test all my stuff on Fedora, I use this trivial script[1] to check the latest versions of OpenStack components.
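[A minimal sketch of the kind of loop such a script performs (an illustration, not the linked script itself; it assumes the `koji` CLI is installed and the Fedora source-package names listed below):

```shell
latest_builds() {
    # Print the latest build of each core OpenStack package in a Koji tag,
    # e.g. `latest_builds rawhide`. Requires the koji CLI to be installed.
    local tag=${1:-rawhide} pkg
    for pkg in openstack-nova openstack-glance openstack-cinder \
               openstack-neutron openstack-keystone openstack-heat \
               python-django-horizon openstack-swift; do
        koji latest-build "$tag" "$pkg"
    done
}
```

Note that horizon has to be queried as its source-package name, python-django-horizon, not openstack-horizon.]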
As it stands, you can see the Juno-milestone-2 packages that are in Fedora Rawhide (or what will become F22):

$ ./query-koji-for-packages.bash rawhide
Build
----------------------------------------
openstack-nova-2014.2-0.1.b2.fc22
Build
----------------------------------------
openstack-glance-2014.2-0.1.b2.fc22
Build
----------------------------------------
openstack-cinder-2014.2-0.1.b2.fc22
Build
----------------------------------------
openstack-neutron-2014.2-0.2.b2.fc22
Build
----------------------------------------
openstack-keystone-2014.2-0.2.b2.fc22
Build
----------------------------------------
openstack-heat-2014.2-0.4.b2.fc22
Build
----------------------------------------
Build
----------------------------------------
openstack-swift-2.0.0-1.fc22
Build
----------------------------------------

[1] https://kashyapc.fedorapeople.org/query-koji-for-packages.bash

--
/kashyap

From mail-lists at karan.org Thu Aug 28 08:34:13 2014
From: mail-lists at karan.org (Karanbir Singh)
Date: Thu, 28 Aug 2014 09:34:13 +0100
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image
In-Reply-To: <1409172843.3051.21.camel@localhost.localdomain>
References: <53FDB5AE.1050907@karan.org> <1409172843.3051.21.camel@localhost.localdomain>
Message-ID: <53FEE985.6080008@karan.org>

On 08/27/2014 09:54 PM, whayutin wrote:
> On Wed, 2014-08-27 at 11:40 +0100, Karanbir Singh wrote:
>> hi
>>
>> I've just pushed a GenericCloud image, that will become the gold
>> standard to build all variants and environment-specific images from.
>> Requesting people to help test this image :
>>
>> http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2
>> ( ~ 922 MB)
>> or
>> http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz
>> ( 261 MB)
>>
>> Sha256's:
>>
>> 3c049c21c19fb194cefdddbac2e4eb6a82664c043c7f2c7261bbeb32ec64023f
>> CentOS-7-x86_64-GenericCloud-20140826_02.qcow2
>> 4a16ca316d075b30e8fdc36946ebfd76c44b6882747a6e0c0e2a47a8885323b1
>> CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz
>>
>> please note: these images contain unsigned content ( cloud-init and
>> cloud-utils-* ), and are therefore unsuitable for use beyond validation
>> on your environment.
>>
>> regards,
>>
>
> Hey Karanbir,
> Most everything seems to be working with the image, thank you for
> posting it!! Hopefully the CentOS release can be fixed; I believe it
> should return 7.0
>
> e.g.
> [centos at packstack ~]$ cat /etc/redhat-release
> CentOS Linux release 7.0.1406 (Core)

How much and where is this causing a problem?

We are trying to message around the idea that effectively there is no distro 7.0 or 7.1 etc.; they are just various points in time on the CentOS-7 distro. We went through quite a few iterations around naming and numbering, and adding a datestamp to the release number was the only really 'acceptable' setup. The 1406 then indicates the age of the .0 base.

> The rdo public ci has successfully completed a packstack allinone run on
> CentOS-7.0 [1] w/ a few workarounds. I'll start to setup a foreman
> based install soon.

sounds good.

We will have regular image builds up soon; would it be possible for you to automate a new-image-get and deploy, say every two weeks perhaps?

> Thanks!!
> > [1] https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-icehouse-production-centos-70-aio-packstack-neutron-gre-rabbitmq/20/console
> > https://prod-rdojenkins.rhcloud.com/
> >
> > Workarounds used:
> > http://openstack.redhat.com/Workarounds
> > https://github.com/redhat-openstack/khaleesi/blob/master/workarounds/workarounds-pre-run-packstack.yml

nice!

--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc

From kfiresmith at gmail.com Thu Aug 28 09:43:56 2014
From: kfiresmith at gmail.com (Kodiak Firesmith)
Date: Thu, 28 Aug 2014 05:43:56 -0400
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image
In-Reply-To: <53FEE985.6080008@karan.org>
References: <53FDB5AE.1050907@karan.org> <1409172843.3051.21.camel@localhost.localdomain> <53FEE985.6080008@karan.org>
Message-ID:

Has anyone proven that messing with what /etc/redhat-release returns will not break Puppet's Facter operations related to OS release level? (And maybe some of the other config managers, but I'm not familiar...)

This is a curiosity I may be able to test today. We use Puppet's osmajrelease in a case statement that hands out rhel 5/6/7 specific configs.

In my environment I rely on conveying image freshness via the image name, which ends in 2014mm-r##.

-Kodiak

On Aug 28, 2014 4:36 AM, "Karanbir Singh" wrote:

> On 08/27/2014 09:54 PM, whayutin wrote:
> > On Wed, 2014-08-27 at 11:40 +0100, Karanbir Singh wrote:
> >> hi
> >>
> >> I've just pushed a GenericCloud image, that will become the gold
> >> standard to build all variants and environment-specific images from.
> >> Requesting people to help test this image : > >> > >> > http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 > >> ( ~ 922 MB) > >> or > >> > http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz > >> ( 261 MB) > >> > >> Sha256's; > >> > >> 3c049c21c19fb194cefdddbac2e4eb6a82664c043c7f2c7261bbeb32ec64023f > >> CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 > >> 4a16ca316d075b30e8fdc36946ebfd76c44b6882747a6e0c0e2a47a8885323b1 > >> CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz > >> > >> please note: these images contain unsigned content ( cloud-init and > >> cloud-utils-* ), and are therefore unsuiteable for use beyond validation > >> on your environment. > >> > >> regards, > >> > > > > Hey Karanbir, > > Most everything seems to be working with the image, thank you for > > posting it!! Hopefully the CentOS release can be fixed, I believe it > > should return 7.0 > > > > e.g. > > [centos at packstack ~]$ cat /etc/redhat-release > > CentOS Linux release 7.0.1406 (Core) > > how much and where is this causing a problem ? > > We are trying to message around the idea that effectively there is no > distro 7.0 or 7.1 etc, they are just various points in time on the > CentOS-7 distro. We went through quite a few iterations around naming > and numbering and adding a datestmp to the release number was the only > really 'acceptable' setup. The 1406 then indicates age of the .0 base. > > > > > > The rdo public ci has successfully completed a packstack allinone run on > > CentOS-7.0 [1] w/ a few workarounds. I'll start to setup a foreman > > based install soon. > > sounds good. > > We will have regular image builds up soon, would it be possible for you > to automate a new-image-get and deploy, say every two weeks perhaps ? > > > > > Thanks!! 
> > > [1] https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-icehouse-production-centos-70-aio-packstack-neutron-gre-rabbitmq/20/console
> > > https://prod-rdojenkins.rhcloud.com/
> > >
> > > Workarounds used:
> > > http://openstack.redhat.com/Workarounds
> > > https://github.com/redhat-openstack/khaleesi/blob/master/workarounds/workarounds-pre-run-packstack.yml
> >
> > nice!
> >
> > --
> > Karanbir Singh
> > +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
> > GnuPG Key : http://www.karan.org/publickey.asc
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mrunge at redhat.com Thu Aug 28 09:51:02 2014
From: mrunge at redhat.com (Matthias Runge)
Date: Thu, 28 Aug 2014 11:51:02 +0200
Subject: [Rdo-list] M3 test day
In-Reply-To: <20140828032747.GB7583@tesla.redhat.com>
References: <53FE2090.2010101@redhat.com> <20140828032747.GB7583@tesla.redhat.com>
Message-ID: <53FEFB86.6070007@redhat.com>

On 28/08/14 05:27, Kashyap Chamarthy wrote:
>
> [1] https://kashyapc.fedorapeople.org/query-koji-for-packages.bash
>
> --

[mrunge at turing SPECS (master)]$ koji latest-build rawhide python-django-horizon
Build                                     Tag                  Built by
----------------------------------------  -------------------- ----------------
python-django-horizon-2014.2-0.2.fc22     f22                  mrunge

Your script looks for openstack-horizon, but the source package is named python-django-horizon.
Matthias

From mail-lists at karan.org Thu Aug 28 10:01:43 2014
From: mail-lists at karan.org (Karanbir Singh)
Date: Thu, 28 Aug 2014 11:01:43 +0100
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image
In-Reply-To: References: <53FDB5AE.1050907@karan.org> <1409172843.3051.21.camel@localhost.localdomain> <53FEE985.6080008@karan.org>
Message-ID: <53FEFE07.6030304@karan.org>

On 08/28/2014 10:43 AM, Kodiak Firesmith wrote:
> Has anyone proven that messing with what /etc/redhat-release returns
> will not break Puppet's Facter operations related to OS release level?
> (And maybe some of the other config managers, but I'm not familiar...)

I would appreciate feedback on this - there were some issues with a few projects, and we've worked with them to overcome/resolve those. By and large, it's not been a huge deal. Most people have focused on osMajorRelease or an `rpm -q centos-release`, which should still return a 7.

> This is a curiosity I may be able to test today. We use Puppet's
> osmajrelease in a case statement that hands out rhel 5/6/7 specific configs.

let me know how you get on.

Regards

--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc

From ak at cloudssky.com Thu Aug 28 12:05:52 2014
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Thu, 28 Aug 2014 14:05:52 +0200
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image
In-Reply-To: <53FEFE07.6030304@karan.org>
References: <53FDB5AE.1050907@karan.org> <1409172843.3051.21.camel@localhost.localdomain> <53FEE985.6080008@karan.org> <53FEFE07.6030304@karan.org>
Message-ID:

Hi Karanbir,

finally got it working on havana too; the problem was with neutron - I had to restart the neutron server, and all things seem to be fine now.

Thx for the hard work!
Regards, Arash On Thu, Aug 28, 2014 at 12:01 PM, Karanbir Singh wrote: > On 08/28/2014 10:43 AM, Kodiak Firesmith wrote: > > Has anyone proven that messing with what /etc/redhat-release returns > > will not break Puppet's Facter operations related to OS release level? > > (And maybe some of the other config managers, but I'm not familiar...) > > I would appreciate feedback on this - there were some issues wth a few > projects, and we've worked with them to overcome/resolve those. By and > large, its not been a huge deal. Most people have focused on > osMajorRelease or a rpm -q centos-release, which should still return a 7 > > > This is a curiousity I may be able to test today. We use Puppet's > > osmajrelease in a case statement that hands out rhel 5/6/7 specific > configs. > > let me know how you get on. > > > Regards > > > -- > Karanbir Singh > +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh > GnuPG Key : http://www.karan.org/publickey.asc > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From kchamart at redhat.com Thu Aug 28 13:11:03 2014
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Thu, 28 Aug 2014 18:41:03 +0530
Subject: [Rdo-list] M3 test day
In-Reply-To: <53FEFB86.6070007@redhat.com>
References: <53FE2090.2010101@redhat.com> <20140828032747.GB7583@tesla.redhat.com> <53FEFB86.6070007@redhat.com>
Message-ID: <20140828131103.GA5072@tesla.pnq.redhat.com>

On Thu, Aug 28, 2014 at 11:51:02AM +0200, Matthias Runge wrote:
> On 28/08/14 05:27, Kashyap Chamarthy wrote:
> >
> > [1] https://kashyapc.fedorapeople.org/query-koji-for-packages.bash
> >
> > --
>
> [mrunge at turing SPECS (master)]$ koji latest-build rawhide python-django-horizon
> Build                                     Tag                  Built by
> ----------------------------------------  -------------------- ----------------
> python-django-horizon-2014.2-0.2.fc22     f22                  mrunge
>
> Your script looks for openstack-horizon, the source package is named
> python-django-horizon.

Duh, I had that change locally but forgot to refresh the above script. Thanks for the correction, Matthias - now updated.

--
/kashyap

From dustymabe at gmail.com Thu Aug 28 15:51:46 2014
From: dustymabe at gmail.com (Dusty Mabe)
Date: Thu, 28 Aug 2014 11:51:46 -0400
Subject: [Rdo-list] M3 test day
In-Reply-To: <20140828032747.GB7583@tesla.redhat.com>
References: <53FE2090.2010101@redhat.com> <20140828032747.GB7583@tesla.redhat.com>
Message-ID: <20140828155146.GA29741@hattop.hq.kanerai.net>

On Thu, Aug 28, 2014 at 08:57:47AM +0530, Kashyap Chamarthy wrote:
> On Wed, Aug 27, 2014 at 02:16:48PM -0400, Rich Bowen wrote:
> > We've been asked by the Fedora Cloud folks if they can help us do a test day
> > for the M3 packages. M3 release date is September 4 -
> > https://wiki.openstack.org/wiki/Juno_Release_Schedule - so could we
> > tentatively look at the week of the 22nd to do a test day?
>
> Sounds good to me. Just that we'd need to ensure packagers are aware of this
> date too.

How about, tentatively, Thursday September 25th?
Dusty

From whayutin at redhat.com Thu Aug 28 18:42:42 2014
From: whayutin at redhat.com (whayutin)
Date: Thu, 28 Aug 2014 14:42:42 -0400
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image
In-Reply-To: <53FEE985.6080008@karan.org>
References: <53FDB5AE.1050907@karan.org> <1409172843.3051.21.camel@localhost.localdomain> <53FEE985.6080008@karan.org>
Message-ID: <1409251362.15796.4.camel@localhost.localdomain>

On Thu, 2014-08-28 at 09:34 +0100, Karanbir Singh wrote:
> On 08/27/2014 09:54 PM, whayutin wrote:
> > On Wed, 2014-08-27 at 11:40 +0100, Karanbir Singh wrote:
> >> hi
> >>
> >> I've just pushed a GenericCloud image, that will become the gold
> >> standard to build all variants and environment-specific images from.
> >> Requesting people to help test this image :
> >>
> >> http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2
> >> ( ~ 922 MB)
> >> or
> >> http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz
> >> ( 261 MB)
> >>
> >> Sha256's:
> >>
> >> 3c049c21c19fb194cefdddbac2e4eb6a82664c043c7f2c7261bbeb32ec64023f
> >> CentOS-7-x86_64-GenericCloud-20140826_02.qcow2
> >> 4a16ca316d075b30e8fdc36946ebfd76c44b6882747a6e0c0e2a47a8885323b1
> >> CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz
> >>
> >> please note: these images contain unsigned content ( cloud-init and
> >> cloud-utils-* ), and are therefore unsuitable for use beyond validation
> >> on your environment.
> >>
> >> regards,
> >>
> >
> > Hey Karanbir,
> > Most everything seems to be working with the image, thank you for
> > posting it!! Hopefully the CentOS release can be fixed; I believe it
> > should return 7.0
> >
> > e.g.
> > [centos at packstack ~]$ cat /etc/redhat-release
> > CentOS Linux release 7.0.1406 (Core)
>
> how much and where is this causing a problem ?

It is causing ansible to not recognize the release major as 7.0. I would expect CentOS to behave in the same way as RHEL:
[root at localhost ~]# rpm -qa |grep release
redhat-release-server-7.0-1.el7.x86_64
[root at localhost ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)
[root at localhost ~]#

> We are trying to message around the idea that effectively there is no
> distro 7.0 or 7.1 etc.; they are just various points in time on the
> CentOS-7 distro. We went through quite a few iterations around naming
> and numbering, and adding a datestamp to the release number was the only
> really 'acceptable' setup. The 1406 then indicates the age of the .0 base.
>
> > The rdo public ci has successfully completed a packstack allinone run on
> > CentOS-7.0 [1] w/ a few workarounds. I'll start to setup a foreman
> > based install soon.
>
> sounds good.
>
> We will have regular image builds up soon, would it be possible for you
> to automate a new-image-get and deploy, say every two weeks perhaps ?

That should be fine. Thanks, guys!

> > Thanks!!
> >
> > [1] https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-icehouse-production-centos-70-aio-packstack-neutron-gre-rabbitmq/20/console
> > https://prod-rdojenkins.rhcloud.com/
> >
> > Workarounds used:
> > http://openstack.redhat.com/Workarounds
> > https://github.com/redhat-openstack/khaleesi/blob/master/workarounds/workarounds-pre-run-packstack.yml
>
> nice!

From rdo-info at redhat.com Thu Aug 28 19:44:07 2014
From: rdo-info at redhat.com (RDO Forum)
Date: Thu, 28 Aug 2014 19:44:07 +0000
Subject: [Rdo-list] [RDO] RDO Google Hangout: Deploying with Heat (September 5)
Message-ID: <000001481e257f52-7155853d-44dc-4d3b-8364-c791cc23b9c5-000000@email.amazonses.com>

rbowen started a discussion: RDO Google Hangout: Deploying with Heat (September 5)
---
Follow the link below to check it out:
http://openstack.redhat.com/forum/discussion/980/rdo-google-hangout-deploying-with-heat-september-5

Have a great day!
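[The release-string question running through the thread above comes down to how tools extract the major version from /etc/redhat-release. A small sketch of that parsing; `major_release` is a hypothetical helper, not what Facter or Ansible actually run:

```shell
major_release() {
    # Extract the major version from an /etc/redhat-release style string,
    # e.g. "CentOS Linux release 7.0.1406 (Core)" -> 7.
    # Hypothetical helper; Facter and Ansible do their own (similar) parsing.
    echo "$1" | grep -oE '[0-9]+(\.[0-9]+)*' | head -n1 | cut -d. -f1
}

major_release "CentOS Linux release 7.0.1406 (Core)"                 # prints 7
major_release "Red Hat Enterprise Linux Server release 7.0 (Maipo)"  # prints 7
```

Both strings yield the same major number, which is why checks built on a major-release fact (or on `rpm -q centos-release`) keep working despite the datestamp in the CentOS release string.]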
From sellis at redhat.com Thu Aug 28 23:43:39 2014 From: sellis at redhat.com (Steven Ellis) Date: Fri, 29 Aug 2014 11:43:39 +1200 Subject: [Rdo-list] Meetups in the coming week In-Reply-To: <53FCBB58.5030909@redhat.com> References: <53FCBB58.5030909@redhat.com> Message-ID: <53FFBEAB.6000905@redhat.com> On 08/27/2014 04:52 AM, Rich Bowen wrote: > > * Introduction to RDO and packstack, August 28, New Zealand OpenStack > User Group, Auckland - > http://www.meetup.com/New-Zealand-OpenStack-User-Group/events/199656102/ > Some photos are now up at the link below, plus my presentation materials are now available at my Red Hat page. http://www.meetup.com/New-Zealand-OpenStack-User-Group/events/199656102/ http://people.redhat.com/sellis/ Overall a good little session with some great questions from the audience. As part of the session I installed RDO on a RHEL7 host during the talk and then showed off the key features of OpenStack. To keep things simple we used an allinone profile with neutron turned off. Just to see what would happen we edited the packstack.answers file to enable Heat and then re-ran PackStack on the existing install, and it cleanly enabled the Heat components. If you're reading my presentation materials there are some tips/tricks on how I set-up nested virtualisation and nova networking at the back, and thanks to some of the RDO team for the materials I based my presentation off, plus forcing me to play with reveal.js finally. Steve -- Steven Ellis Solution Architect - Red Hat New Zealand *E:* sellis at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From sdake at redhat.com Fri Aug 29 16:20:12 2014
From: sdake at redhat.com (Steven Dake)
Date: Fri, 29 Aug 2014 09:20:12 -0700
Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image
In-Reply-To: <53FDB5AE.1050907@karan.org>
References: <53FDB5AE.1050907@karan.org>
Message-ID: <5400A83C.1050300@redhat.com>

On 08/27/2014 03:40 AM, Karanbir Singh wrote:
> hi
>
> I've just pushed a GenericCloud image, that will become the gold
> standard to build all variants and environment-specific images from.
> Requesting people to help test this image :
>
> http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2
> ( ~ 922 MB)
> or
> http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz
> ( 261 MB)
>
> Sha256's:
>
> 3c049c21c19fb194cefdddbac2e4eb6a82664c043c7f2c7261bbeb32ec64023f
> CentOS-7-x86_64-GenericCloud-20140826_02.qcow2
> 4a16ca316d075b30e8fdc36946ebfd76c44b6882747a6e0c0e2a47a8885323b1
> CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz
>
> please note: these images contain unsigned content ( cloud-init and
> cloud-utils-* ), and are therefore unsuitable for use beyond validation
> on your environment.
>
> regards,
>

It appears diskimage-builder is busted for use with the latest CentOS images. This is a problem for OpenStack because it means both Heat and TripleO won't operate.

https://bugs.launchpad.net/heat/+bug/1363146

Regards,
-steve

From rbowen at redhat.com Fri Aug 29 17:58:20 2014
From: rbowen at redhat.com (Rich Bowen)
Date: Fri, 29 Aug 2014 13:58:20 -0400
Subject: [Rdo-list] SSL on openstack.redhat.com
Message-ID: <5400BF3C.5060002@redhat.com>

This has been on my ToDo list for at least a year now. Sorry, other things kept getting in the way.

We finally have SSL on openstack.redhat.com. Please let me know if you experience any problem with this at all. Thanks.
--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://openstack.redhat.com/

From mail-lists at karan.org Sat Aug 30 10:27:58 2014
From: mail-lists at karan.org (Karanbir Singh)
Date: Sat, 30 Aug 2014 11:27:58 +0100
Subject: [Rdo-list] Getting the CentOS-7 fixes rolled into RDO
Message-ID: <5401A72E.6050604@karan.org>

hi guys,

Is there a timeline on when we can expect the CentOS-7 workarounds to get rolled into packstack/rdo itself?

--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc