From ak at cloudssky.com Fri May 1 00:35:17 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Fri, 1 May 2015 02:35:17 +0200
Subject: [Rdo-list] FW: RDO build that passed CI (rc2)
In-Reply-To:
References: <193920460.7761474.1430206752811.JavaMail.zimbra@redhat.com> <428601341.7768298.1430207687694.JavaMail.zimbra@redhat.com> <5540FC2A.9050906@redhat.com>
Message-ID:

I made a fresh install of CentOS 7.0 mini with cobbler again w/o yum update and got after the first run:

ERROR : Error appeared during Puppet run: 20.0.0.11_ceilometer.pp
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-ceilometer-notification' returned :
You will find full trace in log /var/tmp/packstack/20150430-193045-vAhnyX/manifests/212.224.122.82_ceilometer.pp.log

This was an issue with Juno too. Disabled Ceilometer with:

sed -i 's/CONFIG_CEILOMETER_INSTALL=y/CONFIG_CEILOMETER_INSTALL=n/g' packstack-aio

ran packstack again and it was successful:

**** Installation completed successfully ******

Dashboard works on AIO, but spawning a cirros instance as a demo user in the dashboard gives:

Error: Failed to perform requested operation on instance "cirros1", the instance has an error status: Please try again later [Error: No valid host was found. There are not enough hosts available.].

And in fact the compute_nodes table is empty:

MariaDB [nova]> select * from compute_nodes;
Empty set (0.00 sec)

The warning that is shown about NetworkManager after installation was already there with Juno, and as I remember it could be ignored for AIO. But for our records I did:

[root at localhost ~]# yum update -y
[root at localhost ~]# systemctl stop NetworkManager.service
[root at localhost ~]# systemctl disable NetworkManager.service
[root at localhost ~]# packstack --answer-file=packstak-aio

**** Installation completed successfully ******

Tried to spawn a cirros2 instance again and got the same result as above.

On Fri, May 1, 2015 at 1:01 AM, Arash Kaffamanesh wrote:
> I did a CentOS fresh install with the following steps for AIO:
>
> yum -y update
>
> cat /etc/redhat-release
> CentOS Linux release 7.1.1503 (Core)
>
> yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm
> yum install epel-release
> cd /etc/yum.repos.d/
> curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc2/delorean-kilo.repo
> yum install openstack-packstack
> setenforce 0
> packstack --allinone
>
> and got again:
>
> Error: nmcli (1.0.0) and NetworkManager (0.9.9.1) versions don't match.
> Force execution using --nocheck, but the results are unpredictable.
>
> But if I don't do a yum update and install AIO, it finishes successfully
> and I can yum update afterwards.
>
> So if nobody can reproduce this issue, then something is wrong with my
> base CentOS install; I'll try to install the latest CentOS from ISO now.
>
> Thanks!
> Arash
>
> On Fri, May 1, 2015 at 12:42 AM, Arash Kaffamanesh wrote:
>
>> I'm installing CentOS with cobbler and kickstart (from centos7-mini) on 2
>> machines and I'm trying a 2 node install. With rc1 it worked without yum
>> update. I'll do a fresh install now with yum update and let you know.
>>
>> Thanks!
>> Arash
>>
>> On Fri, May 1, 2015 at 12:23 AM, Alan Pevec wrote:
>>
>>> 2015-05-01 0:12 GMT+02:00 Arash Kaffamanesh :
>>> > But if I yum update it into 7.1, then we have the issue with nmcli:
>>> >
>>> > Error: nmcli (1.0.0) and NetworkManager (0.9.9.1) versions don't match.
>>> > Force execution using --nocheck, but the results are unpredictable. >>> >>> Huh, again?! I thought that was solved after you did yum update... >>> My original answer to that is still the same "Not sure how could that >>> happen, nmcli is part of NetworkManager RPM." >>> Can you reproduce this w/o RDO in the picture, starting with the clean >>> centos installation? How are you installing centos? >>> >>> Cheers, >>> Alan >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Fri May 1 01:14:43 2015 From: apevec at gmail.com (Alan Pevec) Date: Fri, 1 May 2015 03:14:43 +0200 Subject: [Rdo-list] FW: RDO build that passed CI (rc2) In-Reply-To: References: <193920460.7761474.1430206752811.JavaMail.zimbra@redhat.com> <428601341.7768298.1430207687694.JavaMail.zimbra@redhat.com> <5540FC2A.9050906@redhat.com> Message-ID: > I made a fresh install of CentOS 7.0 mini with cobbler again w/o yum update and got after the first run: Why not start with 7.1 image? ... > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-ceilometer-notification' returned : ... > This was an issue with Juno too. I don't remember, what was the resolution with Juno? What is the output when you run from shell yum install openstack-ceilometer-notification ? It works for me on default centos7 install with epel and Kilo RC2 enabled. Cheers, Alan From bderzhavets at hotmail.com Fri May 1 05:44:17 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 1 May 2015 01:44:17 -0400 Subject: [Rdo-list] Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 Message-ID: Follow instructions https://www.redhat.com/archives/rdo-list/2015-April/msg00254.html packstack fails :- Applying 192.169.142.127_nova.pp Applying 192.169.142.137_nova.pp 192.169.142.127_nova.pp: [ DONE ] 192.169.142.137_nova.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.169.142.137_nova.pp Error: Could not start Service[nova-compute]: Execution of '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for openstack-nova-compute.service failed. See 'systemctl status openstack-nova-compute.service' and 'journalctl -xn' for details. You will find full trace in log /var/tmp/packstack/20150501-081745-rIpCIr/manifests/192.169.142.137_nova.pp.log In both cases (RC2 or CI repos) on compute node 192.169.142.137 /var/log/nova/nova-compute.log reports :- 2015-05-01 08:21:41.354 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Delaying reconnect for 1.0 seconds... 2015-05-01 08:21:42.355 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Connecting to AMQP server on localhost:5672 2015-05-01 08:21:42.360 4999 ERROR oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 11 seconds. Seems like it is looking for AMQP Server at wrong host . 
Should be 192.169.142.127 On 192.169.142.127 :- [root at ip-192-169-142-127 ~]# netstat -lntp | grep 5672 ==> tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 14506/beam.smp tcp6 0 0 :::5672 :::* LISTEN 14506/beam.smp [root at ip-192-169-142-127 ~]# iptables-save | grep 5672 -A INPUT -s 192.169.142.127/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.127" -j ACCEPT -A INPUT -s 192.169.142.137/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.137" -j ACCEPT Answer-file is attached Thanks. Boris -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: answer-fileRC2.txt.gz Type: application/x-gzip Size: 1917 bytes Desc: not available URL: From bderzhavets at hotmail.com Fri May 1 07:02:28 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 1 May 2015 03:02:28 -0400 Subject: [Rdo-list] RE(1) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 In-Reply-To: References: Message-ID: Ran packstack --debug --answer-file=./answer-fileRC2.txt 192.169.142.137_nova.pp.log.gz attached Boris From: bderzhavets at hotmail.com To: apevec at gmail.com Date: Fri, 1 May 2015 01:44:17 -0400 CC: rdo-list at redhat.com Subject: [Rdo-list] Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 Follow instructions https://www.redhat.com/archives/rdo-list/2015-April/msg00254.html packstack fails :- Applying 192.169.142.127_nova.pp Applying 192.169.142.137_nova.pp 192.169.142.127_nova.pp: [ DONE ] 192.169.142.137_nova.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.169.142.137_nova.pp Error: Could not start Service[nova-compute]: Execution of '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for openstack-nova-compute.service failed. See 'systemctl status openstack-nova-compute.service' and 'journalctl -xn' for details. You will find full trace in log /var/tmp/packstack/20150501-081745-rIpCIr/manifests/192.169.142.137_nova.pp.log In both cases (RC2 or CI repos) on compute node 192.169.142.137 /var/log/nova/nova-compute.log reports :- 2015-05-01 08:21:41.354 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Delaying reconnect for 1.0 seconds... 2015-05-01 08:21:42.355 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Connecting to AMQP server on localhost:5672 2015-05-01 08:21:42.360 4999 ERROR oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 11 seconds. Seems like it is looking for AMQP Server at wrong host . Should be 192.169.142.127 On 192.169.142.127 :- [root at ip-192-169-142-127 ~]# netstat -lntp | grep 5672 ==> tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 14506/beam.smp tcp6 0 0 :::5672 :::* LISTEN 14506/beam.smp [root at ip-192-169-142-127 ~]# iptables-save | grep 5672 -A INPUT -s 192.169.142.127/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.127" -j ACCEPT -A INPUT -s 192.169.142.137/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.137" -j ACCEPT Answer-file is attached Thanks. 
Boris

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list
To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 192.169.142.137_nova.pp.log.gz
Type: application/x-gzip
Size: 20605 bytes
Desc: not available
URL:

From jack.lauritsen at gmail.com Fri May 1 13:31:20 2015
From: jack.lauritsen at gmail.com (Jack Lauritsen)
Date: Fri, 1 May 2015 09:31:20 -0400
Subject: [Rdo-list] RE(4): RE(3): RE(2) : RDO Kilo RC snapshot - core packages
In-Reply-To:
References: <55396E31.4030603@arif-ali.co.uk> <5539E107.1010701@redhat.com> <20150429152624.GF2764@tesla.redhat.com>
Message-ID:

RC2 at https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc2/ does not contain a Kilo RPM for Sahara. Will that be added to RC2 soon, or is RC3 about to drop?

On Thu, Apr 30, 2015 at 6:03 PM, Alan Pevec wrote:
> > Apr 30 14:40:33 csky06.csg.net nova-compute[4569]: ImportError: No module
> > named oslo_config
>
> That would mean python-oslo-config is not installed or an old (Juno)
> version; what does rpm -q python-oslo-config return? Was this an upgrade
> or a clean install?
>
> Cheers,
> Alan
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rbowen at redhat.com Fri May 1 14:12:28 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Fri, 01 May 2015 10:12:28 -0400
Subject: [Rdo-list] Kilo released - what's new?
Message-ID: <554389CC.90402@redhat.com>

As you no doubt know by now, OpenStack Kilo released yesterday. You can read the release notes at https://wiki.openstack.org/wiki/ReleaseNotes/Kilo

There's a lot there. Now that the release is out and you have nothing else to do (ha, ha), I was wondering if a few people would be willing to spend 10-15 minutes talking with me about the various projects, and what the really great new bits are. If we can do these as Google hangouts, I can record them, and then write up a series of blog posts about each of the projects and what people should get excited about in Kilo.

If you could get in touch with me to claim a topic, that would be great; otherwise I'll start hunting you down as I work through the list.

So, here's the list of topics that I'd like to cover, just working through the release notes:

* Swift
* Nova
* Glance
* Horizon
* Keystone
* Neutron
* Cinder
* Ceilometer
* Heat
* Trove
* Ironic
* Documentation

I'll also probably be bugging people on IRC. Thanks!

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From ak at cloudssky.com Fri May 1 15:43:57 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Fri, 1 May 2015 17:43:57 +0200
Subject: [Rdo-list] RE(4): RE(3): RE(2) : RDO Kilo RC snapshot - core packages
In-Reply-To:
References: <55396E31.4030603@arif-ali.co.uk> <5539E107.1010701@redhat.com> <20150429152624.GF2764@tesla.redhat.com>
Message-ID:

It was a yum-updated system. Now I did a new RC2 AIO install on the latest CentOS 7.1 VM, which was a successful run. A CentOS 7.0 system must be yum updated and rebooted so that the new kernel for 7.1 takes effect.
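A minimal pre-flight sketch of that check, using only commands already shown in this thread (the expected strings are examples taken from the messages above):

    cat /etc/redhat-release   # expect: CentOS Linux release 7.1.1503 (Core) after yum update
    uname -r                  # confirm the host actually rebooted into the updated kernel
    rpm -q NetworkManager     # nmcli ships in this RPM, so the two versions below should agree
    nmcli --version

If nmcli still reports an older version than the NetworkManager package after an update, the host most likely has not been rebooted yet, which matches Alan's earlier note that nmcli is part of the NetworkManager RPM.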
By the way, the python oslo version is:

[root at kilo-rc2 ~]# rpm -q python-oslo-config
python-oslo-config-1.9.3-post1.el7.centos.noarch

Thanks!
Arash

On Fri, May 1, 2015 at 12:03 AM, Alan Pevec wrote:
> > Apr 30 14:40:33 csky06.csg.net nova-compute[4569]: ImportError: No module
> > named oslo_config
>
> That would mean python-oslo-config is not installed or an old (Juno)
> version; what does rpm -q python-oslo-config return? Was this an upgrade
> or a clean install?
>
> Cheers,
> Alan
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From apevec at gmail.com Fri May 1 16:12:37 2015
From: apevec at gmail.com (Alan Pevec)
Date: Fri, 1 May 2015 18:12:37 +0200
Subject: [Rdo-list] RE(4): RE(3): RE(2) : RDO Kilo RC snapshot - core packages
In-Reply-To:
References: <55396E31.4030603@arif-ali.co.uk> <5539E107.1010701@redhat.com> <20150429152624.GF2764@tesla.redhat.com>
Message-ID:

2015-05-01 15:31 GMT+02:00 Jack Lauritsen :
> RC2 at https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc2/
> does not contain a Kilo RPM for Sahara.
> Will that be added to RC2 soon, or is RC3 about to drop?

This was built by the Delorean build system, which is tracking upstream source repos, and sahara was not part of Delorean Trunk in Kilo. It has been added and will be chasing trunk during the Liberty cycle. Sahara will be in RDO Kilo GA; in the meantime you can take the candidate CentOS Cloud SIG build https://cbs.centos.org/koji/buildinfo?buildID=1122

Cheers,
Alan

From mohammed.arafa at gmail.com Fri May 1 17:16:22 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Fri, 1 May 2015 13:16:22 -0400
Subject: [Rdo-list] rdo-manager virt setup still stuck on neutron
Message-ID:

+ setup-neutron -n /tmp/tmp.Qxz7JkrlLI
/usr/lib/python2.7/site-packages/novaclient/v1_1/__init__.py:30: UserWarning: Module novaclient.v1_1 is deprecated (taken as a basis for novaclient.v2). The preferable way to get client class or object you can find in novaclient.client module.
warnings.warn("Module novaclient.v1_1 is deprecated (taken as a basis for "

--
805010942448935
GR750055912MA
Link to me on LinkedIn

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aragonx at dcsnow.com Fri May 1 17:30:12 2015
From: aragonx at dcsnow.com (Will Yonker)
Date: Fri, 1 May 2015 17:30:12 -0000
Subject: [Rdo-list] Update to Quickstart guide
Message-ID: <6d403aad2930021683950b62754679e1.squirrel@www.dcsnow.com>

Hi,

I did an install of RDO on one of my Linux boxes. Our default setup has root logon through SSH disabled. Perhaps it should be mentioned in the prerequisites section of the quickstart guide that root logon should be enabled in /etc/ssh/sshd_config (PermitRootLogin yes)?

---
Will Y.

--
This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rbowen at redhat.com Fri May 1 17:45:38 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Fri, 01 May 2015 13:45:38 -0400
Subject: [Rdo-list] [Rdo-newsletter] RDO Community Newsletter - April 2015
Message-ID: <5543BBC2.7060104@redhat.com>

Thanks for being part of the RDO community!
Quick links:

* Quick Start - http://rdoproject.org/quickstart
* Mailing Lists - http://rdoproject.org/Mailing_lists
* RDO packages - https://repos.fedorapeople.org/repos/openstack/openstack-juno/
* RDO blog - http://rdoproject.org/blog
* Q&A - http://ask.openstack.org/

RDO Test Day

With Kilo out (more on this below), we need help testing, so that we can make RDO as solid as possible. We will be holding the Kilo RDO test day on May 5th and 6th, and would greatly appreciate your help in this. We'll be coordinating on #rdo on the Freenode IRC network, and on the rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list - for any questions you may have.

Details about the test day are developing in the wiki, at https://www.rdoproject.org/RDO_test_day_Kilo

Test cases and documentation will be appearing there over the coming few days. Please help us by setting aside an hour or two to help test a scenario or two, to solidify the RDO release. Thanks!

OpenStack Summit

We're just days away from the OpenStack Summit and OpenStack Liberty Design Summit. We'll be in Booth H4, under the big RDO sign. Drop by and get your free RDO t-shirt. We'll also have demos of RDO-Manager - https://www.rdoproject.org/RDO-Manager - and the downstream Red Hat OpenStack Platform product.

We're also hoping to have an RDO meetup (details pending), where we'll discuss the direction of RDO, and how you can get more involved. The agenda for that meeting is available at https://etherpad.openstack.org/p/RDO_Vancouver for you to add your ideas and register your interest.

OpenStack Summit is where the direction of the next major release of OpenStack, codenamed Liberty, will be discussed. Thousands of OpenStack developers and operators will be gathering in Vancouver, and you can still be part of that if you register now at http://tm3.org/summit-register

OpenStack Kilo Released

The latest major release of OpenStack, codenamed Kilo, dropped on April 30th, and is pretty impressive. For the full scoop, see the release notes at https://wiki.openstack.org/wiki/ReleaseNotes/Kilo

Highlights include:

* Erasure code storage policy type in Swift - http://swift.openstack.org/overview_erasure_code.html
* Support for theming in the Horizon dashboard - http://docs.openstack.org/developer/horizon/topics/settings.html#custom-theme-path
* Added support for OpenID Connect as a federated identity provider in Keystone - http://docs.openstack.org/developer/keystone/extensions/openidc.html
* Gnocchi dispatch support for ceilometer-collector - http://launchpad.net/gnocchi
* New http://docs.openstack.org/ landing page and the first release of the new Networking Guide - http://docs.openstack.org/networking-guide/

... And so much more. We'll be blogging about several of these features in the coming weeks on the RDO blog (http://rdoproject.org/blog) and on RedHatStack http://redhatstackblog.redhat.com/ so stay tuned.

Technical Committee Elections

Congratulations to everyone elected in the recent TC election. (See http://lists.openstack.org/pipermail/openstack-dev/2015-April/063000.html for the full results.) In particular, a big shout out to Flavio Percoco, who is a prominent member of the RDO community, and a big contributor to the Zaqar project. We look forward to your leadership in this critical part of OpenStack governance. Flavio blogs frequently about OpenStack at http://blog.flaper87.org/

Keep in touch

There are lots of ways to stay in touch with what's going on in the RDO community. The best ways are ...
WWW * RDO - http://rdoproject.org/ * OpenStack Q&A - http://ask.openstack.org/ Mailing Lists: * rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list * This newsletter - http://www.redhat.com/mailman/listinfo/rdo-newsletter IRC * IRC - #rdo on Freenode.irc.net * Puppet module development - #rdo-puppet Social Media: * Follow us on Twitter - http://twitter.com/rdocommunity * Google+ - http://tm3.org/rdogplus * Facebook - http://facebook.com/rdocommunity Thanks again for being part of the RDO community! -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From rbowen at redhat.com Fri May 1 19:11:01 2015 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 01 May 2015 15:11:01 -0400 Subject: [Rdo-list] Update to Quickstart guide In-Reply-To: <6d403aad2930021683950b62754679e1.squirrel@www.dcsnow.com> References: <6d403aad2930021683950b62754679e1.squirrel@www.dcsnow.com> Message-ID: <5543CFC5.9060109@redhat.com> On 05/01/2015 01:30 PM, Will Yonker wrote: > Hi, > > I did an install of RDO on one of my Linux boxes. Our default setup has > root logon through SSH disabled. Perhaps is should be mentioned in the > prerequisites section of the quickstart guide that root logon should be > enabled in /etc/ssh/sshd_config (PermitRootLogin yes)? This is certainly the case by default on CentOS. Where were you installing where it wasn't the case? I'm reluctant to try to have the QuickStart compensate for every possible local customization, or we'll have to call it just Start instead. ;-) But perhaps a linked "more details" document that we can link to specific section of, through the QuickStart? It's already drifting in that direction (ie, not so Quick actually), and we need to figure out a way to get it back down to being Quick, while still having more details available for everyone that has special needs. Suggestions always welcome. (Which will, of course, be easier once we have the site in Git rather than in Mediawiki.) -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From ak at cloudssky.com Fri May 1 20:22:41 2015 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Fri, 1 May 2015 22:22:41 +0200 Subject: [Rdo-list] RE(1) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 In-Reply-To: References: Message-ID: I got the compute node working by adding the delorean-kilo.repo on compute node, yum updating the compute node, rebooted and extended the packstack file from the first AIO install with the IP of compute node and ran packstack again with NetworkManager enabled and did a second yum update on compute node before the 3rd packstack run, and now it works :-) In short, for RC2 we have to force by hand to get the nova-compute running on compute node, before running packstack from controller again from an existing AIO install. Now I have 2 compute nodes (controller AIO with compute + 2nd compute) and could spawn a 3rd cirros instance which landed on 2nd compute node. ssh'ing into the instances over the floating ip works fine too. 
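The answer-file delta behind this recipe is small; a hedged sketch with placeholder values (the file name and IPs below are examples, not the ones from this thread):

    # in the answer file generated by the original AIO run:
    CONFIG_COMPUTE_HOSTS=192.168.0.10,192.168.0.20   # AIO host plus the new compute node
    EXCLUDE_SERVERS=                                 # empty: puppet re-runs on all hosts
    # then:
    packstack --answer-file=answers.txt

Setting EXCLUDE_SERVERS to a comma-separated list of already-configured hosts would instead skip re-applying manifests there; leaving it empty lets packstack touch every listed host again.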
Before running packstack again, I set: EXCLUDE_SERVERS= [root at csky01 ~(keystone_osx)]# virsh list --all Id Name Status ---------------------------------------------------- 2 instance-00000001 laufend --> means running in German 3 instance-00000002 laufend --> means running in German [root at csky06 ~]# virsh list --all Id Name Status ---------------------------------------------------- 2 instance-00000003 laufend --> means running in German == Nova managed services == +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+ | Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+ | 1 | nova-consoleauth | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 2 | nova-conductor | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 3 | nova-scheduler | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 4 | nova-compute | csky01.csg.net | nova | enabled | up | 2015-05-01T19:46:40.000000 | - | | 5 | nova-cert | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 6 | nova-compute | csky06.csg.net | nova | enabled | up | 2015-05-01T19:46:38.000000 | - | +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+ On Fri, May 1, 2015 at 9:02 AM, Boris Derzhavets wrote: > Ran packstack --debug --answer-file=./answer-fileRC2.txt > 192.169.142.137_nova.pp.log.gz attached > > Boris > > ------------------------------ > From: bderzhavets at hotmail.com > To: apevec at gmail.com > Date: Fri, 1 May 2015 01:44:17 -0400 > CC: rdo-list at redhat.com > Subject: [Rdo-list] Failure to start openstack-nova-compute on Compute > Node when testing delorean RC2 or CI repo on CentOS 7.1 > > Follow instructions > https://www.redhat.com/archives/rdo-list/2015-April/msg00254.html > packstack fails :- > > Applying 192.169.142.127_nova.pp > Applying 192.169.142.137_nova.pp > 192.169.142.127_nova.pp: [ DONE ] > 192.169.142.137_nova.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 192.169.142.137_nova.pp > Error: Could not start Service[nova-compute]: Execution of > '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for > openstack-nova-compute.service failed. See 'systemctl status > openstack-nova-compute.service' and 'journalctl -xn' for details. > You will find full trace in log > /var/tmp/packstack/20150501-081745-rIpCIr/manifests/192.169.142.137_nova.pp.log > > In both cases (RC2 or CI repos) on compute node 192.169.142.137 > /var/log/nova/nova-compute.log > reports :- > > 2015-05-01 08:21:41.354 4999 INFO oslo.messaging._drivers.impl_rabbit > [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Delaying reconnect for 1.0 > seconds... > 2015-05-01 08:21:42.355 4999 INFO oslo.messaging._drivers.impl_rabbit > [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Connecting to AMQP server on > localhost:5672 > 2015-05-01 08:21:42.360 4999 ERROR oslo.messaging._drivers.impl_rabbit > [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] AMQP server on localhost:5672 > is unreachable: [Errno 111] ECONNREFUSED. Trying again in 11 seconds. > > Seems like it is looking for AMQP Server at wrong host . 
Should be > 192.169.142.127 > On 192.169.142.127 :- > > [root at ip-192-169-142-127 ~]# netstat -lntp | grep 5672 > ==> tcp 0 0 0.0.0.0:25672 0.0.0.0:* > LISTEN 14506/beam.smp > tcp6 0 0 :::5672 > :::* LISTEN 14506/beam.smp > > [root at ip-192-169-142-127 ~]# iptables-save | grep 5672 > -A INPUT -s 192.169.142.127/32 -p tcp -m multiport --dports 5671,5672 -m > comment --comment "001 amqp incoming amqp_192.169.142.127" -j ACCEPT > -A INPUT -s 192.169.142.137/32 -p tcp -m multiport --dports 5671,5672 -m > comment --comment "001 amqp incoming amqp_192.169.142.137" -j ACCEPT > > Answer-file is attached > > Thanks. > Boris > > _______________________________________________ Rdo-list mailing list > Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To > unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ak at cloudssky.com Fri May 1 20:45:12 2015 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Fri, 1 May 2015 22:45:12 +0200 Subject: [Rdo-list] Kilo RC2 issue with Ceilometer (Resources Usage in Dashboard) Message-ID: On a 2 node install for Kilo RC2 Ceilometer Resource Usage doesn't work. (screenshot attached). [root at csky01 ~]# tail -f /var/log/ceilometer/compute.log 2015-05-01 16:41:18.038 19053 TRACE ceilometer.coordination 2015-05-01 16:41:18.039 19053 ERROR ceilometer.coordination [-] Error sending a heartbeat to coordination backend. 2015-05-01 16:41:18.039 19053 TRACE ceilometer.coordination Traceback (most recent call last): 2015-05-01 16:41:18.039 19053 TRACE ceilometer.coordination File "/usr/lib/python2.7/site-packages/ceilometer/coordination.py", line 105, in heartbeat 2015-05-01 16:41:18.039 19053 TRACE ceilometer.coordination self._coordinator.heartbeat() 2015-05-01 16:41:18.039 19053 TRACE ceilometer.coordination File "/usr/lib/python2.7/site-packages/tooz/drivers/redis.py", line 378, in heartbeat 2015-05-01 16:41:18.039 19053 TRACE ceilometer.coordination value=b"Not dead!") 2015-05-01 16:41:18.039 19053 TRACE ceilometer.coordination File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__ 2015-05-01 16:41:18.039 19053 TRACE ceilometer.coordination self.gen.throw(type, value, traceback) 2015-05-01 16:41:18.039 19053 TRACE ceilometer.coordination File "/usr/lib/python2.7/site-packages/tooz/drivers/redis.py", line 77, in _translate_failures 2015-05-01 16:41:18.039 19053 TRACE ceilometer.coordination raise coordination.ToozConnectionError(utils.exception_message(e)) 2015-05-01 16:41:18.039 19053 TRACE ceilometer.coordination ToozConnectionError: Error 111 connecting to 20.0.0.11:6379. ECONNREFUSED. 2015-05-01 16:41:18.039 19053 TRACE ceilometer.coordination 2015-05-01 16:41:19.038 19053 ERROR ceilometer.coordination [-] Error connecting to coordination backend. Thx, Arash -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Screen Shot 2015-05-01 at 22.33.43.png Type: image/png Size: 105075 bytes Desc: not available URL: From bderzhavets at hotmail.com Fri May 1 21:12:09 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 1 May 2015 17:12:09 -0400 Subject: [Rdo-list] RE(1) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 In-Reply-To: References: , , Message-ID: Thank you very much for explanations, hopefully in GA release it would be fixed. Boris. Date: Fri, 1 May 2015 22:22:41 +0200 Subject: Re: [Rdo-list] RE(1) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 From: ak at cloudssky.com To: bderzhavets at hotmail.com CC: apevec at gmail.com; rdo-list at redhat.com I got the compute node working by adding the delorean-kilo.repo on compute node,yum updating the compute node, rebooted and extended the packstack file from the first AIOinstall with the IP of compute node and ran packstack again with NetworkManager enabledand did a second yum update on compute node before the 3rd packstack run, and now it works :-) In short, for RC2 we have to force by hand to get the nova-compute running on compute node,before running packstack from controller again from an existing AIO install. Now I have 2 compute nodes (controller AIO with compute + 2nd compute) and could spawn a3rd cirros instance which landed on 2nd compute node.ssh'ing into the instances over the floating ip works fine too. Before running packstack again, I set: EXCLUDE_SERVERS= [root at csky01 ~(keystone_osx)]# virsh list --all Id Name Status ---------------------------------------------------- 2 instance-00000001 laufend --> means running in German 3 instance-00000002 laufend --> means running in German [root at csky06 ~]# virsh list --all Id Name Status ---------------------------------------------------- 2 instance-00000003 laufend --> means running in German == Nova managed services == +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+ | Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+ | 1 | nova-consoleauth | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 2 | nova-conductor | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 3 | nova-scheduler | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 4 | nova-compute | csky01.csg.net | nova | enabled | up | 2015-05-01T19:46:40.000000 | - | | 5 | nova-cert | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 6 | nova-compute | csky06.csg.net | nova | enabled | up | 2015-05-01T19:46:38.000000 | - | +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+ On Fri, May 1, 2015 at 9:02 AM, Boris Derzhavets wrote: Ran packstack --debug --answer-file=./answer-fileRC2.txt 192.169.142.137_nova.pp.log.gz attached Boris From: bderzhavets at hotmail.com To: apevec at gmail.com Date: Fri, 1 May 2015 01:44:17 -0400 CC: rdo-list at redhat.com Subject: [Rdo-list] Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 Follow instructions https://www.redhat.com/archives/rdo-list/2015-April/msg00254.html packstack fails :- Applying 192.169.142.127_nova.pp 
Applying 192.169.142.137_nova.pp 192.169.142.127_nova.pp: [ DONE ] 192.169.142.137_nova.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.169.142.137_nova.pp Error: Could not start Service[nova-compute]: Execution of '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for openstack-nova-compute.service failed. See 'systemctl status openstack-nova-compute.service' and 'journalctl -xn' for details. You will find full trace in log /var/tmp/packstack/20150501-081745-rIpCIr/manifests/192.169.142.137_nova.pp.log In both cases (RC2 or CI repos) on compute node 192.169.142.137 /var/log/nova/nova-compute.log reports :- 2015-05-01 08:21:41.354 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Delaying reconnect for 1.0 seconds... 2015-05-01 08:21:42.355 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Connecting to AMQP server on localhost:5672 2015-05-01 08:21:42.360 4999 ERROR oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 11 seconds. Seems like it is looking for AMQP Server at wrong host . Should be 192.169.142.127 On 192.169.142.127 :- [root at ip-192-169-142-127 ~]# netstat -lntp | grep 5672 ==> tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 14506/beam.smp tcp6 0 0 :::5672 :::* LISTEN 14506/beam.smp [root at ip-192-169-142-127 ~]# iptables-save | grep 5672 -A INPUT -s 192.169.142.127/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.127" -j ACCEPT -A INPUT -s 192.169.142.137/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.137" -j ACCEPT Answer-file is attached Thanks. Boris _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Sat May 2 07:02:13 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sat, 2 May 2015 03:02:13 -0400 Subject: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 In-Reply-To: References: , , Message-ID: Thank you once again it really works. 
[root at ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-list
+----+----------------------------------------+-------+---------+
| ID | Hypervisor hostname                    | State | Status  |
+----+----------------------------------------+-------+---------+
| 1  | ip-192-169-142-127.ip.secureserver.net | up    | enabled |
| 2  | ip-192-169-142-137.ip.secureserver.net | up    | enabled |
+----+----------------------------------------+-------+---------+

[root at ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-servers ip-192-169-142-137.ip.secureserver.net
+--------------------------------------+-------------------+---------------+----------------------------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname                    |
+--------------------------------------+-------------------+---------------+----------------------------------------+
| 16ab7825-1403-442e-b3e2-7056d14398e0 | instance-00000002 | 2             | ip-192-169-142-137.ip.secureserver.net |
| 5fa444c8-30b8-47c3-b073-6ce10dd83c5a | instance-00000004 | 2             | ip-192-169-142-137.ip.secureserver.net |
+--------------------------------------+-------------------+---------------+----------------------------------------+

with only one issue: during the AIO run CONFIG_NEUTRON_OVS_TUNNEL_IF= was left empty, while during the Compute Node setup CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1, and this finally resulted in a mess in the ml2_vxlan_endpoints table. I had to manually update ml2_vxlan_endpoints and restart neutron-openvswitch-agent.service on both nodes; afterwards VMs on the compute node obtained access to the meta-data server.

I also believe that a synchronized deletion of records from the "compute_nodes && services" tables (along with disabling nova-compute on the Controller) could turn the AIO host into a real Controller.

Boris.

Date: Fri, 1 May 2015 22:22:41 +0200
Subject: Re: [Rdo-list] RE(1) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
From: ak at cloudssky.com
To: bderzhavets at hotmail.com
CC: apevec at gmail.com; rdo-list at redhat.com

I got the compute node working by adding the delorean-kilo.repo on compute node, yum updating the compute node, rebooted and extended the packstack file from the first AIO install with the IP of compute node and ran packstack again with NetworkManager enabled and did a second yum update on compute node before the 3rd packstack run, and now it works :-)

In short, for RC2 we have to force by hand to get the nova-compute running on compute node, before running packstack from controller again from an existing AIO install.

Now I have 2 compute nodes (controller AIO with compute + 2nd compute) and could spawn a 3rd cirros instance which landed on 2nd compute node. ssh'ing into the instances over the floating ip works fine too.
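A hedged sketch of the manual cleanup Boris describes above; inspect the table first, since the exact columns depend on the ML2 schema in your release, and the stale address below is only a placeholder:

    mysql -u root neutron
    MariaDB [neutron]> select * from ml2_vxlan_endpoints;
    MariaDB [neutron]> delete from ml2_vxlan_endpoints where ip_address='192.169.142.1';
    MariaDB [neutron]> quit
    # then, on both nodes:
    systemctl restart neutron-openvswitch-agent

After the restart the agents re-register their correct tunnel endpoint IPs, which is consistent with Boris's report that the compute node's VMs then regained access to the meta-data server.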
Before running packstack again, I set: EXCLUDE_SERVERS= [root at csky01 ~(keystone_osx)]# virsh list --all Id Name Status ---------------------------------------------------- 2 instance-00000001 laufend --> means running in German 3 instance-00000002 laufend --> means running in German [root at csky06 ~]# virsh list --all Id Name Status ---------------------------------------------------- 2 instance-00000003 laufend --> means running in German == Nova managed services == +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+ | Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+ | 1 | nova-consoleauth | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 2 | nova-conductor | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 3 | nova-scheduler | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 4 | nova-compute | csky01.csg.net | nova | enabled | up | 2015-05-01T19:46:40.000000 | - | | 5 | nova-cert | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 6 | nova-compute | csky06.csg.net | nova | enabled | up | 2015-05-01T19:46:38.000000 | - | +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+ On Fri, May 1, 2015 at 9:02 AM, Boris Derzhavets wrote: Ran packstack --debug --answer-file=./answer-fileRC2.txt 192.169.142.137_nova.pp.log.gz attached Boris From: bderzhavets at hotmail.com To: apevec at gmail.com Date: Fri, 1 May 2015 01:44:17 -0400 CC: rdo-list at redhat.com Subject: [Rdo-list] Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 Follow instructions https://www.redhat.com/archives/rdo-list/2015-April/msg00254.html packstack fails :- Applying 192.169.142.127_nova.pp Applying 192.169.142.137_nova.pp 192.169.142.127_nova.pp: [ DONE ] 192.169.142.137_nova.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.169.142.137_nova.pp Error: Could not start Service[nova-compute]: Execution of '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for openstack-nova-compute.service failed. See 'systemctl status openstack-nova-compute.service' and 'journalctl -xn' for details. You will find full trace in log /var/tmp/packstack/20150501-081745-rIpCIr/manifests/192.169.142.137_nova.pp.log In both cases (RC2 or CI repos) on compute node 192.169.142.137 /var/log/nova/nova-compute.log reports :- 2015-05-01 08:21:41.354 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Delaying reconnect for 1.0 seconds... 2015-05-01 08:21:42.355 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Connecting to AMQP server on localhost:5672 2015-05-01 08:21:42.360 4999 ERROR oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 11 seconds. Seems like it is looking for AMQP Server at wrong host . 
Should be 192.169.142.127. On 192.169.142.127 :-

[root at ip-192-169-142-127 ~]# netstat -lntp | grep 5672
==> tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 14506/beam.smp
tcp6 0 0 :::5672 :::* LISTEN 14506/beam.smp

[root at ip-192-169-142-127 ~]# iptables-save | grep 5672
-A INPUT -s 192.169.142.127/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.127" -j ACCEPT
-A INPUT -s 192.169.142.137/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.137" -j ACCEPT

Answer-file is attached.

Thanks.
Boris

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list
To unsubscribe: rdo-list-unsubscribe at redhat.com

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list
To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From contact at progbau.de Sat May 2 07:13:40 2015
From: contact at progbau.de (Chris)
Date: Sat, 02 May 2015 14:13:40 +0700
Subject: [Rdo-list] Instance auto resume after compute node restart
Message-ID:

Hello,

We want to have instances auto resume their status after a compute node reboot/failure. Meaning, when a VM was in the running state before, it should be started automatically. We are using Icehouse.

There is the option resume_guests_state_on_host_boot=true|false, which should do exactly what we want:

# Whether to start guests that were running before the host
# rebooted (boolean value)
resume_guests_state_on_host_boot=true

I tried it out and it just didn't work. Libvirt fails to start the VMs because it couldn't find the interfaces:

2015-04-30 06:16:00.783+0000: 3091: error : virNetDevGetMTU:343 : Cannot get interface MTU on 'qbr62d7e489-f8': No such device
2015-04-30 06:16:00.897+0000: 3091: warning : qemuDomainObjStart:6144 : Unable to restore from managed state /var/lib/libvirt/qemu/save/instance-0000025f.save. Maybe the file is corrupted?

I did some research and found some corresponding experiences from other users:

"AFAIK at the present time OpenStack (Icehouse) still not completely aware about environments inside it, so it can't restore completely after reboot."
Source: http://stackoverflow.com/questions/23150148/how-to-get-instances-back-after-reboot-in-openstack

Is this feature really broken, or am I just missing something?

Thanks in advance!

Cheers
Chris

From contact at progbau.de Sat May 2 07:15:04 2015
From: contact at progbau.de (Chris)
Date: Sat, 02 May 2015 14:15:04 +0700
Subject: [Rdo-list] neutron-openvswitch-agent reload without ping lost
Message-ID: <006795d67a5f19f8b2fa493e51a32760@we20c.netcup.net>

Hello,

We made some changes on our compute nodes in the "/etc/neutron/neutron.conf". For example qpid_hostname. But nothing that affects the network infrastructure in the compute node.

To apply the changes I think we need to restart the "neutron-openvswitch-agent" service.
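Returning briefly to the auto-resume question above: the flag lives in nova.conf on each compute node. A minimal sketch, assuming the stock RDO paths and service names:

    # /etc/nova/nova.conf, [DEFAULT] section, on the compute node
    resume_guests_state_on_host_boot=true

    # after editing:
    systemctl restart openstack-nova-compute

The virNetDevGetMTU error suggests libvirt is trying to restore a managed-save image against qbr/tap devices that no longer exist after the reboot; discarding the stale save file (virsh managedsave-remove <domain>) so the guest gets a plain cold boot is one hedged thing to try, since the thread itself never reports a confirmed fix.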
By restarting this service the VM gets disconnected for around one ping; the reason is that the restart causes recreation of the int-br-bond0 and phy-br-bond0 interfaces:

ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 -- --may-exist add-br br-int
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 -- set-fail-mode br-int secure
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 -- --if-exists del-port br-int patch-tun
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 -- --if-exists del-port br-int int-br-bond0
kernel: [73873.047999] device int-br-bond0 left promiscuous mode
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 -- --if-exists del-port br-bond0 phy-br-bond0
kernel: [73873.086241] device phy-br-bond0 left promiscuous mode
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 -- --may-exist add-port br-int int-br-bond0
kernel: [73873.287466] device int-br-bond0 entered promiscuous mode
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 -- --may-exist add-port br-bond0 phy-br-bond0

Is there a way to apply these changes without losing pings?

Cheers
Chris

From hbrock at redhat.com Sat May 2 14:41:51 2015
From: hbrock at redhat.com (Hugh O. Brock)
Date: Sat, 2 May 2015 16:41:51 +0200
Subject: [Rdo-list] rdo-manager virt setup still stuck on neutron
In-Reply-To:
References:
Message-ID: <20150502144150.GB7067@redhat.com>

On Fri, May 01, 2015 at 01:16:22PM -0400, Mohammed Arafa wrote:
> + setup-neutron -n /tmp/tmp.Qxz7JkrlLI
> /usr/lib/python2.7/site-packages/novaclient/v1_1/__init__.py:30:
> UserWarning: Module novaclient.v1_1 is deprecated (taken as a basis for
> novaclient.v2). The preferable way to get client class or object you can
> find in novaclient.client module.
> warnings.warn("Module novaclient.v1_1 is deprecated (taken as a basis for "

Any ideas here folks? I think we are all getting through virt installs now without too much difficulty -- what do we think is different about Mohammed's setup?

--Hugh

> 805010942448935
> GR750055912MA
> Link to me on LinkedIn
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

--
== Hugh Brock, hbrock at redhat.com ==
== Senior Engineering Manager, Cloud Engineering ==
== Tuskar: Elastic Scaling for OpenStack ==
== http://github.com/tuskar ==

"I know that you believe you understand what you think I said, but I'm not sure you realize that what you heard is not what I meant." --Robert McCloskey

From slawek at kaplonski.pl Sun May 3 07:33:58 2015
From: slawek at kaplonski.pl (Sławek Kapłoński)
Date: Sun, 3 May 2015 09:33:58 +0200
Subject: [Rdo-list] [Openstack] neutron-openvswitch-agent without ping lost
In-Reply-To: <004b01d08322$90ff46c0$b2fdd440$@progbau.de>
References: <004b01d08322$90ff46c0$b2fdd440$@progbau.de>
Message-ID: <20150503073358.GA5535@dell>

Hello,

AFAIK it is because of recreation of all openflow rules in ovs - that is at least on my infra where we're using vxlan tunnels with l2population mechanism. I would be happy if there will be any solution to not recreate all tunnels when agent is restarted.
--
Best regards / Pozdrawiam
Sławek Kapłoński
slawek at kaplonski.pl

On Thu, Apr 30, 2015 at 03:49:24PM +0700, Chris wrote:
> Hello,
>
> We made some changes on our compute nodes in the
> "/etc/neutron/neutron.conf". For example qpid_hostname. But nothing that
> affects the network infrastructure in the compute node.
>
> To apply the changes I think we need to restart the
> "neutron-openvswitch-agent" service.
> > By restarting this service the VM gets disconnected for around one ping;
> > the reason is that the restart causes recreation of the int-br-bond0 and
> > phy-br-bond0 interfaces:
> >
> > ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 --
> > --may-exist add-br br-int
> > ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 --
> > set-fail-mode br-int secure
> > ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 --
> > --if-exists del-port br-int patch-tun
> > ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 --
> > --if-exists del-port br-int int-br-bond0
> > kernel: [73873.047999] device int-br-bond0 left promiscuous mode
> > ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 --
> > --if-exists del-port br-bond0 phy-br-bond0
> > kernel: [73873.086241] device phy-br-bond0 left promiscuous mode
> > ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 --
> > --may-exist add-port br-int int-br-bond0
> > kernel: [73873.287466] device int-br-bond0 entered promiscuous mode
> > ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 --
> > --may-exist add-port br-bond0 phy-br-bond0
> >
> > Is there a way to apply these changes without losing pings?
> >
> > Cheers
> > Chris
> >
> > _______________________________________________
> > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > Post to : openstack at lists.openstack.org
> > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--
Kevin Benton

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ak at cloudssky.com Sun May 3 14:51:54 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Sun, 3 May 2015 16:51:54 +0200
Subject: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
In-Reply-To:
References:
Message-ID:

Boris, thanks for your kind feedback.

I did a 3 node Kilo RC2 virt setup on top of my Kilo RC2 which was installed on bare metal. The installation was successful on the first run.

The network looks like this:
https://cloudssky.com/.galleries/images/kilo-virt-setup.png

For this setup I added the latest CentOS cloud image to glance, ran an instance (controller), enabled root login, added ifcfg-eth1 to the instance, created a snapshot from the controller, added the repos to this instance, yum updated, rebooted, and spawned the network and compute1 VM nodes from that snapshot.

(To be able to ssh into the VMs over the 20.0.1.0 network, I created the gate VM with a floating ip assigned and installed OpenVPN on it.)

What I noticed here: if I associate a floating ip to a VM with 2 interfaces, then I'll lose the connectivity to the instance and Kilo becomes crazy (the AIO controller on bare metal somehow loses its br-ex interface, but I didn't try to reproduce it again).

The packstack file was created in interactive mode with:

packstack --answer-file= --> press enter

I accepted most default values and selected trove and heat to be installed.
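The interactive step above can also be captured non-interactively; a short sketch of the equivalent flow (the file name is a placeholder, and the CONFIG_* key names should be checked against the generated file):

    packstack --gen-answer-file=answers.txt   # write the defaults without running anything
    # edit answers.txt, e.g. to enable the optional services mentioned above:
    #   CONFIG_HEAT_INSTALL=y
    #   CONFIG_TROVE_INSTALL=y
    packstack --answer-file=answers.txt

This keeps the full set of choices in a file that can be diffed and reused, which is handy when extending a run later, as earlier in this thread.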
The answers are on pastebin: http://pastebin.com/SYp8Qf7d

The generated packstack file is here: http://pastebin.com/XqJuvQxf

The br-ex interfaces and changes to eth0 are created correctly on the network and compute nodes (output below).

And one nice thing for me, coming from Havana, was to see how easy it has become to create an image in Horizon by uploading an image file (in my case rancheros.iso and centos.qcow2 worked like a charm).

Now it's time to discover Ironic, Trove and Manila, and if someone has some tips or guidelines on how to test these new exciting things, or has any news about Murano or Magnum on RDO, then I'll be even more excited than I am now about Kilo :-)

Thanks!
Arash

---
Some outputs here:

[root at controller ~(keystone_admin)]# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | compute1.novalocal  | up    | enabled |
+----+---------------------+-------+---------+

[root at network ~]# ovs-vsctl show
436a6114-d489-4160-b469-f088d66bd752
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-14000212"
            Interface "vxlan-14000212"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="20.0.2.19", out_key=flow, remote_ip="20.0.2.18"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth0"
            Interface "eth0"
    ovs_version: "2.3.1"

[root at compute ~]# ovs-vsctl show
8123433e-b477-4ef5-88aa-721487a4bd58
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-14000213"
            Interface "vxlan-14000213"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="20.0.2.18", out_key=flow, remote_ip="20.0.2.19"}
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.3.1"

On Sat, May 2, 2015 at 9:02 AM, Boris Derzhavets wrote:
> Thank you once again it really works.
On Sat, May 2, 2015 at 9:02 AM, Boris Derzhavets wrote:

> Thank you once again it really works.
>
> [root at ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-list
> +----+----------------------------------------+-------+---------+
> | ID | Hypervisor hostname                    | State | Status  |
> +----+----------------------------------------+-------+---------+
> | 1  | ip-192-169-142-127.ip.secureserver.net | up    | enabled |
> | 2  | ip-192-169-142-137.ip.secureserver.net | up    | enabled |
> +----+----------------------------------------+-------+---------+
>
> [root at ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-servers ip-192-169-142-137.ip.secureserver.net
> +--------------------------------------+-------------------+---------------+----------------------------------------+
> | ID                                   | Name              | Hypervisor ID | Hypervisor Hostname                    |
> +--------------------------------------+-------------------+---------------+----------------------------------------+
> | 16ab7825-1403-442e-b3e2-7056d14398e0 | instance-00000002 | 2             | ip-192-169-142-137.ip.secureserver.net |
> | 5fa444c8-30b8-47c3-b073-6ce10dd83c5a | instance-00000004 | 2             | ip-192-169-142-137.ip.secureserver.net |
> +--------------------------------------+-------------------+---------------+----------------------------------------+
>
> with only one issue:-
>
> during the AIO run:            CONFIG_NEUTRON_OVS_TUNNEL_IF=
> during the Compute Node setup: CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
>
> and finally it resulted in a mess in the ml2_vxlan_endpoints table. I had
> to manually update ml2_vxlan_endpoints and restart
> neutron-openvswitch-agent.service on both nodes; afterwards, VMs on the
> compute node obtained access to the metadata server.
>
> I also believe that synchronized deletion of records from the
> "compute_nodes && services" tables (along with disabling nova-compute on
> the Controller) could turn an AIO host into a real Controller.
>
> Boris.
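(For reference, a minimal sketch of the manual fix Boris describes,
assuming the stock ml2_vxlan_endpoints table in the neutron database; the
IP values are only illustrations of swapping a wrong endpoint for the
intended eth1 address:)

MariaDB [neutron]> select * from ml2_vxlan_endpoints;
MariaDB [neutron]> update ml2_vxlan_endpoints set ip_address='10.0.0.137' where ip_address='192.169.142.137';

# then, on both nodes:
systemctl restart neutron-openvswitch-agent.service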
> ------------------------------
> Date: Fri, 1 May 2015 22:22:41 +0200
> Subject: Re: [Rdo-list] RE(1) Failure to start openstack-nova-compute on
>  Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
> From: ak at cloudssky.com
> To: bderzhavets at hotmail.com
> CC: apevec at gmail.com; rdo-list at redhat.com
>
> I got the compute node working by adding the delorean-kilo.repo on the
> compute node, yum updating the compute node, rebooting, and extending the
> packstack file from the first AIO install with the IP of the compute
> node; then I ran packstack again with NetworkManager enabled and did a
> second yum update on the compute node before the 3rd packstack run, and
> now it works :-)
>
> In short, for RC2 we have to force by hand to get nova-compute running on
> the compute node before running packstack again from the controller on an
> existing AIO install.
>
> Now I have 2 compute nodes (controller AIO with compute + 2nd compute)
> and could spawn a 3rd cirros instance, which landed on the 2nd compute
> node. ssh'ing into the instances over the floating IP works fine too.
>
> Before running packstack again, I set:
>
> EXCLUDE_SERVERS=
>
> [root at csky01 ~(keystone_osx)]# virsh list --all
>  Id    Name                 Status
> ----------------------------------------------------
>  2     instance-00000001    laufend  --> means running in German
>  3     instance-00000002    laufend  --> means running in German
>
> [root at csky06 ~]# virsh list --all
>  Id    Name                 Status
> ----------------------------------------------------
>  2     instance-00000003    laufend  --> means running in German
>
> == Nova managed services ==
> +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
> | Id | Binary           | Host           | Zone     | Status  | State | Updated_at                 | Disabled Reason |
> +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
> | 1  | nova-consoleauth | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
> | 2  | nova-conductor   | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
> | 3  | nova-scheduler   | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
> | 4  | nova-compute     | csky01.csg.net | nova     | enabled | up    | 2015-05-01T19:46:40.000000 | -               |
> | 5  | nova-cert        | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
> | 6  | nova-compute     | csky06.csg.net | nova     | enabled | up    | 2015-05-01T19:46:38.000000 | -               |
> +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
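(A condensed sketch of that re-run pattern, assuming the answer file from
the first AIO run; EXCLUDE_SERVERS is the stock packstack key for skipping
already-provisioned hosts, and the IPs are this thread's examples:)

# add the new compute node to the existing AIO answer file and re-run
sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.169.142.127,192.169.142.137/' answers.txt
# empty means: let packstack touch every host; it could instead list
# hosts (comma-separated) that must not be re-provisioned
sed -i 's/^EXCLUDE_SERVERS=.*/EXCLUDE_SERVERS=/' answers.txt
packstack --answer-file=answers.txt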
> On Fri, May 1, 2015 at 9:02 AM, Boris Derzhavets wrote:
>
> > Ran packstack --debug --answer-file=./answer-fileRC2.txt
> > 192.169.142.137_nova.pp.log.gz attached
> >
> > Boris
> >
> > ------------------------------
> > From: bderzhavets at hotmail.com
> > To: apevec at gmail.com
> > Date: Fri, 1 May 2015 01:44:17 -0400
> > CC: rdo-list at redhat.com
> > Subject: [Rdo-list] Failure to start openstack-nova-compute on Compute
> >  Node when testing delorean RC2 or CI repo on CentOS 7.1
> >
> > Follow instructions
> > https://www.redhat.com/archives/rdo-list/2015-April/msg00254.html
> > packstack fails :-
> >
> > Applying 192.169.142.127_nova.pp
> > Applying 192.169.142.137_nova.pp
> > 192.169.142.127_nova.pp:                 [ DONE ]
> > 192.169.142.137_nova.pp:                 [ ERROR ]
> > Applying Puppet manifests                [ ERROR ]
> >
> > ERROR : Error appeared during Puppet run: 192.169.142.137_nova.pp
> > Error: Could not start Service[nova-compute]: Execution of
> > '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for
> > openstack-nova-compute.service failed. See 'systemctl status
> > openstack-nova-compute.service' and 'journalctl -xn' for details.
> > You will find full trace in log
> > /var/tmp/packstack/20150501-081745-rIpCIr/manifests/192.169.142.137_nova.pp.log
> >
> > In both cases (RC2 or CI repos), on compute node 192.169.142.137
> > /var/log/nova/nova-compute.log reports :-
> >
> > 2015-05-01 08:21:41.354 4999 INFO oslo.messaging._drivers.impl_rabbit
> > [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Delaying reconnect for 1.0
> > seconds...
> > 2015-05-01 08:21:42.355 4999 INFO oslo.messaging._drivers.impl_rabbit
> > [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Connecting to AMQP server
> > on localhost:5672
> > 2015-05-01 08:21:42.360 4999 ERROR oslo.messaging._drivers.impl_rabbit
> > [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] AMQP server on
> > localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again
> > in 11 seconds.
> >
> > Seems like it is looking for the AMQP server at the wrong host. It
> > should be 192.169.142.127. On 192.169.142.127 :-
> >
> > [root at ip-192-169-142-127 ~]# netstat -lntp | grep 5672
> > ==> tcp   0   0 0.0.0.0:25672   0.0.0.0:*   LISTEN   14506/beam.smp
> >     tcp6  0   0 :::5672         :::*        LISTEN   14506/beam.smp
> >
> > [root at ip-192-169-142-127 ~]# iptables-save | grep 5672
> > -A INPUT -s 192.169.142.127/32 -p tcp -m multiport --dports 5671,5672
> > -m comment --comment "001 amqp incoming amqp_192.169.142.127" -j ACCEPT
> > -A INPUT -s 192.169.142.137/32 -p tcp -m multiport --dports 5671,5672
> > -m comment --comment "001 amqp incoming amqp_192.169.142.137" -j ACCEPT
> >
> > Answer-file is attached
> >
> > Thanks.
> > Boris
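(The ECONNREFUSED trace above is consistent with the compute node's
transport settings still pointing at the default localhost; a minimal
check, assuming the stock oslo.messaging option name rabbit_host in
/etc/nova/nova.conf, with this thread's controller IP as the expected
value:)

# on the compute node
grep '^rabbit_host' /etc/nova/nova.conf
# should point at the controller; if not, e.g.:
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host 192.169.142.127
systemctl restart openstack-nova-compute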
From bderzhavets at hotmail.com  Sun May  3 16:46:47 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Sun, 3 May 2015 12:46:47 -0400
Subject: [Rdo-list] RE(2) Failure to start openstack-nova-compute on
 Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
In-Reply-To:
References:
Message-ID:

> [snip: this message contained only a full quote of Arash's message above]

From bderzhavets at hotmail.com  Sun May  3 17:40:14 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Sun, 3 May 2015 13:40:14 -0400
Subject: [Rdo-list] RE(2) Failure to start openstack-nova-compute on
 Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
In-Reply-To:
References:
Message-ID:

Yes, it looks possible to perform a multi-node deployment with RDO Kilo
RC2 via a single packstack run. I've tried:-

CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.127,192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.127

and was able to use different IPs for the VTEPs (CONFIG_TUNNEL_IF=eth1
works as expected):

 Bridge br-tun
    fail_mode: secure
    Port "vxlan-0a000089"
        Interface "vxlan-0a000089"
            type: vxlan
            options: {df_default="true", in_key=flow, local_ip="10.0.0.127", out_key=flow, remote_ip="10.0.0.137"}

and succeeded. Per your report, it looks like 192.169.142.127 may be
removed from CONFIG_COMPUTE_HOSTS (a condensed answer-file sketch follows
below this message). AIO host plus separate Compute node setups scared me
too much ;)

The point seems to be the presence of delorean.repo on the Compute nodes.
Am I correct? My testing resources are limited (16 GB RAM and a 4-core
CPU), so I cannot start a third VM for testing.

You wrote :-
> What I noticed here, if I associate a floating ip to a VM with 2
> interfaces, then I'll lose the connectivity to the instance and Kilo

I just used VMs with eth0 for the public && management network and eth1
for the VXLAN endpoints.

Answer-file is attached. Thank you for keeping me posted.
Boris

> [snip: full quote of the earlier thread]

-------------- next part --------------
A non-text attachment was scrubbed...
Name: answer-fileRC2.txt.gz
Type: application/x-gzip
Size: 1917 bytes
Desc: not available
URL:
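(The condensed sketch referenced above, assuming a freshly generated
answer file; the CONFIG keys and IPs are the ones quoted in Boris's mail,
except that the tunnel interface key is written out under its full name,
CONFIG_NEUTRON_OVS_TUNNEL_IF, as used earlier in the thread:)

# single-run, two-node layout: AIO-style controller plus one extra compute
sed -i 's/^CONFIG_CONTROLLER_HOST=.*/CONFIG_CONTROLLER_HOST=192.169.142.127/' answers.txt
sed -i 's/^CONFIG_NETWORK_HOSTS=.*/CONFIG_NETWORK_HOSTS=192.169.142.127/' answers.txt
sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.169.142.127,192.169.142.137/' answers.txt
sed -i 's/^CONFIG_NEUTRON_OVS_TUNNEL_IF=.*/CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1/' answers.txt
packstack --answer-file=answers.txt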
From stdake at cisco.com  Sun May  3 17:15:59 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Sun, 3 May 2015 17:15:59 +0000
Subject: [Rdo-list] Defects in Kilo RC2 Packaging
Message-ID:

Hi,

I recently ported Kolla (OpenStack running in containers --
http://github.com/stackforge/kolla) and found the following defects:

1. Glance has missing dependencies in its package. Specifically

+RUN yum -y install openstack-glance python-oslo-log python-oslo-policy && yum clean all

is needed to get glance to operate. oslo-log and oslo-policy should be
added to the dependencies. You wouldn't notice this on an AIO install
because other packages probably pull those packages in as dependencies.

2. Neutron for whatever reason depends on a file fwaas_driver.ini, which
has been removed from the master of neutron, but the agents will exit if
it is not in the config directory. I used Juno's version of
fwaas_driver.ini to get the agents to stop exiting.

3. The file dnsmasq-neutron.conf is misconfigured in the default
installation. This causes the neutron agents to exit. I delete the file
during docker build, which fixes the problem. I'm not sure what this
config file is supposed to look like.
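(For context, a minimal sketch of what such a dnsmasq override usually
looks like; dnsmasq_config_file is the stock dhcp-agent option, while the
file path and MTU value here are illustrative, not what the packaging
shipped. Note the follow-up later in this thread: the file must live
outside any directory the agents read with --config-dir, because it is
plain dnsmasq syntax, not INI:)

# /etc/neutron/dhcp_agent.ini
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq/dnsmasq-neutron.conf

# /etc/neutron/dnsmasq/dnsmasq-neutron.conf
# clamp guest MTU for VXLAN/GRE overlays (DHCP option 26)
dhcp-option-force=26,1450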
4. A critical bug was found in both the Juno and Kilo versions of nova. If
I launch approximately 20 VMs via a heat resource group with floating IPs,
only about 7 of the VMs get their IPs reported by nova. The others do get
their ports assigned: they can access the dhcp and metadata servers, so
their networking is operational, and neutron port-list shows their ports
as active. However, nova list does not show their IPs from the instance
info cache. My only workaround for this problem is to run the Icehouse
version of nova (api, conductor, scheduler, compute), which works
perfectly.

I have filed a bug with a 100% reliable, easy to use reproducer and more
details and logs here:
https://bugzilla.redhat.com/show_bug.cgi?id=1213547

Interestingly, in my informal tests Icehouse nova is about 4x faster at
placing VMs in the active state as compared to Juno or Kilo, so that may
need some attention as well. Just watching top, it appears neutron-server
is much busier (~35% CPU utilization of 1 core during the entire ->ACTIVE
process) with the Juno/Kilo releases.

Note I spent about 7 days trying to debug this problem, but the code
literally handles IP assignments in about 40 different places in the code
base, including exchanges over RPC and python-neutronclient, so it is very
difficult to track. I would appreciate finding a nova expert to debug the
problem further.

Other than those problems, RDO Kilo RC2 looks spectacular and works
perfectly in my dead chicken testing. Nice job guys!

Regards
-steve
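(A minimal way to see the mismatch the bug report describes, assuming the
standard nova and neutron CLIs of that era; <instance-uuid> is a
placeholder:)

# what neutron knows about the instance's ports:
neutron port-list -- --device_id=<instance-uuid>
# vs. what nova's instance info cache reports:
nova show <instance-uuid> | grep -i network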
From bderzhavets at hotmail.com  Sun May  3 18:20:41 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Sun, 3 May 2015 14:20:41 -0400
Subject: [Rdo-list] RE(2) Failure to start openstack-nova-compute on
 Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
In-Reply-To:
References:
Message-ID:

Arash,

Please, disregard this notice :-

> You wrote :-
>> What I noticed here, if I associate a floating ip to a VM with 2
>> interfaces, then I'll lose the connectivity to the instance and Kilo

Different types of VMs in your environment and mine.

Boris.

> [snip: full quote of the earlier thread]
From stdake at cisco.com  Sun May  3 22:50:16 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Sun, 3 May 2015 22:50:16 +0000
Subject: [Rdo-list] Defects in Kilo RC2 Packaging
Message-ID:

From: Steven Dake
Date: Sunday, May 3, 2015 at 10:15 AM
To: "rdo-list at redhat.com"
Subject: [Rdo-list] Defects in Kilo RC2 Packaging

> [snip: defects 1 and 2 quoted unchanged from the message above]

> 3. The file dnsmasq-neutron.conf is misconfigured in the default
> installation. This causes the neutron agents to exit. I delete the file
> during docker build, which fixes the problem. I'm not sure what this
> config file is supposed to look like.

I found the root cause of this problem. This was actually an error in
Kolla. neutron-dnsmasq.conf (where you would specify a 1450 MTU if using
vxlans) cannot go in the same directory as /etc/neutron, where the agents
read config files when used with the --config-dir option. The agents try
to read all configuration files there as INI-format files, which the
dnsmasq config is not formatted as.

> [snip: defect 4 and the rest of the original message, quoted unchanged]
From stdake at cisco.com  Sun May  3 22:54:45 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Sun, 3 May 2015 22:54:45 +0000
Subject: [Rdo-list] RE(2) Failure to start openstack-nova-compute on
 Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
In-Reply-To:
References:
Message-ID:

Boris,

Feel free to try out my Magnum packages here. They work in containers, not
sure about CentOS. I'm not certain the systemd files are correct (I didn't
test that part) but the dependencies are correct:

https://copr.fedoraproject.org/coprs/sdake/openstack-magnum/

NB you will have to run through the quickstart configuration guide here:

https://github.com/openstack/magnum/blob/master/doc/source/dev/dev-manual-devstack.rst

Regards
-steve

> [snip: full quote of the earlier thread]
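(A minimal sketch of pulling those packages in, assuming the yum copr
plugin is available on the target host and that the binary package is
named openstack-magnum; both are assumptions, not something stated in the
mail:)

yum -y install yum-plugin-copr
yum copr enable sdake/openstack-magnum   # repo name from the link above
yum -y install openstack-magnum          # assumed package name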
The packstack file was created in interactive mode with: packstack --answer-file= --> press enter I accepted most default values and selected trove and heat to be installed. The answers are on pastebin: http://pastebin.com/SYp8Qf7d The generated packstack file is here: http://pastebin.com/XqJuvQxf The br-ex interfaces and changes to eth0 are created on network and compute nodes correctly (output below). And one nice thing for me coming from Havana was to see how easy has got to create an image in Horizon by uploading an image file (in my case rancheros.iso and centos.qcow2 worked like a charm). Now its time to discover Ironic, Trove and Manila and if someone has some tips or guidelines on how to test these new exciting things or has any news about Murano or Magnum on RDO, then I'll be more lucky and excited as I'm now about Kilo :-) Thanks! Arash --- Some outputs here: [root at controller ~(keystone_admin)]# nova hypervisor-list +----+---------------------+-------+---------+ | ID | Hypervisor hostname | State | Status | +----+---------------------+-------+---------+ | 1 | compute1.novalocal | up | enabled | +----+---------------------+-------+---------+ [root at network ~]# ovs-vsctl show 436a6114-d489-4160-b469-f088d66bd752 Bridge br-tun fail_mode: secure Port "vxlan-14000212" Interface "vxlan-14000212" type: vxlan options: {df_default="true", in_key=flow, local_ip="20.0.2.19", out_key=flow, remote_ip="20.0.2.18"} Port br-tun Interface br-tun type: internal Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Bridge br-int fail_mode: secure Port br-int Interface br-int type: internal Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex} Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Bridge br-ex Port br-ex Interface br-ex type: internal Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port "eth0" Interface "eth0" ovs_version: "2.3.1" [root at compute~]# ovs-vsctl show 8123433e-b477-4ef5-88aa-721487a4bd58 Bridge br-int fail_mode: secure Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex} Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Port br-int Interface br-int type: internal Bridge br-tun fail_mode: secure Port br-tun Interface br-tun type: internal Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Port "vxlan-14000213" Interface "vxlan-14000213" type: vxlan options: {df_default="true", in_key=flow, local_ip="20.0.2.18", out_key=flow, remote_ip="20.0.2.19"} Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port "eth0" Interface "eth0" Port br-ex Interface br-ex type: internal ovs_version: "2.3.1" On Sat, May 2, 2015 at 9:02 AM, Boris Derzhavets > wrote: Thank you once again it really works. 
[root at ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-list +----+----------------------------------------+-------+---------+ | ID | Hypervisor hostname | State | Status | +----+----------------------------------------+-------+---------+ | 1 | ip-192-169-142-127.ip.secureserver.net | up | enabled | | 2 | ip-192-169-142-137.ip.secureserver.net | up | enabled | +----+----------------------------------------+-------+---------+ [root at ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-servers ip-192-169-142-137.ip.secureserver.net +--------------------------------------+-------------------+---------------+----------------------------------------+ | ID | Name | Hypervisor ID | Hypervisor Hostname | +--------------------------------------+-------------------+---------------+----------------------------------------+ | 16ab7825-1403-442e-b3e2-7056d14398e0 | instance-00000002 | 2 | ip-192-169-142-137.ip.secureserver.net | | 5fa444c8-30b8-47c3-b073-6ce10dd83c5a | instance-00000004 | 2 | ip-192-169-142-137.ip.secureserver.net | +--------------------------------------+-------------------+---------------+----------------------------------------+ with only one issue:- during AIO run CONFIG_NEUTRON_OVS_TUNNEL_IF= during Compute Node setup CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1 and finally it results mess in ml2_vxlan_endpoints table. I had manually update ml2_vxlan_endpoints and restart neutron-openvswitch-agent.service on both nodes afterwards VMs on compute node obtained access to meta-data server. I also believe that synchronized delete records from tables "compute_nodes && services" ( along with disabling nova-compute on Controller) could turn AIO host into real Controller. Boris. ________________________________ Date: Fri, 1 May 2015 22:22:41 +0200 Subject: Re: [Rdo-list] RE(1) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 From: ak at cloudssky.com To: bderzhavets at hotmail.com CC: apevec at gmail.com; rdo-list at redhat.com I got the compute node working by adding the delorean-kilo.repo on compute node, yum updating the compute node, rebooted and extended the packstack file from the first AIO install with the IP of compute node and ran packstack again with NetworkManager enabled and did a second yum update on compute node before the 3rd packstack run, and now it works :-) In short, for RC2 we have to force by hand to get the nova-compute running on compute node, before running packstack from controller again from an existing AIO install. Now I have 2 compute nodes (controller AIO with compute + 2nd compute) and could spawn a 3rd cirros instance which landed on 2nd compute node. ssh'ing into the instances over the floating ip works fine too. 
Before running packstack again, I set EXCLUDE_SERVERS= (empty).

[root at csky01 ~(keystone_osx)]# virsh list --all
 Id    Name                 Status
----------------------------------------------------
 2     instance-00000001    laufend   --> means running in German
 3     instance-00000002    laufend   --> means running in German

[root at csky06 ~]# virsh list --all
 Id    Name                 Status
----------------------------------------------------
 2     instance-00000003    laufend   --> means running in German

== Nova managed services ==
+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host           | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 2  | nova-conductor   | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 3  | nova-scheduler   | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 4  | nova-compute     | csky01.csg.net | nova     | enabled | up    | 2015-05-01T19:46:40.000000 | -               |
| 5  | nova-cert        | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 6  | nova-compute     | csky06.csg.net | nova     | enabled | up    | 2015-05-01T19:46:38.000000 | -               |
+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+

On Fri, May 1, 2015 at 9:02 AM, Boris Derzhavets wrote:

Ran packstack --debug --answer-file=./answer-fileRC2.txt
192.169.142.137_nova.pp.log.gz attached

Boris

________________________________
From: bderzhavets at hotmail.com
To: apevec at gmail.com
Date: Fri, 1 May 2015 01:44:17 -0400
CC: rdo-list at redhat.com
Subject: [Rdo-list] Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1

Following the instructions in
https://www.redhat.com/archives/rdo-list/2015-April/msg00254.html,
packstack fails:

Applying 192.169.142.127_nova.pp
Applying 192.169.142.137_nova.pp
192.169.142.127_nova.pp:                  [ DONE ]
192.169.142.137_nova.pp:                  [ ERROR ]
Applying Puppet manifests                 [ ERROR ]

ERROR : Error appeared during Puppet run: 192.169.142.137_nova.pp
Error: Could not start Service[nova-compute]: Execution of '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for openstack-nova-compute.service failed. See 'systemctl status openstack-nova-compute.service' and 'journalctl -xn' for details.
You will find the full trace in log /var/tmp/packstack/20150501-081745-rIpCIr/manifests/192.169.142.137_nova.pp.log

In both cases (RC2 or CI repos), on compute node 192.169.142.137
/var/log/nova/nova-compute.log reports:

2015-05-01 08:21:41.354 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Delaying reconnect for 1.0 seconds...
2015-05-01 08:21:42.355 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Connecting to AMQP server on localhost:5672
2015-05-01 08:21:42.360 4999 ERROR oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 11 seconds.

It seems to be looking for the AMQP server at the wrong host; it should be
192.169.142.127.
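(A quick way to confirm which AMQP host nova-compute is actually using, as a
sketch; this assumes the Kilo option layout, where the rabbit options live
under [oslo_messaging_rabbit] rather than [DEFAULT] as on Juno:)

# on the failing compute node
crudini --get /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host

# if it still says localhost, point it at the controller and restart
crudini --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host 192.169.142.127
systemctl restart openstack-nova-compute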
On 192.169.142.127:

[root at ip-192-169-142-127 ~]# netstat -lntp | grep 5672
tcp    0   0 0.0.0.0:25672   0.0.0.0:*   LISTEN   14506/beam.smp
tcp6   0   0 :::5672         :::*        LISTEN   14506/beam.smp

[root at ip-192-169-142-127 ~]# iptables-save | grep 5672
-A INPUT -s 192.169.142.127/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.127" -j ACCEPT
-A INPUT -s 192.169.142.137/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.137" -j ACCEPT

Answer-file is attached.

Thanks.
Boris

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list
To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stdake at cisco.com  Mon May  4 00:02:56 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Mon, 4 May 2015 00:02:56 +0000
Subject: [Rdo-list] RC2 neutron metadata service responsive but not with 2009-04-04 (returns 404)
Message-ID: 

Boris also reported this bug. I thought I'd pile on and file a bug report.
The metadata service is not responding to requests on 2009-04-04. I have
filed a bug at https://bugzilla.redhat.com/show_bug.cgi?id=1217999

This is a major blocker for two summit demos I have lined up in May.
Neither Heat nor Magnum work without the metadata service.

From inside a cirros instance:

$ curl http://169.254.169.254/
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04

This is correct.

$ curl http://169.254.169.254/2009-04-04
404 Not Found

The resource could not be found.

That should have returned some files.

$ curl http://169.254.169.254/2009-04-04/
404 Not Found

The resource could not be found.

Any tips?

Regards,
-steve
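(When the proxy answers the index but 404s on a date path, a few checks on
the network node can narrow it down; this is a sketch using the usual
RDO/Kilo service and option names, with the router UUID as a placeholder:)

systemctl status neutron-metadata-agent
grep -E 'nova_metadata_ip|metadata_proxy_shared_secret' /etc/neutron/metadata_agent.ini
ip netns                                      # find the qrouter-<UUID> namespace
ip netns exec qrouter-<UUID> curl -s http://169.254.169.254/2009-04-04/meta-data/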
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bderzhavets at hotmail.com  Mon May  4 08:12:32 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Mon, 4 May 2015 04:12:32 -0400
Subject: [Rdo-list] RE(3) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
Message-ID: 

Yes, it works both ways (delorean.repo installed on the Compute node as well):

CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.127

with a separate network for the VTEP interfaces:

    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a87a89"
            Interface "vxlan-c0a87a89"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.122.127", out_key=flow, remote_ip="192.168.122.137"}

RC2 looks fine (CentOS 7.1, yum updated).

Boris.

________________________________
From: bderzhavets at hotmail.com
To: ak at cloudssky.com
Date: Sun, 3 May 2015 13:40:14 -0400
CC: rdo-list at redhat.com
Subject: Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1

Yes, it looks possible to perform a multi-node deployment with RDO Kilo RC2
via a single packstack run. I tried:

CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.127,192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.127

and was able to use different IPs for the VTEPs (CONFIG_TUNNEL_IF=eth1 works
as expected):

    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.127", out_key=flow, remote_ip="10.0.0.137"}

and succeeded. Per your report it looks like 192.169.142.127 may be removed
from CONFIG_COMPUTE_HOSTS. AIO-host-plus-separate-compute-node setups scared
me too much ;) The point seems to be the presence of delorean.repo on the
compute nodes. Am I correct? My testing resources are limited (16 GB RAM and
a 4-core CPU), so I cannot start a third VM for testing.

You wrote:
> What I noticed here, if I associate a floating ip to a VM with 2
> interfaces, then I'll lose the connectivity to the instance and Kilo

I just used VMs with eth0 for the public && management network and eth1 for
the VXLAN endpoints. Answer-file is attached. Thank you for keeping me
posted.

Boris

________________________________
Date: Sun, 3 May 2015 16:51:54 +0200
Subject: Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
From: ak at cloudssky.com
To: bderzhavets at hotmail.com
CC: apevec at gmail.com; rdo-list at redhat.com

Boris, thanks for your kind feedback.

I did a 3-node Kilo RC2 virt setup on top of my Kilo RC2 which was installed
on bare metal. The installation was successful on the first run. The network
looks like this:
https://cloudssky.com/.galleries/images/kilo-virt-setup.png

For this setup I added the latest CentOS cloud image to glance, ran an
instance (controller), enabled root login, added ifcfg-eth1 to the instance,
created a snapshot from the controller, added the repos to this instance,
yum updated, rebooted, and spawned the network and compute1 VM nodes from
that snapshot; a rough CLI sketch of those steps follows below. (To be able
to ssh into the VMs over the 20.0.1.0 network, I created a gate VM with a
floating ip assigned and installed OpenVPN on it.)
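(Roughly, in CLI form, the snapshot workflow just described might look like
the sketch below; the image, flavor, and key names are placeholders, not
Arash's actual values:)

glance image-create --name centos7 --disk-format qcow2 --container-format bare \
    --file CentOS-7-x86_64-GenericCloud.qcow2
nova boot --image centos7 --flavor m1.small --key-name userkey controller
# ... enable root login, add ifcfg-eth1, install the repos, yum update ...
nova image-create controller controller-snap   # snapshot to clone network/compute1 from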
What I noticed here: if I associate a floating ip to a VM with 2 interfaces,
then I lose connectivity to the instance and Kilo becomes crazy (the AIO
controller on bare metal somehow loses its br-ex interface, but I didn't try
to reproduce it again); a quick br-ex sanity check is sketched below.
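(If the vanishing br-ex shows up again, a quick sanity check, as a sketch:)

ovs-vsctl list-ports br-ex     # eth0 and phy-br-ex should still be attached
ip addr show br-ex             # the host IP should sit on br-ex, not on eth0
ip route | head -3             # the default route typically leaves via br-ex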
_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list
To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jcoufal at redhat.com  Mon May  4 09:09:12 2015
From: jcoufal at redhat.com (Jaromir Coufal)
Date: Mon, 04 May 2015 11:09:12 +0200
Subject: [Rdo-list] rdo-manager virt setup still stuck on neutron
Message-ID: <55473738.5030700@redhat.com>

On 01/05/15 19:16, Mohammed Arafa wrote:
> + setup-neutron -n /tmp/tmp.Qxz7JkrlLI
> /usr/lib/python2.7/site-packages/novaclient/v1_1/__init__.py:30:
> UserWarning: Module novaclient.v1_1 is deprecated (taken as a basis for
> novaclient.v2). The preferable way to get client class or object you can
> find in novaclient.client module.
>   warnings.warn("Module novaclient.v1_1 is deprecated (taken as a basis
> for "

Hi Mohammed,

this will not tell us much, since the novaclient.v1_1 deprecation warning is
expected behavior. Can you paste the exact steps you ran, plus a little bit
more from your console?
Thanks
-- Jarda

From hguemar at fedoraproject.org  Mon May  4 15:00:02 2015
From: hguemar at fedoraproject.org (hguemar at fedoraproject.org)
Date: Mon, 4 May 2015 15:00:02 +0000 (UTC)
Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting
Message-ID: <20150504150002.EA7B360029C0@fedocal02.phx2.fedoraproject.org>

Dear all,

You are kindly invited to the meeting:
RDO packaging meeting on 2015-05-06 from 15:00:00 to 16:00:00 UTC
At rdo at irc.freenode.net

The meeting will be about:
RDO packaging irc meeting
([agenda](https://etherpad.openstack.org/p/RDO-Packaging))
Every week on #rdo on freenode

Source: https://apps.fedoraproject.org/calendar/meeting/2017/

From pgsousa at gmail.com  Mon May  4 16:38:18 2015
From: pgsousa at gmail.com (Pedro Sousa)
Date: Mon, 4 May 2015 17:38:18 +0100
Subject: [Rdo-list] error deploying rdo-manager overcloud controller node
Message-ID: 

Hi all,

I'm having a problem deploying the overcloud controller node; the stack is
failing when I run the command: instack-deploy-overcloud --tuskar

Logging on the controller and looking at cloud-init.log I see this:

2015-05-04 16:17:51,839 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: unexpected error ['NoneType' object has no attribute 'status_code']

May  4 16:17:51 localhost cloud-init: ci-info: +++++++++++++++++++++++Net device info+++++++++++++++++++++++
May  4 16:17:51 localhost cloud-init: ci-info: +--------+------+-----------+-----------+-------------------+
May  4 16:17:51 localhost cloud-init: ci-info: | Device |  Up  |  Address  |    Mask   |     Hw-Address    |
May  4 16:17:51 localhost cloud-init: ci-info: +--------+------+-----------+-----------+-------------------+
May  4 16:17:51 localhost cloud-init: ci-info: |  lo:   | True | 127.0.0.1 | 255.0.0.0 |         .         |
May  4 16:17:51 localhost cloud-init: ci-info: | eth0:  | True |     .     |     .     | 00:f9:f1:ed:e5:92 |
May  4 16:17:51 localhost cloud-init: ci-info: +--------+------+-----------+-----------+-------------------+
May  4 16:17:51 localhost cloud-init: ci-info: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

[root at ov-x3un7k6nv4d-0-fjkcdpnvfnew-controller-pjaxihpukyl6 ~]# ifconfig
br-ex: flags=4163  mtu 1500
        inet 192.0.2.10  netmask 255.255.255.0  broadcast 192.0.2.255
        inet6 fe80::2f9:f1ff:feed:e592  prefixlen 64  scopeid 0x20
        ether 00:f9:f1:ed:e5:92  txqueuelen 0  (Ethernet)
        RX packets 18091  bytes 4033820 (3.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18423  bytes 1486362 (1.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163  mtu 1500
        inet6 fe80::2f9:f1ff:feed:e592  prefixlen 64  scopeid 0x20
        ether 00:f9:f1:ed:e5:92  txqueuelen 1000  (Ethernet)
        RX packets 18291  bytes 4100404 (3.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18673  bytes 1519312 (1.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0 carries no IPv4 address while br-ex holds 192.0.2.10, so it seems to be
some network issue. Any hint?

Thanks,
Pedro Sousa

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rdo-info at redhat.com  Mon May  4 17:43:20 2015
From: rdo-info at redhat.com (RDO Forum)
Date: Mon, 4 May 2015 17:43:20 +0000
Subject: [Rdo-list] [RDO] RDO blog roundup, May 4, 2015
Message-ID: <0000014d200666d5-131d1b76-0591-430f-b939-aacbd341700f-000000@email.amazonses.com>

rbowen started a discussion.

RDO blog roundup, May 4, 2015

---
Follow the link below to check it out:
https://www.rdoproject.org/forum/discussion/1014/rdo-blog-roundup-may-4-2015

Have a great day!
From jslagle at redhat.com Tue May 5 00:37:13 2015 From: jslagle at redhat.com (James Slagle) Date: Mon, 4 May 2015 20:37:13 -0400 Subject: [Rdo-list] rdo-manager virt setup still stuck on neutron In-Reply-To: <55473738.5030700@redhat.com> References: <55473738.5030700@redhat.com> Message-ID: <20150505003713.GA4356@teletran-1.redhat.com> On Mon, May 04, 2015 at 11:09:12AM +0200, Jaromir Coufal wrote: > On 01/05/15 19:16, Mohammed Arafa wrote: > >+ setup-neutron -n /tmp/tmp.Qxz7JkrlLI > >/usr/lib/python2.7/site-packages/novaclient/v1_1/__init__.py:30: > >UserWarning: Module novaclient.v1_1 is deprecated (taken as a basis for > >novaclient.v2). The preferable way to get client class or object you can > >find in novaclient.client module. > > warnings.warn("Module novaclient.v1_1 is deprecated (taken as a basis > >for " > > > >-- > > Hi Mohammand, > > this will not tell us much since novaclient.v1_1 deprecation is expected > behavior. Can you paste exact steps what you did + a little bit more from > your console? I think the symptom is that the installation actually hangs there, which to me sounds like there's still a rabbitmq problem. I'd check /var/log/neutron/server.log for a relevant error. Or, /var/log/rabbitmq/rabbit@.log and make sure that looks as expected, especially the node value under INFO REPORT when rabbitmq starts up. You could also generate a sosreport per 'Using SOS' from https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/html/troubleshooting/troubleshooting-overcloud.html and upload it somewhere for us to review. > > Thanks > -- Jarda > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- -- James Slagle -- From pgsousa at gmail.com Tue May 5 01:27:35 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Tue, 5 May 2015 02:27:35 +0100 Subject: [Rdo-list] error deploying rdo-manager overcloud controller node In-Reply-To: References: Message-ID: Hi, following https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/html/troubleshooting/troubleshooting-overcloud.html and running sudo journalctl -u os-collect-config on the controller node it seems to be a rabbitmq issue, it's not running. I see this in the logs: Stack trace: [{rabbit_node_monitor,write_cluster_status,1, [{file,"src/rabbit_node_monitor.erl"},{line,137}]}, {rabbit_node_monitor,prepare_cluster_status_files,0, [{file,"src/rabbit_node_monitor.erl"},{line,123}]}, {rabbit,'-boot/0-fun-1-',0,[{file,"src/rabbit.erl"},{line,328}]}, {rabbit,start_it,1,[{file,"src/rabbit.erl"},{line,358}]}, {init,start_it,1,[]}, {init,start_em,1,[]}] =INFO REPORT==== 5-May-2015::01:18:00 === Error description: {error,{could_not_write_file,"/var/lib/rabbitmq/mnesia/rabbit at ov-x3un7k6nv4d-0-fjkcdpnvfnew-controller-pjaxihpukyl6 /cluster_nodes.config", enospc}} Log files (may contain more information): /var/log/rabbitmq/rabbit at ov-x3un7k6nv4d-0-fjkcdpnvfnew-controller-pjaxihpukyl6.log /var/log/rabbitmq/rabbit at ov-x3un7k6nv4d-0-fjkcdpnvfnew-controller-pjaxihpukyl6-sasl.log Stack trace: [{rabbit_node_monitor,write_cluster_status,1, [{file,"src/rabbit_node_monitor.erl"},{line,137}]}, {rabbit_node_monitor,prepare_cluster_status_files,0, [{file,"src/rabbit_node_monitor.erl"},{line,123}]}, {rabbit,'-boot/0-fun-1-',0,[{file,"src/rabbit.erl"},{line,328}]}, {rabbit,start_it,1,[{file,"src/rabbit.erl"},{line,358}]}, {init,start_it,1,[]}, {init,start_em,1,[]}] Any hint? 
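(The enospc atom in that report is ENOSPC, i.e. the filesystem holding the
rabbitmq mnesia directory is full rather than misconfigured; a quick check
on the controller, as a sketch:)

df -h /var/lib/rabbitmq /var          # enospc => the filesystem is full
du -sh /var/log/* | sort -h | tail    # the usual suspect on small overcloud images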
Thanks On Mon, May 4, 2015 at 5:38 PM, Pedro Sousa wrote: > Hi all, > > I'm having a problem deploying overcloud controller node, the stack is > failling, when I run the command: instack-deploy-overcloud --tuskar > > > Logging on the controller and looking to cloud-init.log I see this: > > 2015-05-04 16:17:51,839 - url_helper.py[WARNING]: Calling ' > http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: > unexpected error ['NoneType' object has no attribute 'status_code'] > > > May 4 16:17:51 localhost cloud-init: ci-info: +++++++++++++++++++++++Net > device info+++++++++++++++++++++++ > May 4 16:17:51 localhost cloud-init: ci-info: > +--------+------+-----------+-----------+-------------------+ > May 4 16:17:51 localhost cloud-init: ci-info: | Device | Up | Address > | Mask | Hw-Address | > May 4 16:17:51 localhost cloud-init: ci-info: > +--------+------+-----------+-----------+-------------------+ > May 4 16:17:51 localhost cloud-init: ci-info: | lo: | True | 127.0.0.1 > | 255.0.0.0 | . | > May 4 16:17:51 localhost cloud-init: ci-info: | eth0: | True | . > | . | 00:f9:f1:ed:e5:92 | > May 4 16:17:51 localhost cloud-init: ci-info: > +--------+------+-----------+-----------+-------------------+ > May 4 16:17:51 localhost cloud-init: ci-info: > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info > failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! > > > > > > [root at ov-x3un7k6nv4d-0-fjkcdpnvfnew-controller-pjaxihpukyl6 ~]# ifconfig > br-ex: flags=4163 mtu 1500 > inet 192.0.2.10 netmask 255.255.255.0 broadcast 192.0.2.255 > inet6 fe80::2f9:f1ff:feed:e592 prefixlen 64 scopeid 0x20 > ether 00:f9:f1:ed:e5:92 txqueuelen 0 (Ethernet) > RX packets 18091 bytes 4033820 (3.8 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 18423 bytes 1486362 (1.4 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > eth0: flags=4163 mtu 1500 > inet6 fe80::2f9:f1ff:feed:e592 prefixlen 64 scopeid 0x20 > ether 00:f9:f1:ed:e5:92 txqueuelen 1000 (Ethernet) > RX packets 18291 bytes 4100404 (3.9 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 18673 bytes 1519312 (1.4 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > > Seems some network issue, any hint? > > Thanks, > Pedro Sousa > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbrock at redhat.com Tue May 5 08:14:46 2015 From: hbrock at redhat.com (Hugh O. Brock) Date: Tue, 5 May 2015 10:14:46 +0200 Subject: [Rdo-list] error deploying rdo-manager overcloud controller node In-Reply-To: References: Message-ID: <20150505081445.GB31344@redhat.com> On Tue, May 05, 2015 at 02:27:35AM +0100, Pedro Sousa wrote: > Hi, > > following > https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/html/troubleshooting/troubleshooting-overcloud.html > and running sudo journalctl -u os-collect-config on the controller node it > seems to be a rabbitmq issue, it's not running. 
I see this in the logs: > > Stack trace: > [{rabbit_node_monitor,write_cluster_status,1, > [{file,"src/rabbit_node_monitor.erl"},{line,137}]}, > {rabbit_node_monitor,prepare_cluster_status_files,0, > [{file,"src/rabbit_node_monitor.erl"},{line,123}]}, > {rabbit,'-boot/0-fun-1-',0,[{file,"src/rabbit.erl"},{line,328}]}, > {rabbit,start_it,1,[{file,"src/rabbit.erl"},{line,358}]}, > {init,start_it,1,[]}, > {init,start_em,1,[]}] > > > =INFO REPORT==== 5-May-2015::01:18:00 === > Error description: > > {error,{could_not_write_file,"/var/lib/rabbitmq/mnesia/rabbit at ov-x3un7k6nv4d-0-fjkcdpnvfnew-controller-pjaxihpukyl6 > /cluster_nodes.config", > enospc}} > > Log files (may contain more information): > > /var/log/rabbitmq/rabbit at ov-x3un7k6nv4d-0-fjkcdpnvfnew-controller-pjaxihpukyl6.log > > /var/log/rabbitmq/rabbit at ov-x3un7k6nv4d-0-fjkcdpnvfnew-controller-pjaxihpukyl6-sasl.log > > Stack trace: > [{rabbit_node_monitor,write_cluster_status,1, > [{file,"src/rabbit_node_monitor.erl"},{line,137}]}, > {rabbit_node_monitor,prepare_cluster_status_files,0, > [{file,"src/rabbit_node_monitor.erl"},{line,123}]}, > {rabbit,'-boot/0-fun-1-',0,[{file,"src/rabbit.erl"},{line,328}]}, > {rabbit,start_it,1,[{file,"src/rabbit.erl"},{line,358}]}, > {init,start_it,1,[]}, > {init,start_em,1,[]}] > > Any hint? > > Thanks Is this thread: https://www.redhat.com/archives/rdo-list/2015-April/msg00298.html useful at all? --Hugh > On Mon, May 4, 2015 at 5:38 PM, Pedro Sousa wrote: > > > Hi all, > > > > I'm having a problem deploying overcloud controller node, the stack is > > failling, when I run the command: instack-deploy-overcloud --tuskar > > > > > > Logging on the controller and looking to cloud-init.log I see this: > > > > 2015-05-04 16:17:51,839 - url_helper.py[WARNING]: Calling ' > > http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: > > unexpected error ['NoneType' object has no attribute 'status_code'] > > > > > > May 4 16:17:51 localhost cloud-init: ci-info: +++++++++++++++++++++++Net > > device info+++++++++++++++++++++++ > > May 4 16:17:51 localhost cloud-init: ci-info: > > +--------+------+-----------+-----------+-------------------+ > > May 4 16:17:51 localhost cloud-init: ci-info: | Device | Up | Address > > | Mask | Hw-Address | > > May 4 16:17:51 localhost cloud-init: ci-info: > > +--------+------+-----------+-----------+-------------------+ > > May 4 16:17:51 localhost cloud-init: ci-info: | lo: | True | 127.0.0.1 > > | 255.0.0.0 | . | > > May 4 16:17:51 localhost cloud-init: ci-info: | eth0: | True | . > > | . | 00:f9:f1:ed:e5:92 | > > May 4 16:17:51 localhost cloud-init: ci-info: > > +--------+------+-----------+-----------+-------------------+ > > May 4 16:17:51 localhost cloud-init: ci-info: > > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info > > failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 
> > [root at ov-x3un7k6nv4d-0-fjkcdpnvfnew-controller-pjaxihpukyl6 ~]# ifconfig
> > br-ex: flags=4163  mtu 1500
> >         inet 192.0.2.10  netmask 255.255.255.0  broadcast 192.0.2.255
> >         inet6 fe80::2f9:f1ff:feed:e592  prefixlen 64  scopeid 0x20
> >         ether 00:f9:f1:ed:e5:92  txqueuelen 0  (Ethernet)
> >         RX packets 18091  bytes 4033820 (3.8 MiB)
> >         RX errors 0  dropped 0  overruns 0  frame 0
> >         TX packets 18423  bytes 1486362 (1.4 MiB)
> >         TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
> >
> > eth0: flags=4163  mtu 1500
> >         inet6 fe80::2f9:f1ff:feed:e592  prefixlen 64  scopeid 0x20
> >         ether 00:f9:f1:ed:e5:92  txqueuelen 1000  (Ethernet)
> >         RX packets 18291  bytes 4100404 (3.9 MiB)
> >         RX errors 0  dropped 0  overruns 0  frame 0
> >         TX packets 18673  bytes 1519312 (1.4 MiB)
> >         TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
> >
> > Seems some network issue, any hint?
> >
> > Thanks,
> > Pedro Sousa

--
== Hugh Brock, hbrock at redhat.com ==
== Senior Engineering Manager, Cloud Engineering ==
== RDO Manager: Install, configure, and scale OpenStack ==
== http://rdoproject.org ==

"I know that you believe you understand what you think I said, but I'm
not sure you realize that what you heard is not what I meant."
--Robert McCloskey

From ichi.sara at gmail.com  Tue May  5 08:46:08 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Tue, 5 May 2015 10:46:08 +0200
Subject: [Rdo-list] [heat]: stack stays interminably under the status create in progress
Message-ID: 

Hello there,

I started a project where I need to deploy stacks and orchestrate them
using heat (autoscaling and so on). I just started playing with heat, and
the creation of my first stack never completes: it stays in the status
"create in progress", and my log files don't say much. For my template I'm
using a very simple one to launch a small instance.

Any ideas what that might be?

In advance, thank you for your response.
Sara

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pgsousa at gmail.com  Tue May  5 09:41:15 2015
From: pgsousa at gmail.com (Pedro Sousa)
Date: Tue, 5 May 2015 10:41:15 +0100
Subject: [Rdo-list] error deploying rdo-manager overcloud controller node
In-Reply-To: <20150505081445.GB31344@redhat.com>
References: <20150505081445.GB31344@redhat.com>
Message-ID: 

Hi,

I don't think so, my problem is with the overcloud deployment. The enospc
below means the node has run out of disk space, so rabbitmq cannot write
its cluster state file:

=INFO REPORT==== 5-May-2015::01:18:00 ===
Error description:
{error,{could_not_write_file,"/var/lib/rabbitmq/mnesia/rabbit at ov-x3un7k6nv4d-0-fjkcdpnvfnew-controller-pjaxihpukyl6/cluster_nodes.config",
        enospc}}

Regards

On Tue, May 5, 2015 at 9:14 AM, Hugh O. Brock wrote:
> On Tue, May 05, 2015 at 02:27:35AM +0100, Pedro Sousa wrote:
> > Hi,
> >
> > following
> > https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/html/troubleshooting/troubleshooting-overcloud.html
> > and running sudo journalctl -u os-collect-config on the controller node it
> > seems to be a rabbitmq issue, it's not running.
I see this in the logs: > > > > Stack trace: > > [{rabbit_node_monitor,write_cluster_status,1, > > > [{file,"src/rabbit_node_monitor.erl"},{line,137}]}, > > {rabbit_node_monitor,prepare_cluster_status_files,0, > > > [{file,"src/rabbit_node_monitor.erl"},{line,123}]}, > > {rabbit,'-boot/0-fun-1-',0,[{file,"src/rabbit.erl"},{line,328}]}, > > {rabbit,start_it,1,[{file,"src/rabbit.erl"},{line,358}]}, > > {init,start_it,1,[]}, > > {init,start_em,1,[]}] > > > > > > =INFO REPORT==== 5-May-2015::01:18:00 === > > Error description: > > > > > {error,{could_not_write_file,"/var/lib/rabbitmq/mnesia/rabbit at ov-x3un7k6nv4d-0-fjkcdpnvfnew-controller-pjaxihpukyl6 > > /cluster_nodes.config", > > enospc}} > > > > Log files (may contain more information): > > > > > /var/log/rabbitmq/rabbit at ov-x3un7k6nv4d-0-fjkcdpnvfnew-controller-pjaxihpukyl6.log > > > > > /var/log/rabbitmq/rabbit at ov-x3un7k6nv4d-0-fjkcdpnvfnew-controller-pjaxihpukyl6-sasl.log > > > > Stack trace: > > [{rabbit_node_monitor,write_cluster_status,1, > > > [{file,"src/rabbit_node_monitor.erl"},{line,137}]}, > > {rabbit_node_monitor,prepare_cluster_status_files,0, > > > [{file,"src/rabbit_node_monitor.erl"},{line,123}]}, > > {rabbit,'-boot/0-fun-1-',0,[{file,"src/rabbit.erl"},{line,328}]}, > > {rabbit,start_it,1,[{file,"src/rabbit.erl"},{line,358}]}, > > {init,start_it,1,[]}, > > {init,start_em,1,[]}] > > > > Any hint? > > > > Thanks > > Is this thread: > > https://www.redhat.com/archives/rdo-list/2015-April/msg00298.html > > useful at all? > > --Hugh > > > On Mon, May 4, 2015 at 5:38 PM, Pedro Sousa wrote: > > > > > Hi all, > > > > > > I'm having a problem deploying overcloud controller node, the stack is > > > failling, when I run the command: instack-deploy-overcloud --tuskar > > > > > > > > > Logging on the controller and looking to cloud-init.log I see this: > > > > > > 2015-05-04 16:17:51,839 - url_helper.py[WARNING]: Calling ' > > > http://169.254.169.254/2009-04-04/meta-data/instance-id' failed > [0/120s]: > > > unexpected error ['NoneType' object has no attribute 'status_code'] > > > > > > > > > May 4 16:17:51 localhost cloud-init: ci-info: > +++++++++++++++++++++++Net > > > device info+++++++++++++++++++++++ > > > May 4 16:17:51 localhost cloud-init: ci-info: > > > +--------+------+-----------+-----------+-------------------+ > > > May 4 16:17:51 localhost cloud-init: ci-info: | Device | Up | > Address > > > | Mask | Hw-Address | > > > May 4 16:17:51 localhost cloud-init: ci-info: > > > +--------+------+-----------+-----------+-------------------+ > > > May 4 16:17:51 localhost cloud-init: ci-info: | lo: | True | > 127.0.0.1 > > > | 255.0.0.0 | . | > > > May 4 16:17:51 localhost cloud-init: ci-info: | eth0: | True | . > > > | . | 00:f9:f1:ed:e5:92 | > > > May 4 16:17:51 localhost cloud-init: ci-info: > > > +--------+------+-----------+-----------+-------------------+ > > > May 4 16:17:51 localhost cloud-init: ci-info: > > > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info > > > failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 
> > > > > > > > > > > > > > > > > > [root at ov-x3un7k6nv4d-0-fjkcdpnvfnew-controller-pjaxihpukyl6 ~]# > ifconfig > > > br-ex: flags=4163 mtu 1500 > > > inet 192.0.2.10 netmask 255.255.255.0 broadcast 192.0.2.255 > > > inet6 fe80::2f9:f1ff:feed:e592 prefixlen 64 scopeid > 0x20 > > > ether 00:f9:f1:ed:e5:92 txqueuelen 0 (Ethernet) > > > RX packets 18091 bytes 4033820 (3.8 MiB) > > > RX errors 0 dropped 0 overruns 0 frame 0 > > > TX packets 18423 bytes 1486362 (1.4 MiB) > > > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > > > > > eth0: flags=4163 mtu 1500 > > > inet6 fe80::2f9:f1ff:feed:e592 prefixlen 64 scopeid > 0x20 > > > ether 00:f9:f1:ed:e5:92 txqueuelen 1000 (Ethernet) > > > RX packets 18291 bytes 4100404 (3.9 MiB) > > > RX errors 0 dropped 0 overruns 0 frame 0 > > > TX packets 18673 bytes 1519312 (1.4 MiB) > > > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > > > > > > > > Seems some network issue, any hint? > > > > > > Thanks, > > > Pedro Sousa > > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -- > == Hugh Brock, hbrock at redhat.com == > == Senior Engineering Manager, Cloud Engineering == > == RDO Manager: Install, configure, and scale OpenStack == > == http://rdoproject.org == > > "I know that you believe you understand what you think I said, but I?m > not sure you realize that what you heard is not what I meant." > --Robert McCloskey > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevintibi at hotmail.com Tue May 5 10:16:45 2015 From: kevintibi at hotmail.com (Kevin Tibi) Date: Tue, 5 May 2015 12:16:45 +0200 Subject: [Rdo-list] [heat]: stack stays interminably under the status create in progress In-Reply-To: References: Message-ID: Hi, Can you copy your hot file and heat logs? Kevin Tibi Le 5 mai 2015 10:48, "ICHIBA Sara" a ?crit : > Hello there, > > I started a project where I need to deploy stacks and orchastrate them > using heat (autoscaling and so on..). I just started playing with heat and > the creation of my first stack is never complete. It stays in the status > create in progress. My log files don't say much. For my template i'm using > a veery simple one to launch a small instance. > > Any ideas what that might be? > > In advance, thank you for your response. > Sara > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcoufal at redhat.com Tue May 5 10:39:23 2015 From: jcoufal at redhat.com (Jaromir Coufal) Date: Tue, 05 May 2015 12:39:23 +0200 Subject: [Rdo-list] RDO-Manager testing pushed ahead a few more days In-Reply-To: <5543BBC2.7060104@redhat.com> References: <5543BBC2.7060104@redhat.com> Message-ID: <55489DDB.9020307@redhat.com> Dear Community, We are going to push RDO-Manager testing ahead a few more days to give us time to stabilize the Kilo repositories we depend on. The exact date will be announced here (on rdo-list) but you can expect it to happen during the week of May 11. 
Cheers
-- Jarda

From ichi.sara at gmail.com  Tue May  5 11:18:17 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Tue, 5 May 2015 13:18:17 +0200
Subject: Re: [Rdo-list] [heat]: stack stays interminably under the status create in progress
Message-ID: 

Hello, here is my HOT template, it's very basic:

heat_template_version: 2013-05-23

description: Simple template to deploy a single compute instance

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: Cirros 0.3.3
      flavor: m1.small
      key_name: userkey
      networks:
        - network: fdf2bb77-a828-401d-969a-736a8028950f

As for the logs, please find them attached.

2015-05-05 12:16 GMT+02:00 Kevin Tibi :
> Hi,
>
> Can you copy your hot file and heat logs?
>
> Kevin Tibi
>
> On 5 May 2015 at 10:48, "ICHIBA Sara" wrote:
>> Hello there,
>>
>> I started a project where I need to deploy stacks and orchestrate them
>> using heat (autoscaling and so on). I just started playing with heat and
>> the creation of my first stack is never complete. It stays in the status
>> "create in progress". My log files don't say much. For my template I'm
>> using a very simple one to launch a small instance.
>>
>> Any ideas what that might be?
>>
>> In advance, thank you for your response.
>> Sara

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: heat-api.log
Type: application/octet-stream
Size: 3342 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: heat-engine.log
Type: application/octet-stream
Size: 56454 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: heat-manage.log
Type: application/octet-stream
Size: 4480 bytes
Desc: not available
URL: 

From christian at berendt.io  Tue May  5 11:41:07 2015
From: christian at berendt.io (Christian Berendt)
Date: Tue, 05 May 2015 13:41:07 +0200
Subject: Re: [Rdo-list] [Rdo-newsletter] RDO Community Newsletter - April 2015
In-Reply-To: <5543BBC2.7060104@redhat.com>
References: <5543BBC2.7060104@redhat.com>
Message-ID: <5548AC53.4030106@berendt.io>

On 05/01/2015 07:45 PM, Rich Bowen wrote:
> Details about the test day are developing in the wiki, at
> https://www.rdoproject.org/RDO_test_day_Kilo Test cases and
> documentation will be appearing there over the coming few days.

On https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test the page says:

# you are ready , its the time to configure tempest.conf

How do I have to configure tempest, and how do I have to call it, to verify
that my test environment works as expected? At the moment I cannot find
further documentation about this step.

Is it sufficient to run manual tests? E.g. at the moment I installed
"All-in-One, Glance=localfs, Cinder=lvm". Is it sufficient to
upload/download images and to create/delete volumes to verify that the
environment is working as expected?

Christian.
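(Until the wiki grows a tempest walkthrough, a manual smoke run along the
lines Christian describes could look like the sketch below; it assumes an
admin keystonerc and a local cirros image file, both placeholders:)

source ~/keystonerc_admin
glance image-create --name smoke-cirros --disk-format qcow2 \
    --container-format bare --file cirros-0.3.3-x86_64-disk.img
glance image-download --file /tmp/smoke-cirros.img smoke-cirros   # download path
cinder create --display-name smoke-vol 1
cinder list                      # wait for the volume to reach 'available'
cinder delete smoke-vol
glance image-delete smoke-cirros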
From yrabl at redhat.com Tue May 5 14:28:49 2015 From: yrabl at redhat.com (Yogev Rabl) Date: Tue, 5 May 2015 17:28:49 +0300 Subject: [Rdo-list] Cinder client packaging Message-ID: <20150505142849.GA25571@dhcp-2-52.tlv.redhat.com> Hi, I'm testing the Cinder in RDO and I've noticed that the package available in rhe rdo-kilo-testing repository is outdated python-cinderclient-1.1.1-1.el7.noarch.rpm 21-Dec-2014 13:16 169K When can it be fixed? Thanks, Yogev X-User: yrabl X-Operating-System: Fedora 21 (Twenty One) Linux 3.19.2-201.fc21.x86_64 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From apevec at gmail.com Tue May 5 20:12:56 2015 From: apevec at gmail.com (Alan Pevec) Date: Tue, 5 May 2015 22:12:56 +0200 Subject: [Rdo-list] Cinder client packaging In-Reply-To: <20150505142849.GA25571@dhcp-2-52.tlv.redhat.com> References: <20150505142849.GA25571@dhcp-2-52.tlv.redhat.com> Message-ID: > I'm testing the Cinder in RDO and I've noticed that the package available in > rhe rdo-kilo-testing repository is outdated > python-cinderclient-1.1.1-1.el7.noarch.rpm 21-Dec-2014 13:16 169K 1.1.1 is latest released upstream from stable/kilo branch. If specific Kilo feature is missing in that version, please report it upstream. Cheers, Alan From rbowen at redhat.com Tue May 5 20:31:00 2015 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 05 May 2015 16:31:00 -0400 Subject: [Rdo-list] OpenStack meetups, week of May 4th, 2015 Message-ID: <55492884.5030002@redhat.com> The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/Events If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered. --Rich * Tue May 5 in Saint Paul, MN, US: May OpenStack Meetup - http://www.meetup.com/Minnesota-OpenStack-Meetup/events/221870529/ * Tue May 5 in London, 17, GB: London OpenStack May Meetup - http://www.meetup.com/Openstack-London/events/221676467/ * Wed May 6 in Seattle, WA, US: OpenStack Seattle Meetup: The Ins and Outs of Deploying OpenStack - http://www.meetup.com/OpenStack-Seattle/events/219315723/ * Wed May 6 in Richardson, TX, US: OpenStack Meetup May - Ceilometer Telemetry Presentation - http://www.meetup.com/OpenStack-DFW/events/218264792/ * Wed May 6 in Vancouver, BC, CA: OpenStack cloud management platform - By Yaniv - http://www.meetup.com/Network-Admin-and-Support-Group/events/221902596/ * Wed May 6 in New York, NY, US: The Ultimate OpenStack Meet and Greet - http://www.meetup.com/OpenStack-for-Enterprises-NYC/events/221726757/ * Thu May 7 in Bangalore, IN: "What's New : OpenStack Kilo " - http://www.meetup.com/Cloud-Enabled-Meetup/events/222232056/ * Thu May 7 in San Francisco, CA, US: Automation of OpenStack Deployments with Big Data Solutions - http://www.meetup.com/San-Francisco-Silicon-Valley-OpenStack-Meetup/events/221895045/ * Sat May 9 in Ha Noi, VN: 4th Public Meetup Vietnam OpenStack. - http://www.meetup.com/VietStack/events/222157997/ * Sun May 10 in Beijing, CN: ????OpenStack? UnitedStack????? ??? 5?10 - http://www.meetup.com/China-OpenStack-User-Group/events/222125771/ * Tue May 12 in Beijing, CN: 2015 China OpenStack ?????? 
- http://www.meetup.com/China-OpenStack-User-Group/events/221992827/ * Tue May 12 in Reston, VA, US: OpenStack + Heterogeneous Docker Orchestration (#1) - http://www.meetup.com/OpenStack-Nova/events/221839689/ * Tue May 12 in Athens, GR: High Availability in OpenStack - http://www.meetup.com/Athens-OpenStack-User-Group/events/222128761/ * Tue May 12 in Berlin, DE: OpenStack MUC? CenterDevice UserStory / Mirantis experiences selecting Hardware - http://www.meetup.com/openstack-de/events/222031110/ * Tue May 12 in M?nchen, DE: OpenStack MUC? CenterDevice UserStory / Mirantis experiences selecting Hardware - http://www.meetup.com/OpenStack-Munich/events/222030952/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From jweber at cofront.net Tue May 5 22:59:29 2015 From: jweber at cofront.net (Jeff Weber) Date: Tue, 5 May 2015 18:59:29 -0400 Subject: [Rdo-list] packaging lifecycles Message-ID: Is there any documentation which describes the packaging lifecycles for RDO packages? I'm currently using the Juno el-7 packages, and was curious since kilo is coming out now if updates will stop being built there. The 2014.2.3 update has been out for a bit, but I wasn't able to find where if anywhere this might be in the pipeline for update. What is the lifecycle for these kinds of updates? Do you just follow upstream for releases and EOL or stop once a new release is available? Is it possible to get involved with helping on older releases which aren't EOL but don't have updates if they're not normally going to be done? -------------- next part -------------- An HTML attachment was scrubbed... URL: From stdake at cisco.com Wed May 6 05:22:42 2015 From: stdake at cisco.com (Steven Dake (stdake)) Date: Wed, 6 May 2015 05:22:42 +0000 Subject: [Rdo-list] [heat]: stack stays interminably under the status create in progress In-Reply-To: References: Message-ID: Heat event-list will give you more detail on what is happening. From: Kevin Tibi > Date: Tuesday, May 5, 2015 at 3:16 AM To: ICHIBA Sara > Cc: "rdo-list at redhat.com" > Subject: Re: [Rdo-list] [heat]: stack stays interminably under the status create in progress Hi, Can you copy your hot file and heat logs? Kevin Tibi Le 5 mai 2015 10:48, "ICHIBA Sara" > a ?crit : Hello there, I started a project where I need to deploy stacks and orchastrate them using heat (autoscaling and so on..). I just started playing with heat and the creation of my first stack is never complete. It stays in the status create in progress. My log files don't say much. For my template i'm using a veery simple one to launch a small instance. Any ideas what that might be? In advance, thank you for your response. Sara _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ichi.sara at gmail.com Wed May 6 08:29:05 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Wed, 6 May 2015 10:29:05 +0200 Subject: [Rdo-list] [keystone] fails to start Message-ID: hi, I have some issues with keystone since this morning, I couldn't connect to the dashboard so I checked my keystone service and found out that it was not running. I tried to restart it but no luck. I did some changes in the keystone.conf to make this work but no chance. 
The changes I did are:

cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/
chown keystone:keystone /etc/keystone/*

I updated keystone.conf with the new config_file value, and in
/etc/keystone/keystone.conf I set the sql connection to
mysql://keystone:PASSWD@controller/keystone

and I got these log errors:

2015-05-06 10:17:28.352 24135 ERROR keystone.common.environment.eventlet_server [-] Could not bind to 0.0.0.0:35357
2015-05-06 10:17:28.353 24135 ERROR root [-] Failed to start the admin server
2015-05-06 10:17:29.221 24145 ERROR keystone.common.environment.eventlet_server [-] Could not bind to 0.0.0.0:35357
2015-05-06 10:17:29.221 24145 ERROR root [-] Failed to start the admin server
2015-05-06 10:17:30.059 24151 ERROR keystone.common.environment.eventlet_server [-] Could not bind to 0.0.0.0:35357
2015-05-06 10:17:30.060 24151 ERROR root [-] Failed to start the admin server
2015-05-06 10:17:30.895 24158 ERROR keystone.common.environment.eventlet_server [-] Could not bind to 0.0.0.0:35357
2015-05-06 10:17:30.896 24158 ERROR root [-] Failed to start the admin server
2015-05-06 10:17:31.727 24167 ERROR keystone.common.environment.eventlet_server [-] Could not bind to 0.0.0.0:35357
2015-05-06 10:17:31.728 24167 ERROR root [-] Failed to start the admin server

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kchamart at redhat.com  Wed May  6 09:14:33 2015
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Wed, 6 May 2015 11:14:33 +0200
Subject: Re: [Rdo-list] [keystone] fails to start
Message-ID: <20150506091433.GI30897@tesla.redhat.com>

On Wed, May 06, 2015 at 10:29:05AM +0200, ICHIBA Sara wrote:
> hi,
>
> I have some issues with keystone since this morning, I couldn't connect to
> the dashboard so I checked my keystone service and found out that it was
> not running. I tried to restart it but no luck.

You don't mention exact versions of OpenStack/Keystone you're running.

> I did some changes in the keystone.conf to make this work but no chance.
> The changes I did are:
>
> cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/
> chown keystone:keystone /etc/keystone/*
>
> updated keystone.conf with the new config_file value
> /etc/keystone/keystone.conf sql connection mysql://keystone:PASSWD@controller/keystone
>
> and I got this logs errors :
>
> 2015-05-06 10:17:28.352 24135 ERROR
> keystone.common.environment.eventlet_server [-] Could not bind to
> 0.0.0.0:35357

Maybe that port is already busy? You can quickly check if it is:

    $ netstat -lnptu | grep 35357

--
/kashyap

From javier.pena at redhat.com  Wed May  6 09:14:50 2015
From: javier.pena at redhat.com (Javier Pena)
Date: Wed, 6 May 2015 05:14:50 -0400 (EDT)
Subject: Re: [Rdo-list] [keystone] fails to start
Message-ID: <2039497067.14629442.1430903690631.JavaMail.zimbra@redhat.com>

----- Original Message -----
> hi,
> I have some issues with keystone since this morning, I couldn't connect to
> the dashboard so I checked my keystone service and found out that it was not
> running. I tried to restart it but no luck.
> I did some changes in the keystone.conf to make this work but no chance.
The > changes I did are: > cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/ > chown keystone:keystone /etc/keystone/* > updated keystone.conf with the new config_file value > /etc/keystone/keystone.conf sql connection mysql://keystone:PASSWD@ > controller/keystone > and I got this logs errors : > 2015-05-06 10:17:28.352 24135 ERROR > keystone.common.environment.eventlet_server [-] Could not bind to > 0.0.0.0:35357 > 2015-05-06 10:17:28.353 24135 ERROR root [-] Failed to start the admin server > 2015-05-06 10:17:29.221 24145 ERROR > keystone.common.environment.eventlet_server [-] Could not bind to > 0.0.0.0:35357 > 2015-05-06 10:17:29.221 24145 ERROR root [-] Failed to start the admin server > 2015-05-06 10:17:30.059 24151 ERROR > keystone.common.environment.eventlet_server [-] Could not bind to > 0.0.0.0:35357 > 2015-05-06 10:17:30.060 24151 ERROR root [-] Failed to start the admin server > 2015-05-06 10:17:30.895 24158 ERROR > keystone.common.environment.eventlet_server [-] Could not bind to > 0.0.0.0:35357 > 2015-05-06 10:17:30.896 24158 ERROR root [-] Failed to start the admin server > 2015-05-06 10:17:31.727 24167 ERROR > keystone.common.environment.eventlet_server [-] Could not bind to > 0.0.0.0:35357 > 2015-05-06 10:17:31.728 24167 ERROR root [-] Failed to start the admin server Hi, It is possible that keystone is running as a wsgi process under apache. That would explain why it cannot bind to port 35357. Can you check if your httpd process is running? If so, please check if it is bound to port 35357 (lsof -i -n -P|grep httpd). Regards, Javier > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com From ichi.sara at gmail.com Wed May 6 09:19:34 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Wed, 6 May 2015 11:19:34 +0200 Subject: [Rdo-list] [keystone] fails to start In-Reply-To: <2039497067.14629442.1430903690631.JavaMail.zimbra@redhat.com> References: <2039497067.14629442.1430903690631.JavaMail.zimbra@redhat.com> Message-ID: I'm afaraid it is not, here is the output of lsof -i -n -P|grep httpd httpd 1855 root 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) httpd 1855 root 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) httpd 1855 root 8u IPv6 35709 0t0 TCP *:80 (LISTEN) httpd 3969 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) httpd 3969 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) httpd 3969 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) httpd 3970 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) httpd 3970 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) httpd 3970 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) httpd 3971 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) httpd 3971 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) httpd 3971 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) httpd 3972 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) httpd 3972 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) httpd 3972 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) httpd 3973 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) httpd 3973 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) httpd 3973 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) httpd 3974 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) httpd 3974 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) httpd 3974 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) httpd 3975 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) httpd 3975 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) httpd 3975 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) httpd 3976 apache 4u 
IPv6 35701 0t0 TCP *:35357 (LISTEN) httpd 3976 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) httpd 3976 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) 2015-05-06 11:14 GMT+02:00 Javier Pena : > ----- Original Message ----- > > > hi, > > > I have some issues with keystone since this morning, I couldn't connect > to > > the dashboard so I checked my keystone service and found out that it was > not > > running. I tried to restart it but no luck. > > > I did some changes in the keystone.conf to make this work but no chance. > The > > changes I did are: > > > cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/ > > chown keystone:keystone /etc/keystone/* > > > updated keystone.conf with the new config_file value > > > /etc/keystone/keystone.conf sql connection mysql://keystone:PASSWD@ > > controller/keystone > > > and I got this logs errors : > > > 2015-05-06 10:17:28.352 24135 ERROR > > keystone.common.environment.eventlet_server [-] Could not bind to > > 0.0.0.0:35357 > > 2015-05-06 10:17:28.353 24135 ERROR root [-] Failed to start the admin > server > > 2015-05-06 10:17:29.221 24145 ERROR > > keystone.common.environment.eventlet_server [-] Could not bind to > > 0.0.0.0:35357 > > 2015-05-06 10:17:29.221 24145 ERROR root [-] Failed to start the admin > server > > 2015-05-06 10:17:30.059 24151 ERROR > > keystone.common.environment.eventlet_server [-] Could not bind to > > 0.0.0.0:35357 > > 2015-05-06 10:17:30.060 24151 ERROR root [-] Failed to start the admin > server > > 2015-05-06 10:17:30.895 24158 ERROR > > keystone.common.environment.eventlet_server [-] Could not bind to > > 0.0.0.0:35357 > > 2015-05-06 10:17:30.896 24158 ERROR root [-] Failed to start the admin > server > > 2015-05-06 10:17:31.727 24167 ERROR > > keystone.common.environment.eventlet_server [-] Could not bind to > > 0.0.0.0:35357 > > 2015-05-06 10:17:31.728 24167 ERROR root [-] Failed to start the admin > server > > Hi, > > It is possible that keystone is running as a wsgi process under apache. > That would explain why it cannot bind to port 35357. > > Can you check if your httpd process is running? If so, please check if it > is bound to port 35357 (lsof -i -n -P|grep httpd). > > Regards, > Javier > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ichi.sara at gmail.com Wed May 6 09:25:26 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Wed, 6 May 2015 11:25:26 +0200 Subject: [Rdo-list] [keystone] fails to start In-Reply-To: <20150506091433.GI30897@tesla.redhat.com> References: <20150506091433.GI30897@tesla.redhat.com> Message-ID: [root at CONTROLLER ~]# netstat -lnptu | grep 35357 tcp6 0 0 :::35357 :::* LISTEN 1855/httpd the version of keystone is 1.4.0 2015-05-06 11:14 GMT+02:00 Kashyap Chamarthy : > On Wed, May 06, 2015 at 10:29:05AM +0200, ICHIBA Sara wrote: > > hi, > > > > I have some issues with keystone since this morning, I couldn't connect > to > > the dashboard so I checked my keystone service and found out that it was > > not running. I tried to restart it but no luck. > > You don't mention exact versions of OpenStack/Keystone you're running. > > > I did some changes in the keystone.conf to make this work but no chance. 
> > The changes I did are: > > > > > > *cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/ > > chown keystone:keystone /etc/keystone/*updated keystone.conf with the > > new config_file value* > > > > > > */etc/keystone/keystone.conf sql connection mysql://keystone:PASSWD@* > > *controller/keystone* > > > > and I got this logs errors : > > > > 2015-05-06 10:17:28.352 24135 ERROR > > keystone.common.environment.eventlet_server [-] Could not bind to > > 0.0.0.0:35357 > > Maybe that port is already busy? You can quickly check if it is: > > $ netstat -lnptu | grep 35357 > > -- > /kashyap > -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier.pena at redhat.com Wed May 6 09:27:19 2015 From: javier.pena at redhat.com (Javier Pena) Date: Wed, 6 May 2015 05:27:19 -0400 (EDT) Subject: [Rdo-list] [keystone] fails to start In-Reply-To: References: <2039497067.14629442.1430903690631.JavaMail.zimbra@redhat.com> Message-ID: <180644532.14634804.1430904439500.JavaMail.zimbra@redhat.com> ----- Original Message ----- > I'm afaraid it is not, here is the output of lsof -i -n -P|grep httpd > httpd 1855 root 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > httpd 1855 root 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > httpd 1855 root 8u IPv6 35709 0t0 TCP *:80 (LISTEN) Actually, this is what I meant. Keystone is running as a WSGI process inside Apache, so this is why you see httpd listening on ports 5000 and 35357. The next question would be why you are having trouble connecting to the dashboard. Could you provide some more details on that issue? Regards, Javier > httpd 3969 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > httpd 3969 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > httpd 3969 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > httpd 3970 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > httpd 3970 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > httpd 3970 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > httpd 3971 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > httpd 3971 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > httpd 3971 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > httpd 3972 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > httpd 3972 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > httpd 3972 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > httpd 3973 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > httpd 3973 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > httpd 3973 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > httpd 3974 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > httpd 3974 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > httpd 3974 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > httpd 3975 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > httpd 3975 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > httpd 3975 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > httpd 3976 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > httpd 3976 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > httpd 3976 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > 2015-05-06 11:14 GMT+02:00 Javier Pena < javier.pena at redhat.com > : > > ----- Original Message ----- > > > > hi, > > > > I have some issues with keystone since this morning, I couldn't connect > > > to > > > > the dashboard so I checked my keystone service and found out that it was > > > not > > > > running. I tried to restart it but no luck. > > > > I did some changes in the keystone.conf to make this work but no chance. 
> > > The > > > > changes I did are: > > > > cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/ > > > > chown keystone:keystone /etc/keystone/* > > > > updated keystone.conf with the new config_file value > > > > /etc/keystone/keystone.conf sql connection mysql://keystone:PASSWD@ > > > > controller/keystone > > > > and I got this logs errors : > > > > 2015-05-06 10:17:28.352 24135 ERROR > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > 0.0.0.0:35357 > > > > 2015-05-06 10:17:28.353 24135 ERROR root [-] Failed to start the admin > > > server > > > > 2015-05-06 10:17:29.221 24145 ERROR > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > 0.0.0.0:35357 > > > > 2015-05-06 10:17:29.221 24145 ERROR root [-] Failed to start the admin > > > server > > > > 2015-05-06 10:17:30.059 24151 ERROR > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > 0.0.0.0:35357 > > > > 2015-05-06 10:17:30.060 24151 ERROR root [-] Failed to start the admin > > > server > > > > 2015-05-06 10:17:30.895 24158 ERROR > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > 0.0.0.0:35357 > > > > 2015-05-06 10:17:30.896 24158 ERROR root [-] Failed to start the admin > > > server > > > > 2015-05-06 10:17:31.727 24167 ERROR > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > 0.0.0.0:35357 > > > > 2015-05-06 10:17:31.728 24167 ERROR root [-] Failed to start the admin > > > server > > > Hi, > > > It is possible that keystone is running as a wsgi process under apache. > > That > > would explain why it cannot bind to port 35357. > > > Can you check if your httpd process is running? If so, please check if it > > is > > bound to port 35357 (lsof -i -n -P|grep httpd). > > > Regards, > > > Javier > > > > _______________________________________________ > > > > Rdo-list mailing list > > > > Rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com From ichi.sara at gmail.com Wed May 6 09:37:37 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Wed, 6 May 2015 11:37:37 +0200 Subject: [Rdo-list] [keystone] fails to start In-Reply-To: <180644532.14634804.1430904439500.JavaMail.zimbra@redhat.com> References: <2039497067.14629442.1430903690631.JavaMail.zimbra@redhat.com> <180644532.14634804.1430904439500.JavaMail.zimbra@redhat.com> Message-ID: Actually I don't know much. Until yesterday keystone was just fine. and as I had some issues with heat I tried to install os-collect-config.( I don't know if this info is relevant). but it didn't work as I had some errors pumping out (*Command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-7ApEOu/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-1psZ04-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip-build-7ApEOu/lxml )* after that i couldn't connect to the dashboard nor any other service. 
I had the errors *Traceback (most recent call last): File "/usr/bin/heat", line 7, in from heatclient.shell import main File "/usr/lib/python2.7/site-packages/heatclient/shell.py", line 30, in from keystoneclient import discover File "/usr/lib/python2.7/site-packages/keystoneclient/discover.py", line 20, in from keystoneclient import session as client_session File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 22, in from oslo_serialization import jsonutilsImportError: No module named oslo_serialization* so I reinstalled oslo.serialization and oslo.concurrency. Here I was able to get my dashboard UI in in the browser (which i couldn't do earlier) but i can't log in. 2015-05-06 11:27 GMT+02:00 Javier Pena : > ----- Original Message ----- > > > I'm afaraid it is not, here is the output of lsof -i -n -P|grep httpd > > > httpd 1855 root 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > httpd 1855 root 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > httpd 1855 root 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > Actually, this is what I meant. Keystone is running as a WSGI process > inside Apache, so this is why you see httpd listening on ports 5000 and > 35357. > > The next question would be why you are having trouble connecting to the > dashboard. Could you provide some more details on that issue? > > Regards, > Javier > > > httpd 3969 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > httpd 3969 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > httpd 3969 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > httpd 3970 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > httpd 3970 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > httpd 3970 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > httpd 3971 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > httpd 3971 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > httpd 3971 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > httpd 3972 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > httpd 3972 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > httpd 3972 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > httpd 3973 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > httpd 3973 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > httpd 3973 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > httpd 3974 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > httpd 3974 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > httpd 3974 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > httpd 3975 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > httpd 3975 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > httpd 3975 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > httpd 3976 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > httpd 3976 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > httpd 3976 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > 2015-05-06 11:14 GMT+02:00 Javier Pena < javier.pena at redhat.com > : > > > > ----- Original Message ----- > > > > > > > hi, > > > > > > > I have some issues with keystone since this morning, I couldn't > connect > > > > to > > > > > > the dashboard so I checked my keystone service and found out that it > was > > > > not > > > > > > running. I tried to restart it but no luck. > > > > > > > I did some changes in the keystone.conf to make this work but no > chance. 
> > > > The > > > > > > changes I did are: > > > > > > > cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/ > > > > > > chown keystone:keystone /etc/keystone/* > > > > > > > updated keystone.conf with the new config_file value > > > > > > > /etc/keystone/keystone.conf sql connection mysql://keystone:PASSWD@ > > > > > > controller/keystone > > > > > > > and I got this logs errors : > > > > > > > 2015-05-06 10:17:28.352 24135 ERROR > > > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > > > 0.0.0.0:35357 > > > > > > 2015-05-06 10:17:28.353 24135 ERROR root [-] Failed to start the > admin > > > > server > > > > > > 2015-05-06 10:17:29.221 24145 ERROR > > > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > > > 0.0.0.0:35357 > > > > > > 2015-05-06 10:17:29.221 24145 ERROR root [-] Failed to start the > admin > > > > server > > > > > > 2015-05-06 10:17:30.059 24151 ERROR > > > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > > > 0.0.0.0:35357 > > > > > > 2015-05-06 10:17:30.060 24151 ERROR root [-] Failed to start the > admin > > > > server > > > > > > 2015-05-06 10:17:30.895 24158 ERROR > > > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > > > 0.0.0.0:35357 > > > > > > 2015-05-06 10:17:30.896 24158 ERROR root [-] Failed to start the > admin > > > > server > > > > > > 2015-05-06 10:17:31.727 24167 ERROR > > > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > > > 0.0.0.0:35357 > > > > > > 2015-05-06 10:17:31.728 24167 ERROR root [-] Failed to start the > admin > > > > server > > > > > > Hi, > > > > > > It is possible that keystone is running as a wsgi process under apache. > > > That > > > would explain why it cannot bind to port 35357. > > > > > > Can you check if your httpd process is running? If so, please check if > it > > > is > > > bound to port 35357 (lsof -i -n -P|grep httpd). > > > > > > Regards, > > > > > Javier > > > > > > > _______________________________________________ > > > > > > Rdo-list mailing list > > > > > > Rdo-list at redhat.com > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier.pena at redhat.com Wed May 6 10:25:45 2015 From: javier.pena at redhat.com (Javier Pena) Date: Wed, 6 May 2015 06:25:45 -0400 (EDT) Subject: [Rdo-list] [keystone] fails to start In-Reply-To: References: <2039497067.14629442.1430903690631.JavaMail.zimbra@redhat.com> <180644532.14634804.1430904439500.JavaMail.zimbra@redhat.com> Message-ID: <1173047412.14654404.1430907945985.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Actually I don't know much. Until yesterday keystone was just fine. and as I > had some issues with heat I tried to install os-collect-config.( I don't > know if this info is relevant). 
but it didn't work as I had some errors > pumping out ( Command /usr/bin/python -c "import setuptools, > tokenize;__file__='/tmp/pip-build-7ApEOu/lxml/setup.py';exec(compile(getattr(tokenize, > 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" > install --record /tmp/pip-1psZ04-record/install-record.txt > --single-version-externally-managed --compile failed with error code 1 in > /tmp/pip-build-7ApEOu/lxml ) > after that i couldn't connect to the dashboard nor any other service. I had > the errors > Traceback (most recent call last): > File "/usr/bin/heat", line 7, in > from heatclient.shell import main > File "/usr/lib/python2.7/site-packages/heatclient/shell.py", line 30, in > > from keystoneclient import discover > File "/usr/lib/python2.7/site-packages/keystoneclient/discover.py", line 20, > in > from keystoneclient import session as client_session > File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 22, > in > from oslo_serialization import jsonutils > ImportError: No module named oslo_serialization > so I reinstalled oslo.serialization and oslo.concurrency. Here I was able to > get my dashboard UI in in the browser (which i couldn't do earlier) but i > can't log in. Did you try to install os-collect-config using pip? If you did that, pip might have tried to update several Python libraries used by OpenStack components, and that usually creates a mess in your system. It is possible to recover from it by manually uninstalling and reinstalling the modules updated by pip, but it can be complicated. If it is a test system I would suggest reinstalling. Regards, Javier > 2015-05-06 11:27 GMT+02:00 Javier Pena < javier.pena at redhat.com > : > > ----- Original Message ----- > > > > I'm afaraid it is not, here is the output of lsof -i -n -P|grep httpd > > > > httpd 1855 root 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > httpd 1855 root 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > httpd 1855 root 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > Actually, this is what I meant. Keystone is running as a WSGI process > > inside > > Apache, so this is why you see httpd listening on ports 5000 and 35357. > > > The next question would be why you are having trouble connecting to the > > dashboard. Could you provide some more details on that issue? 
> > > Regards, > > > Javier > > > > httpd 3969 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > httpd 3969 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > httpd 3969 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > httpd 3970 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > httpd 3970 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > httpd 3970 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > httpd 3971 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > httpd 3971 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > httpd 3971 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > httpd 3972 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > httpd 3972 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > httpd 3972 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > httpd 3973 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > httpd 3973 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > httpd 3973 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > httpd 3974 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > httpd 3974 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > httpd 3974 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > httpd 3975 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > httpd 3975 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > httpd 3975 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > httpd 3976 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > httpd 3976 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > httpd 3976 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > 2015-05-06 11:14 GMT+02:00 Javier Pena < javier.pena at redhat.com > : > > > > > ----- Original Message ----- > > > > > > > > > > hi, > > > > > > > > > > I have some issues with keystone since this morning, I couldn't > > > > > connect > > > > > > to > > > > > > > > > > the dashboard so I checked my keystone service and found out that it > > > > > was > > > > > > not > > > > > > > > > > running. I tried to restart it but no luck. > > > > > > > > > > I did some changes in the keystone.conf to make this work but no > > > > > chance. 
> > > > > > The > > > > > > > > > > changes I did are: > > > > > > > > > > cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/ > > > > > > > > > > chown keystone:keystone /etc/keystone/* > > > > > > > > > > updated keystone.conf with the new config_file value > > > > > > > > > > /etc/keystone/keystone.conf sql connection mysql://keystone:PASSWD@ > > > > > > > > > > controller/keystone > > > > > > > > > > and I got this logs errors : > > > > > > > > > > 2015-05-06 10:17:28.352 24135 ERROR > > > > > > > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > > > > > > > 0.0.0.0:35357 > > > > > > > > > > 2015-05-06 10:17:28.353 24135 ERROR root [-] Failed to start the > > > > > admin > > > > > > server > > > > > > > > > > 2015-05-06 10:17:29.221 24145 ERROR > > > > > > > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > > > > > > > 0.0.0.0:35357 > > > > > > > > > > 2015-05-06 10:17:29.221 24145 ERROR root [-] Failed to start the > > > > > admin > > > > > > server > > > > > > > > > > 2015-05-06 10:17:30.059 24151 ERROR > > > > > > > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > > > > > > > 0.0.0.0:35357 > > > > > > > > > > 2015-05-06 10:17:30.060 24151 ERROR root [-] Failed to start the > > > > > admin > > > > > > server > > > > > > > > > > 2015-05-06 10:17:30.895 24158 ERROR > > > > > > > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > > > > > > > 0.0.0.0:35357 > > > > > > > > > > 2015-05-06 10:17:30.896 24158 ERROR root [-] Failed to start the > > > > > admin > > > > > > server > > > > > > > > > > 2015-05-06 10:17:31.727 24167 ERROR > > > > > > > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > > > > > > > 0.0.0.0:35357 > > > > > > > > > > 2015-05-06 10:17:31.728 24167 ERROR root [-] Failed to start the > > > > > admin > > > > > > server > > > > > > > > > Hi, > > > > > > > > > It is possible that keystone is running as a wsgi process under apache. > > > > > That > > > > > would explain why it cannot bind to port 35357. > > > > > > > > > Can you check if your httpd process is running? If so, please check if > > > > it > > > > > is > > > > > bound to port 35357 (lsof -i -n -P|grep httpd). 
> > > > > > > > > Regards, > > > > > > > > > Javier > > > > > > > > > > _______________________________________________ > > > > > > > > > > Rdo-list mailing list > > > > > > > > > > Rdo-list at redhat.com > > > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > _______________________________________________ > > > > Rdo-list mailing list > > > > Rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com From apevec at gmail.com Wed May 6 10:58:11 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 6 May 2015 12:58:11 +0200 Subject: [Rdo-list] [Rdo-newsletter] RDO Community Newsletter - April 2015 In-Reply-To: <5548AC53.4030106@berendt.io> References: <5543BBC2.7060104@redhat.com> <5548AC53.4030106@berendt.io> Message-ID: 2015-05-05 13:41 GMT+02:00 Christian Berendt : > On https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test: > # you are ready , its the time to configure tempest.conf > > How do I have to configure tempest and how do I have to call tempest to > verify that my test environment is working like expected? At the moment I > cannot find further documentation about this step. > > Is it sufficient to run manual tests? E.g. at the moment I installed > "All-in-One, Glance=localfs, Cinder=lvm". Is it sufficient to > upload/download images and to create/delete volumes to verify that the > environment is working like expected? I'm in the same boat, so I'd like to ask QE/Tempest experts to provide more information on the wiki. Here are my observation after quickly poking at it: - setting up venv is not needed with openstack-tempest RPM - RPM includes config_tempest script to generate tempest.conf by examining your cloud's service catalog. I managed to invoke it as a non-root user: cd /usr/share/openstack-tempest-kilo source keystonerc_admin # from packstack --allinone tools/config_tempest.py --debug --out /tmp/tempest.conf identity.uri $OS_AUTH_URL identity.admin_password $OS_PASSWORD --create ... and then it fails with Permission denied: 'etc/cirros-0.3.1-x86_64-disk.img' Looks like it wants root but I don't like that. Cheers, Alan From apevec at gmail.com Wed May 6 11:06:23 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 6 May 2015 13:06:23 +0200 Subject: [Rdo-list] [keystone] fails to start In-Reply-To: <1173047412.14654404.1430907945985.JavaMail.zimbra@redhat.com> References: <2039497067.14629442.1430903690631.JavaMail.zimbra@redhat.com> <180644532.14634804.1430904439500.JavaMail.zimbra@redhat.com> <1173047412.14654404.1430907945985.JavaMail.zimbra@redhat.com> Message-ID: > Did you try to install os-collect-config using pip? If you did that, pip might have tried to update several Python libraries used by OpenStack components, and that usually creates a mess in your system. > > It is possible to recover from it by manually uninstalling and reinstalling the modules updated by pip, but it can be complicated. If it is a test system I would suggest reinstalling. Oh yes, please do not ever run pip system-wide on RPM based installation, two don't mix at all! If using stuff from pip, install in venv. But in this case, we do have os-* packages in RDO repo, so just yum install them. 
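For example, a minimal sketch of the venv approach (the package name and venv
path here are assumptions, adjust to taste):

    yum install -y python-virtualenv   # provides the virtualenv tool on CentOS/EPEL
    virtualenv ~/oscc-venv             # hypothetical location for the venv
    source ~/oscc-venv/bin/activate
    pip install os-collect-config      # lands inside the venv, not in system site-packages
    deactivate                         # the RPM-managed Python is left untouched

This way pip never touches the RPM-owned modules under
/usr/lib/python2.7/site-packages.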
Cheers,
Alan

From apevec at gmail.com Wed May 6 11:08:51 2015
From: apevec at gmail.com (Alan Pevec)
Date: Wed, 6 May 2015 13:08:51 +0200
Subject: [Rdo-list] [Rdo-newsletter] RDO Community Newsletter - April 2015
In-Reply-To: 
References: <5543BBC2.7060104@redhat.com> <5548AC53.4030106@berendt.io>
Message-ID: 

> as a non-root user:
>
> cd /usr/share/openstack-tempest-kilo
> source keystonerc_admin # from packstack --allinone
> tools/config_tempest.py --debug --out /tmp/tempest.conf identity.uri
> $OS_AUTH_URL identity.admin_password $OS_PASSWORD --create

UPDATE, this worked:
tools/config_tempest.py --out /tmp/tempest.conf --create identity.uri
$OS_AUTH_URL identity.admin_password $OS_PASSWORD scenario.img_dir /tmp

From ichi.sara at gmail.com Wed May 6 11:18:31 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Wed, 6 May 2015 13:18:31 +0200
Subject: [Rdo-list] [keystone] fails to start
In-Reply-To: <1173047412.14654404.1430907945985.JavaMail.zimbra@redhat.com>
References: <2039497067.14629442.1430903690631.JavaMail.zimbra@redhat.com> <180644532.14634804.1430904439500.JavaMail.zimbra@redhat.com> <1173047412.14654404.1430907945985.JavaMail.zimbra@redhat.com>
Message-ID: 

Yes, unfortunately I did it with pip. OK, I'll start from scratch. Are there
any methods to clean OpenStack off my CentOS?

2015-05-06 12:25 GMT+02:00 Javier Pena :
> ----- Original Message -----
> > Actually I don't know much. Until yesterday keystone was just fine, and as I
> > had some issues with heat I tried to install os-collect-config (I don't
> > know if this info is relevant). But it didn't work, as I had some errors
> > pumping out (Command /usr/bin/python -c "import setuptools,
> > tokenize;__file__='/tmp/pip-build-7ApEOu/lxml/setup.py';exec(compile(getattr(tokenize,
> > 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))"
> > install --record /tmp/pip-1psZ04-record/install-record.txt
> > --single-version-externally-managed --compile failed with error code 1 in
> > /tmp/pip-build-7ApEOu/lxml)
> >
> > After that I couldn't connect to the dashboard nor any other service. I had
> > the errors:
> >
> > Traceback (most recent call last):
> >   File "/usr/bin/heat", line 7, in <module>
> >     from heatclient.shell import main
> >   File "/usr/lib/python2.7/site-packages/heatclient/shell.py", line 30, in <module>
> >     from keystoneclient import discover
> >   File "/usr/lib/python2.7/site-packages/keystoneclient/discover.py", line 20, in <module>
> >     from keystoneclient import session as client_session
> >   File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 22, in <module>
> >     from oslo_serialization import jsonutils
> > ImportError: No module named oslo_serialization
> >
> > So I reinstalled oslo.serialization and oslo.concurrency. Here I was able to
> > get my dashboard UI in the browser (which I couldn't do earlier) but I
> > can't log in.
>
> Did you try to install os-collect-config using pip? If you did that, pip
> might have tried to update several Python libraries used by OpenStack
> components, and that usually creates a mess in your system.
>
> It is possible to recover from it by manually uninstalling and
> reinstalling the modules updated by pip, but it can be complicated. If it
> is a test system I would suggest reinstalling.
> > Regards, > Javier > > > > 2015-05-06 11:27 GMT+02:00 Javier Pena < javier.pena at redhat.com > : > > > > ----- Original Message ----- > > > > > > > I'm afaraid it is not, here is the output of lsof -i -n -P|grep httpd > > > > > > > httpd 1855 root 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > > > httpd 1855 root 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > > > httpd 1855 root 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > > > Actually, this is what I meant. Keystone is running as a WSGI process > > > inside > > > Apache, so this is why you see httpd listening on ports 5000 and 35357. > > > > > > The next question would be why you are having trouble connecting to the > > > dashboard. Could you provide some more details on that issue? > > > > > > Regards, > > > > > Javier > > > > > > > httpd 3969 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > > > httpd 3969 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > > > httpd 3969 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > > > httpd 3970 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > > > httpd 3970 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > > > httpd 3970 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > > > httpd 3971 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > > > httpd 3971 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > > > httpd 3971 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > > > httpd 3972 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > > > httpd 3972 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > > > httpd 3972 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > > > httpd 3973 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > > > httpd 3973 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > > > httpd 3973 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > > > httpd 3974 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > > > httpd 3974 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > > > httpd 3974 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > > > httpd 3975 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > > > httpd 3975 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > > > httpd 3975 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > > > httpd 3976 apache 4u IPv6 35701 0t0 TCP *:35357 (LISTEN) > > > > > > httpd 3976 apache 6u IPv6 35705 0t0 TCP *:5000 (LISTEN) > > > > > > httpd 3976 apache 8u IPv6 35709 0t0 TCP *:80 (LISTEN) > > > > > > > 2015-05-06 11:14 GMT+02:00 Javier Pena < javier.pena at redhat.com > : > > > > > > > > ----- Original Message ----- > > > > > > > > > > > > > > > hi, > > > > > > > > > > > > > > > I have some issues with keystone since this morning, I couldn't > > > > > > connect > > > > > > > > to > > > > > > > > > > > > > > the dashboard so I checked my keystone service and found out > that it > > > > > > was > > > > > > > > not > > > > > > > > > > > > > > running. I tried to restart it but no luck. > > > > > > > > > > > > > > > I did some changes in the keystone.conf to make this work but no > > > > > > chance. 
> > > > > > > > The > > > > > > > > > > > > > > changes I did are: > > > > > > > > > > > > > > > cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/ > > > > > > > > > > > > > > chown keystone:keystone /etc/keystone/* > > > > > > > > > > > > > > > updated keystone.conf with the new config_file value > > > > > > > > > > > > > > > /etc/keystone/keystone.conf sql connection > mysql://keystone:PASSWD@ > > > > > > > > > > > > > > controller/keystone > > > > > > > > > > > > > > > and I got this logs errors : > > > > > > > > > > > > > > > 2015-05-06 10:17:28.352 24135 ERROR > > > > > > > > > > > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > > > > > > > > > > > 0.0.0.0:35357 > > > > > > > > > > > > > > 2015-05-06 10:17:28.353 24135 ERROR root [-] Failed to start the > > > > > > admin > > > > > > > > server > > > > > > > > > > > > > > 2015-05-06 10:17:29.221 24145 ERROR > > > > > > > > > > > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > > > > > > > > > > > 0.0.0.0:35357 > > > > > > > > > > > > > > 2015-05-06 10:17:29.221 24145 ERROR root [-] Failed to start the > > > > > > admin > > > > > > > > server > > > > > > > > > > > > > > 2015-05-06 10:17:30.059 24151 ERROR > > > > > > > > > > > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > > > > > > > > > > > 0.0.0.0:35357 > > > > > > > > > > > > > > 2015-05-06 10:17:30.060 24151 ERROR root [-] Failed to start the > > > > > > admin > > > > > > > > server > > > > > > > > > > > > > > 2015-05-06 10:17:30.895 24158 ERROR > > > > > > > > > > > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > > > > > > > > > > > 0.0.0.0:35357 > > > > > > > > > > > > > > 2015-05-06 10:17:30.896 24158 ERROR root [-] Failed to start the > > > > > > admin > > > > > > > > server > > > > > > > > > > > > > > 2015-05-06 10:17:31.727 24167 ERROR > > > > > > > > > > > > > > keystone.common.environment.eventlet_server [-] Could not bind to > > > > > > > > > > > > > > 0.0.0.0:35357 > > > > > > > > > > > > > > 2015-05-06 10:17:31.728 24167 ERROR root [-] Failed to start the > > > > > > admin > > > > > > > > server > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > It is possible that keystone is running as a wsgi process under > apache. > > > > > > > That > > > > > > > would explain why it cannot bind to port 35357. > > > > > > > > > > > > > > Can you check if your httpd process is running? If so, please > check if > > > > > it > > > > > > > is > > > > > > > bound to port 35357 (lsof -i -n -P|grep httpd). 
> > > > > > > > > > > > > > Regards, > > > > > > > > > > > > > Javier > > > > > > > > > > > > > > > _______________________________________________ > > > > > > > > > > > > > > Rdo-list mailing list > > > > > > > > > > > > > > Rdo-list at redhat.com > > > > > > > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > _______________________________________________ > > > > > > Rdo-list mailing list > > > > > > Rdo-list at redhat.com > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Wed May 6 11:24:56 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 6 May 2015 13:24:56 +0200 Subject: [Rdo-list] [Rdo-newsletter] RDO Community Newsletter - April 2015 In-Reply-To: References: <5543BBC2.7060104@redhat.com> <5548AC53.4030106@berendt.io> Message-ID: >> as a non-root user: >> >> cd /usr/share/openstack-tempest-kilo >> source keystonerc_admin # from packstack --allinone >> tools/config_tempest.py --debug --out /tmp/tempest.conf identity.uri >> $OS_AUTH_URL identity.admin_password $OS_PASSWORD --create > > UPDATE, this worked: > tools/config_tempest.py --out /tmp/tempest.conf --create identity.uri > $OS_AUTH_URL identity.admin_password $OS_PASSWORD scenario.img_dir > /tmp Tempest still wanted to write files all around its tree, I had to: cd ~ # or some other writable location cp -a /usr/share/openstack-tempest-kilo . cd openstack-tempest-kilo ./run_tempest.sh -N -C /tmp/tempest.conf --smoke This is obviously suboptimal and we need to figure out how to run from packaged files without copying them. Cheers, Alan From christian at berendt.io Wed May 6 11:09:09 2015 From: christian at berendt.io (Christian Berendt) Date: Wed, 06 May 2015 13:09:09 +0200 Subject: [Rdo-list] [Rdo-newsletter] RDO Community Newsletter - April 2015 In-Reply-To: References: <5543BBC2.7060104@redhat.com> <5548AC53.4030106@berendt.io> Message-ID: <5549F655.1070202@berendt.io> On 05/06/2015 12:58 PM, Alan Pevec wrote: > ... and then it fails with Permission denied: > 'etc/cirros-0.3.1-x86_64-disk.img' > Looks like it wants root but I don't like that. Yes. The script tries to download the Cirros image to /usr/share/openstack-tempest-kilo/etc/cirros-0.3.1-x86_64-disk.img. Running the script as privileged user works for me, I now have a prepared tempest.conf file that can be used with /usr/share/openstack-tempest-kilo/run_tempest.sh. I placed the tempest.conf file in the directory /usr/share/openstack-tempest-kilo/etc/. Christian. 
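Putting the pieces from this thread together, a non-root flow would look
roughly like this (a sketch assembled from the commands above; the target
directory is an arbitrary choice):

    # copy the packaged tree somewhere writable, as Alan did
    cp -a /usr/share/openstack-tempest-kilo ~/tempest-kilo
    cd ~/tempest-kilo
    source keystonerc_admin            # credentials from packstack --allinone
    # generate tempest.conf against the running cloud
    tools/config_tempest.py --out etc/tempest.conf --create \
        identity.uri $OS_AUTH_URL identity.admin_password $OS_PASSWORD \
        scenario.img_dir /tmp
    # run the smoke subset with the generated config
    ./run_tempest.sh -N -C etc/tempest.conf --smoke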
-- Christian Berendt Cloud Solution Architect Mail: berendt at b1-systems.de B1 Systems GmbH Osterfeldstra?e 7 / 85088 Vohburg / http://www.b1-systems.de GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537 From dkranz at redhat.com Wed May 6 12:12:18 2015 From: dkranz at redhat.com (David Kranz) Date: Wed, 06 May 2015 08:12:18 -0400 Subject: [Rdo-list] [Rdo-newsletter] RDO Community Newsletter - April 2015 In-Reply-To: References: <5543BBC2.7060104@redhat.com> <5548AC53.4030106@berendt.io> Message-ID: <554A0522.5050502@redhat.com> On 05/06/2015 07:24 AM, Alan Pevec wrote: >>> as a non-root user: >>> >>> cd /usr/share/openstack-tempest-kilo >>> source keystonerc_admin # from packstack --allinone >>> tools/config_tempest.py --debug --out /tmp/tempest.conf identity.uri >>> $OS_AUTH_URL identity.admin_password $OS_PASSWORD --create >> UPDATE, this worked: >> tools/config_tempest.py --out /tmp/tempest.conf --create identity.uri >> $OS_AUTH_URL identity.admin_password $OS_PASSWORD scenario.img_dir >> /tmp > Tempest still wanted to write files all around its tree, I had to: > > cd ~ # or some other writable location > cp -a /usr/share/openstack-tempest-kilo . > cd openstack-tempest-kilo > ./run_tempest.sh -N -C /tmp/tempest.conf --smoke > > This is obviously suboptimal and we need to figure out how to run from > packaged files without copying them. > > > Cheers, > Alan Upstream tempest was not really constructed to serve as a shared library. It really wants its own dir to read/write and should not be run from /usr/share/openstack-tempest-kilo. The missing step here is to run /usr/share/openstack-tempest-kilo/tools/configure-tempest-directory while in some other directory. This script creates some symlinks and other files in that directory so that tempest can run without these issues. It also allows you to run tempest against multiple clouds from the same machine. Hope that helps. -David From rmeggins at redhat.com Wed May 6 13:40:03 2015 From: rmeggins at redhat.com (Rich Megginson) Date: Wed, 06 May 2015 07:40:03 -0600 Subject: [Rdo-list] [keystone] fails to start In-Reply-To: References: <20150506091433.GI30897@tesla.redhat.com> Message-ID: <554A19B3.3060606@redhat.com> On 05/06/2015 03:25 AM, ICHIBA Sara wrote: > [root at CONTROLLER ~]# netstat -lnptu | grep 35357 > tcp6 0 0 :::35357 :::* LISTEN > 1855/httpd > > the version of keystone is 1.4.0 > Are you running into this issue? https://bugzilla.redhat.com/show_bug.cgi?id=1213149 > > 2015-05-06 11:14 GMT+02:00 Kashyap Chamarthy >: > > On Wed, May 06, 2015 at 10:29:05AM +0200, ICHIBA Sara wrote: > > hi, > > > > I have some issues with keystone since this morning, I couldn't > connect to > > the dashboard so I checked my keystone service and found out > that it was > > not running. I tried to restart it but no luck. > > You don't mention exact versions of OpenStack/Keystone you're running. > > > I did some changes in the keystone.conf to make this work but no > chance. > > The changes I did are: > > > > > > *cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/ > > chown keystone:keystone /etc/keystone/*updated keystone.conf > with the > > new config_file value* > > > > > > */etc/keystone/keystone.conf sql connection > mysql://keystone:PASSWD@* > > *controller/keystone* > > > > and I got this logs errors : > > > > 2015-05-06 10:17:28.352 24135 ERROR > > keystone.common.environment.eventlet_server [-] Could not bind to > > 0.0.0.0:35357 > > Maybe that port is already busy? 
> You can quickly check if it is:
>
> $ netstat -lnptu | grep 35357
>
> --
> /kashyap
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dkranz at redhat.com Wed May 6 15:48:17 2015
From: dkranz at redhat.com (David Kranz)
Date: Wed, 06 May 2015 11:48:17 -0400
Subject: [Rdo-list] Default github branch for redhat-openstack/tempest is now master
Message-ID: <554A37C1.9070405@redhat.com>

In case anyone cares, it had been juno for some time.

-David

From Yaniv.Kaul at emc.com Wed May 6 19:10:13 2015
From: Yaniv.Kaul at emc.com (Kaul, Yaniv)
Date: Wed, 6 May 2015 15:10:13 -0400
Subject: [Rdo-list] Default github branch for redhat-openstack/tempest is now master
In-Reply-To: <554A37C1.9070405@redhat.com>
References: <554A37C1.9070405@redhat.com>
Message-ID: <648473255763364B961A02AC3BE1060D03D0DCD8BF@MX19A.corp.emc.com>

Thanks - someone does care ;-)
Is it Kilo'ed already?
Y.

> -----Original Message-----
> From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On
> Behalf Of David Kranz
> Sent: Wednesday, May 06, 2015 6:48 PM
> To: rdo-list at redhat.com
> Subject: [Rdo-list] Default github branch for redhat-openstack/tempest is now
> master
>
> In case anyone cares, it had been juno for some time.
>
> -David
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From dkranz at redhat.com Wed May 6 19:12:33 2015
From: dkranz at redhat.com (David Kranz)
Date: Wed, 06 May 2015 15:12:33 -0400
Subject: [Rdo-list] Default github branch for redhat-openstack/tempest is now master
In-Reply-To: <648473255763364B961A02AC3BE1060D03D0DCD8BF@MX19A.corp.emc.com>
References: <554A37C1.9070405@redhat.com> <648473255763364B961A02AC3BE1060D03D0DCD8BF@MX19A.corp.emc.com>
Message-ID: <554A67A1.7080600@redhat.com>

On 05/06/2015 03:10 PM, Kaul, Yaniv wrote:
> Thanks - someone does care ;-)
> Is it Kilo'ed already?
> Y.
Yes, there is a kilo branch as of today. The intent is to keep the default
as master going forward.

-David

>
>> -----Original Message-----
>> From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On
>> Behalf Of David Kranz
>> Sent: Wednesday, May 06, 2015 6:48 PM
>> To: rdo-list at redhat.com
>> Subject: [Rdo-list] Default github branch for redhat-openstack/tempest is now
>> master
>>
>> In case anyone cares, it had been juno for some time.
>>
>> -David
>>
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com

From ak at cloudssky.com Wed May 6 19:17:09 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Wed, 6 May 2015 21:17:09 +0200
Subject: [Rdo-list] Default github branch for redhat-openstack/tempest is now master
In-Reply-To: <554A67A1.7080600@redhat.com>
References: <554A37C1.9070405@redhat.com> <648473255763364B961A02AC3BE1060D03D0DCD8BF@MX19A.corp.emc.com> <554A67A1.7080600@redhat.com>
Message-ID: 

How is default defined?
Is master, the trunk from upstream?
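A quick way to see it in action (using the repo mentioned above):

    git clone https://github.com/redhat-openstack/tempest.git
    cd tempest
    git branch          # shows the branch a plain clone checks out (now master)
    git branch -r       # lists the remote branches: origin/juno, origin/kilo, ...
    git checkout juno   # anything that needs juno must now ask for it explicitly
                        # (or: git clone -b juno <url> from the start)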
On Wed, May 6, 2015 at 9:12 PM, David Kranz wrote: > On 05/06/2015 03:10 PM, Kaul, Yaniv wrote: > >> Thanks -someone does care ;-) >> Is it Kilo'ed already? >> Y. >> > Yes, there is a kilo branch as of today. The intent is to keep the default > as master going forward. > > -David > > > >> -----Original Message----- >>> From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] >>> On >>> Behalf Of David Kranz >>> Sent: Wednesday, May 06, 2015 6:48 PM >>> To: rdo-list at redhat.com >>> Subject: [Rdo-list] Default github branch for redhat-openstack/tempest >>> is now >>> master >>> >>> In case any one cares, it had been juno for some time. >>> >>> -David >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dkranz at redhat.com Wed May 6 19:21:47 2015 From: dkranz at redhat.com (David Kranz) Date: Wed, 06 May 2015 15:21:47 -0400 Subject: [Rdo-list] Default github branch for redhat-openstack/tempest is now master In-Reply-To: References: <554A37C1.9070405@redhat.com> <648473255763364B961A02AC3BE1060D03D0DCD8BF@MX19A.corp.emc.com> <554A67A1.7080600@redhat.com> Message-ID: <554A69CB.3030500@redhat.com> On 05/06/2015 03:17 PM, Arash Kaffamanesh wrote: > How ist default defined? > Is master, the trunk from upstream? No. Each repo in github has a default branch, which is what gets checked out if you do a 'git clone'. I changed the default for https://github.com/redhat-openstack/tempest to be master. I sent the email because if any one had a script or automation that checked out this repo and was using the juno branch, this change would impact them. They now have to explicitly checkout the juno branch. -David > > > On Wed, May 6, 2015 at 9:12 PM, David Kranz > wrote: > > On 05/06/2015 03:10 PM, Kaul, Yaniv wrote: > > Thanks -someone does care ;-) > Is it Kilo'ed already? > Y. > > Yes, there is a kilo branch as of today. The intent is to keep the > default as master going forward. > > -David > > > > -----Original Message----- > From: rdo-list-bounces at redhat.com > > [mailto:rdo-list-bounces at redhat.com > ] On > Behalf Of David Kranz > Sent: Wednesday, May 06, 2015 6:48 PM > To: rdo-list at redhat.com > Subject: [Rdo-list] Default github branch for > redhat-openstack/tempest is now > master > > In case any one cares, it had been juno for some time. > > -David > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From Yaniv.Kaul at emc.com Wed May 6 20:22:03 2015
From: Yaniv.Kaul at emc.com (Kaul, Yaniv)
Date: Wed, 6 May 2015 16:22:03 -0400
Subject: [Rdo-list] Default github branch for redhat-openstack/tempest is now master
In-Reply-To: <554A67A1.7080600@redhat.com>
References: <554A37C1.9070405@redhat.com> <648473255763364B961A02AC3BE1060D03D0DCD8BF@MX19A.corp.emc.com> <554A67A1.7080600@redhat.com>
Message-ID: <648473255763364B961A02AC3BE1060D03D0DCD8C6@MX19A.corp.emc.com>

Non-zero exit code (2) from test listing.
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-500} \
OS_TEST_LOCK_PATH=${OS_TEST_LOCK_PATH:-${TMPDIR:-'/tmp'}} \
${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./tempest/test_discover} --list
--- import errors ---
tempest.api.baremetal.admin.test_api_discovery
tempest.api.baremetal.admin.test_chassis
tempest.api.baremetal.admin.test_drivers
tempest.api.baremetal.admin.test_nodes
tempest.api.baremetal.admin.test_nodestates
tempest.api.baremetal.admin.test_ports
tempest.api.baremetal.admin.test_ports_negative
tempest.api.compute.admin.test_agents
...

On CentOS 7.1.
Y.

> -----Original Message-----
> From: David Kranz [mailto:dkranz at redhat.com]
> Sent: Wednesday, May 06, 2015 10:13 PM
> To: Kaul, Yaniv; rdo-list at redhat.com
> Subject: Re: [Rdo-list] Default github branch for redhat-openstack/tempest is
> now master
>
> On 05/06/2015 03:10 PM, Kaul, Yaniv wrote:
> > Thanks - someone does care ;-)
> > Is it Kilo'ed already?
> > Y.
> Yes, there is a kilo branch as of today. The intent is to keep the default as
> master going forward.
>
> -David
>
> >> -----Original Message-----
> >> From: rdo-list-bounces at redhat.com
> >> [mailto:rdo-list-bounces at redhat.com] On Behalf Of David Kranz
> >> Sent: Wednesday, May 06, 2015 6:48 PM
> >> To: rdo-list at redhat.com
> >> Subject: [Rdo-list] Default github branch for
> >> redhat-openstack/tempest is now master
> >>
> >> In case anyone cares, it had been juno for some time.
> >>
> >> -David
> >>
> >> _______________________________________________
> >> Rdo-list mailing list
> >> Rdo-list at redhat.com
> >> https://www.redhat.com/mailman/listinfo/rdo-list
> >>
> >> To unsubscribe: rdo-list-unsubscribe at redhat.com

From whayutin at redhat.com Wed May 6 20:25:08 2015
From: whayutin at redhat.com (whayutin)
Date: Wed, 06 May 2015 16:25:08 -0400
Subject: [Rdo-list] [CI] public jenkins down
Message-ID: <1430943908.2545.41.camel@redhat.com>

FYI:
The Jenkins server prod-rdojenkins.rhcloud.com is currently down.
I'm backing up the config, debugging, and contacting OpenShift support
to assist in restarting the instance.

Sorry for the downtime.
Thank you.

From whayutin at redhat.com Wed May 6 21:04:40 2015
From: whayutin at redhat.com (whayutin)
Date: Wed, 06 May 2015 17:04:40 -0400
Subject: [Rdo-list] [CI] public jenkins down
In-Reply-To: <1430943908.2545.41.camel@redhat.com>
References: <1430943908.2545.41.camel@redhat.com>
Message-ID: <1430946280.2545.43.camel@redhat.com>

On Wed, 2015-05-06 at 16:25 -0400, whayutin wrote:
> FYI:
> The Jenkins server prod-rdojenkins.rhcloud.com is currently down.
> I'm backing up the config, debugging, and contacting OpenShift support
> to assist in restarting the instance.
>
> Sorry for the downtime.
> Thank you.
OK, the Jenkins server is back up. I've also increased the heap size and
installed a monitoring plugin. Sorry for the inconvenience.
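For the curious: on a plain RPM-based Jenkins install, the heap is usually
raised through /etc/sysconfig/jenkins (a sketch; the OpenShift gear hosting
this instance may handle it differently, and the sizes below are illustrative,
not our actual settings):

    # /etc/sysconfig/jenkins
    JENKINS_JAVA_OPTIONS="-Xms512m -Xmx1024m"
    # then restart the service: systemctl restart jenkins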
> > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From apevec at gmail.com Wed May 6 21:31:59 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 6 May 2015 23:31:59 +0200 Subject: [Rdo-list] Default github branch for redhat-openstack/tempest is now master In-Reply-To: <648473255763364B961A02AC3BE1060D03D0DCD8C6@MX19A.corp.emc.com> References: <554A37C1.9070405@redhat.com> <648473255763364B961A02AC3BE1060D03D0DCD8BF@MX19A.corp.emc.com> <554A67A1.7080600@redhat.com> <648473255763364B961A02AC3BE1060D03D0DCD8C6@MX19A.corp.emc.com> Message-ID: > --- import errors --- openstack-tempest RPM should install all deps, what do full backtraces with import errors says? Cheers, Alan From Yaniv.Kaul at emc.com Wed May 6 21:38:03 2015 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Wed, 6 May 2015 17:38:03 -0400 Subject: [Rdo-list] Default github branch for redhat-openstack/tempest is now master In-Reply-To: References: <554A37C1.9070405@redhat.com> <648473255763364B961A02AC3BE1060D03D0DCD8BF@MX19A.corp.emc.com> <554A67A1.7080600@redhat.com> <648473255763364B961A02AC3BE1060D03D0DCD8C6@MX19A.corp.emc.com> Message-ID: <648473255763364B961A02AC3BE1060D03D0DCD8CD@MX19A.corp.emc.com> There's an RPM for OpenStack Tempest? And I've been pulling from https://github.com/redhat-openstack/tempest all that time? Where? Where? I've just discovered I see them for previous releases: [root at lgdrm1457 ~]# sud yum search tempest -bash: sud: command not found [root at lgdrm1457 ~]# sudo yum search tempest Loaded plugins: fastestmirror, rhnplugin This system is receiving updates from RHN Classic or Red Hat Satellite. Loading mirror speeds from cached hostfile * base: mirror.team-cymru.org * epel: mirror.sfo12.us.leaseweb.net * extras: ftpmirror.your.org * updates: mirrors.chkhosting.com ================================================================ N/S matched: tempest ================================================================ openstack-tempest-icehouse.noarch : OpenStack Integration Test Suite (Tempest) openstack-tempest-juno.noarch : OpenStack Integration Test Suite (Tempest) [root at lgdrm1457 ~]# cat /etc/yum.repos.d/delorean-kilo.repo /etc/yum.repos.d/rdo-release.repo [delorean-kilo] name=delorean-kilo-openstack-glance-93b0d5fce3a41e4a3a549f98f78b6681cbc3ea95 baseurl=https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc2 enabled=1 gpgcheck=0 priority=1 [openstack-juno] name=OpenStack Juno Repository baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/ enabled=1 skip_if_unavailable=0 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Juno [openstack-kilo] name=Temporary OpenStack Kilo new deps baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-kilo/epel-7/ skip_if_unavailable=0 gpgcheck=0 enabled=1 Do they contain the tempest-config utility? Y. > -----Original Message----- > From: Alan Pevec [mailto:apevec at gmail.com] > Sent: Thursday, May 07, 2015 12:32 AM > To: Kaul, Yaniv > Cc: David Kranz; rdo-list at redhat.com > Subject: Re: [Rdo-list] Default github branch for redhat-openstack/tempest is > now master > > > --- import errors --- > > openstack-tempest RPM should install all deps, what do full backtraces with > import errors says? 
> > Cheers,
> Alan

From apevec at gmail.com Wed May 6 21:47:39 2015
From: apevec at gmail.com (Alan Pevec)
Date: Wed, 6 May 2015 23:47:39 +0200
Subject: [Rdo-list] Default github branch for redhat-openstack/tempest is now master
In-Reply-To: <648473255763364B961A02AC3BE1060D03D0DCD8CD@MX19A.corp.emc.com>
References: <554A37C1.9070405@redhat.com> <648473255763364B961A02AC3BE1060D03D0DCD8BF@MX19A.corp.emc.com> <554A67A1.7080600@redhat.com> <648473255763364B961A02AC3BE1060D03D0DCD8C6@MX19A.corp.emc.com> <648473255763364B961A02AC3BE1060D03D0DCD8CD@MX19A.corp.emc.com>
Message-ID: 

2015-05-06 23:38 GMT+02:00 Kaul, Yaniv :
> There's an RPM for OpenStack Tempest? And I've been pulling from https://github.com/redhat-openstack/tempest all that time?
> Where? Where?

Remove the Kilo RC2 repos (delorean-kilo.repo and rdo-release-kilo.rpm) and
install the kilo testing repo:
https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test

> Do they contain the tempest-config utility?

Yes, mentioned in the other thread:
https://www.redhat.com/archives/rdo-list/2015-May/msg00062.html
and David has now added a quick howto which will be included in the next
openstack-tempest build:
https://github.com/redhat-openstack/tempest/blob/kilo/README.rpm

Cheers,
Alan

From hguemar at fedoraproject.org Wed May 6 22:33:45 2015
From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=)
Date: Thu, 7 May 2015 00:33:45 +0200
Subject: [Rdo-list] packaging lifecycles
In-Reply-To: 
References: 
Message-ID: 

We're following the upstream release cycle.

From hguemar at fedoraproject.org Wed May 6 22:39:34 2015
From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=)
Date: Thu, 7 May 2015 00:39:34 +0200
Subject: [Rdo-list] packaging lifecycles
In-Reply-To: 
References: 
Message-ID: 

Sorry, something fell on my keyboard and sent the message while I was typing it.

As stated, we follow the upstream release cycle:
https://wiki.openstack.org/wiki/Releases

And if you want to help maintain RDO, you're more than welcome:
https://www.rdoproject.org/packaging/rdo-packaging.html

Our long-term goal is to fully open up the RDO contribution process, and most
of it is done. RDO Juno already includes features contributed by
non-redhatters. We hold a packaging meeting every Wednesday (notification sent
on the list 2 days prior), feel free to join us.

Regards,
H.

From ichi.sara at gmail.com Thu May 7 07:58:48 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Thu, 7 May 2015 09:58:48 +0200
Subject: [Rdo-list] [heat]: stack stays interminably under the status create in progress
In-Reply-To: 
References: 
Message-ID: 

The heat event-list command is giving me nothing; the output is an empty table.

2015-05-06 7:22 GMT+02:00 Steven Dake (stdake) :
> Heat event-list will give you more detail on what is happening.
>
> From: Kevin Tibi
> Date: Tuesday, May 5, 2015 at 3:16 AM
> To: ICHIBA Sara
> Cc: "rdo-list at redhat.com"
> Subject: Re: [Rdo-list] [heat]: stack stays interminably under the status
> create in progress
>
> Hi,
>
> Can you copy your hot file and heat logs?
>
> Kevin Tibi
> On 5 May 2015 at 10:48, "ICHIBA Sara" wrote:
>
>> Hello there,
>>
>> I started a project where I need to deploy stacks and orchestrate them
>> using heat (autoscaling and so on...). I just started playing with heat and
>> the creation of my first stack is never complete. It stays in the status
>> create in progress. My log files don't say much. For my template I'm using
>> a very simple one to launch a small instance.
>>
>> Any ideas what that might be?
>>
>> In advance, thank you for your response.
>> Sara >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcoufal at redhat.com Thu May 7 09:13:51 2015 From: jcoufal at redhat.com (Jaromir Coufal) Date: Thu, 07 May 2015 11:13:51 +0200 Subject: [Rdo-list] Moving Docs builds of RDO-Manager Message-ID: <554B2CCF.2070705@redhat.com> Hi Ben, I wanted to sync with you so we can coordinate moving the documentation from: https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/* to: https://repos.fedorapeople.org/repos/openstack-m/docs/* with the following sub-directories: * ../docs/master * ../docs/sprint4 * ../docs/sprint5 (doesn't exist yet) etc. Since we don't have any date for the test day yet, I think we can do it this week. For the old sites, I don't think we can control redirects, so I would suggest placing a temporary index.html file there with a note that the docs have moved, directing people to the new location (example: http://paste.openstack.org/show/215975/). Thanks -- Jarda From shardy at redhat.com Thu May 7 09:45:44 2015 From: shardy at redhat.com (Steven Hardy) Date: Thu, 7 May 2015 10:45:44 +0100 Subject: [Rdo-list] [heat]: stack stays interminably under the status create in progress In-Reply-To: References: Message-ID: <20150507094544.GA31444@t430slt.redhat.com> On Tue, May 05, 2015 at 01:18:17PM +0200, ICHIBA Sara wrote: > hello, > > here is my HOT template, it's very basic: > > heat_template_version: 2013-05-23 > > description: Simple template to deploy a single compute instance > > resources: >   my_instance: >     type: OS::Nova::Server >     properties: >       image: Cirros 0.3.3 >       flavor: m1.small >       key_name: userkey >       networks: >         - network: fdf2bb77-a828-401d-969a-736a8028950f > > for the logs, please find them attached. These logs are a little confusing - it looks like you failed to create the stack due to some validation errors, then tried again and did a stack-check and a stack-resume? Can you please set debug = True in the [DEFAULT] section of your heat.conf, restart heat-engine, and try again? Also, some basic checks are: 1. When the stack is CREATE_IN_PROGRESS, what does nova list show for the instance? 2. Is it possible to boot an instance using nova boot, using the same arguments (image, flavor, key etc) that you specify in the heat template? I suspect that Heat is not actually the problem here, and that some part of Nova is either misconfigured or not running, but I can't prove that without seeing the nova CLI output and/or the nova logs. Steve From tooyama at virtualtech.jp Thu May 7 10:11:33 2015 From: tooyama at virtualtech.jp (Youhei Tooyama) Date: Thu, 07 May 2015 19:11:33 +0900 Subject: [Rdo-list] Just a little mistake on the OpenStack Dashboard Message-ID: <554B3A55.4070703@virtualtech.jp> Hi, I'm trying OpenStack Kilo using RDO packstack. It works well on my server. Thanks! But there is just a little mistake. What is this? Thank you. Youhei Tooyama -- How to setup: systemctl stop NetworkManager systemctl disable NetworkManager systemctl enable network reboot yum install http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm yum -y update yum install -y openstack-packstack python-netaddr packstack --gen-answer-file=answer.txt vi answer.txt ... 
setenforce 0 packstack --answer-file=/root/answer.txt rpm -q openstack-dashboard openstack-dashboard-2015.1.0-2.el7.noarch -------------- next part -------------- [general] # Path to a public key to install on servers. If a usable key has not # been installed on the remote servers, the user is prompted for a # password and this key is installed so the password will not be # required again. CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub # Default password to be used everywhere (overridden by passwords set # for individual services or users). CONFIG_DEFAULT_PASSWORD=password # Specify 'y' to install MariaDB. ['y', 'n'] CONFIG_MARIADB_INSTALL=y # Specify 'y' to install OpenStack Image Service (glance). ['y', 'n'] CONFIG_GLANCE_INSTALL=y # Specify 'y' to install OpenStack Block Storage (cinder). ['y', 'n'] CONFIG_CINDER_INSTALL=y # Specify 'y' to install OpenStack Shared File System (manila). ['y', # 'n'] CONFIG_MANILA_INSTALL=n # Specify 'y' to install OpenStack Compute (nova). ['y', 'n'] CONFIG_NOVA_INSTALL=y # Specify 'y' to install OpenStack Networking (neutron); otherwise, # Compute Networking (nova) will be used. ['y', 'n'] CONFIG_NEUTRON_INSTALL=y # Specify 'y' to install OpenStack Dashboard (horizon). ['y', 'n'] CONFIG_HORIZON_INSTALL=y # Specify 'y' to install OpenStack Object Storage (swift). ['y', 'n'] CONFIG_SWIFT_INSTALL=n # Specify 'y' to install OpenStack Metering (ceilometer). ['y', 'n'] CONFIG_CEILOMETER_INSTALL=n # Specify 'y' to install OpenStack Orchestration (heat). ['y', 'n'] CONFIG_HEAT_INSTALL=n # Specify 'y' to install OpenStack Data Processing (sahara). ['y', # 'n'] CONFIG_SAHARA_INSTALL=n # Specify 'y' to install OpenStack Database (trove) ['y', 'n'] CONFIG_TROVE_INSTALL=n # Specify 'y' to install OpenStack Bare Metal Provisioning (ironic). # ['y', 'n'] CONFIG_IRONIC_INSTALL=n # Specify 'y' to install the OpenStack Client packages (command-line # tools). An admin "rc" file will also be installed. ['y', 'n'] CONFIG_CLIENT_INSTALL=y # Comma-separated list of NTP servers. Leave plain if Packstack # should not install ntpd on instances. CONFIG_NTP_SERVERS=172.17.14.2 # Specify 'y' to install Nagios to monitor OpenStack hosts. Nagios # provides additional tools for monitoring the OpenStack environment. # ['y', 'n'] CONFIG_NAGIOS_INSTALL=n # Comma-separated list of servers to be excluded from the # installation. This is helpful if you are running Packstack a second # time with the same answer file and do not want Packstack to # overwrite these server's configurations. Leave empty if you do not # need to exclude any servers. EXCLUDE_SERVERS= # Specify 'y' if you want to run OpenStack services in debug mode; # otherwise, specify 'n'. ['y', 'n'] CONFIG_DEBUG_MODE=n # IP address of the server on which to install OpenStack services # specific to the controller role (for example, API servers or # dashboard). CONFIG_CONTROLLER_HOST=172.17.14.100 # List of IP addresses of the servers on which to install the Compute # service. CONFIG_COMPUTE_HOSTS=172.17.14.100 # List of IP addresses of the server on which to install the network # service such as Compute networking (nova network) or OpenStack # Networking (neutron). CONFIG_NETWORK_HOSTS=172.17.14.100 # Specify 'y' if you want to use VMware vCenter as hypervisor and # storage; otherwise, specify 'n'. ['y', 'n'] CONFIG_VMWARE_BACKEND=n # Specify 'y' if you want to use unsupported parameters. This should # be used only if you know what you are doing. Issues caused by using # unsupported options will not be fixed before the next major release. 
# ['y', 'n'] CONFIG_UNSUPPORTED=n # IP address of the VMware vCenter server. CONFIG_VCENTER_HOST= # User name for VMware vCenter server authentication. CONFIG_VCENTER_USER= # Password for VMware vCenter server authentication. CONFIG_VCENTER_PASSWORD= # Name of the VMware vCenter cluster. CONFIG_VCENTER_CLUSTER_NAME= # (Unsupported!) IP address of the server on which to install # OpenStack services specific to storage servers such as Image or # Block Storage services. CONFIG_STORAGE_HOST=172.17.14.100 # (Unsupported!) IP address of the server on which to install # OpenStack services specific to OpenStack Data Processing (sahara). CONFIG_SAHARA_HOST=172.17.14.100 # Specify 'y' to enable the EPEL repository (Extra Packages for # Enterprise Linux). ['y', 'n'] CONFIG_USE_EPEL=n # Comma-separated list of URLs for any additional yum repositories, # to use for installation. CONFIG_REPO= # To subscribe each server with Red Hat Subscription Manager, include # this with CONFIG_RH_PW. CONFIG_RH_USER= # To subscribe each server to receive updates from a Satellite # server, provide the URL of the Satellite server. You must also # provide a user name (CONFIG_SATELLITE_USERNAME) and password # (CONFIG_SATELLITE_PASSWORD) or an access key (CONFIG_SATELLITE_AKEY) # for authentication. CONFIG_SATELLITE_URL= # To subscribe each server with Red Hat Subscription Manager, include # this with CONFIG_RH_USER. CONFIG_RH_PW= # Specify 'y' to enable RHEL optional repositories. ['y', 'n'] CONFIG_RH_OPTIONAL=y # HTTP proxy to use with Red Hat Subscription Manager. CONFIG_RH_PROXY= # Port to use for Red Hat Subscription Manager's HTTP proxy. CONFIG_RH_PROXY_PORT= # User name to use for Red Hat Subscription Manager's HTTP proxy. CONFIG_RH_PROXY_USER= # Password to use for Red Hat Subscription Manager's HTTP proxy. CONFIG_RH_PROXY_PW= # User name to authenticate with the RHN Satellite server; if you # intend to use an access key for Satellite authentication, leave this # blank. CONFIG_SATELLITE_USER= # Password to authenticate with the RHN Satellite server; if you # intend to use an access key for Satellite authentication, leave this # blank. CONFIG_SATELLITE_PW= # Access key for the Satellite server; if you intend to use a user # name and password for Satellite authentication, leave this blank. CONFIG_SATELLITE_AKEY= # Certificate path or URL of the certificate authority to verify that # the connection with the Satellite server is secure. If you are not # using Satellite in your deployment, leave this blank. CONFIG_SATELLITE_CACERT= # Profile name that should be used as an identifier for the system in # RHN Satellite (if required). CONFIG_SATELLITE_PROFILE= # Comma-separated list of flags passed to the rhnreg_ks command. # Valid flags are: novirtinfo, norhnsd, nopackages ['novirtinfo', # 'norhnsd', 'nopackages'] CONFIG_SATELLITE_FLAGS= # HTTP proxy to use when connecting to the RHN Satellite server (if # required). CONFIG_SATELLITE_PROXY= # User name to authenticate with the Satellite-server HTTP proxy. CONFIG_SATELLITE_PROXY_USER= # User password to authenticate with the Satellite-server HTTP proxy. CONFIG_SATELLITE_PROXY_PW= # Service to be used as the AMQP broker. Allowed values are: qpid, # rabbitmq ['qpid', 'rabbitmq'] CONFIG_AMQP_BACKEND=rabbitmq # IP address of the server on which to install the AMQP service. CONFIG_AMQP_HOST=172.17.14.100 # Specify 'y' to enable SSL for the AMQP service. ['y', 'n'] CONFIG_AMQP_ENABLE_SSL=n # Specify 'y' to enable authentication for the AMQP service. 
['y', # 'n'] CONFIG_AMQP_ENABLE_AUTH=n # Password for the NSS certificate database of the AMQP service. CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER # Port on which the AMQP service listens for SSL connections. CONFIG_AMQP_SSL_PORT=5671 # File name of the CAcertificate that the AMQP service will use for # verification. CONFIG_AMQP_SSL_CACERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem # File name of the certificate that the AMQP service will use for # verification. CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem # File name of the private key that the AMQP service will use for # verification. CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem # Specify 'y' to automatically generate a self-signed SSL certificate # and key. ['y', 'n'] CONFIG_AMQP_SSL_SELF_SIGNED=y # User for AMQP authentication. CONFIG_AMQP_AUTH_USER=amqp_user # Password for AMQP authentication. CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER # IP address of the server on which to install MariaDB. If a MariaDB # installation was not specified in CONFIG_MARIADB_INSTALL, specify # the IP address of an existing database server (a MariaDB cluster can # also be specified). CONFIG_MARIADB_HOST=172.17.14.100 # User name for the MariaDB administrative user. CONFIG_MARIADB_USER=root # Password for the MariaDB administrative user. CONFIG_MARIADB_PW=5dc8cb1452624106 # Password to use for the Identity service (keystone) to access the # database. CONFIG_KEYSTONE_DB_PW=ea6950af87c14b38 # Default region name to use when creating tenants in the Identity # service. CONFIG_KEYSTONE_REGION=RegionOne # Token to use for the Identity service API. CONFIG_KEYSTONE_ADMIN_TOKEN=00d7e7f048a548adad2a2cae7a1b3f2a # Email address for the Identity service 'admin' user. Defaults to CONFIG_KEYSTONE_ADMIN_EMAIL=root at localhost # User name for the Identity service 'admin' user. Defaults to # 'admin'. CONFIG_KEYSTONE_ADMIN_USERNAME=admin # Password to use for the Identity service 'admin' user. CONFIG_KEYSTONE_ADMIN_PW=admin # Password to use for the Identity service 'demo' user. CONFIG_KEYSTONE_DEMO_PW=dfb5b0540f4f4398 # Identity service API version string. ['v2.0', 'v3'] CONFIG_KEYSTONE_API_VERSION=v2.0 # Identity service token format (UUID or PKI). The recommended format # for new deployments is UUID. ['UUID', 'PKI'] CONFIG_KEYSTONE_TOKEN_FORMAT=UUID # Name of service to use to run the Identity service (keystone or # httpd). ['keystone', 'httpd'] CONFIG_KEYSTONE_SERVICE_NAME=httpd # Type of Identity service backend (sql or ldap). ['sql', 'ldap'] CONFIG_KEYSTONE_IDENTITY_BACKEND=sql # URL for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_URL=ldap://172.17.14.100 # User DN for the Identity service LDAP backend. Used to bind to the # LDAP server if the LDAP server does not allow anonymous # authentication. CONFIG_KEYSTONE_LDAP_USER_DN= # User DN password for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_USER_PASSWORD= # Base suffix for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_SUFFIX= # Query scope for the Identity service LDAP backend (base, one, sub). # ['base', 'one', 'sub'] CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one # Query page size for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1 # User subtree for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_USER_SUBTREE= # User query filter for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_USER_FILTER= # User object class for the Identity service LDAP backend. 
CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS= # User ID attribute for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE= # User name attribute for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE= # User email address attribute for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE= # User-enabled attribute for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE= # Bit mask applied to user-enabled attribute for the Identity service # LDAP backend. CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1 # Value of enabled attribute which indicates user is enabled for the # Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE # Specify 'y' if users are disabled (not enabled) in the Identity # service LDAP backend. ['n', 'y'] CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n # Comma-separated list of attributes stripped from LDAP user entry # upon update. CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE= # Identity service LDAP attribute mapped to default_project_id for # users. CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE= # Specify 'y' if you want to be able to create Identity service users # through the Identity service interface; specify 'n' if you will # create directly in the LDAP backend. ['n', 'y'] CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n # Specify 'y' if you want to be able to update Identity service users # through the Identity service interface; specify 'n' if you will # update directly in the LDAP backend. ['n', 'y'] CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n # Specify 'y' if you want to be able to delete Identity service users # through the Identity service interface; specify 'n' if you will # delete directly in the LDAP backend. ['n', 'y'] CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n # Identity service LDAP attribute mapped to password. CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE= # DN of the group entry to hold enabled LDAP users when using enabled # emulation. CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN= # List of additional LDAP attributes for mapping additional attribute # mappings for users. The attribute-mapping format is # :, where ldap_attr is the attribute in the # LDAP entry and user_attr is the Identity API attribute. CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING= # Group subtree for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE= # Group query filter for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_GROUP_FILTER= # Group object class for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS= # Group ID attribute for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE= # Group name attribute for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE= # Group member attribute for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE= # Group description attribute for the Identity service LDAP backend. CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE= # Comma-separated list of attributes stripped from LDAP group entry # upon update. CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE= # Specify 'y' if you want to be able to create Identity service # groups through the Identity service interface; specify 'n' if you # will create directly in the LDAP backend. 
['n', 'y'] CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n # Specify 'y' if you want to be able to update Identity service # groups through the Identity service interface; specify 'n' if you # will update directly in the LDAP backend. ['n', 'y'] CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n # Specify 'y' if you want to be able to delete Identity service # groups through the Identity service interface; specify 'n' if you # will delete directly in the LDAP backend. ['n', 'y'] CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n # List of additional LDAP attributes used for mapping additional # attribute mappings for groups. The attribute=mapping format is # :, where ldap_attr is the attribute in the # LDAP entry and group_attr is the Identity API attribute. CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING= # Specify 'y' if the Identity service LDAP backend should use TLS. # ['n', 'y'] CONFIG_KEYSTONE_LDAP_USE_TLS=n # CA certificate directory for Identity service LDAP backend (if TLS # is used). CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR= # CA certificate file for Identity service LDAP backend (if TLS is # used). CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE= # Certificate-checking strictness level for Identity service LDAP # backend; valid options are: never, allow, demand. ['never', 'allow', # 'demand'] CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand # Password to use for the Image service (glance) to access the # database. CONFIG_GLANCE_DB_PW=fd468e6ef79547d3 # Password to use for the Image service to authenticate with the # Identity service. CONFIG_GLANCE_KS_PW=1afb008c07794914 # Storage backend for the Image service (controls how the Image # service stores disk images). Valid options are: file or swift # (Object Storage). The Object Storage service must be enabled to use # it as a working backend; otherwise, Packstack falls back to 'file'. # ['file', 'swift'] CONFIG_GLANCE_BACKEND=file # Password to use for the Block Storage service (cinder) to access # the database. CONFIG_CINDER_DB_PW=0251bd57a70441b0 # Password to use for the Block Storage service to authenticate with # the Identity service. CONFIG_CINDER_KS_PW=c4d781f2cc144652 # Storage backend to use for the Block Storage service; valid options # are: lvm, gluster, nfs, vmdk, netapp. ['lvm', 'gluster', 'nfs', # 'vmdk', 'netapp'] CONFIG_CINDER_BACKEND=lvm # Specify 'y' to create the Block Storage volumes group. That is, # Packstack creates a raw disk image in /var/lib/cinder, and mounts it # using a loopback device. This should only be used for testing on a # proof-of-concept installation of the Block Storage service (a file- # backed volume group is not suitable for production usage). ['y', # 'n'] CONFIG_CINDER_VOLUMES_CREATE=y # Size of Block Storage volumes group. Actual volume size will be # extended with 3% more space for VG metadata. Remember that the size # of the volume group will restrict the amount of disk space that you # can expose to Compute instances, and that the specified amount must # be available on the device used for /var/lib/cinder. CONFIG_CINDER_VOLUMES_SIZE=20G # A single or comma-separated list of Red Hat Storage (gluster) # volume shares to mount. Example: 'ip-address:/vol-name', 'domain # :/vol-name' CONFIG_CINDER_GLUSTER_MOUNTS= # A single or comma-separated list of NFS exports to mount. Example: # 'ip-address:/export-name' CONFIG_CINDER_NFS_MOUNTS= # Administrative user account name used to access the NetApp storage # system or proxy server. 
CONFIG_CINDER_NETAPP_LOGIN= # Password for the NetApp administrative user account specified in # the CONFIG_CINDER_NETAPP_LOGIN parameter. CONFIG_CINDER_NETAPP_PASSWORD= # Hostname (or IP address) for the NetApp storage system or proxy # server. CONFIG_CINDER_NETAPP_HOSTNAME= # The TCP port to use for communication with the storage system or # proxy. If not specified, Data ONTAP drivers will use 80 for HTTP and # 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. # Defaults to 80. CONFIG_CINDER_NETAPP_SERVER_PORT=80 # Storage family type used on the NetApp storage system; valid # options are ontap_7mode for using Data ONTAP operating in 7-Mode, # ontap_cluster for using clustered Data ONTAP, or E-Series for NetApp # E-Series. Defaults to ontap_cluster. ['ontap_7mode', # 'ontap_cluster', 'eseries'] CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster # The transport protocol used when communicating with the NetApp # storage system or proxy server. Valid values are http or https. # Defaults to 'http'. ['http', 'https'] CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http # Storage protocol to be used on the data path with the NetApp # storage system; valid options are iscsi, fc, nfs. Defaults to nfs. # ['iscsi', 'fc', 'nfs'] CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs # Quantity to be multiplied by the requested volume size to ensure # enough space is available on the virtual storage server (Vserver) to # fulfill the volume creation request. Defaults to 1.0. CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0 # Time period (in minutes) that is allowed to elapse after the image # is last accessed, before it is deleted from the NFS image cache. # When a cache-cleaning cycle begins, images in the cache that have # not been accessed in the last M minutes, where M is the value of # this parameter, are deleted from the cache to create free space on # the NFS share. Defaults to 720. CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720 # If the percentage of available space for an NFS share has dropped # below the value specified by this parameter, the NFS image cache is # cleaned. Defaults to 20. CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20 # When the percentage of available space on an NFS share has reached # the percentage specified by this parameter, the driver stops # clearing files from the NFS image cache that have not been accessed # in the last M minutes, where M is the value of the # CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES parameter. Defaults to 60. CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60 # Single or comma-separated list of NetApp NFS shares for Block # Storage to use. Format: ip-address:/export-name. Defaults to ''. CONFIG_CINDER_NETAPP_NFS_SHARES= # File with the list of available NFS shares. Defaults to # '/etc/cinder/shares.conf'. CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf # This parameter is only utilized when the storage protocol is # configured to use iSCSI or FC. This parameter is used to restrict # provisioning to the specified controller volumes. Specify the value # of this parameter to be a comma separated list of NetApp controller # volume names to be used for provisioning. Defaults to ''. CONFIG_CINDER_NETAPP_VOLUME_LIST= # The vFiler unit on which provisioning of block storage volumes will # be done. This parameter is only used by the driver when connecting # to an instance with a storage family of Data ONTAP operating in # 7-Mode Only use this parameter when utilizing the MultiStore feature # on the NetApp storage system. Defaults to ''. 
CONFIG_CINDER_NETAPP_VFILER= # The name of the config.conf stanza for a Data ONTAP (7-mode) HA # partner. This option is only used by the driver when connecting to # an instance with a storage family of Data ONTAP operating in 7-Mode, # and it is required if the storage protocol selected is FC. Defaults # to ''. CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME= # This option specifies the virtual storage server (Vserver) name on # the storage cluster on which provisioning of block storage volumes # should occur. Defaults to ''. CONFIG_CINDER_NETAPP_VSERVER= # Restricts provisioning to the specified controllers. Value must be # a comma-separated list of controller hostnames or IP addresses to be # used for provisioning. This option is only utilized when the storage # family is configured to use E-Series. Defaults to ''. CONFIG_CINDER_NETAPP_CONTROLLER_IPS= # Password for the NetApp E-Series storage array. Defaults to ''. CONFIG_CINDER_NETAPP_SA_PASSWORD= # This option is used to define how the controllers in the E-Series # storage array will work with the particular operating system on the # hosts that are connected to it. Defaults to 'linux_dm_mp' CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp # Path to the NetApp E-Series proxy application on a proxy server. # The value is combined with the value of the # CONFIG_CINDER_NETAPP_TRANSPORT_TYPE, CONFIG_CINDER_NETAPP_HOSTNAME, # and CONFIG_CINDER_NETAPP_HOSTNAME options to create the URL used by # the driver to connect to the proxy application. Defaults to # '/devmgr/v2'. CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2 # Restricts provisioning to the specified storage pools. Only dynamic # disk pools are currently supported. The value must be a comma- # separated list of disk pool names to be used for provisioning. # Defaults to ''. CONFIG_CINDER_NETAPP_STORAGE_POOLS= # Password to use for the OpenStack File Share service (manila) to # access the database. CONFIG_MANILA_DB_PW=PW_PLACEHOLDER # Password to use for the OpenStack File Share service (manila) to # authenticate with the Identity service. CONFIG_MANILA_KS_PW=PW_PLACEHOLDER # Backend for the OpenStack File Share service (manila); valid # options are: generic or netapp. ['generic', 'netapp'] CONFIG_MANILA_BACKEND=generic # Denotes whether the driver should handle the responsibility of # managing share servers. This must be set to false if the driver is # to operate without managing share servers. Defaults to 'false' # ['true', 'false'] CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false # The transport protocol used when communicating with the storage # system or proxy server. Valid values are 'http' and 'https'. # Defaults to 'https'. ['https', 'http'] CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https # Administrative user account name used to access the NetApp storage # system. Defaults to ''. CONFIG_MANILA_NETAPP_LOGIN=admin # Password for the NetApp administrative user account specified in # the CONFIG_MANILA_NETAPP_LOGIN parameter. Defaults to ''. CONFIG_MANILA_NETAPP_PASSWORD= # Hostname (or IP address) for the NetApp storage system or proxy # server. Defaults to ''. CONFIG_MANILA_NETAPP_SERVER_HOSTNAME= # The storage family type used on the storage system; valid values # are ontap_cluster for clustered Data ONTAP. Defaults to # 'ontap_cluster'. ['ontap_cluster'] CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster # The TCP port to use for communication with the storage system or # proxy server. If not specified, Data ONTAP drivers will use 80 for # HTTP and 443 for HTTPS. Defaults to '443'. 
CONFIG_MANILA_NETAPP_SERVER_PORT=443 # Pattern for searching available aggregates for NetApp provisioning. # Defaults to '(.*)'. CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*) # Name of aggregate on which to create the NetApp root volume. This # option only applies when the option # CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS is set to True. CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE= # NetApp root volume name. Defaults to 'root'. CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root # This option specifies the storage virtual machine (previously # called a Vserver) name on the storage cluster on which provisioning # of shared file systems should occur. This option only applies when # the option driver_handles_share_servers is set to False. Defaults to # ''. CONFIG_MANILA_NETAPP_VSERVER= # Denotes whether the driver should handle the responsibility of # managing share servers. This must be set to false if the driver is # to operate without managing share servers. Defaults to 'true'. # ['true', 'false'] CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true # Volume name template for Manila service. Defaults to 'manila- # share-%s'. CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s # Share mount path for Manila service. Defaults to '/shares'. CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares # Location of disk image for Manila service instance. Defaults to ' CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2 # User in Manila service instance. CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu # Password to service instance user. CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu # Type of networking that the backend will use. A more detailed # description of each option is available in the Manila docs. Defaults # to 'neutron'. ['neutron', 'nova-network', 'standalone'] CONFIG_MANILA_NETWORK_TYPE=neutron # Gateway IPv4 address that should be used. Required. Defaults to ''. CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY= # Network mask that will be used. Can be either decimal like '24' or # binary like '255.255.255.0'. Required. Defaults to ''. CONFIG_MANILA_NETWORK_STANDALONE_NETMASK= # Set it if network has segmentation (VLAN, VXLAN, etc). It will be # assigned to share-network and share drivers will be able to use this # for network interfaces within provisioned share servers. Optional. # Example: 1001. Defaults to ''. CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID= # Can be IP address, range of IP addresses or list of addresses or # ranges. Contains addresses from IP network that are allowed to be # used. If empty, then will be assumed that all host addresses from # network can be used. Optional. Examples: 10.0.0.10 or # 10.0.0.10-10.0.0.20 or # 10.0.0.10-10.0.0.20,10.0.0.30-10.0.0.40,10.0.0.50. Defaults to ''. CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE= # IP version of network. Optional. Defaults to '4'. ['4', '6'] CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4 # Password to use for OpenStack Bare Metal Provisioning (ironic) to # access the database. CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER # Password to use for OpenStack Bare Metal Provisioning to # authenticate with the Identity service. CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER # Password to use for the Compute service (nova) to access the # database. CONFIG_NOVA_DB_PW=73571255c0cd4ffa # Password to use for the Compute service to authenticate with the # Identity service. CONFIG_NOVA_KS_PW=3fddf326928641c1 # Overcommitment ratio for virtual to physical CPUs. Specify 1.0 to # disable CPU overcommitment. 
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0 # Overcommitment ratio for virtual to physical RAM. Specify 1.0 to # disable RAM overcommitment. CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5 # Protocol used for instance migration. Valid options are: tcp and # ssh. Note that by default, the Compute user is created with the # /sbin/nologin shell so that the SSH protocol will not work. To make # the SSH protocol work, you must configure the Compute user on # compute hosts manually. ['tcp', 'ssh'] CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp # Manager that runs the Compute service. CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager # Private interface for flat DHCP on the Compute servers. CONFIG_NOVA_COMPUTE_PRIVIF=lo # Compute Network Manager. ['^nova\.network\.manager\.\w+Manager$'] CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager # Public interface on the Compute network server. CONFIG_NOVA_NETWORK_PUBIF=eno16777984 # Private interface for flat DHCP on the Compute network server. CONFIG_NOVA_NETWORK_PRIVIF=lo # IP Range for flat DHCP. ['^[\:\.\da-fA-f]+(\/\d+){0,1}$'] CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22 # IP Range for floating IP addresses. ['^[\:\.\da- # fA-f]+(\/\d+){0,1}$'] CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22 # Specify 'y' to automatically assign a floating IP to new instances. # ['y', 'n'] CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n # First VLAN for private networks (Compute networking). CONFIG_NOVA_NETWORK_VLAN_START=100 # Number of networks to support (Compute networking). CONFIG_NOVA_NETWORK_NUMBER=1 # Number of addresses in each private subnet (Compute networking). CONFIG_NOVA_NETWORK_SIZE=255 # Password to use for OpenStack Networking (neutron) to authenticate # with the Identity service. CONFIG_NEUTRON_KS_PW=ac26759156364acc # The password to use for OpenStack Networking to access the # database. CONFIG_NEUTRON_DB_PW=90066d916c32454e # The name of the Open vSwitch bridge (or empty for linuxbridge) for # the OpenStack Networking L3 agent to use for external traffic. # Specify 'provider' if you intend to use a provider network to handle # external traffic. CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex # Password for the OpenStack Networking metadata agent. CONFIG_NEUTRON_METADATA_PW=f8fa8777dab04123 # Specify 'y' to install OpenStack Networking's Load-Balancing- # as-a-Service (LBaaS). ['y', 'n'] CONFIG_LBAAS_INSTALL=n # Specify 'y' to install OpenStack Networking's L3 Metering agent # ['y', 'n'] CONFIG_NEUTRON_METERING_AGENT_INSTALL=n # Specify 'y' to configure OpenStack Networking's Firewall- # as-a-Service (FWaaS). ['y', 'n'] CONFIG_NEUTRON_FWAAS=n # Comma-separated list of network-type driver entry points to be # loaded from the neutron.ml2.type_drivers namespace. ['local', # 'flat', 'vlan', 'gre', 'vxlan'] CONFIG_NEUTRON_ML2_TYPE_DRIVERS=local # Comma-separated, ordered list of network types to allocate as # tenant networks. The 'local' value is only useful for single-box # testing and provides no connectivity between hosts. ['local', # 'vlan', 'gre', 'vxlan'] CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=local # Comma-separated ordered list of networking mechanism driver entry # points to be loaded from the neutron.ml2.mechanism_drivers # namespace. ['logger', 'test', 'linuxbridge', 'openvswitch', # 'hyperv', 'ncs', 'arista', 'cisco_nexus', 'mlnx', 'l2population'] CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch # Comma-separated list of physical_network names with which flat # networks can be created. 
Use * to allow flat networks with arbitrary # physical_network names. CONFIG_NEUTRON_ML2_FLAT_NETWORKS=* # Comma-separated list of :: or # specifying physical_network names usable for VLAN # provider and tenant networks, as well as ranges of VLAN tags on each # available for allocation to tenant networks. CONFIG_NEUTRON_ML2_VLAN_RANGES= # Comma-separated list of : tuples enumerating # ranges of GRE tunnel IDs that are available for tenant-network # allocation. A tuple must be an array with tun_max +1 - tun_min > # 1000000. CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES= # Comma-separated list of addresses for VXLAN multicast group. If # left empty, disables VXLAN from sending allocate broadcast traffic # (disables multicast VXLAN mode). Should be a Multicast IP (v4 or v6) # address. CONFIG_NEUTRON_ML2_VXLAN_GROUP= # Comma-separated list of : tuples enumerating # ranges of VXLAN VNI IDs that are available for tenant network # allocation. Minimum value is 0 and maximum value is 16777215. CONFIG_NEUTRON_ML2_VNI_RANGES=10:100 # Name of the L2 agent to be used with OpenStack Networking. # ['linuxbridge', 'openvswitch'] CONFIG_NEUTRON_L2_AGENT=openvswitch # Comma-separated list of interface mappings for the OpenStack # Networking linuxbridge plugin. Each tuple in the list must be in the # format :. Example: # physnet1:eth1,physnet2:eth2,physnet3:eth3. CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS= # Comma-separated list of bridge mappings for the OpenStack # Networking Open vSwitch plugin. Each tuple in the list must be in # the format :. Example: physnet1:br- # eth1,physnet2:br-eth2,physnet3:br-eth3 CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS= # Comma-separated list of colon-separated Open vSwitch # : pairs. The interface will be added to the # associated bridge. CONFIG_NEUTRON_OVS_BRIDGE_IFACES= # Interface for the Open vSwitch tunnel. Packstack overrides the IP # address used for tunnels on this hypervisor to the IP found on the # specified interface (for example, eth1). CONFIG_NEUTRON_OVS_TUNNEL_IF= # VXLAN UDP port. CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789 # Specify 'y' to set up Horizon communication over https. ['y', 'n'] CONFIG_HORIZON_SSL=n # PEM-encoded certificate to be used for SSL connections on the https # server (the certificate should not require a passphrase). To # generate a certificate, leave blank. CONFIG_SSL_CERT= # SSL keyfile corresponding to the certificate if one was specified. CONFIG_SSL_KEY= # PEM-encoded CA certificates from which the certificate chain of the # server certificate can be assembled. CONFIG_SSL_CACHAIN= # Password to use for the Object Storage service to authenticate with # the Identity service. CONFIG_SWIFT_KS_PW=9fbd6dc79ac74098 # Comma-separated list of devices to use as storage device for Object # Storage. Each entry must take the format /path/to/dev (for example, # specifying /dev/vdb installs /dev/vdb as the Object Storage storage # device; Packstack does not create the filesystem, you must do this # first). If left empty, Packstack creates a loopback device for test # setup. CONFIG_SWIFT_STORAGES= # Number of Object Storage storage zones; this number MUST be no # larger than the number of configured storage devices. CONFIG_SWIFT_STORAGE_ZONES=1 # Number of Object Storage storage replicas; this number MUST be no # larger than the number of configured storage zones. CONFIG_SWIFT_STORAGE_REPLICAS=1 # File system type for storage nodes. ['xfs', 'ext4'] CONFIG_SWIFT_STORAGE_FSTYPE=ext4 # Custom seed number to use for swift_hash_path_suffix in # /etc/swift/swift.conf. 
If you do not provide a value, a seed number # is automatically generated. CONFIG_SWIFT_HASH=0cc78b328bd94179 # Size of the Object Storage loopback file storage device. CONFIG_SWIFT_STORAGE_SIZE=2G # Password used by Orchestration service user to authenticate against # the database. CONFIG_HEAT_DB_PW=PW_PLACEHOLDER # Encryption key to use for authentication in the Orchestration # database (16, 24, or 32 chars). CONFIG_HEAT_AUTH_ENC_KEY=9cff695f909f4db1 # Password to use for the Orchestration service to authenticate with # the Identity service. CONFIG_HEAT_KS_PW=PW_PLACEHOLDER # Specify 'y' to install the Orchestration CloudWatch API. ['y', 'n'] CONFIG_HEAT_CLOUDWATCH_INSTALL=n # Specify 'y' to install the Orchestration CloudFormation API. ['y', # 'n'] CONFIG_HEAT_CFN_INSTALL=n # Name of the Identity domain for Orchestration. CONFIG_HEAT_DOMAIN=heat # Name of the Identity domain administrative user for Orchestration. CONFIG_HEAT_DOMAIN_ADMIN=heat_admin # Password for the Identity domain administrative user for # Orchestration. CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER # Specify 'y' to provision for demo usage and testing. ['y', 'n'] CONFIG_PROVISION_DEMO=n # Specify 'y' to configure the OpenStack Integration Test Suite # (tempest) for testing. The test suite requires OpenStack Networking # to be installed. ['y', 'n'] CONFIG_PROVISION_TEMPEST=n # CIDR network address for the floating IP subnet. CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28 # The name to be assigned to the demo image in Glance (default # "cirros"). CONFIG_PROVISION_IMAGE_NAME=cirros # A URL or local file location for an image to download and provision # in Glance (defaults to a URL for a recent "cirros" image). CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img # Format for the demo image (default "qcow2"). CONFIG_PROVISION_IMAGE_FORMAT=qcow2 # User to use when connecting to instances booted from the demo # image. CONFIG_PROVISION_IMAGE_SSH_USER=cirros # Name of the Integration Test Suite provisioning user. If you do not # provide a user name, Tempest is configured in a standalone mode. CONFIG_PROVISION_TEMPEST_USER= # Password to use for the Integration Test Suite provisioning user. CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER # CIDR network address for the floating IP subnet. CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28 # URI of the Integration Test Suite git repository. CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git # Revision (branch) of the Integration Test Suite git repository. CONFIG_PROVISION_TEMPEST_REPO_REVISION=master # Specify 'y' to configure the Open vSwitch external bridge for an # all-in-one deployment (the L3 external bridge acts as the gateway # for virtual machines). ['y', 'n'] CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n # Secret key for signing Telemetry service (ceilometer) messages. CONFIG_CEILOMETER_SECRET=d51cb879144c4f00 # Password to use for Telemetry to authenticate with the Identity # service. CONFIG_CEILOMETER_KS_PW=de15255736b84375 # Backend driver for Telemetry's group membership coordination. # ['redis', 'none'] CONFIG_CEILOMETER_COORDINATION_BACKEND=redis # IP address of the server on which to install MongoDB. CONFIG_MONGODB_HOST=172.17.14.100 # IP address of the server on which to install the Redis master # server. CONFIG_REDIS_MASTER_HOST=172.17.14.100 # Port on which the Redis server(s) listens. CONFIG_REDIS_PORT=6379 # Specify 'y' to have Redis try to use HA. 
['y', 'n'] CONFIG_REDIS_HA=n # Hosts on which to install Redis slaves. CONFIG_REDIS_SLAVE_HOSTS= # Hosts on which to install Redis sentinel servers. CONFIG_REDIS_SENTINEL_HOSTS= # Host to configure as the Redis coordination sentinel. CONFIG_REDIS_SENTINEL_CONTACT_HOST= # Port on which Redis sentinel servers listen. CONFIG_REDIS_SENTINEL_PORT=26379 # Quorum value for Redis sentinel servers. CONFIG_REDIS_SENTINEL_QUORUM=2 # Name of the master server watched by the Redis sentinel. ['[a-z]+'] CONFIG_REDIS_MASTER_NAME=mymaster # Password to use for OpenStack Data Processing (sahara) to access # the database. CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER # Password to use for OpenStack Data Processing to authenticate with # the Identity service. CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER # Password to use for OpenStack Database-as-a-Service (trove) to # access the database. CONFIG_TROVE_DB_PW=PW_PLACEHOLDER # Password to use for OpenStack Database-as-a-Service to authenticate # with the Identity service. CONFIG_TROVE_KS_PW=PW_PLACEHOLDER # User name to use when OpenStack Database-as-a-Service connects to # the Compute service. CONFIG_TROVE_NOVA_USER=admin # Tenant to use when OpenStack Database-as-a-Service connects to the # Compute service. CONFIG_TROVE_NOVA_TENANT=services # Password to use when OpenStack Database-as-a-Service connects to # the Compute service. CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER # Password of the nagiosadmin user on the Nagios server. CONFIG_NAGIOS_PW=44909c3334d241b7 -------------- next part -------------- A non-text attachment was scrubbed... Name: use-devstack.png Type: image/png Size: 321866 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: use-rdo-packstack.png Type: image/png Size: 321747 bytes Desc: not available URL: From bderzhavets at hotmail.com Thu May 7 11:17:52 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 7 May 2015 07:17:52 -0400 Subject: [Rdo-list] Just a little mistake on the OpenStack Dashboard In-Reply-To: <554B3A55.4070703@virtualtech.jp> References: <554B3A55.4070703@virtualtech.jp> Message-ID: Per your answer file :- CONFIG_CONTROLLER_HOST=172.17.14.100 CONFIG_COMPUTE_HOSTS=172.17.14.100 CONFIG_NETWORK_HOSTS=172.17.14.100 You would be better off running :- # packstack --allinone It will generate the answer file automatically; then you will be able to compare. Your host should have a static IP and Internet access. Boris Date: Thu, 7 May 2015 19:11:33 +0900 From: tooyama at virtualtech.jp To: rdo-list at redhat.com Subject: [Rdo-list] Just a little mistake on the OpenStack Dashboard Hi, I'm trying OpenStack Kilo using RDO packstack. It works well on my server. Thanks! But there is just a little mistake. What is this? Thank you. Youhei Tooyama -- How to setup: systemctl stop NetworkManager systemctl disable NetworkManager systemctl enable network reboot yum install http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm yum -y update yum install -y openstack-packstack python-netaddr packstack --gen-answer-file=answer.txt vi answer.txt ... setenforce 0 packstack --answer-file=/root/answer.txt rpm -q openstack-dashboard openstack-dashboard-2015.1.0-2.el7.noarch _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rbowen at redhat.com Thu May 7 11:49:51 2015 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 07 May 2015 07:49:51 -0400 Subject: [Rdo-list] Test day: Thanks, and bugs Message-ID: <554B515F.5020309@redhat.com> Thank you so much to everyone who participated in the test days. It looks like 24 tickets were opened - http://tm3.org/testdaybugs - as well as a few that have already been closed. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From mrunge at redhat.com Thu May 7 11:53:45 2015 From: mrunge at redhat.com (Matthias Runge) Date: Thu, 07 May 2015 13:53:45 +0200 Subject: [Rdo-list] Just a little mistake on the OpenStack Dashboard In-Reply-To: <554B3A55.4070703@virtualtech.jp> References: <554B3A55.4070703@virtualtech.jp> Message-ID: <554B5249.8050508@redhat.com> On 07/05/15 12:11, Youhei Tooyama wrote: > Hi, > I'm trying OpenStack Kilo using RDO packstack. > It works well on my server. > Thanks! > Congrats! Glad to hear. > But there is just a little mistake. > What is this? > Uhm, what is what? You see an issue with Horizon? What happens or what do you see? What would you expect? Matthias From rbowen at redhat.com Thu May 7 11:56:05 2015 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 07 May 2015 07:56:05 -0400 Subject: [Rdo-list] Just a little mistake on the OpenStack Dashboard In-Reply-To: <554B5249.8050508@redhat.com> References: <554B3A55.4070703@virtualtech.jp> <554B5249.8050508@redhat.com> Message-ID: <554B52D5.8060004@redhat.com> Matthias, there was an image attached much further down in the email, and it appears that it was the same issue as https://bugzilla.redhat.com/show_bug.cgi?id=1218627 --Rich On 05/07/2015 07:53 AM, Matthias Runge wrote: > On 07/05/15 12:11, Youhei Tooyama wrote: >> Hi, >> I'm trying OpenStack Kilo using RDO packstack. >> It works well on my server. >> Thanks! >> > > Congrats! Glad to hear. > >> But there is just a little mistake. >> What is this? >> > Uhm, > what is what? > > You see an issue with Horizon? What happens or what do you see? What > would you expect? > > Matthias > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From ichi.sara at gmail.com Thu May 7 12:08:30 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Thu, 7 May 2015 14:08:30 +0200 Subject: [Rdo-list] [heat]: stack stays interminably under the status create in progress In-Reply-To: <20150507094544.GA31444@t430slt.redhat.com> References: <20150507094544.GA31444@t430slt.redhat.com> Message-ID: Actually, Nova is working; I just spawned a VM with the same flavor and image. But when I try to do the same with heat, it fails. Below are some logs: nova-compute.log 2015-05-07 13:58:56.208 3928 AUDIT nova.compute.manager [req-b57bfc38-9d77-48fc-8185-474d1f9076a6 None] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] Starting instance... 
2015-05-07 13:58:56.376 3928 AUDIT nova.compute.claims [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] Attempting claim: memory 512 MB, disk 1 GB 2015-05-07 13:58:56.376 3928 AUDIT nova.compute.claims [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] Total memory: 3791 MB, used: 512.00 MB 2015-05-07 13:58:56.377 3928 AUDIT nova.compute.claims [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] memory limit: 5686.50 MB, free: 5174.50 MB 2015-05-07 13:58:56.377 3928 AUDIT nova.compute.claims [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] Total disk: 13 GB, used: 0.00 GB 2015-05-07 13:58:56.378 3928 AUDIT nova.compute.claims [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] disk limit not specified, defaulting to unlimited 2015-05-07 13:58:56.395 3928 AUDIT nova.compute.claims [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] Claim successful 2015-05-07 13:58:56.590 3928 INFO nova.scheduler.client.report [-] Compute_service record updated for ('localhost.localdomain', 'localhost.localdomain') 2015-05-07 13:58:56.787 3928 INFO nova.scheduler.client.report [-] Compute_service record updated for ('localhost.localdomain', 'localhost.localdomain') 2015-05-07 13:58:57.269 3928 INFO nova.virt.libvirt.driver [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] Creating image 2015-05-07 13:59:27.642 3928 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 192.168.5.33:5672 2015-05-07 13:59:27.661 3928 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources 2015-05-07 13:59:27.702 3928 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on 192.168.5.33:5672 2015-05-07 13:59:27.800 3928 INFO nova.scheduler.client.report [-] Compute_service record updated for ('localhost.localdomain', 'localhost.localdomain') 2015-05-07 13:59:28.066 3928 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 3791, total allocated virtual ram (MB): 1024 2015-05-07 13:59:28.066 3928 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 12 2015-05-07 13:59:28.067 3928 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 4, total allocated vcpus: 0 2015-05-07 13:59:28.067 3928 AUDIT nova.compute.resource_tracker [-] PCI stats: [] 2015-05-07 13:59:28.101 3928 INFO nova.scheduler.client.report [-] Compute_service record updated for ('localhost.localdomain', 'localhost.localdomain') 2015-05-07 13:59:28.101 3928 INFO nova.compute.resource_tracker [-] Compute_service record updated for localhost.localdomain:localhost.localdomain 2015-05-07 13:59:47.110 3928 WARNING nova.virt.disk.vfs.guestfs [-] Failed to close augeas aug_close: do_aug_close: you must call 'aug-init' first to initialize Augeas 2015-05-07 13:59:51.364 3928 INFO nova.compute.manager [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] VM Started (Lifecycle Event) 2015-05-07 13:59:51.384 3928 INFO nova.virt.libvirt.driver [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] Instance spawned successfully. 
2015-05-07 14:00:28.264 3928 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2015-05-07 14:00:29.007 3928 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 3791, total allocated virtual ram (MB): 1024
2015-05-07 14:00:29.008 3928 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 12
2015-05-07 14:00:29.009 3928 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 4, total allocated vcpus: 1
2015-05-07 14:00:29.009 3928 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2015-05-07 14:00:29.048 3928 INFO nova.scheduler.client.report [-] Compute_service record updated for ('localhost.localdomain', 'localhost.localdomain')
2015-05-07 14:00:29.048 3928 INFO nova.compute.resource_tracker [-] Compute_service record updated for localhost.localdomain:localhost.localdomain

heat-engine.log

2015-05-07 14:02:45.177 3942 INFO heat.engine.service [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 None] Creating stack my_first_stack
2015-05-07 14:02:45.194 3942 DEBUG stevedore.extension [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] found extension EntryPoint.parse('AWSTemplateFormatVersion.2010-09-09 = heat.engine.cfn.template:CfnTemplate') _load_plugins /usr/lib/python2.7/site-packages/stevedore/extension.py:156
2015-05-07 14:02:45.194 3942 DEBUG stevedore.extension [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] found extension EntryPoint.parse('heat_template_version.2013-05-23 = heat.engine.hot.template:HOTemplate20130523') _load_plugins /usr/lib/python2.7/site-packages/stevedore/extension.py:156
2015-05-07 14:02:45.195 3942 DEBUG stevedore.extension [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] found extension EntryPoint.parse('HeatTemplateFormatVersion.2012-12-12 = heat.engine.cfn.template:HeatTemplate') _load_plugins /usr/lib/python2.7/site-packages/stevedore/extension.py:156
2015-05-07 14:02:45.195 3942 DEBUG stevedore.extension [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] found extension EntryPoint.parse('heat_template_version.2014-10-16 = heat.engine.hot.template:HOTemplate20141016') _load_plugins /usr/lib/python2.7/site-packages/stevedore/extension.py:156
2015-05-07 14:02:45.224 3942 DEBUG heat.engine.parameter_groups [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 None] __init__ /usr/lib/python2.7/site-packages/heat/engine/parameter_groups.py:31
2015-05-07 14:02:45.225 3942 DEBUG heat.engine.parameter_groups [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 None] __init__ /usr/lib/python2.7/site-packages/heat/engine/parameter_groups.py:32
2015-05-07 14:02:45.225 3942 DEBUG heat.engine.parameter_groups [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 None] Validating Parameter Groups. validate /usr/lib/python2.7/site-packages/heat/engine/parameter_groups.py:43
2015-05-07 14:02:45.226 3942 DEBUG heat.engine.parameter_groups [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 None] ['OS::stack_id'] validate /usr/lib/python2.7/site-packages/heat/engine/parameter_groups.py:44
2015-05-07 14:02:45.233 3942 INFO heat.engine.resource [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 None] Validating Server "my_instance"
2015-05-07 14:02:45.385 3942 DEBUG heat.common.keystoneclient [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 None] Using stack domain 9ce7896d79914b68aef34d397fd3fde4 __init__ /usr/lib/python2.7/site-packages/heat/common/heat_keystoneclient.py:115
2015-05-07 14:02:45.405 3942 INFO urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] Starting new HTTP connection (1): 192.168.5.33
2015-05-07 14:02:45.448 3942 DEBUG urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] "GET /v2/fac261cd98974411a9b2e977cd9ec876/os-keypairs/userkey HTTP/1.1" 200 674 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:357
2015-05-07 14:02:45.486 3942 DEBUG glanceclient.common.http [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] curl -i -X GET -H 'User-Agent: python-glanceclient' -H 'Content-Type: application/octet-stream' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'X-Auth-Token: {SHA1}bc08e056cd78f34e9ada3dd99ceae37d33daf3f0' http://192.168.5.33:9292/v1/images/detail?limit=20&name=cirros log_curl_request /usr/lib/python2.7/site-packages/glanceclient/common/http.py:122
2015-05-07 14:02:45.488 3942 INFO urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] Starting new HTTP connection (1): 192.168.5.33
2015-05-07 14:02:45.770 3942 DEBUG urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] "GET /v1/images/detail?limit=20&name=cirros HTTP/1.1" 200 481 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:357
2015-05-07 14:02:45.771 3942 DEBUG glanceclient.common.http [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ]
HTTP/1.1 200 OK
date: Thu, 07 May 2015 12:02:45 GMT
content-length: 481
content-type: application/json; charset=UTF-8
x-openstack-request-id: req-c62b407b-0d8f-43c6-a8be-df02015dc846

{"images": [{"status": "active", "deleted_at": null, "name": "cirros", "deleted": false, "container_format": "bare", "created_at": "2015-05-06T14:27:54", "disk_format": "qcow2", "updated_at": "2015-05-06T15:01:15", "min_disk": 0, "protected": false, "id": "d80b5a24-2567-438f-89f8-b381a6716887", "min_ram": 0, "checksum": "133eae9fb1c98f45894a4e60d8736619", "owner": "3740df0f18754509a252738385d375b9", "is_public": true, "virtual_size": null, "properties": {}, "size": 13200896}]}
log_http_response /usr/lib/python2.7/site-packages/glanceclient/common/http.py:135
2015-05-07 14:02:46.049 3942 DEBUG keystoneclient.auth.identity.v3 [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] Making authentication request to http://192.168.5.33:35357/v3/auth/tokens get_auth_ref /usr/lib/python2.7/site-packages/keystoneclient/auth/identity/v3.py:117
2015-05-07 14:02:46.052 3942 INFO urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] Starting new HTTP connection (1): 192.168.5.33
2015-05-07 14:02:46.247 3942 DEBUG urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] "POST /v3/auth/tokens HTTP/1.1" 201 8275 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:357
2015-05-07 14:02:46.250 3942 DEBUG keystoneclient.session [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] REQ: curl -i -X POST http://192.168.5.33:35357/v3/OS-TRUST/trusts -H "User-Agent: python-keystoneclient" -H "Content-Type: application/json" -H "X-Auth-Token: TOKEN_REDACTED" -d '{"trust": {"impersonation": true, "project_id": "fac261cd98974411a9b2e977cd9ec876", "trustor_user_id": "24f6dac3f1444d89884c1b1977bb0d87", "roles": [{"name": "heat_stack_owner"}], "trustee_user_id": "4f7e2e76441e483982fb863ed02fe63e"}}' _http_log_request /usr/lib/python2.7/site-packages/keystoneclient/session.py:155
2015-05-07 14:02:46.252 3942 INFO urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] Starting new HTTP connection (1): 192.168.5.33
2015-05-07 14:02:46.381 3942 DEBUG urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] "POST /v3/OS-TRUST/trusts HTTP/1.1" 201 717 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:357
2015-05-07 14:02:46.383 3942 DEBUG keystoneclient.session [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] RESP: [201] {'content-length': '717', 'vary': 'X-Auth-Token', 'server': 'Apache/2.4.6 (CentOS)', 'connection': 'close', 'date': 'Thu, 07 May 2015 12:02:46 GMT', 'content-type': 'application/json'}
RESP BODY: {"trust": {"impersonation": true, "roles_links": {"self": "http://192.168.5.33:35357/v3/OS-TRUST/trusts/96e0e32b7c504d3f8f9b82fcc2658e5f/roles", "previous": null, "next": null}, "deleted_at": null, "trustor_user_id": "24f6dac3f1444d89884c1b1977bb0d87", "links": {"self": "http://192.168.5.33:35357/v3/OS-TRUST/trusts/96e0e32b7c504d3f8f9b82fcc2658e5f"}, "roles": [{"id": "cf346090e5a042ebac674bcfe14f4076", "links": {"self": "http://192.168.5.33:35357/v3/roles/cf346090e5a042ebac674bcfe14f4076"}, "name": "heat_stack_owner"}], "remaining_uses": null, "expires_at": null, "trustee_user_id": "4f7e2e76441e483982fb863ed02fe63e", "project_id": "fac261cd98974411a9b2e977cd9ec876", "id": "96e0e32b7c504d3f8f9b82fcc2658e5f"}}
_http_log_response /usr/lib/python2.7/site-packages/keystoneclient/session.py:182
2015-05-07 14:02:46.454 3942 DEBUG heat.engine.stack_lock [-] Engine b3d3b0e1-5c47-4912-94e9-1b75159b9b10 acquired lock on stack bb7d16c4-b73f-428d-9dbd-5b089748f374 acquire /usr/lib/python2.7/site-packages/heat/engine/stack_lock.py:72
2015-05-07 14:02:46.456 3942 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 192.168.5.33:5672
2015-05-07 14:02:46.474 3942 DEBUG keystoneclient.auth.identity.v3 [-] Making authentication request to http://192.168.5.33:35357/v3/auth/tokens get_auth_ref /usr/lib/python2.7/site-packages/keystoneclient/auth/identity/v3.py:117
2015-05-07 14:02:46.477 3942 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.5.33
2015-05-07 14:02:46.481 3942 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on 192.168.5.33:5672
2015-05-07 14:02:46.666 3942 DEBUG urllib3.connectionpool [-] "POST /v3/auth/tokens HTTP/1.1" 401 114 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:357
2015-05-07 14:02:46.668 3942 DEBUG keystoneclient.session [-] Request returned failure status: 401 request /usr/lib/python2.7/site-packages/keystoneclient/session.py:345
2015-05-07 14:02:46.668 3942 DEBUG keystoneclient.v3.client [-] Authorization failed.
get_raw_token_from_identity_service /usr/lib/python2.7/site-packages/keystoneclient/v3/client.py:267
2015-05-07 14:02:46.673 3942 DEBUG heat.engine.stack_lock [-] Engine b3d3b0e1-5c47-4912-94e9-1b75159b9b10 released lock on stack bb7d16c4-b73f-428d-9dbd-5b089748f374 release /usr/lib/python2.7/site-packages/heat/engine/stack_lock.py:122
2015-05-07 14:03:33.623 3942 INFO oslo.messaging._drivers.impl_rabbit [req-18ad2c0b-ab52-4793-99a5-255a010d83fe ] Connecting to AMQP server on 192.168.5.33:5672
2015-05-07 14:03:33.645 3942 INFO oslo.messaging._drivers.impl_rabbit [req-18ad2c0b-ab52-4793-99a5-255a010d83fe ] Connected to AMQP server on 192.168.5.33:5672

2015-05-07 11:45 GMT+02:00 Steven Hardy :

> On Tue, May 05, 2015 at 01:18:17PM +0200, ICHIBA Sara wrote:
> > hello,
> >
> > here is my HOT template, it's very basic:
> >
> > heat_template_version: 2013-05-23
> >
> > description: Simple template to deploy a single compute instance
> >
> > resources:
> >   my_instance:
> >     type: OS::Nova::Server
> >     properties:
> >       image: Cirros 0.3.3
> >       flavor: m1.small
> >       key_name: userkey
> >       networks:
> >         - network: fdf2bb77-a828-401d-969a-736a8028950f
> >
> > for the logs please find them attached.
>
> These logs are a little confusing - it looks like you failed to create the
> stack due to some validation errors, then tried again and did a stack-check
> and a stack resume?
>
> Can you please set debug = True in the [DEFAULT] section of your heat.conf,
> restart heat-engine and try again please?
>
> Also, some basic checks are:
>
> 1. When the stack is CREATE_IN_PROGRESS, what does nova list show for the
> instance?
>
> 2. Is it possible to boot an instance using nova boot, using the same
> arguments (image, flavor, key etc) that you specify in the heat template?
>
> I suspect that Heat is not actually the problem here, and that some part of
> Nova is either misconfigured or not running, but I can't prove that without
> seeing the nova CLI output and/or the nova logs.
>
> Steve

From ichi.sara at gmail.com  Thu May  7 12:19:10 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Thu, 7 May 2015 14:19:10 +0200
Subject: [Rdo-list] [heat]: stack stays interminably under the status create in progress
In-Reply-To: 
References: <20150507094544.GA31444@t430slt.redhat.com>
Message-ID: 

You can find the rest of the heat logs attached. I launched the stack at 14:02:45.

2015-05-07 14:08 GMT+02:00 ICHIBA Sara :

> Actually, Nova is working: I just spawned a VM with the same flavor and
> image, and when I try to do the same with heat it fails. Below are some logs:
>
> [nova-compute.log, heat-engine.log and the quoted reply from Steven Hardy
> snipped -- identical to the copies above]

[Attachments scrubbed by the archive: heat-api-cfn.log, heat-api-cloudwatch.log, heat-engine.log, heat-manage.log]
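A side note for readers following the thread: Steven's first suggestion is a one-line configuration change. A minimal sketch, assuming the stock RDO layout (config at /etc/heat/heat.conf and the openstack-heat-engine systemd unit -- adjust paths and unit names to your install):

    # /etc/heat/heat.conf -- enable verbose engine logging
    [DEFAULT]
    debug = True

    # restart so the engine picks the option up
    systemctl restart openstack-heat-engine

With debug on, heat-engine logs the keystoneclient requests and responses, which is exactly what exposes the 401 visible later in this thread.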
From ichi.sara at gmail.com  Thu May  7 12:41:22 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Thu, 7 May 2015 14:41:22 +0200
Subject: [Rdo-list] [heat]: stack stays interminably under the status create in progress
In-Reply-To: 
References: <20150507094544.GA31444@t430slt.redhat.com>
Message-ID: 

I found this: https://bugs.launchpad.net/heat/+bug/1405110 . Apparently I'm not the only one to have this problem.

2015-05-07 14:26 GMT+02:00 ICHIBA Sara :

> 2015-05-07 14:08 GMT+02:00 ICHIBA Sara :
>
>> Actually, Nova is working: I just spawned a VM with the same flavor and
>> image, and when I try to do the same with heat it fails. Below are some logs:
>>
>> [nova-compute.log, heat-engine.log and the quoted reply from Steven Hardy
>> snipped -- identical to the copies above]
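The 401 on POST /v3/auth/tokens in the heat-engine log above is Heat's stack-domain user failing to authenticate against the Keystone v3 endpoint, which is the symptom bug 1405110 describes. One way to reproduce the failure outside Heat is to request a v3 token by hand; a sketch with placeholder credentials (the user, domain and password names here are assumptions -- substitute the values from your own heat.conf):

    curl -i -H "Content-Type: application/json" -d '{
        "auth": {"identity": {"methods": ["password"],
          "password": {"user": {"name": "heat_domain_admin",
                                "domain": {"name": "heat_user_domain"},
                                "password": "heat_domain_password"}}}}}' \
      http://192.168.5.33:35357/v3/auth/tokens

A 201 response with an X-Subject-Token header means the credentials are good; a 401 like the one in the log means the domain user Heat is configured with does not actually exist in Keystone.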
From ichi.sara at gmail.com  Thu May  7 12:26:04 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Thu, 7 May 2015 14:26:04 +0200
Subject: [Rdo-list] [heat]: stack stays interminably under the status create in progress
In-Reply-To: 
References: <20150507094544.GA31444@t430slt.redhat.com>
Message-ID: 

2015-05-07 14:08 GMT+02:00 ICHIBA Sara :

> Actually, Nova is working: I just spawned a VM with the same flavor and
> image, and when I try to do the same with heat it fails. Below are some logs:
>
> [nova-compute.log, heat-engine.log and the quoted reply from Steven Hardy
> snipped -- identical to the copies above]

[Attachment scrubbed by the archive: heat-api.log, 15383632 bytes]
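Steven's second check -- booting the same image with plain nova, outside Heat -- would look something like this, reusing the arguments from Sara's template (the instance name heat-check is made up; the image, flavor, key and net-id come from the template):

    nova boot --image "Cirros 0.3.3" --flavor m1.small \
        --key-name userkey \
        --nic net-id=fdf2bb77-a828-401d-969a-736a8028950f \
        heat-check
    nova list

If the instance reaches ACTIVE here but the Heat stack still hangs, the problem lies in Heat's own credentials rather than in Nova -- which is how this thread eventually plays out.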
From tooyama at virtualtech.jp  Thu May  7 13:03:37 2015
From: tooyama at virtualtech.jp (VTJ Youhei Tooyama)
Date: Thu, 7 May 2015 22:03:37 +0900
Subject: [Rdo-list] Just a little mistake on the OpenStack Dashboard
In-Reply-To: <554B52D5.8060004@redhat.com>
References: <554B3A55.4070703@virtualtech.jp> <554B5249.8050508@redhat.com> <554B52D5.8060004@redhat.com>
Message-ID: <44CA3503-FDEF-48DF-A104-AFD59A86B39F@virtualtech.jp>

Hello,
Thank you for the follow-up.

Bug 1218627 is the very thing that I have been looking for. I appreciate it.

Youhei Tooyama

On 2015/05/07, at 20:56, Rich Bowen wrote:

> Matthias, there was an image attached much further down in the email, and
> it appears that it was the same issue as
> https://bugzilla.redhat.com/show_bug.cgi?id=1218627
>
> --Rich
>
>> On 05/07/2015 07:53 AM, Matthias Runge wrote:
>>> On 07/05/15 12:11, Youhei Tooyama wrote:
>>> Hi
>>> I'm trying OpenStack Kilo using the RDO packstack.
>>> It works well on my server.
>>> Thanks!
>>
>> Congrats! Glad to hear.
>>
>>> But, just a little mistake.
>>> What is this?
>> Uhm,
>> what is what?
>>
>> You see an issue with Horizon? What happens or what do you see? What
>> would you expect?
>>
>> Matthias
>
> --
> Rich Bowen - rbowen at redhat.com
> OpenStack Community Liaison
> http://rdoproject.org/

From ichi.sara at gmail.com  Thu May  7 14:16:06 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Thu, 7 May 2015 16:16:06 +0200
Subject: [Rdo-list] Fwd: [heat]: stack stays interminably under the status create in progress
In-Reply-To: <7C1824C61EE769448FCE74CD83F0CB4F5835B093@US70TWXCHMBA11.zam.alcatel-lucent.com>
References: <20150507094544.GA31444@t430slt.redhat.com> <7C1824C61EE769448FCE74CD83F0CB4F5835B093@US70TWXCHMBA11.zam.alcatel-lucent.com>
Message-ID: 

I just received this email from John, applied what he suggested, and it worked. He's right: packstack is using Keystone v2 while Heat tries to use Keystone v3. For those of you running into the same problem, here is a hint on how to solve it.

---------- Forwarded message ----------
From: Haller, John H (John)
Date: 2015-05-07 15:56 GMT+02:00
Subject: RE: [Rdo-list] [heat]: stack stays interminably under the status create in progress
To: ICHIBA Sara

We had a problem using waitcondition, and it turned out that Keystone V3 was not completely enabled. Not sure if this will help your problem or not, but give it a try. If it does solve it, I'm guessing there is some missing configuration in packstack; feel free to pass this on to the list.

$ heat-keystone-setup-domain \
    --stack-user-domain-name heat_user_domain \
    --stack-domain-admin heat_domain_admin \
    --stack-domain-admin-password heat_domain_password

Please update your heat.conf with the following in [DEFAULT]:

stack_user_domain_id=UUID of heat_user_domain
stack_domain_admin=heat_domain_admin
stack_domain_admin_password=heat_domain_password

Restart the Heat API service and Heat engine service.

Regards,
John Haller
Alcatel-Lucent
Naperville, Illinois 60563
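For anyone replaying John's fix end to end: after heat-keystone-setup-domain prints the UUID of heat_user_domain, the heat.conf edit and restart look roughly like the sketch below. It assumes the RDO Kilo unit names and the /etc/heat/heat.conf path, and uses openstack-config from the openstack-utils package for the edits (editing the file by hand works just as well):

    # substitute the UUID printed by heat-keystone-setup-domain above
    openstack-config --set /etc/heat/heat.conf DEFAULT stack_user_domain_id <domain-UUID>
    openstack-config --set /etc/heat/heat.conf DEFAULT stack_domain_admin heat_domain_admin
    openstack-config --set /etc/heat/heat.conf DEFAULT stack_domain_admin_password heat_domain_password
    systemctl restart openstack-heat-api openstack-heat-engine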
2015-05-07 13:58:56.376 3928 AUDIT nova.compute.claims [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] Attempting claim: memory 512 MB, disk 1 GB 2015-05-07 13:58:56.376 3928 AUDIT nova.compute.claims [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] Total memory: 3791 MB, used: 512.00 MB 2015-05-07 13:58:56.377 3928 AUDIT nova.compute.claims [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] memory limit: 5686.50 MB, free: 5174.50 MB 2015-05-07 13:58:56.377 3928 AUDIT nova.compute.claims [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] Total disk: 13 GB, used: 0.00 GB 2015-05-07 13:58:56.378 3928 AUDIT nova.compute.claims [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] disk limit not specified, defaulting to unlimited 2015-05-07 13:58:56.395 3928 AUDIT nova.compute.claims [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] Claim successful 2015-05-07 13:58:56.590 3928 INFO nova.scheduler.client.report [-] Compute_service record updated for ('localhost.localdomain', 'localhost.localdomain') 2015-05-07 13:58:56.787 3928 INFO nova.scheduler.client.report [-] Compute_service record updated for ('localhost.localdomain', 'localhost.localdomain') 2015-05-07 13:58:57.269 3928 INFO nova.virt.libvirt.driver [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] Creating image 2015-05-07 13:59:27.642 3928 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 192.168.5.33:5672 2015-05-07 13:59:27.661 3928 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources 2015-05-07 13:59:27.702 3928 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on 192.168.5.33:5672 2015-05-07 13:59:27.800 3928 INFO nova.scheduler.client.report [-] Compute_service record updated for ('localhost.localdomain', 'localhost.localdomain') 2015-05-07 13:59:28.066 3928 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 3791, total allocated virtual ram (MB): 1024 2015-05-07 13:59:28.066 3928 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 12 2015-05-07 13:59:28.067 3928 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 4, total allocated vcpus: 0 2015-05-07 13:59:28.067 3928 AUDIT nova.compute.resource_tracker [-] PCI stats: [] 2015-05-07 13:59:28.101 3928 INFO nova.scheduler.client.report [-] Compute_service record updated for ('localhost.localdomain', 'localhost.localdomain') 2015-05-07 13:59:28.101 3928 INFO nova.compute.resource_tracker [-] Compute_service record updated for localhost.localdomain:localhost.localdomain 2015-05-07 13:59:47.110 3928 WARNING nova.virt.disk.vfs.guestfs [-] Failed to close augeas aug_close: do_aug_close: you must call 'aug-init' first to initialize Augeas 2015-05-07 13:59:51.364 3928 INFO nova.compute.manager [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] VM Started (Lifecycle Event) 2015-05-07 13:59:51.384 3928 INFO nova.virt.libvirt.driver [-] [instance: 399c6150-2877-4c5f-865f-c7ac4c5c7ed5] Instance spawned successfully. 
2015-05-07 14:00:28.264 3928 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources 2015-05-07 14:00:29.007 3928 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 3791, total allocated virtual ram (MB): 1024 2015-05-07 14:00:29.008 3928 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 12 2015-05-07 14:00:29.009 3928 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 4, total allocated vcpus: 1 2015-05-07 14:00:29.009 3928 AUDIT nova.compute.resource_tracker [-] PCI stats: [] 2015-05-07 14:00:29.048 3928 INFO nova.scheduler.client.report [-] Compute_service record updated for ('localhost.localdomain', 'localhost.localdomain') 2015-05-07 14:00:29.048 3928 INFO nova.compute.resource_tracker [-] Compute_service record updated for localhost.localdomain:localhost.localdomain heat-engine.log 2015-05-07 14:02:45.177 3942 INFO heat.engine.service [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 None] Creating stack my_first_stack 2015-05-07 14:02:45.194 3942 DEBUG stevedore.extension [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] found extension EntryPoint.parse('AWSTemplateFormatVersion.2010-09-09 = heat.engine.cfn.template:CfnTemplate') _load_plugins /usr/lib/python2.7/site-packages/stevedore/extension.py:156 2015-05-07 14:02:45.194 3942 DEBUG stevedore.extension [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] found extension EntryPoint.parse('heat_template_version.2013-05-23 = heat.engine.hot.template:HOTemplate20130523') _load_plugins /usr/lib/python2.7/site-packages/stevedore/extension.py:156 2015-05-07 14:02:45.195 3942 DEBUG stevedore.extension [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] found extension EntryPoint.parse('HeatTemplateFormatVersion.2012-12-12 = heat.engine.cfn.template:HeatTemplate') _load_plugins /usr/lib/python2.7/site-packages/stevedore/extension.py:156 2015-05-07 14:02:45.195 3942 DEBUG stevedore.extension [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] found extension EntryPoint.parse('heat_template_version.2014-10-16 = heat.engine.hot.template:HOTemplate20141016') _load_plugins /usr/lib/python2.7/site-packages/stevedore/extension.py:156 2015-05-07 14:02:45.224 3942 DEBUG heat.engine.parameter_groups [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 None] __init__ /usr/lib/python2.7/site-packages/heat/engine/parameter_groups.py:31 2015-05-07 14:02:45.225 3942 DEBUG heat.engine.parameter_groups [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 None] __init__ /usr/lib/python2.7/site-packages/heat/engine/parameter_groups.py:32 2015-05-07 14:02:45.225 3942 DEBUG heat.engine.parameter_groups [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 None] Validating Parameter Groups. 
validate /usr/lib/python2.7/site-packages/heat/engine/parameter_groups.py:43 2015-05-07 14:02:45.226 3942 DEBUG heat.engine.parameter_groups [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 None] ['OS::stack_id'] validate /usr/lib/python2.7/site-packages/heat/engine/parameter_groups.py:44 2015-05-07 14:02:45.233 3942 INFO heat.engine.resource [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 None] Validating Server "my_instance" 2015-05-07 14:02:45.385 3942 DEBUG heat.common.keystoneclient [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 None] Using stack domain 9ce7896d79914b68aef34d397fd3fde4 __init__ /usr/lib/python2.7/site-packages/heat/common/heat_keystoneclient.py:115 2015-05-07 14:02:45.405 3942 INFO urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] Starting new HTTP connection (1): 192.168.5.33 2015-05-07 14:02:45.448 3942 DEBUG urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] "GET /v2/fac261cd98974411a9b2e977cd9ec876/os-keypairs/userkey HTTP/1.1" 200 674 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:357 2015-05-07 14:02:45.486 3942 DEBUG glanceclient.common.http [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] curl -i -X GET -H 'User-Agent: python-glanceclient' -H 'Content-Type: application/octet-stream' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'X-Auth-Token: {SHA1}bc08e056cd78f34e9ada3dd99ceae37d33daf3f0' http://192.168.5.33:9292/v1/images/detail?limit=20&name=cirros log_curl_request /usr/lib/python2.7/site-packages/glanceclient/common/http.py:122 2015-05-07 14:02:45.488 3942 INFO urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] Starting new HTTP connection (1): 192.168.5.33 2015-05-07 14:02:45.770 3942 DEBUG urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] "GET /v1/images/detail?limit=20&name=cirros HTTP/1.1" 200 481 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:357 2015-05-07 14:02:45.771 3942 DEBUG glanceclient.common.http [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] HTTP/1.1 200 OK date: Thu, 07 May 2015 12:02:45 GMT content-length: 481 content-type: application/json; charset=UTF-8 x-openstack-request-id: req-c62b407b-0d8f-43c6-a8be-df02015dc846 {"images": [{"status": "active", "deleted_at": null, "name": "cirros", "deleted": false, "container_format": "bare", "created_at": "2015-05-06T14:27:54", "disk_format": "qcow2", "updated_at": "2015-05-06T15:01:15", "min_disk": 0, "protected": false, "id": "d80b5a24-2567-438f-89f8-b381a6716887", "min_ram": 0, "checksum": "133eae9fb1c98f45894a4e60d8736619", "owner": "3740df0f18754509a252738385d375b9", "is_public": true, "virtual_size": null, "properties": {}, "size": 13200896}]} log_http_response /usr/lib/python2.7/site-packages/glanceclient/common/http.py:135 2015-05-07 14:02:46.049 3942 DEBUG keystoneclient.auth.identity.v3 [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] Making authentication request to http://192.168.5.33:35357/v3/auth/tokens get_auth_ref /usr/lib/python2.7/site-packages/keystoneclient/auth/identity/v3.py:117 2015-05-07 14:02:46.052 3942 INFO urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] Starting new HTTP connection (1): 192.168.5.33 2015-05-07 14:02:46.247 3942 DEBUG urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] "POST /v3/auth/tokens HTTP/1.1" 201 8275 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:357 2015-05-07 14:02:46.250 3942 DEBUG keystoneclient.session [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] REQ: curl -i -X POST 
http://192.168.5.33:35357/v3/OS-TRUST/trusts -H "User-Agent: python-keystoneclient" -H "Content-Type: application/json" -H "X-Auth-Token: TOKEN_REDACTED" -d '{"trust": {"impersonation": true, "project_id": "fac261cd98974411a9b2e977cd9ec876", "trustor_user_id": "24f6dac3f1444d89884c1b1977bb0d87", "roles": [{"name": "heat_stack_owner"}], "trustee_user_id": "4f7e2e76441e483982fb863ed02fe63e"}}' _http_log_request /usr/lib/python2.7/site-packages/keystoneclient/session.py:155 2015-05-07 14:02:46.252 3942 INFO urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] Starting new HTTP connection (1): 192.168.5.33 2015-05-07 14:02:46.381 3942 DEBUG urllib3.connectionpool [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] "POST /v3/OS-TRUST/trusts HTTP/1.1" 201 717 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:357 2015-05-07 14:02:46.383 3942 DEBUG keystoneclient.session [req-6b9bcd97-7a13-4bc3-984c-470e6f5949b9 ] RESP: [201] {'content-length': '717', 'vary': 'X-Auth-Token', 'server': 'Apache/2.4.6 (CentOS)', 'connection': 'close', 'date': 'Thu, 07 May 2015 12:02:46 GMT', 'content-type': 'application/json'} RESP BODY: {"trust": {"impersonation": true, "roles_links": {"self": " http://192.168.5.33:35357/v3/OS-TRUST/trusts/96e0e32b7c504d3f8f9b82fcc2658e5f/roles", "previous": null, "next": null}, "deleted_at": null, "trustor_user_id": "24f6dac3f1444d89884c1b1977bb0d87", "links": {"self": " http://192.168.5.33:35357/v3/OS-TRUST/trusts/96e0e32b7c504d3f8f9b82fcc2658e5f"}, "roles": [{"id": "cf346090e5a042ebac674bcfe14f4076", "links": {"self": " http://192.168.5.33:35357/v3/roles/cf346090e5a042ebac674bcfe14f4076"}, "name": "heat_stack_owner"}], "remaining_uses": null, "expires_at": null, "trustee_user_id": "4f7e2e76441e483982fb863ed02fe63e", "project_id": "fac261cd98974411a9b2e977cd9ec876", "id": "96e0e32b7c504d3f8f9b82fcc2658e5f"}} _http_log_response /usr/lib/python2.7/site-packages/keystoneclient/session.py:182 2015-05-07 14:02:46.454 3942 DEBUG heat.engine.stack_lock [-] Engine b3d3b0e1-5c47-4912-94e9-1b75159b9b10 acquired lock on stack bb7d16c4-b73f-428d-9dbd-5b089748f374 acquire /usr/lib/python2.7/site-packages/heat/engine/stack_lock.py:72 2015-05-07 14:02:46.456 3942 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 192.168.5.33:5672 2015-05-07 14:02:46.474 3942 DEBUG keystoneclient.auth.identity.v3 [-] Making authentication request to http://192.168.5.33:35357/v3/auth/tokens get_auth_ref /usr/lib/python2.7/site-packages/keystoneclient/auth/identity/v3.py:117 2015-05-07 14:02:46.477 3942 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.5.33 2015-05-07 14:02:46.481 3942 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on 192.168.5.33:5672 2015-05-07 14:02:46.666 3942 DEBUG urllib3.connectionpool [-] "POST /v3/auth/tokens HTTP/1.1" 401 114 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:357 2015-05-07 14:02:46.668 3942 DEBUG keystoneclient.session [-] Request returned failure status: 401 request /usr/lib/python2.7/site-packages/keystoneclient/session.py:345 2015-05-07 14:02:46.668 3942 DEBUG keystoneclient.v3.client [-] Authorization failed. 
get_raw_token_from_identity_service
/usr/lib/python2.7/site-packages/keystoneclient/v3/client.py:267
2015-05-07 14:02:46.673 3942 DEBUG heat.engine.stack_lock [-] Engine
b3d3b0e1-5c47-4912-94e9-1b75159b9b10 released lock on stack
bb7d16c4-b73f-428d-9dbd-5b089748f374 release
/usr/lib/python2.7/site-packages/heat/engine/stack_lock.py:122
2015-05-07 14:03:33.623 3942 INFO oslo.messaging._drivers.impl_rabbit
[req-18ad2c0b-ab52-4793-99a5-255a010d83fe ] Connecting to AMQP server on
192.168.5.33:5672
2015-05-07 14:03:33.645 3942 INFO oslo.messaging._drivers.impl_rabbit
[req-18ad2c0b-ab52-4793-99a5-255a010d83fe ] Connected to AMQP server on
192.168.5.33:5672

2015-05-07 11:45 GMT+02:00 Steven Hardy :

On Tue, May 05, 2015 at 01:18:17PM +0200, ICHIBA Sara wrote:
> hello,
>
> here is my HOT template, it's very basic:
>
> heat_template_version: 2013-05-23
>
> description: Simple template to deploy a single compute instance
>
> resources:
>   my_instance:
>     type: OS::Nova::Server
>     properties:
>       image: Cirros 0.3.3
>       flavor: m1.small
>       key_name: userkey
>       networks:
>         - network: fdf2bb77-a828-401d-969a-736a8028950f
>
> for the logs please find them attached.

These logs are a little confusing - it looks like you failed to create the
stack due to some validation errors, then tried again and did a stack-check
and a stack resume?

Can you please set debug = True in the [DEFAULT] section of your heat.conf,
restart heat-engine and try again please?

Also, some basic checks are:

1. When the stack is CREATE_IN_PROGRESS, what does nova list show for the
instance?

2. Is it possible to boot an instance using nova boot, using the same
arguments (image, flavor, key etc) that you specify in the heat template?

I suspect that Heat is not actually the problem here, and that some part of
Nova is either misconfigured or not running, but I can't prove that without
seeing the nova CLI output and/or the nova logs.

Steve
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tom at buskey.name  Thu May 7 14:29:47 2015
From: tom at buskey.name (Tom Buskey)
Date: Thu, 7 May 2015 10:29:47 -0400
Subject: [Rdo-list] packaging lifecycles
In-Reply-To:
References:
Message-ID:

There are a number of fixes that have been put into git for RDO-Juno, but
the rpms have not been updated. Is there going to be another build of the
rpms to get these changes, so yum update/packstack applies them?

I've resorted to patching individual files.

On Wed, May 6, 2015 at 6:39 PM, Haïkel wrote:

> Sorry, something fell on my keyboard and sent the message while typing it.
>
> As stated, we follow upstream release cycle:
> https://wiki.openstack.org/wiki/Releases
>
> And if you want to help in maintaining RDO, you're more than welcome
> https://www.rdoproject.org/packaging/rdo-packaging.html
> Our long-term goal is to fully open up RDO contribution process, and
> most of it is done.
> RDO Juno already includes features contributed by non-redhatters.
>
> We hold a packaging meeting every Wednesday (notification sent on the
> list 2 days prior),
> feel free to join us.
>
> Regards,
> H.
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
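To apply the heat.conf change John Haller describes earlier in this thread
without editing the file by hand, the settings can also be scripted on an
RDO node. A minimal sketch, assuming openstack-utils (which provides
openstack-config) is installed, and where <domain-uuid> is a placeholder
for the UUID printed by heat-keystone-setup-domain:

$ openstack-config --set /etc/heat/heat.conf DEFAULT stack_user_domain_id <domain-uuid>
$ openstack-config --set /etc/heat/heat.conf DEFAULT stack_domain_admin heat_domain_admin
$ openstack-config --set /etc/heat/heat.conf DEFAULT stack_domain_admin_password heat_domain_password
# restart both services so heat-engine picks up the new stack-domain credentials
$ systemctl restart openstack-heat-api openstack-heat-engine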
From rbowen at redhat.com  Thu May 7 19:11:08 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Thu, 07 May 2015 15:11:08 -0400
Subject: [Rdo-list] Fwd: BOF/Meetup space at Summit?
In-Reply-To: <8CB4MFDP_554b820e8fa83_37813f87b74cd320664633_sprut@zendesk.com>
References: <8CB4MFDP_554b820e8fa83_37813f87b74cd320664633_sprut@zendesk.com>
Message-ID: <554BB8CC.2070308@redhat.com>

FYI, we have been assigned a slot for our BoF session. It will be
Thursday, 9:50am - 10:30am, and is listed in the schedule at
https://www.openstack.org/summit/vancouver-2015/schedule/

Note that this is during regular conference sessions, so be sure to
check that time period on the schedule, and try to make it to this event
if at all possible.

Thanks!

-------- Forwarded Message --------
...

May 7, 10:17

Hi Rich,

Thanks!

Your BoF session will be on Thursday from 9:50am - 10:30am in East
Building, Room 10. This session will be added to the official schedule
in the next couple of days.

From pmyers at redhat.com  Thu May 7 19:13:37 2015
From: pmyers at redhat.com (Perry Myers)
Date: Thu, 07 May 2015 15:13:37 -0400
Subject: Re: [Rdo-list] Fwd: BOF/Meetup space at Summit?
In-Reply-To: <554BB8CC.2070308@redhat.com>
References: <8CB4MFDP_554b820e8fa83_37813f87b74cd320664633_sprut@zendesk.com>
	<554BB8CC.2070308@redhat.com>
Message-ID: <554BB961.2030800@redhat.com>

On 05/07/2015 03:11 PM, Rich Bowen wrote:
> FYI, we have been assigned a slot for our BoF session. It will be
> Thursday, 9:50am - 10:30am, and is listed in the schedule at
> https://www.openstack.org/summit/vancouver-2015/schedule/
>
> Note that this is during regular conference sessions, so be sure to
> check that time period on the schedule, and try to make it to this event
> if at all possible.

Awesome!
This will be much better than the impromptu meeting we had on the
floor in Paris :)

Thanks for chasing down getting an official BoF for RDO :)

Perry

> Thanks!
>
> -------- Forwarded Message --------
> ...
>
> May 7, 10:17
>
> Hi Rich,
>
> Thanks!
>
> Your BoF session will be on Thursday from 9:50am - 10:30am in East
> Building, Room 10. This session will be added to the official schedule
> in the next couple of days.
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From Tim.Bell at cern.ch  Thu May 7 19:26:26 2015
From: Tim.Bell at cern.ch (Tim Bell)
Date: Thu, 7 May 2015 19:26:26 +0000
Subject: [Rdo-list] Fwd: BOF/Meetup space at Summit?
In-Reply-To: <554BB961.2030800@redhat.com>
References: <8CB4MFDP_554b820e8fa83_37813f87b74cd320664633_sprut@zendesk.com>
	<554BB8CC.2070308@redhat.com> <554BB961.2030800@redhat.com>
Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5010A158932@CERNXCHG44.cern.ch>

> -----Original Message-----
> From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On
> Behalf Of Perry Myers
> Sent: 07 May 2015 21:14
> To: Rich Bowen; rdo-list at redhat.com
> Subject: Re: [Rdo-list] Fwd: BOF/Meetup space at Summit?
>
> On 05/07/2015 03:11 PM, Rich Bowen wrote:
> > FYI, we have been assigned a slot for our BoF session. It will be
> > Thursday, 9:50am - 10:30am, and is listed in the schedule at
> > https://www.openstack.org/summit/vancouver-2015/schedule/
> >
> > Note that this is during regular conference sessions, so be sure to
> > check that time period on the schedule, and try to make it to this
> > event if at all possible.
>
> Awesome! This will be much better than the impromptu meeting we had on
> the floor in Paris :)
>
> Thanks for chasing down getting an official BoF for RDO :)

Great... do we get chairs too :-) ?

Tim

> Perry

From rbowen at redhat.com  Thu May 7 19:28:41 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Thu, 07 May 2015 15:28:41 -0400
Subject: Re: [Rdo-list] Fwd: BOF/Meetup space at Summit?
In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E5010A158932@CERNXCHG44.cern.ch>
References: <8CB4MFDP_554b820e8fa83_37813f87b74cd320664633_sprut@zendesk.com>
	<554BB8CC.2070308@redhat.com> <554BB961.2030800@redhat.com>
	<5D7F9996EA547448BC6C54C8C5AAF4E5010A158932@CERNXCHG44.cern.ch>
Message-ID: <554BBCE9.2000502@redhat.com>

On 05/07/2015 03:26 PM, Tim Bell wrote:
>> On 05/07/2015 03:11 PM, Rich Bowen wrote:
>>> FYI, we have been assigned a slot for our BoF session. It will be
>>> Thursday, 9:50am - 10:30am, and is listed in the schedule at
>>> https://www.openstack.org/summit/vancouver-2015/schedule/
>>>
>>> Note that this is during regular conference sessions, so be sure to
>>> check that time period on the schedule, and try to make it to this
>>> event if at all possible.
>>
>> Awesome! This will be much better than the impromptu meeting we had on
>> the floor in Paris :)
>>
>> Thanks for chasing down getting an official BoF for RDO :)
>
> Great... do we get chairs too :-) ?

Yeah! An actual room with chairs, and a projector and mics. We're
big-time now!

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From rbowen at redhat.com  Thu May 7 19:33:07 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Thu, 07 May 2015 15:33:07 -0400
Subject: [Rdo-list] Fwd: [openstack-community] Vancouver Summit
	registration closes May 16
In-Reply-To: <8D068FCD-BE13-4AF7-BC11-48C1E9EF1A11@openstack.org>
References: <8D068FCD-BE13-4AF7-BC11-48C1E9EF1A11@openstack.org>
Message-ID: <554BBDF3.4050702@redhat.com>

I'm sure that if you're going, you're already registered, but, just in
case ...

-------- Forwarded Message --------
Subject: [openstack-community] Vancouver Summit registration closes May 16
Date: Thu, 7 May 2015 11:19:13 -0500
From: Allison Price
To: community at lists.openstack.org

Hi everyone,

If you're planning to attend the May OpenStack Summit in Vancouver - it's
time to register! Prices will increase May 13 at 12am PT. Online
registration will officially close on Saturday, May 16 - so don't miss out!
The Vancouver Summit will feature OpenStack users including Walmart, FICO
and eBay Inc.

REGISTER HERE!

Important Summit links:
* Format & Passes
* Schedule
* Summit FAQ
* Sponsors

Please visit openstack.org/summit for full Summit details, and contact
summit at openstack.org if you have any questions.

Cheers,
Allison

Allison Price
OpenStack Marketing
allison at openstack.org

-------------- next part --------------
_______________________________________________
Community mailing list
Community at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/community

From Jan.van.Eldik at cern.ch  Thu May 7 19:33:45 2015
From: Jan.van.Eldik at cern.ch (Jan van Eldik)
Date: Thu, 7 May 2015 21:33:45 +0200
Subject: [Rdo-list] Fwd: BOF/Meetup space at Summit?
In-Reply-To: <554BBCE9.2000502@redhat.com>
References: <8CB4MFDP_554b820e8fa83_37813f87b74cd320664633_sprut@zendesk.com>
	<554BB8CC.2070308@redhat.com> <554BB961.2030800@redhat.com>
	<5D7F9996EA547448BC6C54C8C5AAF4E5010A158932@CERNXCHG44.cern.ch>
	<554BBCE9.2000502@redhat.com>
Message-ID: <554BBE19.5000901@cern.ch>

>>> On 05/07/2015 03:11 PM, Rich Bowen wrote:
>>>> FYI, we have been assigned a slot for our BoF session. It will be
>>>> Thursday, 9:50am - 10:30am, and is listed in the schedule at
>>>> https://www.openstack.org/summit/vancouver-2015/schedule/
>>>>
>>>> Note that this is during regular conference sessions, so be sure to
>>>> check that time period on the schedule, and try to make it to this
>>>> event if at all possible.
>>>
>>> Awesome! This will be much better than the impromptu meeting we had
>>> on the floor in Paris :)
>>>
>>> Thanks for chasing down getting an official BoF for RDO :)
>>
>> Great... do we get chairs too :-) ?
>
> Yeah! An actual room with chairs, and a projector and mics. We're
> big-time now!

Registered!

From stdake at cisco.com  Thu May 7 23:31:15 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Thu, 7 May 2015 23:31:15 +0000
Subject: [Rdo-list] [heat]: stack stays interminably under the status
	create in progress
In-Reply-To:
References: <20150507094544.GA31444@t430slt.redhat.com>
Message-ID:

Kilo heat works in Kolla. So this must be a configuration problem. I
noticed your logs have an authentication failed issue.

Recommend filing a bug against the RDO project (under community) on
bugzilla.redhat.com

Regards
-steve

From: ICHIBA Sara
Date: Thursday, May 7, 2015 at 5:41 AM
To: Steven Hardy, "rdo-list at redhat.com"
Subject: Re: [Rdo-list] [heat]: stack stays interminably under the status
create in progress

I found this https://bugs.launchpad.net/heat/+bug/1405110 . Apparently I'm
not the only one to have this problem.

2015-05-07 14:26 GMT+02:00 ICHIBA Sara :

2015-05-07 14:08 GMT+02:00 ICHIBA Sara :

Actually, Nova is working: I just spawned a VM with the same flavor and
image, and when I try to do the same with heat it fails. Below are some
logs:

nova-compute.log

2015-05-07 13:58:56.208 3928 AUDIT nova.compute.manager
[req-b57bfc38-9d77-48fc-8185-474d1f9076a6 None] [instance:
399c6150-2877-4c5f-865f-c7ac4c5c7ed5] Starting instance...
[... duplicate nova-compute.log and heat-engine.log output trimmed; see the same logs earlier in this thread ...]
2015-05-07 11:45 GMT+02:00 Steven Hardy :

On Tue, May 05, 2015 at 01:18:17PM +0200, ICHIBA Sara wrote:
> hello,
>
> here is my HOT template, it's very basic:
>
> heat_template_version: 2013-05-23
>
> description: Simple template to deploy a single compute instance
>
> resources:
>   my_instance:
>     type: OS::Nova::Server
>     properties:
>       image: Cirros 0.3.3
>       flavor: m1.small
>       key_name: userkey
>       networks:
>         - network: fdf2bb77-a828-401d-969a-736a8028950f
>
> for the logs please find them attached.

These logs are a little confusing - it looks like you failed to create the
stack due to some validation errors, then tried again and did a stack-check
and a stack resume?

Can you please set debug = True in the [DEFAULT] section of your heat.conf,
restart heat-engine and try again please?

Also, some basic checks are:

1. When the stack is CREATE_IN_PROGRESS, what does nova list show for the
instance?

2. Is it possible to boot an instance using nova boot, using the same
arguments (image, flavor, key etc) that you specify in the heat template?

I suspect that Heat is not actually the problem here, and that some part of
Nova is either misconfigured or not running, but I can't prove that without
seeing the nova CLI output and/or the nova logs.

Steve
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From christian at berendt.io  Fri May 8 09:05:05 2015
From: christian at berendt.io (Christian Berendt)
Date: Fri, 08 May 2015 11:05:05 +0200
Subject: [Rdo-list] Fwd: BOF/Meetup space at Summit?
In-Reply-To: <554BB8CC.2070308@redhat.com>
References: <8CB4MFDP_554b820e8fa83_37813f87b74cd320664633_sprut@zendesk.com>
	<554BB8CC.2070308@redhat.com>
Message-ID: <554C7C41.9060107@berendt.io>

On 05/07/2015 09:11 PM, Rich Bowen wrote:
> FYI, we have been assigned a slot for our BoF session. It will be
> Thursday, 9:50am - 10:30am, and is listed in the schedule at
> https://www.openstack.org/summit/vancouver-2015/schedule/

Use http://sched.co/3HRs to directly view the entry in the schedule.

Christian.

--
Christian Berendt
Cloud Solution Architect
Mail: berendt at b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

From bnemec at redhat.com  Fri May 8 14:42:26 2015
From: bnemec at redhat.com (Ben Nemec)
Date: Fri, 08 May 2015 09:42:26 -0500
Subject: [Rdo-list] Moving Docs builds of RDO-Manager
In-Reply-To: <554B2CCF.2070705@redhat.com>
References: <554B2CCF.2070705@redhat.com>
Message-ID: <554CCB52.4090509@redhat.com>

Sorry, forgot to come back to this yesterday. Thoughts inline.
On 05/07/2015 04:13 AM, Jaromir Coufal wrote:
> Hi Ben,
>
> I wanted to sync with you if we could coordinate the movement of
> documentation from:
>
> https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/*
>
> to:
>
> https://repos.fedorapeople.org/repos/openstack-m/docs/*
>
> with following sub-directories:
> * ../docs/master
> * ../docs/sprint4
> * ../docs/sprint5 (doesn't exist yet)
> etc.
>
> Since we don't have any date for testing day yet, I think we can do it
> this week.

True, but we do have a bunch of people working on demos for the end of
the sprint, so I'd prefer to hold off until Monday. At this point one
day more or less isn't going to hurt anything.

> For old sites, I don't think that we can somehow control redirects, so I
> would suggest to place a temporary index.html file there with information
> that the docs moved and direct people to the new location (example:
> http://paste.openstack.org/show/215975/).

+1. In fact, I'd suggest we replace every html file in the old location
with such a redirect. That way people (like me :-) who always have the
docs open and just refresh them periodically will get sent to the new
location automatically.

So if there are no objections I'll look into updating all the build
targets on Monday and get the redirects in place.

-Ben

From jcoufal at redhat.com  Fri May 8 14:48:34 2015
From: jcoufal at redhat.com (Jaromir Coufal)
Date: Fri, 08 May 2015 16:48:34 +0200
Subject: Re: [Rdo-list] Moving Docs builds of RDO-Manager
In-Reply-To: <554CCB52.4090509@redhat.com>
References: <554B2CCF.2070705@redhat.com> <554CCB52.4090509@redhat.com>
Message-ID: <554CCCC2.8060801@redhat.com>

On 08/05/15 16:42, Ben Nemec wrote:
> Sorry, forgot to come back to this yesterday. Thoughts inline.
>
> On 05/07/2015 04:13 AM, Jaromir Coufal wrote:
>> Hi Ben,
>>
>> I wanted to sync with you if we could coordinate the movement of
>> documentation from:
>>
>> https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/*
>>
>> to:
>>
>> https://repos.fedorapeople.org/repos/openstack-m/docs/*
>>
>> with following sub-directories:
>> * ../docs/master
>> * ../docs/sprint4
>> * ../docs/sprint5 (doesn't exist yet)
>> etc.
>>
>> Since we don't have any date for testing day yet, I think we can do it
>> this week.
>
> True, but we do have a bunch of people working on demos for the end of
> the sprint, so I'd prefer to hold off until Monday. At this point one
> day more or less isn't going to hurt anything.
>
>> For old sites, I don't think that we can somehow control redirects, so I
>> would suggest to place a temporary index.html file there with information
>> that the docs moved and direct people to the new location (example:
>> http://paste.openstack.org/show/215975/).
>
> +1. In fact, I'd suggest we replace every html file in the old location
> with such a redirect. That way people (like me :-) who always have the
> docs open and just refresh them periodically will get sent to the new
> location automatically.
>
> So if there are no objections I'll look into updating all the build
> targets on Monday and get the redirects in place.
>
> -Ben

+1 completely agree.
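For the record, such a redirect page can be tiny. A sketch of what the
temporary index.html could look like; this is only an assumption of the
linked paste's shape, with the new master docs location as the target:

$ cat > index.html <<'EOF'
<html>
  <head>
    <meta http-equiv="refresh"
          content="0; url=https://repos.fedorapeople.org/repos/openstack-m/docs/master/">
    <title>The RDO-Manager docs have moved</title>
  </head>
  <body>
    The documentation moved to
    https://repos.fedorapeople.org/repos/openstack-m/docs/master/
  </body>
</html>
EOF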
Thanks Ben
-- J

From whayutin at redhat.com  Fri May 8 16:13:06 2015
From: whayutin at redhat.com (whayutin)
Date: Fri, 08 May 2015 12:13:06 -0400
Subject: [Rdo-list] [CI] trystack, nodes are unreachable via ping
Message-ID: <1431101586.2890.43.camel@redhat.com>

CI is down atm
-------------- next part --------------
+--------------------------------------+-------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------------------+
| ID                                   | Name                                | Status | Task State | Power State | Networks                                                                                     |
+--------------------------------------+-------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------------------+
| 0e0a6b53-3934-4ad2-b6ed-a1f107bc8b27 | rdo-pksk-2ix85-rhos-ci-7-compute-1  | ACTIVE | -          | Running     | default=172.31.50.234, 8.21.28.141; packstack_int=172.31.40.226; packstack_pub=172.31.30.114 |
| 0f785e34-1620-47bd-89fb-59be77ac0b6b | rdo-pksk-2ix85-rhos-ci-7-compute-2  | ACTIVE | -          | Running     | default=172.31.50.233, 8.21.28.139; packstack_int=172.31.40.225; packstack_pub=172.31.30.113 |
| e9c5a700-48c1-4f75-972b-ed9ec103db7e | rdo-pksk-2ix85-rhos-ci-7-controller | ACTIVE | -          | Running     | default=172.31.50.232, 8.21.28.137; packstack_int=172.31.40.224; packstack_pub=172.31.30.112 |
| 0c97c602-c5f3-456c-a3c9-0c96aae3b070 | rdo-pksk-2ix85-rhos-ci-7-network    | ACTIVE | -          | Running     | default=172.31.50.235, 8.21.28.147; packstack_int=172.31.40.227; packstack_pub=172.31.30.115 |
| cf2b06ae-15d1-4969-ae4d-a489ff4fed4a | rdo-pksk-2ix85-rhos-ci-7-tester     | ACTIVE | -          | Running     | default=172.31.50.236, 8.21.28.149; packstack_int=172.31.40.228; packstack_pub=172.31.30.116 |
| d18220c5-6598-4594-a87d-ec2265e2eabb | slave04-permanent                   | ACTIVE | -          | Running     | default=172.31.50.62, 8.21.28.38                                                             |
+--------------------------------------+-------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------------------+
[whayutin at unknown28D244B4CE31 khaleesi-settings]$ ping 8.21.28.141
PING 8.21.28.141 (8.21.28.141) 56(84) bytes of data.
From 8.21.28.141 icmp_seq=1 Destination Host Unreachable
From 8.21.28.141 icmp_seq=5 Destination Host Unreachable
^C
--- 8.21.28.141 ping statistics ---
9 packets transmitted, 0 received, +2 errors, 100% packet loss, time 8002ms
pipe 4
[whayutin at unknown28D244B4CE31 khaleesi-settings]$ ping 8.21.28.137
PING 8.21.28.137 (8.21.28.137) 56(84) bytes of data.
From 8.21.28.35 icmp_seq=1 Destination Host Unreachable
^C
--- 8.21.28.137 ping statistics ---
4 packets transmitted, 0 received, +1 errors, 100% packet loss, time 3000ms
pipe 4

From aortega at redhat.com  Fri May 8 16:17:06 2015
From: aortega at redhat.com (Alvaro Lopez Ortega)
Date: Fri, 8 May 2015 18:17:06 +0200
Subject: Re: [Rdo-list] [CI] trystack, nodes are unreachable via ping
In-Reply-To: <1431101586.2890.43.camel@redhat.com>
References: <1431101586.2890.43.camel@redhat.com>
Message-ID: <1BAA6324-3FBA-49EF-B7DE-998D83B0C5D4@redhat.com>

Copying Dan Radez. He's currently investigating the issue.
Best,
Alvaro

> On 08 May 2015, at 18:13, whayutin wrote:
>
> CI is down atm
>

From dradez at redhat.com  Fri May 8 18:50:34 2015
From: dradez at redhat.com (Dan Radez)
Date: Fri, 08 May 2015 14:50:34 -0400
Subject: Re: [Rdo-list] [CI] trystack, nodes are unreachable via ping
In-Reply-To: <1BAA6324-3FBA-49EF-B7DE-998D83B0C5D4@redhat.com>
References: <1431101586.2890.43.camel@redhat.com>
	<1BAA6324-3FBA-49EF-B7DE-998D83B0C5D4@redhat.com>
Message-ID: <554D057A.8070805@redhat.com>

On 05/08/2015 12:17 PM, Alvaro Lopez Ortega wrote:
> Copying Dan Radez. He's currently investigating the issue.
>
> Best,
> Alvaro
>
>> On 08 May 2015, at 18:13, whayutin wrote:
>>
>> CI is down atm
>>

Things are looking recovered. I'm not sure if the issue was that the
cleanup scripts were failing because someone used a non-UTF8 char, or if
it was the ipv6 thing again.

I'm going to try to disable ipv6 in network creation so people don't
try it.

Dan

From dradez at redhat.com  Fri May 8 18:57:46 2015
From: dradez at redhat.com (Dan Radez)
Date: Fri, 08 May 2015 14:57:46 -0400
Subject: Re: [Rdo-list] [CI] trystack, nodes are unreachable via ping
In-Reply-To: <1BAA6324-3FBA-49EF-B7DE-998D83B0C5D4@redhat.com>
References: <1431101586.2890.43.camel@redhat.com>
	<1BAA6324-3FBA-49EF-B7DE-998D83B0C5D4@redhat.com>
Message-ID: <554D072A.2060907@redhat.com>

On 05/08/2015 12:17 PM, Alvaro Lopez Ortega wrote:
> Copying Dan Radez. He's currently investigating the issue.
>
> Best,
> Alvaro
>
>> On 08 May 2015, at 18:13, whayutin wrote:
>>
>> CI is down atm
>>
-- Christian Berendt Cloud Solution Architect Mail: berendt at b1-systems.de B1 Systems GmbH Osterfeldstra?e 7 / 85088 Vohburg / http://www.b1-systems.de GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537 From ak at cloudssky.com Fri May 8 20:44:11 2015 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Fri, 8 May 2015 22:44:11 +0200 Subject: [Rdo-list] Repository URL In-Reply-To: References: Message-ID: Marius, check /etc/redhat-release, I'd to adapt it like this: [centos at gate ~]$ cat /etc/redhat-release CentOS Linux release 7.1.1503 (Core) instead this: [centos at centos1 ~]$ cat /etc/redhat-release Derived from Red Hat Enterprise Linux 7.1 (Source) On Fri, May 8, 2015 at 9:29 PM, Marius Cornea wrote: > Hi all, > > On Centos7 'sudo yum install -y > https://rdoproject.org/repos/rdo-release.rpm' results in the following > rdo-release.repo file: > > [openstack-juno] > name=OpenStack Juno Repository > baseurl= > http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-Derived > from Red Hat Enterprise Linux 7.1 (Source)/ > enabled=1 > skip_if_unavailable=0 > gpgcheck=1 > gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Juno > > Looks like the baseurl is wrongly built. I'm running it on a CentOS7 > generic cloud image. > > Thanks, > Marius > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stdake at cisco.com Sat May 9 16:57:14 2015 From: stdake at cisco.com (Steven Dake (stdake)) Date: Sat, 9 May 2015 16:57:14 +0000 Subject: [Rdo-list] Repository URL In-Reply-To: References: Message-ID: This is a known problem in CentOS 7.1. They fixed it in their docker images which come out of CI/CD IIRC. I thought their cloud images were coming out of CI/CD so this problem should be fixed and an image available but I could be mistaken. Recommend asking the centos list. Regards -steve On 5/8/15, 12:29 PM, "Marius Cornea" wrote: >Hi all, > >On Centos7 'sudo yum install -y >https://rdoproject.org/repos/rdo-release.rpm' results in the following >rdo-release.repo file: > >[openstack-juno] >name=OpenStack Juno Repository >baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel- >Derived >from Red Hat Enterprise Linux 7.1 (Source)/ >enabled=1 >skip_if_unavailable=0 >gpgcheck=1 >gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Juno > >Looks like the baseurl is wrongly built. I'm running it on a CentOS7 >generic cloud image. > >Thanks, >Marius > >_______________________________________________ >Rdo-list mailing list >Rdo-list at redhat.com >https://www.redhat.com/mailman/listinfo/rdo-list > >To unsubscribe: rdo-list-unsubscribe at redhat.com From apevec at gmail.com Sat May 9 21:27:40 2015 From: apevec at gmail.com (Alan Pevec) Date: Sat, 9 May 2015 23:27:40 +0200 Subject: [Rdo-list] Repository URL In-Reply-To: References: Message-ID: 2015-05-09 18:57 GMT+02:00 Steven Dake (stdake) : > This is a known problem in CentOS 7.1. They fixed it in their docker > images which come out of CI/CD IIRC. I thought their cloud images were > coming out of CI/CD so this problem should be fixed and an image available > but I could be mistaken. Recommend asking the centos list. 
Yes, centos 7.1 images have been updated, or you can yum update to get centos-release-7-1.1503.el7.centos.2.8: http://lists.centos.org/pipermail/centos-announce/2015-April/021010.html Cheers, Alan From stdake at cisco.com Sun May 10 03:04:24 2015 From: stdake at cisco.com (Steven Dake (stdake)) Date: Sun, 10 May 2015 03:04:24 +0000 Subject: [Rdo-list] nova-novncproxy is DOA with current state of repos in Icehouse Message-ID: See: https://bugzilla.redhat.com/show_bug.cgi?id=1220081 Regards, -steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Sun May 10 12:45:51 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Sun, 10 May 2015 14:45:51 +0200 Subject: [Rdo-list] packaging lifecycles In-Reply-To: References: Message-ID: @Tom: could you point out these packages ? I CC'ed Alan, he'll be able to push the pending updates. Regards, H. From moreira.belmiro.email.lists at gmail.com Sun May 10 14:17:03 2015 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Sun, 10 May 2015 16:17:03 +0200 Subject: [Rdo-list] LXC containers with RDO Message-ID: Hi, I started to deploy a cell in OpenStack nova to use LXC containers and I just found this RedHat article (https://access.redhat.com/articles/1365153) that mentions that "libvirt-lxc" is deprecated in RHL 7.1. Will the OpenStack nova libvirt LXC driver continue to be supported in RDO? thanks, Belmiro -------------- next part -------------- An HTML attachment was scrubbed... URL: From ak at cloudssky.com Sun May 10 23:10:36 2015 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Mon, 11 May 2015 01:10:36 +0200 Subject: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 In-Reply-To: References: Message-ID: Steve, Thanks for your kind advice. I'm trying to go first through the quick start for magnum with devstack on ubuntu and I'm also following this guide to create a bay with 2 nodes: http://git.openstack.org/cgit/openstack/magnum/tree/doc/source/dev/dev-quickstart.rst I got somehow far, but by running this step to run the service tp provide a discoverable endpoint for the redis sentinels in the cluster: magnum service-create --manifest ./redis-sentinel-service.yaml --bay testbay I'm getting: ERROR: Invalid resource state. (HTTP 409) In the console, I see: 2015-05-10 22:19:44.010 4967 INFO oslo_messaging._drivers.impl_rabbit [-] Connected to AMQP server on 127.0.0.1:5672 2015-05-10 22:19:44.050 4967 WARNING wsme.api [-] Client-side error: Invalid resource state. 127.0.0.1 - - [10/May/2015 22:19:44] "POST /v1/rcs HTTP/1.1" 409 115 The testbay is running with 2 nodes properly: ubuntu at magnum:~/kubernetes/examples/redis$ magnum bay-list | 4fa480a7-2d96-4a3e-876b-1c59d67257d6 | testbay | 2 | CREATE_COMPLETE | Any ideas, where I could dig for the problem? By the way after running "magnum pod-create .." 
the status shows "failed":

ubuntu at magnum:~/kubernetes/examples/redis/v1beta3$ magnum pod-create --manifest ./redis-master.yaml --bay testbay
+--------------+---------------------------------------------------------------------+
| Property | Value |
+--------------+---------------------------------------------------------------------+
| status | failed |

And the pod-list shows:

ubuntu at magnum:~$ magnum pod-list
+--------------------------------------+--------------+
| uuid | name |
+--------------------------------------+--------------+
| 8d6977c1-a88f-45ee-be6c-fd869874c588 | redis-master |

I also tried to set the status to running in the pod database table, but it didn't help.

P.S.: I also tried to run the whole thing on fedora 21 with devstack, but I got more problems than on Ubuntu.

Many thanks in advance for your help!
Arash

On Mon, May 4, 2015 at 12:54 AM, Steven Dake (stdake) wrote:

> Boris,
>
> Feel free to try out my Magnum packages here. They work in containers,
> not sure about CentOS. I'm not certain the systemd files are correct (I
> didn't test that part) but the dependencies are correct:
>
> https://copr.fedoraproject.org/coprs/sdake/openstack-magnum/
>
> NB you will have to run through the quickstart configuration guide here:
>
> https://github.com/openstack/magnum/blob/master/doc/source/dev/dev-manual-devstack.rst
>
> *Regards*
> *-steve*
>
> From: Boris Derzhavets
> Date: Sunday, May 3, 2015 at 11:20 AM
> To: Arash Kaffamanesh
> Cc: "rdo-list at redhat.com"
> Subject: Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on
> Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
>
> Arash,
>
> Please, disregard this notice :-
>
> >You wrote :-
> >> What I noticed here, if I associate a floating ip to a VM with 2
> >> interfaces, then I'll lose the connectivity to the instance and Kilo
>
> Different types of VMs in your environment and mine.
>
> Boris.
>
> ------------------------------
> Date: Sun, 3 May 2015 16:51:54 +0200
> Subject: Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on
> Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
> From: ak at cloudssky.com
> To: bderzhavets at hotmail.com
> CC: apevec at gmail.com; rdo-list at redhat.com
>
> Boris, thanks for your kind feedback.
>
> I did a 3 node Kilo RC2 virt setup on top of my Kilo RC2 which was
> installed on bare metal. The installation was successful by the first run.
>
> The network looks like this:
> https://cloudssky.com/.galleries/images/kilo-virt-setup.png
>
> For this setup I added the latest CentOS cloud image to glance, ran an
> instance (controller), enabled root login, added ifcfg-eth1 to the
> instance, created a snapshot from the controller, added the repos to this
> instance, yum updated, rebooted and spawned the network and compute1 vm
> nodes from that snapshot. (To be able to ssh into the VMs over the
> 20.0.1.0 network, I created the gate VM with a floating ip assigned and
> installed OpenVPN on it.)
>
> What I noticed here: if I associate a floating ip to a VM with 2
> interfaces, then I'll lose the connectivity to the instance and Kilo
> becomes crazy (the AIO controller on bare metal somehow loses its br-ex
> interface, but I didn't try to reproduce it again).
>
> The packstack file was created in interactive mode with:
>
> packstack --answer-file= --> press enter
>
> I accepted most default values and selected trove and heat to be installed.
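> (For the record, the non-interactive equivalent is roughly: generate a
> template with packstack --gen-answer-file=answers.txt, edit the CONFIG_*
> values, then run packstack --answer-file=answers.txt. A sketch of the
> flow, not the exact commands used here.)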
> > The answers are on pastebin: > > http://pastebin.com/SYp8Qf7d > > The generated packstack file is here: > > http://pastebin.com/XqJuvQxf > The br-ex interfaces and changes to eth0 are created on network and > compute nodes correctly (output below). > And one nice thing for me coming from Havana was to see how easy has got > to create an image in Horizon > by uploading an image file (in my case rancheros.iso and centos.qcow2 > worked like a charm). > Now its time to discover Ironic, Trove and Manila and if someone has some > tips or guidelines on how to test these > new exciting things or has any news about Murano or Magnum on RDO, then > I'll be more lucky and excited > as I'm now about Kilo :-) > Thanks! > Arash > --- > Some outputs here: > [root at controller ~(keystone_admin)]# nova hypervisor-list > +----+---------------------+-------+---------+ > | ID | Hypervisor hostname | State | Status | > +----+---------------------+-------+---------+ > | 1 | compute1.novalocal | up | enabled | > > +----+---------------------+-------+---------+ > [root at network ~]# ovs-vsctl show > 436a6114-d489-4160-b469-f088d66bd752 > Bridge br-tun > fail_mode: secure > Port "vxlan-14000212" > Interface "vxlan-14000212" > type: vxlan > options: {df_default="true", in_key=flow, > local_ip="20.0.2.19", out_key=flow, remote_ip="20.0.2.18"} > Port br-tun > Interface br-tun > type: internal > Port patch-int > Interface patch-int > type: patch > options: {peer=patch-tun} > Bridge br-int > fail_mode: secure > Port br-int > Interface br-int > type: internal > Port int-br-ex > Interface int-br-ex > type: patch > options: {peer=phy-br-ex} > Port patch-tun > Interface patch-tun > type: patch > options: {peer=patch-int} > Bridge br-ex > Port br-ex > Interface br-ex > type: internal > Port phy-br-ex > Interface phy-br-ex > type: patch > options: {peer=int-br-ex} > Port "eth0" > Interface "eth0" > > ovs_version: "2.3.1" > > > [root at compute~]# ovs-vsctl show > 8123433e-b477-4ef5-88aa-721487a4bd58 > Bridge br-int > fail_mode: secure > Port int-br-ex > Interface int-br-ex > type: patch > options: {peer=phy-br-ex} > Port patch-tun > Interface patch-tun > type: patch > options: {peer=patch-int} > Port br-int > Interface br-int > type: internal > Bridge br-tun > fail_mode: secure > Port br-tun > Interface br-tun > type: internal > Port patch-int > Interface patch-int > type: patch > options: {peer=patch-tun} > Port "vxlan-14000213" > Interface "vxlan-14000213" > type: vxlan > options: {df_default="true", in_key=flow, > local_ip="20.0.2.18", out_key=flow, remote_ip="20.0.2.19"} > Bridge br-ex > Port phy-br-ex > Interface phy-br-ex > type: patch > options: {peer=int-br-ex} > Port "eth0" > Interface "eth0" > Port br-ex > Interface br-ex > type: internal > > ovs_version: "2.3.1" > > > > > > > On Sat, May 2, 2015 at 9:02 AM, Boris Derzhavets > wrote: > > Thank you once again it really works. 
> > [root at ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-list > +----+----------------------------------------+-------+---------+ > | ID | Hypervisor hostname | State | Status | > +----+----------------------------------------+-------+---------+ > | 1 | ip-192-169-142-127.ip.secureserver.net | up | enabled | > | 2 | ip-192-169-142-137.ip.secureserver.net | up | enabled | > +----+----------------------------------------+-------+---------+ > > [root at ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-servers > ip-192-169-142-137.ip.secureserver.net > > +--------------------------------------+-------------------+---------------+----------------------------------------+ > | ID | Name | Hypervisor ID > | Hypervisor Hostname | > > +--------------------------------------+-------------------+---------------+----------------------------------------+ > | 16ab7825-1403-442e-b3e2-7056d14398e0 | instance-00000002 | 2 > | ip-192-169-142-137.ip.secureserver.net | > | 5fa444c8-30b8-47c3-b073-6ce10dd83c5a | instance-00000004 | 2 > | ip-192-169-142-137.ip.secureserver.net | > > +--------------------------------------+-------------------+---------------+----------------------------------------+ > > with only one issue:- > > during AIO run CONFIG_NEUTRON_OVS_TUNNEL_IF= > during Compute Node setup CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1 > > and finally it results mess in ml2_vxlan_endpoints table. I had manually > update > ml2_vxlan_endpoints and restart neutron-openvswitch-agent.service on > both nodes > afterwards VMs on compute node obtained access to meta-data server. > > I also believe that synchronized delete records from tables > "compute_nodes && services" > ( along with disabling nova-compute on Controller) could turn AIO host > into real Controller. > > Boris. > > ------------------------------ > Date: Fri, 1 May 2015 22:22:41 +0200 > Subject: Re: [Rdo-list] RE(1) Failure to start openstack-nova-compute on > Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 > From: ak at cloudssky.com > To: bderzhavets at hotmail.com > CC: apevec at gmail.com; rdo-list at redhat.com > > I got the compute node working by adding the delorean-kilo.repo on > compute node, > yum updating the compute node, rebooted and extended the packstack file > from the first AIO > install with the IP of compute node and ran packstack again with > NetworkManager enabled > and did a second yum update on compute node before the 3rd packstack run, > and now it works :-) > > In short, for RC2 we have to force by hand to get the nova-compute > running on compute node, > before running packstack from controller again from an existing AIO > install. > > Now I have 2 compute nodes (controller AIO with compute + 2nd compute) > and could spawn a > 3rd cirros instance which landed on 2nd compute node. > ssh'ing into the instances over the floating ip works fine too. 
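> (Schematically the scale-out step was: in the original AIO answer file set
> CONFIG_COMPUTE_HOSTS=<controller_ip>,<new_compute_ip> and re-run
> packstack --answer-file=<file>; EXCLUDE_SERVERS can list hosts packstack
> should not touch again, as shown below. CONFIG_COMPUTE_HOSTS and
> EXCLUDE_SERVERS are real packstack options; the placeholder values are mine.)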
> > Before running packstack again, I set: > > EXCLUDE_SERVERS= > > [root at csky01 ~(keystone_osx)]# virsh list --all > Id Name Status > ---------------------------------------------------- > 2 instance-00000001 laufend --> means running in German > > 3 instance-00000002 laufend --> means running in German > > > [root at csky06 ~]# virsh list --all > Id Name Status > ---------------------------------------------------- > 2 instance-00000003 laufend --> means running in German > > > == Nova managed services == > > +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+ > | Id | Binary | Host | Zone | Status | State | > Updated_at | Disabled Reason | > > +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+ > | 1 | nova-consoleauth | csky01.csg.net | internal | enabled | up | > 2015-05-01T19:46:42.000000 | - | > | 2 | nova-conductor | csky01.csg.net | internal | enabled | up | > 2015-05-01T19:46:42.000000 | - | > | 3 | nova-scheduler | csky01.csg.net | internal | enabled | up | > 2015-05-01T19:46:42.000000 | - | > | 4 | nova-compute | csky01.csg.net | nova | enabled | up | > 2015-05-01T19:46:40.000000 | - | > | 5 | nova-cert | csky01.csg.net | internal | enabled | up | > 2015-05-01T19:46:42.000000 | - | > | 6 | nova-compute | csky06.csg.net | nova | enabled | up | > 2015-05-01T19:46:38.000000 | - | > > +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+ > > > On Fri, May 1, 2015 at 9:02 AM, Boris Derzhavets > wrote: > > Ran packstack --debug --answer-file=./answer-fileRC2.txt > 192.169.142.137_nova.pp.log.gz attached > > Boris > > ------------------------------ > From: bderzhavets at hotmail.com > To: apevec at gmail.com > Date: Fri, 1 May 2015 01:44:17 -0400 > CC: rdo-list at redhat.com > Subject: [Rdo-list] Failure to start openstack-nova-compute on Compute > Node when testing delorean RC2 or CI repo on CentOS 7.1 > > Follow instructions > https://www.redhat.com/archives/rdo-list/2015-April/msg00254.html > packstack fails :- > > Applying 192.169.142.127_nova.pp > Applying 192.169.142.137_nova.pp > 192.169.142.127_nova.pp: [ DONE ] > 192.169.142.137_nova.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 192.169.142.137_nova.pp > Error: Could not start Service[nova-compute]: Execution of > '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for > openstack-nova-compute.service failed. See 'systemctl status > openstack-nova-compute.service' and 'journalctl -xn' for details. > You will find full trace in log > /var/tmp/packstack/20150501-081745-rIpCIr/manifests/192.169.142.137_nova.pp.log > > In both cases (RC2 or CI repos) on compute node 192.169.142.137 > /var/log/nova/nova-compute.log > reports :- > > 2015-05-01 08:21:41.354 4999 INFO oslo.messaging._drivers.impl_rabbit > [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Delaying reconnect for 1.0 > seconds... > 2015-05-01 08:21:42.355 4999 INFO oslo.messaging._drivers.impl_rabbit > [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Connecting to AMQP server on > localhost:5672 > 2015-05-01 08:21:42.360 4999 ERROR oslo.messaging._drivers.impl_rabbit > [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] AMQP server on localhost:5672 > is unreachable: [Errno 111] ECONNREFUSED. Trying again in 11 seconds. > > Seems like it is looking for AMQP Server at wrong host . 
Should be 192.169.142.127
> On 192.169.142.127 :-
>
> [root at ip-192-169-142-127 ~]# netstat -lntp | grep 5672
> ==> tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 14506/beam.smp
> tcp6 0 0 :::5672 :::* LISTEN 14506/beam.smp
>
> [root at ip-192-169-142-127 ~]# iptables-save | grep 5672
> -A INPUT -s 192.169.142.127/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.127" -j ACCEPT
> -A INPUT -s 192.169.142.137/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.137" -j ACCEPT
>
> Answer-file is attached
>
> Thanks.
> Boris
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stdake at cisco.com Mon May 11 00:04:14 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Mon, 11 May 2015 00:04:14 +0000
Subject: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
In-Reply-To: References: Message-ID:

Arash,

The short of it is Magnum 2015.1.0 is DOA.

Four commits have hit the repository in the last hour to fix these problems. Now Magnum works with v1beta3 of the kubernetes 0.15 v1beta3 examples, with the exception of the service object. We are actively working on that problem upstream; I'll update when it's fixed.

To see my run check out:

http://ur1.ca/kc613 -> http://paste.fedoraproject.org/220479/13022911

To upgrade and see everything working but the service object, you will have to remove your openstack-magnum package if using my COPR repo, or git pull on your Magnum repo if using devstack.

Boris - interested to hear the feedback on a CentOS distro operation once we get that service bug fixed.

Regards
-steve

From: Arash Kaffamanesh
Date: Sunday, May 10, 2015 at 4:10 PM
To: Steven Dake
Cc: "rdo-list at redhat.com"
Subject: Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1

Steve,

Thanks for your kind advice.

I'm trying to go first through the quick start for magnum with devstack on ubuntu, and I'm also following this guide to create a bay with 2 nodes:

http://git.openstack.org/cgit/openstack/magnum/tree/doc/source/dev/dev-quickstart.rst

I got fairly far, but by running this step, to run the service to provide a discoverable endpoint for the redis sentinels in the cluster:

magnum service-create --manifest ./redis-sentinel-service.yaml --bay testbay

I'm getting:

ERROR: Invalid resource state. (HTTP 409)

In the console, I see:

2015-05-10 22:19:44.010 4967 INFO oslo_messaging._drivers.impl_rabbit [-] Connected to AMQP server on 127.0.0.1:5672
2015-05-10 22:19:44.050 4967 WARNING wsme.api [-] Client-side error: Invalid resource state.
127.0.0.1 - - [10/May/2015 22:19:44] "POST /v1/rcs HTTP/1.1" 409 115

The testbay is running with 2 nodes properly:

ubuntu at magnum:~/kubernetes/examples/redis$ magnum bay-list
| 4fa480a7-2d96-4a3e-876b-1c59d67257d6 | testbay | 2 | CREATE_COMPLETE |

Any ideas, where I could dig for the problem?
[... rest of quoted thread trimmed ...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From apevec at gmail.com Mon May 11 07:04:20 2015
From: apevec at gmail.com (Alan Pevec)
Date: Mon, 11 May 2015 09:04:20 +0200
Subject: [Rdo-list] packaging lifecycles
In-Reply-To: References: Message-ID:

> @Tom: could you point out these packages ? I CC'ed Alan, he'll be able
> to push the pending updates.

Yeah, we have a queue of pending updates for Juno, Kilo GA was a priority. I'll get them out asap, after reviewing their CI results.

Cheers,
Alan

From bderzhavets at hotmail.com Mon May 11 08:16:42 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Mon, 11 May 2015 04:16:42 -0400
Subject: [Rdo-list] LXC containers with RDO
In-Reply-To: References: Message-ID:

I am afraid to provide info not matching your needs. However, the Docker 1.5 hypervisor works on an RDO Kilo Compute node. Details of the setup are pretty much the same as on an AIO host (it was written for the RC2 Delorean version; in the meantime follow https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test):-

http://bderzhavets.blogspot.com/2015/05/running-nova-docker-on-openstack-rdo.html

Setup on RDO Kilo Compute nodes was tested; I just don't have exact instructions, but the only difference is tuning the Glance settings on the Controller vs the AIO setup. Nova Docker driver setup on the Compute node is performed as described and works smoothly.
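(For reference, the Glance tweak in question is roughly this, in /etc/glance/glance-api.conf on the Controller; the value list is as I recall it from the nova-docker docs, so double-check:

container_formats = ami,ari,aki,bare,ovf,ova,docker

then systemctl restart openstack-glance-api.)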
Boris.

Date: Sun, 10 May 2015 16:17:03 +0200
From: moreira.belmiro.email.lists at gmail.com
To: rdo-list at redhat.com
Subject: [Rdo-list] LXC containers with RDO

Hi,
I started to deploy a cell in OpenStack nova to use LXC containers and I just found this Red Hat article (https://access.redhat.com/articles/1365153) that mentions that "libvirt-lxc" is deprecated in RHEL 7.1.

Will the OpenStack nova libvirt LXC driver continue to be supported in RDO?

thanks,
Belmiro

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kchamart at redhat.com Mon May 11 09:12:18 2015
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Mon, 11 May 2015 11:12:18 +0200
Subject: [Rdo-list] LXC containers with RDO
In-Reply-To: References: Message-ID: <20150511091218.GP30897@tesla.redhat.com>

On Sun, May 10, 2015 at 04:17:03PM +0200, Belmiro Moreira wrote:
> Hi,
> I started to deploy a cell in OpenStack nova to use LXC containers and
> I just found this Red Hat article
> (https://access.redhat.com/articles/1365153) that mentions that
> "libvirt-lxc" is deprecated in RHEL 7.1.
>
> Will the OpenStack nova libvirt LXC driver continue to be supported in
> RDO?

As of now, libvirt-lxc is still in the upstream Nova tree, and this is what RDO carries too, so it should be in the Nova packages carried in RDO. And, about being "supported" -- like any other RDO packages, it's community-supported.

On a related note, you can see the hypervisor support matrix for the different hypervisors libvirt supports, including LXC:

http://docs.openstack.org/developer/nova/support-matrix.html

--
/kashyap

From ihrachys at redhat.com Mon May 11 11:28:22 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Mon, 11 May 2015 13:28:22 +0200
Subject: [Rdo-list] Defects in Kilo RC2 Packaging
In-Reply-To: References: Message-ID: <55509256.7070200@redhat.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 05/03/2015 07:15 PM, Steven Dake (stdake) wrote:
> Hi,
>
> I recently ported Kolla (OpenStack running in containers -
> http://github.com/stackforge/kolla) and found the following
> defects:
>
> [...]
> 2. Neutron for whatever reason depends on a file fwaas_driver.ini
> which has been removed from the master of neutron. But the agents
> will exit if it's not in the config directory. I used juno's
> version of fwaas_driver.ini to get the agents to stop exiting.

The file is now packaged into the openstack-neutron-fwaas package. Though I wonder why the dependency. I thought no services now depend directly on the file, and instead read it implicitly using --config-dir in case it's present [1]. Can you elaborate?
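(In other words, nothing should reference the file directly any more; the agents get a whole directory, along the lines of

/usr/bin/neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-dir /etc/neutron/conf.d/neutron-l3-agent

so a file that isn't there is simply not read. The paths above are illustrative; [1] below is what the package actually does.)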
[1]: https://github.com/openstack-packages/neutron-fwaas/blob/rpm-master/openstack-neutron-fwaas.spec#L86

Ihar

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQEcBAEBCAAGBQJVUJJWAAoJEC5aWaUY1u57L44IAIVvMM3F/NCkrEFybE3muVwA
kM0QZk5Ke6oZC/63x3dGBtF+6VbcltaJ7gJuBZShN2VXa1REB7/fzJs0rP7Y7mUj
sVKo3+YgQywQFZJ0MfikWHQdAwOXtbs9WHXWeYMYa57Epb+udOLMfwFoi+Y+bUA7
eN/9AKbHzdtcFoQQ4pTs9IJG3RPdI3DndVX/BJNY5goQALMyk7I/Z+hHvnnnwYAS
zpzPQsMYtRLVLz+EAVRfvR48zTQhg90AliXY+Bxs8ENSVVQEwWaGvGyK1xE97ANQ
tjgg99tM6xoL6nWACSMrlrr8I6ENC78rSQ8L/jtYLdNkbv+NK7rdiT+LFRU9bdQ=
=LcDo
-----END PGP SIGNATURE-----

From hguemar at fedoraproject.org Mon May 11 15:00:03 2015
From: hguemar at fedoraproject.org (hguemar at fedoraproject.org)
Date: Mon, 11 May 2015 15:00:03 +0000 (UTC)
Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting
Message-ID: <20150511150003.5138660A94F1@fedocal02.phx2.fedoraproject.org>

Dear all,

You are kindly invited to the meeting: RDO packaging meeting on 2015-05-13 from 15:00:00 to 16:00:00 UTC
At rdo at irc.freenode.net

The meeting will be about: RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging))

Every week on #rdo on freenode

Source: https://apps.fedoraproject.org/calendar/meeting/2017/

From jslagle at redhat.com Mon May 11 16:49:39 2015
From: jslagle at redhat.com (James Slagle)
Date: Mon, 11 May 2015 12:49:39 -0400
Subject: [Rdo-list] rdo-manager now using openstack-puppet-modules
Message-ID: <20150511164939.GP10040@teletran-1.redhat.com>

Hi,

I just wanted to make everyone aware that rdo-manager is now using openstack-puppet-modules[0] instead of git cloning from github. Please keep this in mind if you're expecting to see updates to the puppet modules...those updates will need to make their way into openstack-puppet-modules before they're available to rdo-manager.

The change will be live shortly once the current-passed-ci symlink is updated[1]. We're pulling o-p-m from delorean kilo[2] currently.

[0] https://review.gerrithub.io/#/c/232065/
[1] http://trunk-mgt.rdoproject.org/repos/current-passed-ci/
[2] http://trunk.rdoproject.org/kilo/centos7/latest-RDO-kilo-CI/

--
-- James Slagle
--

From ak at cloudssky.com Mon May 11 18:05:47 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Mon, 11 May 2015 20:05:47 +0200
Subject: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
In-Reply-To: References: Message-ID:

Steve,

Thanks! I pulled magnum from git on devstack, dropped the magnum db, created a new one and tried to create a bay; now I'm getting "went to status error due to unknown" as below.
Nova and magnum bay-list list shows: ubuntu at magnum:~/devstack$ nova list +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-----------------------------------------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-----------------------------------------------------------------------+ | 797b6057-1ddf-4fe3-8688-b63e5e9109b4 | te-h5yvoiptrmx3-0-4w4j2ltnob7a-kube_node-vg7rojnafrub | ERROR | - | NOSTATE | testbay-6kij6pvui3p7-fixed_network-46mvxv7yfjzw=10.0.0.5, 2001:db8::f | | c0b56f08-8a4d-428a-aee1-b29ca6e68163 | testbay-6kij6pvui3p7-kube_master-z3lifgrrdxie | ACTIVE | - | Running | testbay-6kij6pvui3p7-fixed_network-46mvxv7yfjzw=10.0.0.3, 2001:db8::d | +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-----------------------------------------------------------------------+ ubuntu at magnum:~/devstack$ magnum bay-list +--------------------------------------+---------+------------+---------------+ | uuid | name | node_count | status | +--------------------------------------+---------+------------+---------------+ | 87e36c44-a884-4cb4-91cc-c7ae320f33b4 | testbay | 2 | CREATE_FAILED | +--------------------------------------+---------+------------+---------------+ e3a65b05f", "flannel_network_subnetlen": "24", "fixed_network_cidr": " 10.0.0.0/24", "OS::stack_id": "d0246d48-23e0-4aa0-87e0-052b2ca363e8", "OS::stack_name": "testbay-6kij6pvui3p7", "master_flavor": "m1.small", "external_network_id": "e3e2a633-1638-4c11-a994-7179a24e826e", "portal_network_cidr": "10.254.0.0/16", "docker_volume_size": "5", "ssh_key_name": "testkey", "kube_allow_priv": "true", "number_of_minions": "2", "flannel_use_vxlan": "false", "flannel_network_cidr": "10.100.0.0/16", "server_flavor": "m1.medium", "dns_nameserver": "8.8.8.8", "server_image": "fedora-21-atomic-3"}, "id": "d0246d48-23e0-4aa0-87e0-052b2ca363e8", "outputs": [{"output_value": ["2001:db8::f", "2001:db8::e"], "description": "No description given", "output_key": "kube_minions_external"}, {"output_value": ["10.0.0.5", "10.0.0.4"], "description": "No description given", "output_key": "kube_minions"}, {"output_value": "2001:db8::d", "description": "No description given", "output_key": "kube_master"}], "template_description": "This template will boot a Kubernetes cluster with one or more minions (as specified by the number_of_minions parameter, which defaults to \"2\").\n"}} log_http_response /usr/local/lib/python2.7/dist-packages/heatclient/common/http.py:141 2015-05-11 17:31:15.968 30006 ERROR magnum.conductor.handlers.bay_k8s_heat [-] Unable to create bay, stack_id: d0246d48-23e0-4aa0-87e0-052b2ca363e8, reason: Resource CREATE failed: ResourceUnknownStatus: Resource failed - Unknown status FAILED due to "Resource CREATE failed: ResourceUnknownStatus: Resource failed - Unknown status FAILED due to "Resource CREATE failed: ResourceInError: Went to status error due to "Unknown""" Any Idea? Thanks! -Arash On Mon, May 11, 2015 at 2:04 AM, Steven Dake (stdake) wrote: > Arash, > > The short of it is Magnum 2015.1.0 is DOA. > > Four commits have hit the repository in the last hour to fix these > problems. Now Magnum works with v1beta3 of the kubernetes 0.15 v1betav3 > examples with the exception of the service object. 
We are actively working on that problem upstream; I'll update when it's fixed.

[... rest of quoted thread trimmed ...]
_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rbowen at redhat.com Mon May 11 18:52:05 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Mon, 11 May 2015 14:52:05 -0400
Subject: [Rdo-list] OpenStack Meetups, week of May 11, 2015
Message-ID: <5550FA55.2010606@redhat.com>

The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/Events

If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered.

--Rich

* Tue May 12 in Beijing, CN: 2015 China OpenStack ?????? - http://www.meetup.com/China-OpenStack-User-Group/events/221992827/
* Tue May 12 in Taipei, TW: ????OpenStack????????? - http://www.meetup.com/Taipei-OpenCloud-Meetup/events/222348949/
* Tue May 12 in Taipei, TW: OpenStack Taiwan User group meetup - http://www.meetup.com/OpenStack-Taiwan-User-Group/events/222363618/
* Tue May 12 in Athens, GR: High Availability in OpenStack - http://www.meetup.com/Athens-OpenStack-User-Group/events/222128761/
* Tue May 12 in Washington, DC, US: Ceph at Comcast - http://www.meetup.com/Ceph-DC/events/221843869/
* Wed May 13 in Vancouver, BC, CA: OpenStack cloud management platform + demo - http://www.meetup.com/Vancouver-OpenStack-Meetup/events/222400382/
* Wed May 13 in Berlin, DE: Heading for Vancouver - http://www.meetup.com/OpenStack-User-Group-Berlin/events/221817807/
* Wed May 13 in Buenos Aires, AR: OpenStack hands on - http://www.meetup.com/openstack-argentina/events/222305435/
* Wed May 13 in Porto Alegre, BR: 2º Hangout OpenStack Brasil 2015 - http://www.meetup.com/Openstack-Brasil/events/222182172/
* Thu May 14 in Budapest, HU: OpenStack 2015 May - http://www.meetup.com/OpenStack-Hungary-Meetup-Group/events/222396966/
* Thu May 14 in Singapore, SG: Singapore OpenStack UG meetup - http://www.meetup.com/OpenStack-Singapore/events/221538390/

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From rdo-info at redhat.com Mon May 11 19:20:48 2015
From: rdo-info at redhat.com (RDO Forum)
Date: Mon, 11 May 2015 19:20:48 +0000
Subject: [Rdo-list] [RDO] RDO blog roundup, May 11, 2015
Message-ID: <0000014d446c28ea-c0061cbf-1a46-4efc-b9ea-8481a2c8e9c5-000000@email.amazonses.com>

rbowen started a discussion.

RDO blog roundup, May 11, 2015

---
Follow the link below to check it out:
https://www.rdoproject.org/forum/discussion/1015/rdo-blog-roundup-may-11-2015

Have a great day!

From tom at buskey.name Mon May 11 20:48:48 2015
From: tom at buskey.name (Tom Buskey)
Date: Mon, 11 May 2015 16:48:48 -0400
Subject: [Rdo-list] packaging lifecycles
In-Reply-To: References: Message-ID:

I've been trying to get Live Migration going in CentOS.

python-nova is definitely one I need.

I had to get a repo for CentOS that provided qemu-kvm-rhev because upstream built qemu-kvm with migration disabled.

There's probably more that prevents block migration buried in there.
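For reference, the setup I'm experimenting with looks roughly like this (lab only, insecure TCP transport; settings recalled from the Kilo-era docs, so verify before copying). In /etc/libvirt/libvirtd.conf:

listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

start libvirtd with --listen (LIBVIRTD_ARGS in /etc/sysconfig/libvirtd), and in /etc/nova/nova.conf:

[libvirt]
live_migration_uri = qemu+tcp://%s/system

plus qemu-kvm-rhev on every compute node, of course.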
Hangout OpenStack Brasil 2015 - http://www.meetup.com/Openstack-Brasil/events/222182172/
* Thu May 14 in Budapest, HU: OpenStack 2015 May - http://www.meetup.com/OpenStack-Hungary-Meetup-Group/events/222396966/
* Thu May 14 in Singapore, SG: Singapore OpenStack UG meetup - http://www.meetup.com/OpenStack-Singapore/events/221538390/

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From rdo-info at redhat.com  Mon May 11 19:20:48 2015
From: rdo-info at redhat.com (RDO Forum)
Date: Mon, 11 May 2015 19:20:48 +0000
Subject: [Rdo-list] [RDO] RDO blog roundup, May 11, 2015
Message-ID: <0000014d446c28ea-c0061cbf-1a46-4efc-b9ea-8481a2c8e9c5-000000@email.amazonses.com>

rbowen started a discussion. RDO blog roundup, May 11, 2015

---
Follow the link below to check it out:
https://www.rdoproject.org/forum/discussion/1015/rdo-blog-roundup-may-11-2015

Have a great day!

From tom at buskey.name  Mon May 11 20:48:48 2015
From: tom at buskey.name (Tom Buskey)
Date: Mon, 11 May 2015 16:48:48 -0400
Subject: [Rdo-list] packaging lifecycles
In-Reply-To: 
References: 
Message-ID: 

I've been trying to get Live Migration going in CentOS.

python-nova is definitely one I need.

I had to get a repo for CentOS that provided qemu-kvm-rhev because upstream
built qemu-kvm with migration disabled.

There's probably more that prevents block migration buried in there.

On Sun, May 10, 2015 at 8:45 AM, Haïkel wrote:

> @Tom: could you point out these packages? I CC'ed Alan, he'll be able
> to push the pending updates.
>
> Regards,
> H.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bderzhavets at hotmail.com  Tue May 12 06:53:01 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Tue, 12 May 2015 02:53:01 -0400
Subject: [Rdo-list] Could you, please, clarify what is going on with google accounts at ask.openstack.org ?
In-Reply-To: <0000014d446c28ea-c0061cbf-1a46-4efc-b9ea-8481a2c8e9c5-000000@email.amazonses.com>
References: <0000014d446c28ea-c0061cbf-1a46-4efc-b9ea-8481a2c8e9c5-000000@email.amazonses.com>
Message-ID: 

To avoid repeating what was already said by devs and customers, I just post
the link :-

https://bugs.launchpad.net/openstack-community/+bug/1418361

The development instance you've built is gone; launching a browser to
askbot-dev.openstack.org lands on production.
On production ( ask.openstack.org ) :-

1. Launchpad OpenIDs don't work
2. The "G+" button disappeared from production a while ago.

It looks like you don't have any control over the situation, having been
assigned to handle the Google OpenID 2.0 issue.

Boris.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bderzhavets at hotmail.com  Tue May 12 07:41:55 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Tue, 12 May 2015 03:41:55 -0400
Subject: [Rdo-list] Could you, please, clarify what is going on with google accounts at ask.openstack.org ?
In-Reply-To: 
References: <0000014d446c28ea-c0061cbf-1a46-4efc-b9ea-8481a2c8e9c5-000000@email.amazonses.com>, ,
Message-ID: 

Evgeny,

Any time estimates for the production upgrade with "G+" enabled?

Thanks.
Boris.

Date: Tue, 12 May 2015 03:58:34 -0300
Subject: Re: Could you, please, clarify what is going on with google accounts at ask.openstack.org ?
From: evgeny.fadeev at gmail.com
To: bderzhavets at hotmail.com; marton.kiss at gmail.com
CC: rdo-list at redhat.com; christian at berendt.io

Hello Boris,

The issue is fixed as we agreed with Marton and Stef, now it is up to Marton to deploy the update.
Launchpad OpenID authentication may need to be looked at again. Mainly we were focusing on migrating Google OpenID accounts to G+.

If you have more questions, let us continue in this thread.

Best regards, Evgeny.

On Tue, May 12, 2015 at 3:53 AM, Boris Derzhavets wrote:

To avoid repeating what was already said by devs and customers, I just post
the link :-

https://bugs.launchpad.net/openstack-community/+bug/1418361

The development instance you've built is gone; launching a browser to
askbot-dev.openstack.org lands on production.
On production ( ask.openstack.org ) :-

1. Launchpad OpenIDs don't work
2. The "G+" button disappeared from production a while ago.

It looks like you don't have any control over the situation, having been
assigned to handle the Google OpenID 2.0 issue.

Boris.

--
Askbot
Valparaiso, Chile
skype: evgeny-fadeev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kchamart at redhat.com  Tue May 12 08:41:24 2015
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Tue, 12 May 2015 10:41:24 +0200
Subject: [Rdo-list] packaging lifecycles
In-Reply-To: 
References: 
Message-ID: <20150512084124.GA12326@tesla.redhat.com>

On Mon, May 11, 2015 at 04:48:48PM -0400, Tom Buskey wrote:
> I've been trying to get Live Migration going in CentOS.
>
> python-nova is definitely one I need.
>
> I had to get a repo for CentOS that provided qemu-kvm-rhev

Haïkel pointed me to a build from the CentOS build system on IRC. This is
now called "qemu-kvm-ev":

    http://cbs.centos.org/koji/buildinfo?buildID=863

Until a proper repo is created for it, you might want to quickly create
a local repo (using the `createrepo` tool).

> because upstream built qemu-kvm with migration disabled.
>
> There's probably more that prevents block migration buried in there.

--
/kashyap

From evgeny.fadeev at gmail.com  Tue May 12 06:58:34 2015
From: evgeny.fadeev at gmail.com (Evgeny Fadeev)
Date: Tue, 12 May 2015 03:58:34 -0300
Subject: [Rdo-list] Could you, please, clarify what is going on with google accounts at ask.openstack.org ?
In-Reply-To: 
References: <0000014d446c28ea-c0061cbf-1a46-4efc-b9ea-8481a2c8e9c5-000000@email.amazonses.com>
Message-ID: 

Hello Boris,

The issue is fixed as we agreed with Marton and Stef,
now it is up to Marton to deploy the update.

Launchpad OpenID authentication may need to be looked at again.
Mainly we were focusing on migrating Google OpenID accounts to G+.

If you have more questions, let us continue in this thread.

Best regards,
Evgeny.

On Tue, May 12, 2015 at 3:53 AM, Boris Derzhavets wrote:

>
>
> To avoid repeating what was already said by devs and customers, I just
> post the link :-
>
> https://bugs.launchpad.net/openstack-community/+bug/1418361
>
> The development instance you've built is gone; launching a browser to
> askbot-dev.openstack.org lands on production.
> On production ( ask.openstack.org ) :-
>
> 1. Launchpad OpenIDs don't work
> 2. The "G+" button disappeared from production a while ago.
>
> It looks like you don't have any control over the situation, having been
> assigned to handle the Google OpenID 2.0 issue.
>
> Boris.
>
>
>

--
Askbot
Valparaiso, Chile
skype: evgeny-fadeev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marton.kiss at gmail.com  Tue May 12 08:06:16 2015
From: marton.kiss at gmail.com (Marton Kiss)
Date: Tue, 12 May 2015 10:06:16 +0200
Subject: [Rdo-list] Could you, please, clarify what is going on with google accounts at ask.openstack.org ?
In-Reply-To: 
References: <0000014d446c28ea-c0061cbf-1a46-4efc-b9ea-8481a2c8e9c5-000000@email.amazonses.com>
Message-ID: 

Hi Boris,

It depends only on the speed of the infra changeset approvals (and that
was the reason we launched a temporary staging server). As we don't have
direct access to the ask.o.o servers, and everything happens through
puppet, we need to follow the workflow. I was talking yesterday with
Jeremy Stanley from the infra side, and he knows about this issue, but
they had some priority things that affected the whole gating / code
review and allocated all the team resources. So we are on it.

Brgds,
Marton

2015-05-12 9:41 GMT+02:00 Boris Derzhavets :

> Evgeny,
>
> Any time estimates for the production upgrade with "G+" enabled?
>
> Thanks.
> Boris.
>
> ------------------------------
> Date: Tue, 12 May 2015 03:58:34 -0300
> Subject: Re: Could you, please, clarify what is going on with google
> accounts at ask.openstack.org ?
> From: evgeny.fadeev at gmail.com
> To: bderzhavets at hotmail.com; marton.kiss at gmail.com
> CC: rdo-list at redhat.com; christian at berendt.io
>
>
> Hello Boris,
>
> The issue is fixed as we agreed with Marton and Stef,
> now it is up to Marton to deploy the update.
>
> Launchpad OpenID authentication may need to be looked at again.
> Mainly we were focusing on migrating Google OpenID accounts to G+.
>
> If you have more questions, let us continue in this thread.
>
> Best regards,
> Evgeny.
>
> On Tue, May 12, 2015 at 3:53 AM, Boris Derzhavets
> wrote:
>
>
>
> To avoid repeating what was already said by devs and customers, I just
> post the link :-
>
> https://bugs.launchpad.net/openstack-community/+bug/1418361
>
> The development instance you've built is gone; launching a browser to
> askbot-dev.openstack.org lands on production.
> On production ( ask.openstack.org ) :-
>
> 1. Launchpad OpenIDs don't work
> 2. The "G+" button disappeared from production a while ago.
>
> It looks like you don't have any control over the situation, having been
> assigned to handle the Google OpenID 2.0 issue.
>
> Boris.
>
>
>
>
>
> --
> Askbot
> Valparaiso, Chile
> skype: evgeny-fadeev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thomas.oulevey at cern.ch  Tue May 12 11:44:11 2015
From: thomas.oulevey at cern.ch (Thomas Oulevey)
Date: Tue, 12 May 2015 13:44:11 +0200
Subject: [Rdo-list] packaging lifecycles
In-Reply-To: <20150512084124.GA12326@tesla.redhat.com>
References: <20150512084124.GA12326@tesla.redhat.com>
Message-ID: <5551E78B.1060309@cern.ch>

Hi,

On 05/12/2015 10:41 AM, Kashyap Chamarthy wrote:
> On Mon, May 11, 2015 at 04:48:48PM -0400, Tom Buskey wrote:
>> I've been trying to get Live Migration going in CentOS.
>>
>> python-nova is definitely one I need.
>>
>> I had to get a repo for CentOS that provided qemu-kvm-rhev
>
> Haïkel pointed me to a build from the CentOS build system on IRC. This is
> now called "qemu-kvm-ev":
>
> http://cbs.centos.org/koji/buildinfo?buildID=863
>
> Until a proper repo is created for it, you might want to quickly create
> a local repo (using the `createrepo` tool).

As a note, CBS provides testing repos for each tag, e.g.:

http://cbs.centos.org/repos/virt7-kvm-common-testing/x86_64/os/

However, these are testing repos and may change at any time, so the
createrepo method is a bit safer. Just mentioning this for fast
prototyping/testing.
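For the local repo route, the steps would be something like this (paths
and the exact RPM set are illustrative, untested):

  mkdir /var/lib/qemu-kvm-ev && cd /var/lib/qemu-kvm-ev
  # download the qemu-kvm-ev RPMs from the CBS build page here, then:
  createrepo .
  cat > /etc/yum.repos.d/qemu-kvm-ev-local.repo <<EOF
  [qemu-kvm-ev-local]
  name=Local qemu-kvm-ev repo
  baseurl=file:///var/lib/qemu-kvm-ev
  enabled=1
  gpgcheck=0
  EOF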
--
Thomas

From bnemec at redhat.com  Tue May 12 16:34:49 2015
From: bnemec at redhat.com (Ben Nemec)
Date: Tue, 12 May 2015 11:34:49 -0500
Subject: [Rdo-list] Moving Docs builds of RDO-Manager
In-Reply-To: <554CCCC2.8060801@redhat.com>
References: <554B2CCF.2070705@redhat.com> <554CCB52.4090509@redhat.com> <554CCCC2.8060801@redhat.com>
Message-ID: <55522BA9.7010905@redhat.com>

Okay, this should be done. The doc update job is pointed to the new
location, and redirect pages were left at all the old locations.

On 05/08/2015 09:48 AM, Jaromir Coufal wrote:
>
>
> On 08/05/15 16:42, Ben Nemec wrote:
>> Sorry, forgot to come back to this yesterday. Thoughts inline.
>>
>> On 05/07/2015 04:13 AM, Jaromir Coufal wrote:
>>> Hi Ben,
>>>
>>> I wanted to sync with you if we could coordinate the movement of
>>> documentation from:
>>>
>>> https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/*
>>>
>>> to:
>>>
>>> https://repos.fedorapeople.org/repos/openstack-m/docs/*
>>>
>>> with following sub-directories:
>>> * ../docs/master
>>> * ../docs/sprint4
>>> * ../docs/sprint5 (doesn't exist yet)
>>> etc.
>>>
>>> Since we don't have any date for testing day yet, I think we can do it
>>> this week.
>>
>> True, but we do have a bunch of people working on demos for the end of
>> the sprint, so I'd prefer to hold off until Monday.  At this point one
>> day more or less isn't going to hurt anything.
>>
>>>
>>> For old sites, I don't think that we can somehow control redirects, so I
>>> would suggest to place there temporary index.html file with information
>>> that docs moved and direct people to new location (example:
>>> http://paste.openstack.org/show/215975/).
>>
>> +1.  In fact, I'd suggest we replace every html file in the old location
>> with such a redirect.  That way people (like me :-) who always have the
>> docs open and just refresh them periodically will get sent to the new
>> location automatically.
>>
>> So if there are no objections I'll look into updating all the build
>> targets on Monday and get the redirects in place.
>>
>> -Ben
>
> +1 completely agree.
>
> Thanks Ben
> -- J
>

From ihrachys at redhat.com  Tue May 12 16:51:14 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Tue, 12 May 2015 18:51:14 +0200
Subject: [Rdo-list] neutron added python-six build dependency
Message-ID: <55522F82.60804@redhat.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Hi all,

neutron just added [1] a new 'six' dependency to be able to build a
source tarball. Now, delorean/kilo crashes [2] (it seems it crashes
for CentOS only. I suspect python-six is available in default Fedora
image, that's why). It goes like this:

- sdist rebuild tries to parse setup.cfg
- setup.cfg has a hook to import inside neutron.*
- since [1] adds a six library usage into neutron/__init__.py, sdist
  call now fails.

I wonder what's the best way to handle it? From one side, it's
probably the easiest to add python-six build dependency to neutron
packages. From the other side, it's not really a neutron dependency
but a delorean+neutron one. Building from tarballs should not need it.

[1]: https://review.openstack.org/#/c/181277/
[2]: http://trunk.rdoproject.org/centos70/96/09/96091cb976e5f858fcd53fb0798555020eac94b8_33cfff38/rpmbuild.log

Comments?
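(For the record, the failure in [2] boils down to something like the
following; the exact message is sketched from memory:

  $ python setup.py sdist
  ...
  ImportError: No module named six

i.e. parsing setup.cfg runs the setup hooks, which import neutron and
now pull in six.)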
Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQEcBAEBCAAGBQJVUi+CAAoJEC5aWaUY1u570EAIAJPzfMrccMyykY/rnbAfhsNb 5E6KKYIpUuKKl6aodNQjxQtG6fhn5KzbNwknXrvA6saDjxWnELFX7SAjb0Y4ntma hsFnZOwSpjp0jPeFwOiVWNou2gX9W9q7u6hfBhGEY7TDhZ+w8bJ/fgtK8RHwdRXi CnVCR13cyWmqqa81atMTxBZH88JybePUtPgr1nYONRbWolwgj4R/j1oQodlCp8At 01oU69w3pggMdh0/95lnA3UMH2DfxPFXoA1bjPHjtHpuJdkr+tgDOd/CASMknMBk rQ4hE5+sSIt4+g95ju/B/fR7nSmmholwQwtLszaeGS5cDWzDXTMQ/f1+EKusHtU= =9HoC -----END PGP SIGNATURE----- From apevec at gmail.com Tue May 12 18:46:07 2015 From: apevec at gmail.com (Alan Pevec) Date: Tue, 12 May 2015 20:46:07 +0200 Subject: [Rdo-list] neutron added python-six build dependency In-Reply-To: <55522F82.60804@redhat.com> References: <55522F82.60804@redhat.com> Message-ID: > neutron just added [1] a new 'six' dependency to be able to build a > source tarball. Now, delorean/kilo crashes [2] (it seems it crashes nitpick: it's Delorean Trunk i.e. Liberty ATM > for CentOS only. I suspect python-six is available in default Fedora > image, that's why). It goes like this: > > > I wonder what's the best way to handle it? From one side, it's > probably the easiest to add python-six build dependency to neutron > packages. From the other side, it's not really a neutron dependency > but a delorean+neutron one. Building from tarballs should not need it. It feels like we should formalize the list of implied build deps in Delorean[*] and add python-six to it: https://review.gerrithub.io/233215 Cheers, Alan [*] https://github.com/openstack-packages/delorean/blob/master/scripts/build_rpm.sh#L20-L21 From apevec at gmail.com Tue May 12 20:16:44 2015 From: apevec at gmail.com (Alan Pevec) Date: Tue, 12 May 2015 22:16:44 +0200 Subject: [Rdo-list] neutron added python-six build dependency In-Reply-To: <55522F82.60804@redhat.com> References: <55522F82.60804@redhat.com> Message-ID: > [1]: https://review.openstack.org/#/c/181277/ In the meantime, https://bugs.launchpad.net/neutron/+bug/1454372 was filed and revert proposed https://review.openstack.org/182438 Cheers, Alan From apevec at gmail.com Tue May 12 22:13:10 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 13 May 2015 00:13:10 +0200 Subject: [Rdo-list] Test day: Thanks, and bugs In-Reply-To: <554B515F.5020309@redhat.com> References: <554B515F.5020309@redhat.com> Message-ID: 2015-05-07 13:49 GMT+02:00 Rich Bowen : > Thank you so much to everyone that participated in the test days. > > It looks like 24 tickets were opened - http://tm3.org/testdaybugs - as well > as a few that have already been closed. I've moved what was fixable ON_QA and in kilo/testing repository and I think we're good now to push testing content live, please speak up if you're aware of any blockers I might have missed! Testing repository was also verified with rdo-manager, so we're now ready for RDO Manager test day. Cheers, Alan From moreira.belmiro.email.lists at gmail.com Tue May 12 23:01:27 2015 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Wed, 13 May 2015 01:01:27 +0200 Subject: [Rdo-list] nova.conf.sample doesn't include all options available Message-ID: Hi, In RDO nova (2014.2.2) the "nova.conf.sample" doesn't include the [database] configuration group. Any reason to not have it? It would help deployers if nova.conf.sample has all configuration options available. Some options of [database] configuration group are included in nova-dist.conf, though. thanks, Belmiro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From whayutin at redhat.com  Wed May 13 02:18:00 2015
From: whayutin at redhat.com (whayutin)
Date: Tue, 12 May 2015 22:18:00 -0400
Subject: [Rdo-list] Test day: Thanks, and bugs
In-Reply-To: 
References: <554B515F.5020309@redhat.com>
Message-ID: <1431483480.6491.20.camel@redhat.com>

On Wed, 2015-05-13 at 00:13 +0200, Alan Pevec wrote:
> 2015-05-07 13:49 GMT+02:00 Rich Bowen :
> > Thank you so much to everyone that participated in the test days.
> >
> > It looks like 24 tickets were opened - http://tm3.org/testdaybugs - as well
> > as a few that have already been closed.
>
> I've moved what was fixable ON_QA and in kilo/testing repository
> and I think we're good now to push testing content live, please speak
> up if you're aware of any blockers I might have missed!
>
> Testing repository was also verified with rdo-manager, so we're now
> ready for RDO Manager test day.

With which repo: testing or production RDO Kilo?

>
>
> Cheers,
> Alan
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From ichi.sara at gmail.com  Wed May 13 07:19:56 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Wed, 13 May 2015 09:19:56 +0200
Subject: [Rdo-list] [heat]error parsing template file
Message-ID: 

hello people,

I wrote a simple template in order to scale a VM on the basis of CPU
utilisation. When I try to execute the heat template, I get this error:

Error parsing template file:///root/simple.yaml while parsing a block
mapping
in "", line 38, column 3
did not find expected key
in "", line 117, column 4

I checked my template over and over to see if I missed a space somewhere
in those two lines, but couldn't see anything. You can find my template
attached. Can you plz tell me if there's something wrong with it?

In advance, thanks for responding,
Sara

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: autoscaling.yaml
Type: application/octet-stream
Size: 9417 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: environment.yaml
Type: application/octet-stream
Size: 67 bytes
Desc: not available
URL: 

From ichi.sara at gmail.com  Wed May 13 07:52:30 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Wed, 13 May 2015 09:52:30 +0200
Subject: [Rdo-list] [heat]error parsing template file
In-Reply-To: 
References: 
Message-ID: 

Found some spacing errors and corrected them. Now I have other errors
popping up when I execute the heat create-stack command:

Just so you know, I'm a newbie to heat, and I'm not that good with coding,
but I'm trying.
[root at localhost ~(keystone_admin)]# heat stack-create lb_autoscale -f simple.yaml -e environment.yaml
ERROR: 'unicode' object has no attribute 'get'
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
    incoming.message))
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
    return self._do_dispatch(endpoint, method, ctxt, args)
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
    result = getattr(endpoint, method)(ctxt, **new_args)
  File "/usr/lib/python2.7/site-packages/heat/engine/service.py", line 69, in wrapped
    return func(self, ctx, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/heat/engine/service.py", line 645, in create_stack
    owner_id)
  File "/usr/lib/python2.7/site-packages/heat/engine/service.py", line 568, in _parse_template_and_validate_stack
    stack.validate()
  File "/usr/lib/python2.7/site-packages/heat/engine/stack.py", line 446, in validate
    parameter_groups.validate()
  File "/usr/lib/python2.7/site-packages/heat/engine/parameter_groups.py", line 49, in validate
    parameters = group.get(PARAMETERS)
AttributeError: 'unicode' object has no attribute 'get'

2015-05-13 9:19 GMT+02:00 ICHIBA Sara :

> hello people,
>
> I wrote a simple template in order to scale a VM on the basis of CPU
> utilisation. When I try to execute the heat template, I get this error:
>
> Error parsing template file:///root/simple.yaml while parsing a block
> mapping
> in "", line 38, column 3
> did not find expected key
> in "", line 117, column 4
>
> I checked my template over and over to see if I missed a space somewhere
> in those two lines, but couldn't see anything. You can find my template
> attached. Can you plz tell me if there's something wrong with it?
>
> In advance, thanks for responding,
> Sara
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: environment.yaml
Type: application/octet-stream
Size: 67 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: simple.yaml
Type: application/octet-stream
Size: 3569 bytes
Desc: not available
URL: 

From shardy at redhat.com  Wed May 13 08:06:57 2015
From: shardy at redhat.com (Steven Hardy)
Date: Wed, 13 May 2015 09:06:57 +0100
Subject: [Rdo-list] [heat]error parsing template file
In-Reply-To: 
References: 
Message-ID: <20150513080656.GA24573@t430slt.redhat.com>

On Wed, May 13, 2015 at 09:19:56AM +0200, ICHIBA Sara wrote:
> hello people,
>
> I wrote a simple template in order to scale a VM on the basis of CPU
> utilisation. When I try to execute the heat template, I get this error:
>
> Error parsing template file:///root/simple.yaml while parsing a block
> mapping
> in "", line 38, column 3
> did not find expected key
> in "", line 117, column 4
> I checked my template over and over to see if I missed a space somewhere
> in those two lines, but couldn't see anything. You can find my template
> attached. Can you plz tell me if there's something wrong with it?

I think you need to attach/paste simple.yaml where the error is coming
from.

The files you attached appear unrelated (to each other and the error
AFAICT).

You may find it useful pasting your template into one of the online yaml
parsers, sometimes this helps you spot the error in your syntax if the heat
error doesn't make it obvious.
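Locally, a quick check with PyYAML catches plain syntax errors too (a
one-liner sketch; it assumes PyYAML is installed, which it is wherever
heat runs):

  python -c "import yaml; yaml.safe_load(open('simple.yaml'))"

It won't validate any heat-specific semantics, but on bad input it prints
the line/column of the first YAML error.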
Steve

From ichi.sara at gmail.com  Wed May 13 08:15:55 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Wed, 13 May 2015 10:15:55 +0200
Subject: [Rdo-list] [heat]error parsing template file
In-Reply-To: <20150513080656.GA24573@t430slt.redhat.com>
References: <20150513080656.GA24573@t430slt.redhat.com>
Message-ID: 

You're right. I was mistaken. I picked the wrong file. I corrected some
spacing errors but I still have these ones:

ERROR: 'unicode' object has no attribute 'get'
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
    incoming.message))
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
    return self._do_dispatch(endpoint, method, ctxt, args)
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
    result = getattr(endpoint, method)(ctxt, **new_args)
  File "/usr/lib/python2.7/site-packages/heat/engine/service.py", line 69, in wrapped
    return func(self, ctx, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/heat/engine/service.py", line 645, in create_stack
    owner_id)
  File "/usr/lib/python2.7/site-packages/heat/engine/service.py", line 568, in _parse_template_and_validate_stack
    stack.validate()
  File "/usr/lib/python2.7/site-packages/heat/engine/stack.py", line 446, in validate
    parameter_groups.validate()
  File "/usr/lib/python2.7/site-packages/heat/engine/parameter_groups.py", line 49, in validate
    parameters = group.get(PARAMETERS)
AttributeError: 'unicode' object has no attribute 'get'

By the way, thank you for the advice. I'll try to find a suitable parser.

2015-05-13 10:06 GMT+02:00 Steven Hardy :

> On Wed, May 13, 2015 at 09:19:56AM +0200, ICHIBA Sara wrote:
> > hello people,
> >
> > I wrote a simple template in order to scale a VM on the basis of CPU
> > utilisation. When I try to execute the heat template, I get this error:
> >
> > Error parsing template file:///root/simple.yaml while parsing a block
> > mapping
> > in "", line 38, column 3
> > did not find expected key
> > in "", line 117, column 4
> > I checked my template over and over to see if I missed a space somewhere
> > in those two lines, but couldn't see anything. You can find my template
> > attached. Can you plz tell me if there's something wrong with it?
>
> I think you need to attach/paste simple.yaml where the error is coming
> from.
>
> The files you attached appear unrelated (to each other and the error
> AFAICT).
>
> You may find it useful pasting your template into one of the online yaml
> parsers, sometimes this helps you spot the error in your syntax if the heat
> error doesn't make it obvious.
>
> Steve
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: environment.yaml
Type: application/octet-stream
Size: 67 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: simple.yaml
Type: application/octet-stream
Size: 3569 bytes
Desc: not available
URL: 

From shardy at redhat.com  Wed May 13 08:16:41 2015
From: shardy at redhat.com (Steven Hardy)
Date: Wed, 13 May 2015 09:16:41 +0100
Subject: [Rdo-list] [heat]error parsing template file
In-Reply-To: 
References: 
Message-ID: <20150513081641.GB24573@t430slt.redhat.com>

On Wed, May 13, 2015 at 09:52:30AM +0200, ICHIBA Sara wrote:
> Found some spacing errors and corrected them. Now I have other errors
> popping up when I execute the heat create-stack command:
>
> Just so you know, I'm a newbie to heat, and I'm not that good with
> coding, but I'm trying.
>
> [root at localhost ~(keystone_admin)]# heat stack-create lb_autoscale -f
> simple.yaml -e environment.yaml
> ERROR: 'unicode' object has no attribute 'get'
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
> line 134, in _dispatch_and_reply
>     incoming.message))
>   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
> line 177, in _dispatch
>     return self._do_dispatch(endpoint, method, ctxt, args)
>   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
> line 123, in _do_dispatch
>     result = getattr(endpoint, method)(ctxt, **new_args)
>   File "/usr/lib/python2.7/site-packages/heat/engine/service.py", line
> 69, in wrapped
>     return func(self, ctx, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/heat/engine/service.py", line
> 645, in create_stack
>     owner_id)
>   File "/usr/lib/python2.7/site-packages/heat/engine/service.py", line
> 568, in _parse_template_and_validate_stack
>     stack.validate()
>   File "/usr/lib/python2.7/site-packages/heat/engine/stack.py", line 446,
> in validate
>     parameter_groups.validate()
>   File "/usr/lib/python2.7/site-packages/heat/engine/parameter_groups.py",
> line 49, in validate
>     parameters = group.get(PARAMETERS)
> AttributeError: 'unicode' object has no attribute 'get'

That error isn't super-helpful (we should look at improving it, I'll raise
a bug), but AFAICS the problem is you're specifying the parameters inline
in the parameter_groups definition:

http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#parameter-groups-section

Here's how it should look:

https://github.com/openstack/heat-templates/blob/master/openshift-origin/centos65/highly-available/oso_ha.yaml#L6

Unless you really need them, I'd recommend just not using parameter groups
for now, and getting things working with just the "parameters" section
first.

HTH,

Steve

From ichi.sara at gmail.com  Wed May 13 08:20:30 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Wed, 13 May 2015 10:20:30 +0200
Subject: [Rdo-list] [heat]error parsing template file
In-Reply-To: <20150513081641.GB24573@t430slt.redhat.com>
References: <20150513081641.GB24573@t430slt.redhat.com>
Message-ID: 

Ok, thank you for your response. I'll change it and see what it gives me.
I followed some template examples which I found on github. I'm just
running some tests to get used to heat and its templates.

2015-05-13 10:16 GMT+02:00 Steven Hardy :

> On Wed, May 13, 2015 at 09:52:30AM +0200, ICHIBA Sara wrote:
> > Found some spacing errors and corrected them. Now I have other errors
> > popping up when I execute the heat create-stack command:
> >
> > Just so you know, I'm a newbie to heat, and I'm not that good with
> > coding, but I'm trying.
> >
> > [root at localhost ~(keystone_admin)]# heat stack-create lb_autoscale -f
> > simple.yaml -e environment.yaml
> > ERROR: 'unicode' object has no attribute 'get'
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
> > line 134, in _dispatch_and_reply
> >     incoming.message))
> >   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
> > line 177, in _dispatch
> >     return self._do_dispatch(endpoint, method, ctxt, args)
> >   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
> > line 123, in _do_dispatch
> >     result = getattr(endpoint, method)(ctxt, **new_args)
> >   File "/usr/lib/python2.7/site-packages/heat/engine/service.py", line
> > 69, in wrapped
> >     return func(self, ctx, *args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/heat/engine/service.py", line
> > 645, in create_stack
> >     owner_id)
> >   File "/usr/lib/python2.7/site-packages/heat/engine/service.py", line
> > 568, in _parse_template_and_validate_stack
> >     stack.validate()
> >   File "/usr/lib/python2.7/site-packages/heat/engine/stack.py", line 446,
> > in validate
> >     parameter_groups.validate()
> >   File "/usr/lib/python2.7/site-packages/heat/engine/parameter_groups.py",
> > line 49, in validate
> >     parameters = group.get(PARAMETERS)
> > AttributeError: 'unicode' object has no attribute 'get'
>
> That error isn't super-helpful (we should look at improving it, I'll raise
> a bug), but AFAICS the problem is you're specifying the parameters inline
> in the parameter_groups definition:
>
> http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#parameter-groups-section
>
> Here's how it should look:
>
> https://github.com/openstack/heat-templates/blob/master/openshift-origin/centos65/highly-available/oso_ha.yaml#L6
>
> Unless you really need them, I'd recommend just not using parameter groups
> for now, and getting things working with just the "parameters" section
> first.
>
> HTH,
>
> Steve
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shardy at redhat.com  Wed May 13 08:31:20 2015
From: shardy at redhat.com (Steven Hardy)
Date: Wed, 13 May 2015 09:31:20 +0100
Subject: [Rdo-list] [heat]error parsing template file
In-Reply-To: 
References: <20150513081641.GB24573@t430slt.redhat.com>
Message-ID: <20150513083119.GC24573@t430slt.redhat.com>

On Wed, May 13, 2015 at 10:20:30AM +0200, ICHIBA Sara wrote:
> Ok, thank you for your response. I'll change it and see what it gives me.
> I followed some template examples which I found on github. I'm just
> running some tests to get used to heat and its templates.

Where on github? If it's an official repo we should fix it.

I've raised an upstream bug[1] so we can improve the error message, and
possibly the user-guide documentation[2] and official heat example
templates[3]

Parameter groups are documented in the HOT spec[4], but it seems like we
can improve things elsewhere.
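In the meantime, for reference, a minimal valid layout would be something
like this (the label and parameter names are just placeholders):

  parameter_groups:
  - label: General
    description: General parameters
    parameters:
    - key_name
    - image_id

  parameters:
    key_name:
      type: string
    image_id:
      type: string

i.e. 'parameters' inside a group is a list of names referring back to the
top-level parameters section, not inline parameter definitions.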
[1] https://bugs.launchpad.net/heat/+bug/1454559 [2] http://docs.openstack.org/user-guide/enduser/hot-guide/hot_hello_world.html#input-parameters [3] https://github.com/openstack/heat-templates [4] http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#parameter-groups-section From ichi.sara at gmail.com Wed May 13 08:51:01 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Wed, 13 May 2015 10:51:01 +0200 Subject: [Rdo-list] [heat]error parsing template file In-Reply-To: <20150513083119.GC24573@t430slt.redhat.com> References: <20150513081641.GB24573@t430slt.redhat.com> <20150513083119.GC24573@t430slt.redhat.com> Message-ID: Oh, sorry again. It's not even on github. I was mistaken. Probably I confused the github repo with this tuto as I was looking at both. Anyway, I took the parameter_groups from it not from github. Next time i'll pay more attention to what I say. You were right, once I removed the parameter_groups and corrected some parameters. My template was parsed correctly without errors. 2015-05-13 10:31 GMT+02:00 Steven Hardy : > On Wed, May 13, 2015 at 10:20:30AM +0200, ICHIBA Sara wrote: > > Ok, thank you for your response. I'll change it and see what it gives > me. > > I followed some template examples which I found in github. I'm just > > running some tests to get used to heat and its templates > > Where on github? If it's an official repo we should fix it. > > I've raised an upstream bug[1] so we can improve the error message, and > possibly the user-guide documentation[2] and official heat example > templates[3] > > Parameter groups are documented in the HOT spec[4], but it seems like we > can improve things elsewhere. > > [1] https://bugs.launchpad.net/heat/+bug/1454559 > [2] > http://docs.openstack.org/user-guide/enduser/hot-guide/hot_hello_world.html#input-parameters > [3] https://github.com/openstack/heat-templates > [4] > http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#parameter-groups-section > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: simple.yaml Type: application/octet-stream Size: 3301 bytes Desc: not available URL: From jcoufal at redhat.com Wed May 13 10:01:03 2015 From: jcoufal at redhat.com (Jaromir Coufal) Date: Wed, 13 May 2015 12:01:03 +0200 Subject: [Rdo-list] Moving Docs builds of RDO-Manager In-Reply-To: <55522BA9.7010905@redhat.com> References: <554B2CCF.2070705@redhat.com> <554CCB52.4090509@redhat.com> <554CCCC2.8060801@redhat.com> <55522BA9.7010905@redhat.com> Message-ID: <555320DF.1000200@redhat.com> Thanks Ben. We also have redirect from rdoproject.org domain, so the official URL for docs is: http://docs.rdoproject.org/rdo-manager/master If you have bookmark for the docs, now is the time for its update ;) -- Jarda On 12/05/15 18:34, Ben Nemec wrote: > Okay, this should be done. The doc update job is pointed to the new > location, and redirect pages were left at all the old locations. > > On 05/08/2015 09:48 AM, Jaromir Coufal wrote: >> >> >> On 08/05/15 16:42, Ben Nemec wrote: >>> Sorry, forgot to come back to this yesterday. Thoughts inline. 
>>> On 05/07/2015 04:13 AM, Jaromir Coufal wrote:
>>>> Hi Ben,
>>>>
>>>> I wanted to sync with you if we could coordinate the movement of
>>>> documentation from:
>>>>
>>>> https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/*
>>>>
>>>> to:
>>>>
>>>> https://repos.fedorapeople.org/repos/openstack-m/docs/*
>>>>
>>>> with following sub-directories:
>>>> * ../docs/master
>>>> * ../docs/sprint4
>>>> * ../docs/sprint5 (doesn't exist yet)
>>>> etc.
>>>>
>>>> Since we don't have any date for testing day yet, I think we can do it
>>>> this week.
>>>
>>> True, but we do have a bunch of people working on demos for the end of
>>> the sprint, so I'd prefer to hold off until Monday.  At this point one
>>> day more or less isn't going to hurt anything.
>>>
>>>>
>>>> For old sites, I don't think that we can somehow control redirects, so I
>>>> would suggest to place there temporary index.html file with information
>>>> that docs moved and direct people to new location (example:
>>>> http://paste.openstack.org/show/215975/).
>>>
>>> +1.  In fact, I'd suggest we replace every html file in the old location
>>> with such a redirect.  That way people (like me :-) who always have the
>>> docs open and just refresh them periodically will get sent to the new
>>> location automatically.
>>>
>>> So if there are no objections I'll look into updating all the build
>>> targets on Monday and get the redirects in place.
>>>
>>> -Ben
>>
>> +1 completely agree.
>>
>> Thanks Ben
>> -- J
>>

From ichi.sara at gmail.com  Wed May 13 10:14:33 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Wed, 13 May 2015 12:14:33 +0200
Subject: [Rdo-list] [heat]error parsing template file
In-Reply-To: 
References: <20150513081641.GB24573@t430slt.redhat.com> <20150513083119.GC24573@t430slt.redhat.com>
Message-ID: 

I have another issue with heat. Now that the template is being parsed
successfully, the creation never completes and heat doesn't report any
error. I noticed in the dashboard that heat created two of each resource,
spawned at the same time: one is creation_complete and the other is in
progress.

The heat_engine.log is just so huge as I enabled debug mode, but I can't
see any relevant information out there.

2015-05-13 10:51 GMT+02:00 ICHIBA Sara :

> Oh, sorry again. It's not even on github. I was mistaken.
>> >> [1] https://bugs.launchpad.net/heat/+bug/1454559 >> [2] >> http://docs.openstack.org/user-guide/enduser/hot-guide/hot_hello_world.html#input-parameters >> [3] https://github.com/openstack/heat-templates >> [4] >> http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#parameter-groups-section >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: simple.yaml Type: application/octet-stream Size: 3302 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: instance_creation_in_progress.PNG Type: image/png Size: 23669 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: lb_autoscale2_dashboard_stack_creation.PNG Type: image/png Size: 39371 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: lb_creation_complete.PNG Type: image/png Size: 12333 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: resources_lbautoscale2.PNG Type: image/png Size: 48225 bytes Desc: not available URL: From ihrachys at redhat.com Wed May 13 10:37:34 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 13 May 2015 12:37:34 +0200 Subject: [Rdo-list] nova.conf.sample doesn't include all options available In-Reply-To: References: Message-ID: <5553296E.7090008@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 05/13/2015 01:01 AM, Belmiro Moreira wrote: > Hi, In RDO nova (2014.2.2) the "nova.conf.sample" doesn't include > the [database] configuration group. Any reason to not have it? It > would help deployers if nova.conf.sample has all configuration > options available. > > Apparently the pre-generated sample config file included into nova package is generated without all oslo libraries passed to oslo-config-generator. You may want to report a bug against openstack-nova to fix the issue. Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQEcBAEBCAAGBQJVUyluAAoJEC5aWaUY1u57FVoH/1x2cGkDU0U/OCRVeIBhIKTt UKY0sQuMjx3mOOQci1YHjbUAo7EIZuRiqABPS3U1g8On0izwirdEHsgHGuJif+Tn GHEtScNcbB582hzKyTjWwSpBNupnM/IPIsK9OzwYxVnbFLFZb6zj1dI3CUUGaH1J BOX4X8KmyLSqFTJJTSImQZVMZa4GF0EXY7/XQ+uPe9tLhjNcoIVN9t5aN8SBktnG YJYYnFY0SR/lqZ/KxdFeJsvOB9M0WECuInlVzr01wtYC+/VXpsCFbtXj0h7OVnbZ FoWFN+yfrp3Cv+oEEaaBUG1mk609lCtZCQ1ILyG16R4cMsHiqwf8RDcRdpuV0H8= =WpkC -----END PGP SIGNATURE----- From ihrachys at redhat.com Wed May 13 10:38:25 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 13 May 2015 12:38:25 +0200 Subject: [Rdo-list] neutron added python-six build dependency In-Reply-To: <55522F82.60804@redhat.com> References: <55522F82.60804@redhat.com> Message-ID: <555329A1.2090707@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 UPD: it turned it's a bug in neutron, and we should not need to add python-six to build deps once it's fixed. Ihar On 05/12/2015 06:51 PM, Ihar Hrachyshka wrote: > Hi all, > > neutron just added [1] a new 'six' dependency to be able to build > a source tarball. Now, delorean/kilo crashes [2] (it seems it > crashes for CentOS only. I suspect python-six is available in > default Fedora image, that's why). It goes like this: > > - sdist rebuild tries to parse setup.cfg - setup.cfg has a hook to > import inside neutron.* - since [1] adds a six library usage into > neutron/__init__.py, sdist call now fails. 
>
> I wonder what's the best way to handle it? From one side, it's
> probably the easiest to add python-six build dependency to neutron
> packages. From the other side, it's not really a neutron
> dependency but a delorean+neutron one. Building from tarballs
> should not need it.
>
> [1]: https://review.openstack.org/#/c/181277/
> [2]: http://trunk.rdoproject.org/centos70/96/09/96091cb976e5f858fcd53fb0798555020eac94b8_33cfff38/rpmbuild.log
>
> Comments? Ihar
>
> _______________________________________________ Rdo-list mailing
> list Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQEcBAEBCAAGBQJVUymhAAoJEC5aWaUY1u57pycIAJN/yJHUz763q9RUyLbK0DYq
fVLG60wRrBtOzDFPM8md+b2nyuM+7IjZ+spSCB4ks3ZHp2wqpD5jL4YfvgzsJULZ
XRXc2nAo7X6qoWOMNsX5XvVgRfXP4u0wl0h8/r42uA7njJANzWqRZoxdPuOYgsx1
6FXKpFmxucZZN/5GdrnCGLDemt0uIM1uTmfrXkxB8sLM7nLDUaBVVAt5tn7EiYtl
w5e9lDeDJuNNKvQhgUlWAyy+5rJNbRU47fDIARiVDJ+aWAty+BSH5W6AaCEOkiQz
xr0NTrOCX3CT/uo+BvCwGCCzHNWKfoFKW1ncvG46GeoHlt2Gp0VWGQf2W/cT5Kg=
=hJAr
-----END PGP SIGNATURE-----

From ihrachys at redhat.com  Wed May 13 11:05:49 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Wed, 13 May 2015 13:05:49 +0200
Subject: [Rdo-list] openstack-neutron-gbp for delorean
Message-ID: <5553300D.4050101@redhat.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Hi all,

now that rpm-kilo is synced from delorean to fedora/master, I see
openstack-neutron-gbp fails due to unsatisfied dependencies (it
currently requires Juno neutron).

I wonder whether that's a good time to move openstack-neutron-gbp
development under the delorean roof, and leave the fedora/master
package as a downstream just for koji and stuff. In that way, the -gbp
package would be able to receive more attention from neutron
maintainers, getting packaging updates in a timely manner, and with
review and CI applied.

I'm putting the Fedora package maintainer into CC. Robert, since
you're the maintainer of the Fedora package, could you please evaluate
the option to move the package into Delorean? I think it would serve
the interests of RDO, Fedora, and GBP better.

Comments?

Ihar
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQEcBAEBCAAGBQJVUzANAAoJEC5aWaUY1u57NIoIAK8940/9xkjZUFvO0CjnPCAc
kiugpBVn0OHaCWPhgV87dGTE3eO+aiP9/a8rLs3BejO1NZ581lDqswJRxEfsMInY
EAa5r9Zzb7eR2EPyzBnqd180yYcSNFSGEh82DBktfmlJKn+wFmjP+FsjrmVgm+le
d4uS5IedcUJJEL8YKhE2+48xnD/J/Yt8EnxOLy26iM1dEEGRKBQKrL+64PemxRw5
ae6v/lmhIVHOoZFB6qIxv2D2GlyYkXrDz+nmPuHt0kr97VkuYllwZmxcu7v7gJIG
VhT4cWs4A2UgwVG9Qb2hTPOfPMUVoDimeiwtChJdFZktSfz0Vj+8t2JhqUBjBEM=
=D4Vt
-----END PGP SIGNATURE-----

From ichi.sara at gmail.com  Wed May 13 12:08:54 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Wed, 13 May 2015 14:08:54 +0200
Subject: [Rdo-list] [heat]error parsing template file
In-Reply-To: 
References: <20150513081641.GB24573@t430slt.redhat.com> <20150513083119.GC24573@t430slt.redhat.com>
Message-ID: 

Hello,

Finally my stack creation failed and I found these errors in my
heat_engine.log. If you have any idea how to fix this,
plz let me know 2015-05-13 11:58:18.764 3963 ERROR heat.engine.resource [req-16c7841f-fad0-4545-b098-3f473c1aa039 None] DB error Not found 2015-05-13 11:58:18.785 3963 ERROR heat.engine.resource [req-16c7841f-fad0-4545-b098-3f473c1aa039 None] DB error Not found 2015-05-13 11:58:18.822 3963 ERROR heat.engine.resource [req-16c7841f-fad0-4545-b098-3f473c1aa039 None] DB error Not found 2015-05-13 11:58:18.839 3963 DEBUG heat.engine.scheduler [req-16c7841f-fad0-4545-b098-3f473c1aa039 None] Task resource_action cancelled cancel /usr/lib/python2.7/site-packages/heat/engine/scheduler.py:236 2015-05-13 11:58:18.839 3963 DEBUG heat.engine.scheduler [req-16c7841f-fad0-4545-b098-3f473c1aa039 None] Task resource_action cancelled cancel /usr/lib/python2.7/site-packages/heat/engine/scheduler.py:236 2015-05-13 11:58:18.879 3963 INFO heat.engine.environment [req-16c7841f-fad0-4545-b098-3f473c1aa039 None] Registering OS::Heat::ScaledResource -> AWS::EC2::Instance 2015-05-13 11:58:18.880 3963 INFO heat.engine.environment [req-16c7841f-fad0-4545-b098-3f473c1aa039 None] Registering OS::Nova::Server::Cirros -> file:///root/cirros.yaml 2015-05-13 11:58:18.882 3963 INFO heat.common.urlfetch [req-16c7841f-fad0-4545-b098-3f473c1aa039 None] Fetching data from file:///root/cirros.yaml 2015-05-13 11:58:19.764 3963 DEBUG heat.engine.scheduler [-] Task stack_task from Stack "lb_autoscale2" [954257c4-b515-46d0-b479-9cd3a12557f4] running step /usr/lib/python2.7/site-packages/heat/engine/scheduler.py:210 2015-05-13 11:58:19.764 3963 DEBUG heat.engine.scheduler [-] Task resource_action running step /usr/lib/python2.7/site-packages/heat/engine/scheduler.py:210 2015-05-13 11:58:19.764 3963 DEBUG heat.engine.scheduler [-] Task stack_task from Stack "lb_autoscale2-group-gfujfyotohpr" [dc1fb2f2-703b-4cb8-a047-970702eb80d0] running step /usr/lib/python2.7/site-packages/heat/engine/scheduler.py:210 2015-05-13 11:58:19.765 3963 DEBUG heat.engine.scheduler [-] Task resource_action running step /usr/lib/python2.7/site-packages/heat/engine/scheduler.py:210 2015-05-13 11:58:19.765 3963 DEBUG heat.engine.scheduler [-] Task stack_task from Stack "lb_autoscale2-group-gfujfyotohpr-tc3jq2xezgmk-ea3acb2t3hrc" [43b03d83-8206-4838-8009-644826ddcd4f] running step /usr/lib/python2.7/site-packages/heat/engine/scheduler.py:210 2015-05-13 11:58:19.765 3963 DEBUG heat.engine.scheduler [-] Task resource_action running step /usr/lib/python2.7/site-packages/heat/engine/scheduler.py:210 2015-05-13 11:58:19.767 3963 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.5.33 2015-05-13 11:58:19.951 3963 DEBUG urllib3.connectionpool [-] "GET /v2/fac261cd98974411a9b2e977cd9ec876/servers/b1fb541d-7d01-495e-b595-18428eca2a80 HTTP/1.1" 200 1581 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:357 2015-05-13 12:54:54.064 3963 TRACE heat.engine.resource BadRequest: Expecting to find username or userId in passwordCredentials - the server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. 
(HTTP 400) 2015-05-13 12:54:54.064 3963 TRACE heat.engine.resource 2015-05-13 12:54:54.171 3963 DEBUG heat.engine.scheduler [-] Task resource_action cancelled cancel /usr/lib/python2.7/site-packages/heat/engine/scheduler.py:236 2015-05-13 12:54:54.181 3963 INFO heat.engine.stack [-] Stack CREATE FAILED (lb_autoscale2-group-gfujfyotohpr-tc3jq2xezgmk-ea3acb2t3hrc): Resource CREATE failed: BadRequest: Expecting to find username or userId in passwordCredentials - the server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) 2015-05-13 12:54:54.181 3963 DEBUG heat.engine.scheduler [-] Task stack_task from Stack "lb_autoscale2-group-gfujfyotohpr-tc3jq2xezgmk-ea3acb2t3hrc" [43b03d83-8206-4838-8009-644826ddcd4f] complete step /usr/lib/python2.7/site-packages/heat/engine/scheduler.py:216 2015-05-13 12:54:54.182 3963 INFO heat.engine.resource [-] CREATE: TemplateResource "tc3jq2xezgmk" [43b03d83-8206-4838-8009-644826ddcd4f] Stack "lb_autoscale2-group-gfujfyotohpr" [dc1fb2f2-703b-4cb8-a047-970702eb80d0] 2015-05-13 12:54:54.182 3963 TRACE heat.engine.resource Traceback (most recent call last): 2015-05-13 12:54:54.182 3963 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 439, in _action_recorder 2015-05-13 12:54:54.182 3963 TRACE heat.engine.resource yield 2015-05-13 12:54:54.182 3963 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 509, in _do_action 2015-05-13 12:54:54.182 3963 TRACE heat.engine.resource yield self.action_handler_task(action, args=handler_args) 2015-05-13 12:54:54.182 3963 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/scheduler.py", line 303, in wrapper 2015-05-13 12:54:54.182 3963 TRACE heat.engine.resource step = next(subtask) 2015-05-13 12:54:54.182 3963 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 483, in action_handler_task 2015-05-13 12:54:54.182 3963 TRACE heat.engine.resource while not check(handler_data): 2015-05-13 12:54:54.182 3963 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/stack_resource.py", line 223, in check_create_complete 2015-05-13 12:54:54.182 3963 TRACE heat.engine.resource raise exception.Error(self._nested.status_reason) 2015-05-13 12:54:54.182 3963 TRACE heat.engine.resource Error: Resource CREATE failed: BadRequest: Expecting to find username or userId in passwordCredentials - the server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) 2015-05-13 12:54:54.182 3963 TRACE heat.engine.resource 2015-05-13 12:54:54.205 3963 DEBUG heat.engine.scheduler [-] Task resource_action cancelled cancel /usr/lib/python2.7/site-packages/heat/engine/scheduler.py:236 2015-05-13 12:14 GMT+02:00 ICHIBA Sara : > I have an other issue with heat. Now that the template is being parsed > successfully. The creation is never complete and heat doesn't report any > error. I noticed in the dashboard that heat created two of each ressource > spaned at the same time. One is creation_complete and the other is in > progress. > > The heat_engine.log is just so huge as a enabled the debug mode. But i > can't see any relevant information out there > > 2015-05-13 10:51 GMT+02:00 ICHIBA Sara : > >> Oh, sorry again. It's not even on github. I was mistaken. 
Probably I >> confused the github repo with this tuto >> >> as I was looking at both. Anyway, I took the parameter_groups from it not >> from github. Next time i'll pay more attention to what I say. >> >> You were right, once I removed the parameter_groups and corrected some >> parameters. My template was parsed correctly without errors. >> >> 2015-05-13 10:31 GMT+02:00 Steven Hardy : >> >>> On Wed, May 13, 2015 at 10:20:30AM +0200, ICHIBA Sara wrote: >>> > Ok, thank you for your response. I'll change it and see what it >>> gives me. >>> > I followed some template examples which I found in github. I'm just >>> > running some tests to get used to heat and its templates >>> >>> Where on github? If it's an official repo we should fix it. >>> >>> I've raised an upstream bug[1] so we can improve the error message, and >>> possibly the user-guide documentation[2] and official heat example >>> templates[3] >>> >>> Parameter groups are documented in the HOT spec[4], but it seems like we >>> can improve things elsewhere. >>> >>> [1] https://bugs.launchpad.net/heat/+bug/1454559 >>> [2] >>> http://docs.openstack.org/user-guide/enduser/hot-guide/hot_hello_world.html#input-parameters >>> [3] https://github.com/openstack/heat-templates >>> [4] >>> http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#parameter-groups-section >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcoufal at redhat.com Wed May 13 15:22:43 2015 From: jcoufal at redhat.com (Jaromir Coufal) Date: Wed, 13 May 2015 17:22:43 +0200 Subject: [Rdo-list] [RDO-Manager] Testing Day for RDO-Manager - Thursday, May 14 Message-ID: <55536C43.3060206@redhat.com> Dear All, as we indicated last week, we would like to invite you all to RDO-Manager test day which is going to happen tomorrow. RDO-Manager Kilo Test Day - Part1: * May 14, 2015 * https://www.rdoproject.org/RDO-Manager_test_day_Kilo We apologize for late announcement. Given this situation we are going to have one test day tomorrow (sort of a smaller event) and you can expect bigger test day after OpenStack Summit, in the beginning of June. If you can find some time to run through the deployment flow, please come tomorrow and help us getting RDO-Manager as solid as possible. Thanks! See you tomorrow -- Jarda Quick Links: * Test Day Wiki: https://www.rdoproject.org/RDO-Manager_test_day_Kilo * Notes: https://etherpad.openstack.org/p/rdo-manager_kilo_test_day * RDO-Manager Website: http://www.rdoproject.org/RDO-Manager * RDO-Manager Docs: http://docs.rdoproject.org/rdo-manager/master From apevec at gmail.com Wed May 13 15:39:42 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 13 May 2015 17:39:42 +0200 Subject: [Rdo-list] [meeting] RDO packaging meeting minutes (2015-05-13) Message-ID: ======================================== #rdo: RDO packaging meeting (2015-05-13) ======================================== Meeting started by apevec at 15:04:08 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2015-05-13/rdo.2015-05-13-15.04.log.html . Meeting summary --------------- * roll call (apevec, 15:04:19) * RDO Kilo GA (apevec, 15:04:56) * EL7 is GA, Fedora coming (incomplete repo) (apevec, 15:05:13) * ACTION: apevec complete RDO Kilo Fedora repo (apevec, 15:05:34) * LINK: https://rdoproject.org/repos/rdo-release.rpm points to kilo now. I need to update the Quickstart page. 
(rbowen, 15:06:07) * ACTION: apevec and eggmaster to get rdo update CI working for Kilo (apevec, 15:06:43) * ACTION: number80 list pending Fedora pkg reviews (number80, 15:07:36) * RDO Manager test day (apevec, 15:08:19) * ACTION: jcoufal to post rdo manager testday announcement on rdo-list (apevec, 15:11:59) * RDO Meetup at Summit (apevec, 15:12:34) * LINK: https://etherpad.openstack.org/p/RDO_Vancouver (rbowen, 15:13:10) * ACTION: apevec to restart RDO/Fedora thread on rdo-list (apevec, 15:17:33) * ACTION: apevec to start "Should RDO == Delorean? on rdo-list (apevec, 15:20:12) * ACTION: rbowen - Follow up on Meetup agenda to attach a name to each suggested topic (rbowen, 15:25:55) * ACTION: rbowen to push rdo tshirt photo on rdo-list (apevec, 15:26:55) * open floor (apevec, 15:27:17) * LINK: http://tm3.org/rdobugs (rbowen, 15:29:33) * LINK: https://www.rdoproject.org/RDO-Manager_test_day_Kilo (jcoufal, 15:34:42) Meeting ended at 15:36:09 UTC. Action Items ------------ * apevec complete RDO Kilo Fedora repo * apevec and eggmaster to get rdo update CI working for Kilo * number80 list pending Fedora pkg reviews * jcoufal to post rdo manager testday announcement on rdo-list * apevec to restart RDO/Fedora thread on rdo-list * apevec to start "Should RDO == Delorean? on rdo-list * rbowen - Follow up on Meetup agenda to attach a name to each suggested topic * rbowen to push rdo tshirt photo on rdo-list Action Items, by person ----------------------- * apevec * apevec complete RDO Kilo Fedora repo * apevec and eggmaster to get rdo update CI working for Kilo * apevec to restart RDO/Fedora thread on rdo-list * apevec to start "Should RDO == Delorean? on rdo-list * eggmaster * apevec and eggmaster to get rdo update CI working for Kilo * jcoufal * jcoufal to post rdo manager testday announcement on rdo-list * number80 * number80 list pending Fedora pkg reviews * rbowen * rbowen - Follow up on Meetup agenda to attach a name to each suggested topic * rbowen to push rdo tshirt photo on rdo-list * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * apevec (70) * rbowen (36) * kashyap (19) * number80 (15) * jcoufal (13) * mburned (12) * eggmaster (3) * zodbot (3) * ryansb (2) * chandankumar (1) * jpena (1) * aortega (1) * jruzicka (1) Generated by `MeetBot`_ 0.1.4 .. 
_`MeetBot`: http://wiki.debian.org/MeetBot From john.haller at alcatel-lucent.com Wed May 13 16:39:07 2015 From: john.haller at alcatel-lucent.com (Haller, John H (John)) Date: Wed, 13 May 2015 16:39:07 +0000 Subject: [Rdo-list] [meeting] RDO packaging meeting minutes (2015-05-13) In-Reply-To: References: Message-ID: <7C1824C61EE769448FCE74CD83F0CB4F5836428B@US70TWXCHMBA11.zam.alcatel-lucent.com> > Subject: [Rdo-list] [meeting] RDO packaging meeting minutes (2015-05- > 13) > * RDO Kilo GA (apevec, 15:04:56) > * EL7 is GA, Fedora coming (incomplete repo) (apevec, 15:05:13) > * ACTION: apevec complete RDO Kilo Fedora repo (apevec, 15:05:34) Looking at the el7 repository, there are 3 packages issued in December: openstack-utils-2014.2-1 python-cinderclient-1.1.1-1 python-cinderclient-doc-1.1.1-1 For openstack-utils, it looks like a new release tag (and corresponding RPM name bump) is needed; there are some changes made in February and April which look useful: https://github.com/redhat-openstack/openstack-utils Commits since December: openstack-status: list nova instances for all tenants (April 29) openstack-status: list tuskar and ironic services (April 29) openstack-status: list all nova optional services (February 3) For python-cinderclient, there was a 1.2.0 released 22 days ago, followed by a 1.2.1 6 days ago, see release notes: http://docs.openstack.org/developer/python-cinderclient/ All of these look like Kilo-related changes. Regards, John Haller From apevec at gmail.com Wed May 13 18:04:05 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 13 May 2015 20:04:05 +0200 Subject: [Rdo-list] [meeting] RDO packaging meeting minutes (2015-05-13) In-Reply-To: <7C1824C61EE769448FCE74CD83F0CB4F5836428B@US70TWXCHMBA11.zam.alcatel-lucent.com> References: <7C1824C61EE769448FCE74CD83F0CB4F5836428B@US70TWXCHMBA11.zam.alcatel-lucent.com> Message-ID: > For openstack-utils, it looks like a new release tag (and corresponding RPM name bump) is needed; there are some changes made in February and April which look useful: > https://github.com/redhat-openstack/openstack-utils > Commits since December: > openstack-status: list nova instances for all tenants (April 29) > openstack-status: list tuskar and ironic services (April 29) > openstack-status: list all nova optional services (February 3) 2014.2-1 was latest in Rawhide: http://pkgs.fedoraproject.org/cgit/openstack-utils.git/log/ and upstream https://github.com/redhat-openstack/openstack-utils/releases Pádraig, please push a new release, I'll rebuild EL7 in CBS. > For python-cinderclient, there was a 1.2.0 released 22 days ago, followed by a 1.2.1 6 days ago, see release notes: > http://docs.openstack.org/developer/python-cinderclient/ > > All of these look like Kilo-related changes. Yeah, something is not right, stable/kilo global-requirements have python-cinderclient>=1.1.0,<1.2.0 and 1.1.1 is latest on stable/kilo branch https://github.com/openstack/python-cinderclient/commits/stable/kilo hence I didn't update, assuming 1.2.x is Liberty. Please report this upstream!
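(For anyone double-checking a node by hand, something like this shows the gap between what the repo ships and what the kilo cap allows -- just a sketch; it assumes the RDO Kilo repo is enabled and openstack/requirements is checked out on stable/kilo, and the versions echoed are the ones John listed above:)

    # installed build from the RDO Kilo repo
    rpm -q python-cinderclient
    # -> python-cinderclient-1.1.1-1
    # the cap in stable/kilo global-requirements
    grep python-cinderclient global-requirements.txt
    # -> python-cinderclient>=1.1.0,<1.2.0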
Cheers, Alan From john.haller at alcatel-lucent.com Wed May 13 19:56:17 2015 From: john.haller at alcatel-lucent.com (Haller, John H (John)) Date: Wed, 13 May 2015 19:56:17 +0000 Subject: [Rdo-list] [meeting] RDO packaging meeting minutes (2015-05-13) In-Reply-To: References: <7C1824C61EE769448FCE74CD83F0CB4F5836428B@US70TWXCHMBA11.zam.alcatel-lucent.com> Message-ID: <7C1824C61EE769448FCE74CD83F0CB4F5836473F@US70TWXCHMBA11.zam.alcatel-lucent.com> > Yeah, something is not right, stable/kilo global-requirements have > python-cinderclient>=1.1.0,<1.2.0 > and 1.1.1 is latest on stable/kilo branch > https://github.com/openstack/python-cinderclient/commits/stable/kilo > hence I didn't update, assuming 1.2.x is Liberty. > Please report this upstream! > > Cheers, > Alan 1.2.0 is Liberty: http://lists.openstack.org/pipermail/openstack-dev/2015-April/062187.html However, many/most of the commits look like they are part of what should have gone into Kilo, even though they are nominally for Liberty. I think someone forgot to make a Kilo python-cinderclient branch. I filed bug report 1454818 on Cinder, it looks like some of the 1.2.0 changes will have to be backported to 1.1.X. The last Cinderclient tag came in September 2014, it's like Kilo didn't exist for Cinderclient. The 1.1.1 version is actually Juno, not sure if there is room for a Kilo release stream in the numbering plan. Regards, John Haller From stdake at cisco.com Wed May 13 20:55:37 2015 From: stdake at cisco.com (Steven Dake (stdake)) Date: Wed, 13 May 2015 20:55:37 +0000 Subject: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 In-Reply-To: References: Message-ID: Arash, If you're installing on devstack, please mail the openstack-dev mailing list and place [magnum] in the mailing list header. This list is more targeted at RDO. Regards -steve From: Arash Kaffamanesh > Date: Monday, May 11, 2015 at 11:05 AM To: Steven Dake > Cc: "rdo-list at redhat.com" > Subject: Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 Steve, Thanks! I pulled magnum from git on devstack, dropped the magnum db, created a new one and tried to create a bay, now I'm getting "went to status error due to unknown" as below.
Nova and magnum bay-list list shows: ubuntu at magnum:~/devstack$ nova list +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-----------------------------------------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-----------------------------------------------------------------------+ | 797b6057-1ddf-4fe3-8688-b63e5e9109b4 | te-h5yvoiptrmx3-0-4w4j2ltnob7a-kube_node-vg7rojnafrub | ERROR | - | NOSTATE | testbay-6kij6pvui3p7-fixed_network-46mvxv7yfjzw=10.0.0.5, 2001:db8::f | | c0b56f08-8a4d-428a-aee1-b29ca6e68163 | testbay-6kij6pvui3p7-kube_master-z3lifgrrdxie | ACTIVE | - | Running | testbay-6kij6pvui3p7-fixed_network-46mvxv7yfjzw=10.0.0.3, 2001:db8::d | +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-----------------------------------------------------------------------+ ubuntu at magnum:~/devstack$ magnum bay-list +--------------------------------------+---------+------------+---------------+ | uuid | name | node_count | status | +--------------------------------------+---------+------------+---------------+ | 87e36c44-a884-4cb4-91cc-c7ae320f33b4 | testbay | 2 | CREATE_FAILED | +--------------------------------------+---------+------------+---------------+ e3a65b05f", "flannel_network_subnetlen": "24", "fixed_network_cidr": "10.0.0.0/24", "OS::stack_id": "d0246d48-23e0-4aa0-87e0-052b2ca363e8", "OS::stack_name": "testbay-6kij6pvui3p7", "master_flavor": "m1.small", "external_network_id": "e3e2a633-1638-4c11-a994-7179a24e826e", "portal_network_cidr": "10.254.0.0/16", "docker_volume_size": "5", "ssh_key_name": "testkey", "kube_allow_priv": "true", "number_of_minions": "2", "flannel_use_vxlan": "false", "flannel_network_cidr": "10.100.0.0/16", "server_flavor": "m1.medium", "dns_nameserver": "8.8.8.8", "server_image": "fedora-21-atomic-3"}, "id": "d0246d48-23e0-4aa0-87e0-052b2ca363e8", "outputs": [{"output_value": ["2001:db8::f", "2001:db8::e"], "description": "No description given", "output_key": "kube_minions_external"}, {"output_value": ["10.0.0.5", "10.0.0.4"], "description": "No description given", "output_key": "kube_minions"}, {"output_value": "2001:db8::d", "description": "No description given", "output_key": "kube_master"}], "template_description": "This template will boot a Kubernetes cluster with one or more minions (as specified by the number_of_minions parameter, which defaults to \"2\").\n"}} log_http_response /usr/local/lib/python2.7/dist-packages/heatclient/common/http.py:141 2015-05-11 17:31:15.968 30006 ERROR magnum.conductor.handlers.bay_k8s_heat [-] Unable to create bay, stack_id: d0246d48-23e0-4aa0-87e0-052b2ca363e8, reason: Resource CREATE failed: ResourceUnknownStatus: Resource failed - Unknown status FAILED due to "Resource CREATE failed: ResourceUnknownStatus: Resource failed - Unknown status FAILED due to "Resource CREATE failed: ResourceInError: Went to status error due to "Unknown""" Any Idea? Thanks! -Arash On Mon, May 11, 2015 at 2:04 AM, Steven Dake (stdake) > wrote: Arash, The short of it is Magnum 2015.1.0 is DOA. Four commits have hit the repository in the last hour to fix these problems. Now Magnum works with v1beta3 of the kubernetes 0.15 v1betav3 examples with the exception of the service object. 
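For anyone following along, the object that still fails is the service definition from the kubernetes redis example. In v1beta3 it is roughly of this shape (I'm sketching from memory here, so treat the field names and the selector as illustrative rather than exact):

    apiVersion: v1beta3
    kind: Service
    metadata:
      name: redis-sentinel
      labels:
        name: sentinel
    spec:
      ports:
      - port: 26379
        targetPort: 26379
      selector:
        redis-sentinel: "true"

It is a service object of this shape that magnum service-create currently chokes on.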
We are actively working on that problem upstream; I'll update when it's fixed. To see my run, check out: http://ur1.ca/kc613 -> http://paste.fedoraproject.org/220479/13022911 To upgrade and see everything working but the service object, you will have to remove your openstack-magnum package if using my COPR repo or git pull on your Magnum repo if using devstack. Boris - interested to hear the feedback on a CentOS distro operation once we get that service bug fixed. Regards -steve From: Arash Kaffamanesh > Date: Sunday, May 10, 2015 at 4:10 PM To: Steven Dake > Cc: "rdo-list at redhat.com" > Subject: Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 Steve, Thanks for your kind advice. I'm trying to go first through the quick start for magnum with devstack on ubuntu and I'm also following this guide to create a bay with 2 nodes: http://git.openstack.org/cgit/openstack/magnum/tree/doc/source/dev/dev-quickstart.rst I got quite far, but when running this step to create the service to provide a discoverable endpoint for the redis sentinels in the cluster: magnum service-create --manifest ./redis-sentinel-service.yaml --bay testbay I'm getting: ERROR: Invalid resource state. (HTTP 409) In the console, I see: 2015-05-10 22:19:44.010 4967 INFO oslo_messaging._drivers.impl_rabbit [-] Connected to AMQP server on 127.0.0.1:5672 2015-05-10 22:19:44.050 4967 WARNING wsme.api [-] Client-side error: Invalid resource state. 127.0.0.1 - - [10/May/2015 22:19:44] "POST /v1/rcs HTTP/1.1" 409 115 The testbay is running with 2 nodes properly: ubuntu at magnum:~/kubernetes/examples/redis$ magnum bay-list | 4fa480a7-2d96-4a3e-876b-1c59d67257d6 | testbay | 2 | CREATE_COMPLETE | Any ideas where I could dig for the problem? By the way, after running "magnum pod-create .." the status shows "failed" ubuntu at magnum:~/kubernetes/examples/redis/v1beta3$ magnum pod-create --manifest ./redis-master.yaml --bay testbay +--------------+---------------------------------------------------------------------+ | Property | Value | +--------------+---------------------------------------------------------------------+ | status | failed | And the pod-list shows: ubuntu at magnum:~$ magnum pod-list +--------------------------------------+--------------+ | uuid | name | +--------------------------------------+--------------+ | 8d6977c1-a88f-45ee-be6c-fd869874c588 | redis-master | I also tried to set the status to running in the pod database table, but it didn't help. P.S.: I also tried to run the whole thing on fedora 21 with devstack, but I got more problems than on Ubuntu. Many thanks in advance for your help! Arash On Mon, May 4, 2015 at 12:54 AM, Steven Dake (stdake) > wrote: Boris, Feel free to try out my Magnum packages here. They work in containers, not sure about CentOS.
I'm not certain the systemd files are correct (I didn't test that part) but the dependencies are correct: https://copr.fedoraproject.org/coprs/sdake/openstack-magnum/ NB you will have to run through the quickstart configuration guide here: https://github.com/openstack/magnum/blob/master/doc/source/dev/dev-manual-devstack.rst Regards -steve From: Boris Derzhavets > Date: Sunday, May 3, 2015 at 11:20 AM To: Arash Kaffamanesh > Cc: "rdo-list at redhat.com" > Subject: Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 Arash, Please, disregard this notice :- >You wrote :- >> What I noticed here, if I associate a floating ip to a VM with 2 interfaces, then I'll lose the >> connectivity >to the instance and Kilo Different types of VMs in your environment and mine. Boris. ________________________________ Date: Sun, 3 May 2015 16:51:54 +0200 Subject: Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 From: ak at cloudssky.com To: bderzhavets at hotmail.com CC: apevec at gmail.com; rdo-list at redhat.com Boris, thanks for your kind feedback. I did a 3 node Kilo RC2 virt setup on top of my Kilo RC2 which was installed on bare metal. The installation was successful on the first run. The network looks like this: https://cloudssky.com/.galleries/images/kilo-virt-setup.png For this setup I added the latest CentOS cloud image to glance, ran an instance (controller), enabled root login, added ifcfg-eth1 to the instance, created a snapshot from the controller, added the repos to this instance, yum updated, rebooted and spawned the network and compute1 vm nodes from that snapshot. (To be able to ssh into the VMs over the 20.0.1.0 network, I created the gate VM with a floating ip assigned and installed OpenVPN on it.) What I noticed here: if I associate a floating ip to a VM with 2 interfaces, then I'll lose the connectivity to the instance and Kilo becomes crazy (the AIO controller on bare metal somehow loses its br-ex interface, but I didn't try to reproduce it again). The packstack file was created in interactive mode with: packstack --answer-file= --> press enter I accepted most default values and selected trove and heat to be installed. The answers are on pastebin: http://pastebin.com/SYp8Qf7d The generated packstack file is here: http://pastebin.com/XqJuvQxf The br-ex interfaces and changes to eth0 are created on network and compute nodes correctly (output below). And one nice thing for me coming from Havana was to see how easy it has become to create an image in Horizon by uploading an image file (in my case rancheros.iso and centos.qcow2 worked like a charm). Now it's time to discover Ironic, Trove and Manila, and if someone has some tips or guidelines on how to test these new exciting things or has any news about Murano or Magnum on RDO, then I'll be even more lucky and excited than I am now about Kilo :-) Thanks!
Arash --- Some outputs here: [root at controller ~(keystone_admin)]# nova hypervisor-list +----+---------------------+-------+---------+ | ID | Hypervisor hostname | State | Status | +----+---------------------+-------+---------+ | 1 | compute1.novalocal | up | enabled | +----+---------------------+-------+---------+ [root at network ~]# ovs-vsctl show 436a6114-d489-4160-b469-f088d66bd752 Bridge br-tun fail_mode: secure Port "vxlan-14000212" Interface "vxlan-14000212" type: vxlan options: {df_default="true", in_key=flow, local_ip="20.0.2.19", out_key=flow, remote_ip="20.0.2.18"} Port br-tun Interface br-tun type: internal Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Bridge br-int fail_mode: secure Port br-int Interface br-int type: internal Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex} Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Bridge br-ex Port br-ex Interface br-ex type: internal Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port "eth0" Interface "eth0" ovs_version: "2.3.1" [root at compute~]# ovs-vsctl show 8123433e-b477-4ef5-88aa-721487a4bd58 Bridge br-int fail_mode: secure Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex} Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Port br-int Interface br-int type: internal Bridge br-tun fail_mode: secure Port br-tun Interface br-tun type: internal Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Port "vxlan-14000213" Interface "vxlan-14000213" type: vxlan options: {df_default="true", in_key=flow, local_ip="20.0.2.18", out_key=flow, remote_ip="20.0.2.19"} Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port "eth0" Interface "eth0" Port br-ex Interface br-ex type: internal ovs_version: "2.3.1" On Sat, May 2, 2015 at 9:02 AM, Boris Derzhavets > wrote: Thank you once again it really works. [root at ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-list +----+----------------------------------------+-------+---------+ | ID | Hypervisor hostname | State | Status | +----+----------------------------------------+-------+---------+ | 1 | ip-192-169-142-127.ip.secureserver.net | up | enabled | | 2 | ip-192-169-142-137.ip.secureserver.net | up | enabled | +----+----------------------------------------+-------+---------+ [root at ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-servers ip-192-169-142-137.ip.secureserver.net +--------------------------------------+-------------------+---------------+----------------------------------------+ | ID | Name | Hypervisor ID | Hypervisor Hostname | +--------------------------------------+-------------------+---------------+----------------------------------------+ | 16ab7825-1403-442e-b3e2-7056d14398e0 | instance-00000002 | 2 | ip-192-169-142-137.ip.secureserver.net | | 5fa444c8-30b8-47c3-b073-6ce10dd83c5a | instance-00000004 | 2 | ip-192-169-142-137.ip.secureserver.net | +--------------------------------------+-------------------+---------------+----------------------------------------+ with only one issue:- during AIO run CONFIG_NEUTRON_OVS_TUNNEL_IF= during Compute Node setup CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1 and finally it results mess in ml2_vxlan_endpoints table. I had manually update ml2_vxlan_endpoints and restart neutron-openvswitch-agent.service on both nodes afterwards VMs on compute node obtained access to meta-data server. 
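For the record, the manual cleanup was roughly the following (only a sketch; '<wrong-tunnel-ip>' is a placeholder for whatever stale endpoint the select shows in your database, so check the rows first):

    # mysql neutron
    MariaDB [neutron]> select * from ml2_vxlan_endpoints;
    MariaDB [neutron]> delete from ml2_vxlan_endpoints where ip_address = '<wrong-tunnel-ip>';
    # then, on both nodes:
    # systemctl restart neutron-openvswitch-agent.service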
I also believe that synchronized delete records from tables "compute_nodes && services" ( along with disabling nova-compute on Controller) could turn AIO host into real Controller. Boris. ________________________________ Date: Fri, 1 May 2015 22:22:41 +0200 Subject: Re: [Rdo-list] RE(1) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 From: ak at cloudssky.com To: bderzhavets at hotmail.com CC: apevec at gmail.com; rdo-list at redhat.com I got the compute node working by adding the delorean-kilo.repo on compute node, yum updating the compute node, rebooted and extended the packstack file from the first AIO install with the IP of compute node and ran packstack again with NetworkManager enabled and did a second yum update on compute node before the 3rd packstack run, and now it works :-) In short, for RC2 we have to force by hand to get the nova-compute running on compute node, before running packstack from controller again from an existing AIO install. Now I have 2 compute nodes (controller AIO with compute + 2nd compute) and could spawn a 3rd cirros instance which landed on 2nd compute node. ssh'ing into the instances over the floating ip works fine too. Before running packstack again, I set: EXCLUDE_SERVERS= [root at csky01 ~(keystone_osx)]# virsh list --all Id Name Status ---------------------------------------------------- 2 instance-00000001 laufend --> means running in German 3 instance-00000002 laufend --> means running in German [root at csky06 ~]# virsh list --all Id Name Status ---------------------------------------------------- 2 instance-00000003 laufend --> means running in German == Nova managed services == +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+ | Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+ | 1 | nova-consoleauth | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 2 | nova-conductor | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 3 | nova-scheduler | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 4 | nova-compute | csky01.csg.net | nova | enabled | up | 2015-05-01T19:46:40.000000 | - | | 5 | nova-cert | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - | | 6 | nova-compute | csky06.csg.net | nova | enabled | up | 2015-05-01T19:46:38.000000 | - | +----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+ On Fri, May 1, 2015 at 9:02 AM, Boris Derzhavets > wrote: Ran packstack --debug --answer-file=./answer-fileRC2.txt 192.169.142.137_nova.pp.log.gz attached Boris ________________________________ From: bderzhavets at hotmail.com To: apevec at gmail.com Date: Fri, 1 May 2015 01:44:17 -0400 CC: rdo-list at redhat.com Subject: [Rdo-list] Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1 Follow instructions https://www.redhat.com/archives/rdo-list/2015-April/msg00254.html packstack fails :- Applying 192.169.142.127_nova.pp Applying 192.169.142.137_nova.pp 192.169.142.127_nova.pp: [ DONE ] 192.169.142.137_nova.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.169.142.137_nova.pp Error: Could not start Service[nova-compute]: 
Execution of '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for openstack-nova-compute.service failed. See 'systemctl status openstack-nova-compute.service' and 'journalctl -xn' for details. You will find full trace in log /var/tmp/packstack/20150501-081745-rIpCIr/manifests/192.169.142.137_nova.pp.log In both cases (RC2 or CI repos) on compute node 192.169.142.137 /var/log/nova/nova-compute.log reports :- 2015-05-01 08:21:41.354 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Delaying reconnect for 1.0 seconds... 2015-05-01 08:21:42.355 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Connecting to AMQP server on localhost:5672 2015-05-01 08:21:42.360 4999 ERROR oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 11 seconds. Seems like it is looking for AMQP Server at wrong host . Should be 192.169.142.127 On 192.169.142.127 :- [root at ip-192-169-142-127 ~]# netstat -lntp | grep 5672 ==> tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 14506/beam.smp tcp6 0 0 :::5672 :::* LISTEN 14506/beam.smp [root at ip-192-169-142-127 ~]# iptables-save | grep 5672 -A INPUT -s 192.169.142.127/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.127" -j ACCEPT -A INPUT -s 192.169.142.137/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.137" -j ACCEPT Answer-file is attached Thanks. Boris _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Wed May 13 21:15:05 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 13 May 2015 23:15:05 +0200 Subject: [Rdo-list] [meeting] RDO packaging meeting minutes (2015-05-13) In-Reply-To: <7C1824C61EE769448FCE74CD83F0CB4F5836473F@US70TWXCHMBA11.zam.alcatel-lucent.com> References: <7C1824C61EE769448FCE74CD83F0CB4F5836428B@US70TWXCHMBA11.zam.alcatel-lucent.com> <7C1824C61EE769448FCE74CD83F0CB4F5836473F@US70TWXCHMBA11.zam.alcatel-lucent.com> Message-ID: > I filed bug report 145818 on Cinder, it looks like some of the 1.2.0 changes will have to be backported to 1.1.X. > The last Cinderclient tag came in September 2014, it's like Kilo didn't exist for Cinderclient. Yeah, OpenStack releases make whooshing noise as they go by. https://bugs.launchpad.net/cinder/+bug/1454818 got resolved so upstream: marked Invalid, "use pip"... I've added comment, but it's not just cinderclient, heatclient is in the same situation: https://review.openstack.org/182672 > The 1.1.1 version is actually Juno, not sure if there is room for a Kilo release stream in the numbering plan. There is, 1.1.X tags/releases could be pushed to stable/kilo branch, there isn't stable/juno for cinderclient. This should be discussed at design summit: http://lists.openstack.org/pipermail/openstack-dev/2015-April/062953.html so we'll see what will come out of that. 
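To be concrete, pushing 1.1.X releases from a stable/kilo branch would be roughly the following release mechanics (just a sketch, assuming the branch gets cut from the existing 1.1.1 tag; none of this exists today):

    git checkout -b stable/kilo 1.1.1
    # cherry-pick the Kilo-needed fixes from master, then
    git tag -s 1.1.2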
Cheers, Alan From jcoufal at redhat.com Thu May 14 09:27:20 2015 From: jcoufal at redhat.com (Jaromir Coufal) Date: Thu, 14 May 2015 11:27:20 +0200 Subject: [Rdo-list] [RDO-Manager] Merged not CI'ed change of repositories which is causing deployment failure Message-ID: <55546A78.9040408@redhat.com> Yesterday there was a change of repositories merged in the instack-undercloud: https://review.gerrithub.io/#/c/233196. This change is not CI'ed (I just checked with Atilla) and it is breaking the current deployment of the overcloud. The issue is the version of tripleo-heat-templates: ORIGINAL: http://trunk-mgt.rdoproject.org/repos/current-passed-ci/ 0.8.4-post37 NEW: http://trunk-mgt.rdoproject.org/centos-kilo/current-passed-ci/ 0.8.5-post1 The new version apparently doesn't contain this patch: https://github.com/rdo-management/tripleo-heat-templates/commit/55619c68ba7523dcccb228bd788f49f8496791cf [stack at instack ~]$ rpm -q openstack-tripleo-heat-templates openstack-tripleo-heat-templates-0.8.5-post1.el7.centos.noarch [stack at instack ~]$ less /usr/share/openstack-tripleo-heat-templates/puppet/hieradata/controller.yaml Line 15 from the patch is missing. Until this packaging issue is fixed, the deployment will be blocked. -- Jarda From trown at redhat.com Thu May 14 10:00:47 2015 From: trown at redhat.com (John Trowbridge) Date: Thu, 14 May 2015 06:00:47 -0400 Subject: [Rdo-list] [RDO-Manager] Merged not CI'ed change of repositories which is causing deployment failure In-Reply-To: <55546A78.9040408@redhat.com> References: <55546A78.9040408@redhat.com> Message-ID: <5554724F.8000503@redhat.com> On 05/14/2015 05:27 AM, Jaromir Coufal wrote: > Yesterday there was a change of repositories merged in the > instack-undercloud: https://review.gerrithub.io/#/c/233196. This change > is not CI'ed (I just checked with Atilla) and it is breaking the current > deployment of the overcloud. > The change to instack-undercloud did pass CI. The problem was that the corresponding change to CI to use the new repo for promotion did not go through. There is a wedge if we want to change the repo location, as it is changed in three places (Delorean itself, instack-undercloud, and khaleesi). We got the first two changed, and then could not get the change to khaleesi passing and merged. This change could only happen at the beginning of a sprint, because of the risk of exactly this, and it should have happened last sprint when upstream kilo releases started getting cut. In any case, the below issue is resolved. > The issue is the version of tripleo-heat-templates: > ORIGINAL: http://trunk-mgt.rdoproject.org/repos/current-passed-ci/ > 0.8.4-post37 > NEW: http://trunk-mgt.rdoproject.org/centos-kilo/current-passed-ci/ > 0.8.5-post1 > 0.8.6-dev5 is now the version of T-H-T in that repo. > The new version apparently doesn't contain this patch: > https://github.com/rdo-management/tripleo-heat-templates/commit/55619c68ba7523dcccb228bd788f49f8496791cf > > > [stack at instack ~]$ rpm -q openstack-tripleo-heat-templates > openstack-tripleo-heat-templates-0.8.5-post1.el7.centos.noarch > [stack at instack ~]$ less > /usr/share/openstack-tripleo-heat-templates/puppet/hieradata/controller.yaml > > > Line 15 from the patch is missing. > > Until this packaging issue is fixed, the deployment will be blocked. > > -- Jarda Respectfully, John Trowbridge From hbrock at redhat.com Thu May 14 14:33:07 2015 From: hbrock at redhat.com (Hugh O.
Brock) Date: Thu, 14 May 2015 16:33:07 +0200 Subject: [Rdo-list] [RDO-Manager] Testing Day for RDO-Manager - Thursday, May 14 In-Reply-To: <55536C43.3060206@redhat.com> References: <55536C43.3060206@redhat.com> Message-ID: <20150514143307.GG11084@redhat.com> On Wed, May 13, 2015 at 05:22:43PM +0200, Jaromir Coufal wrote: > Dear All, > > as we indicated last week, we would like to invite you all to RDO-Manager > test day which is going to happen tomorrow. > > RDO-Manager Kilo Test Day - Part1: > * May 14, 2015 * > > https://www.rdoproject.org/RDO-Manager_test_day_Kilo > > We apologize for late announcement. Given this situation we are going to > have one test day tomorrow (sort of a smaller event) and you can expect > bigger test day after OpenStack Summit, in the beginning of June. > > If you can find some time to run through the deployment flow, please come > tomorrow and help us getting RDO-Manager as solid as possible. Thanks! > > See you tomorrow > -- Jarda I apologize to Jarda and to anyone who tried testing today. An ill-advised repository config change late last night broke us and we are just now getting the damage cleaned up. We'll be back again after Vancouver -- I apologize for the inconvenience. --Hugh -- == Hugh Brock, hbrock at redhat.com == == Senior Engineering Manager, Cloud Engineering == == RDO Manager: Install, configure, and scale OpenStack == == http://rdoproject.org == "I know that you believe you understand what you think I said, but I?m not sure you realize that what you heard is not what I meant." --Robert McCloskey From hguemar at fedoraproject.org Thu May 14 16:07:12 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 14 May 2015 18:07:12 +0200 Subject: [Rdo-list] [RDO] Packages reviews Message-ID: Hi folks, as I'm currently listing and reviewing pending reviews, I found out new reviews that were off radar. If you have currently pending reviews for RDO *and* RDO manager, please answer this thread with links. In the future, I encourage you to announce them on this list using the tag [package-review]. That will help us to identify them faster and process them in a timely fashion, especially when we have to sponsor new packagers. --- In the long term, I think about having trackers in bugzilla for openstack packaging (reviews/new packages, etc.), and will write-up few proposals to improve the not-so-perfect current process that we could discuss here. Regards, H. From rk at theep.net Thu May 14 19:39:09 2015 From: rk at theep.net (Robert Kukura) Date: Thu, 14 May 2015 15:39:09 -0400 Subject: [Rdo-list] openstack-neutron-gbp for delorean In-Reply-To: <5553300D.4050101@redhat.com> References: <5553300D.4050101@redhat.com> Message-ID: <5554F9DD.60008@theep.net> Hi Ihar, GBP development currently trails the main OpenStack development cycle a bit. Hopefully this lag will shrink substantially for Liberty. We expect to have a kilo-gbp-3 release out any day that will make sense to package for RDO and Fedora Kilo versions, and the final Kilo GBP release should follow soon after. Unfortunately, whats currently packaged still requires Juno. Can you explain what exactly it means to "move openstack-neutron-gbp development under delorean roof"? Does this mean that the Fedora packages would be generated from the RDO packages? I think that would be fine. If you will be at the OpenStack summit next week, we can discuss this there if you like. 
-Bob On 5/13/15 7:05 AM, Ihar Hrachyshka wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > Hi all, > > now that rpm-kilo is synced from delorean to fedora/master, I see > openstack-neutron-gbp fails due to unsatisfied dependencies (it > currently requires Juno neutron). > > I wonder whether that's a good time to move openstack-neutron-gbp > development under delorean roof, and leave fedora/master package as a > downstream just for koji and stuff. > > In that way, -gbp package would be able to receive more attention from > neutron maintainers, getting packaging updates in timely manner, and > with review and CI applied. > > I'm putting the Fedora package maintainer into CC. > > Robert, since you're the maintainer of the Fedora package, could you > please evaluate the option to move the package into Delorean? I think > it would serve both RDO and Fedora and GBP interest better. > > Comments? > Ihar > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2 > > iQEcBAEBCAAGBQJVUzANAAoJEC5aWaUY1u57NIoIAK8940/9xkjZUFvO0CjnPCAc > kiugpBVn0OHaCWPhgV87dGTE3eO+aiP9/a8rLs3BejO1NZ581lDqswJRxEfsMInY > EAa5r9Zzb7eR2EPyzBnqd180yYcSNFSGEh82DBktfmlJKn+wFmjP+FsjrmVgm+le > d4uS5IedcUJJEL8YKhE2+48xnD/J/Yt8EnxOLy26iM1dEEGRKBQKrL+64PemxRw5 > ae6v/lmhIVHOoZFB6qIxv2D2GlyYkXrDz+nmPuHt0kr97VkuYllwZmxcu7v7gJIG > VhT4cWs4A2UgwVG9Qb2hTPOfPMUVoDimeiwtChJdFZktSfz0Vj+8t2JhqUBjBEM= > =D4Vt > -----END PGP SIGNATURE----- From hguemar at fedoraproject.org Thu May 14 20:05:32 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 14 May 2015 22:05:32 +0200 Subject: [Rdo-list] openstack-neutron-gbp for delorean In-Reply-To: <5554F9DD.60008@theep.net> References: <5553300D.4050101@redhat.com> <5554F9DD.60008@theep.net> Message-ID: 2015-05-14 21:39 GMT+02:00 Robert Kukura : > Hi Ihar, > > GBP development currently trails the main OpenStack development cycle a bit. > Hopefully this lag will shrink substantially for Liberty. We expect to have > a kilo-gbp-3 release out any day that will make sense to package for RDO and > Fedora Kilo versions, and the final Kilo GBP release should follow soon > after. Unfortunately, whats currently packaged still requires Juno. > Thanks Robert for the heads-up. I'd rather package as early as possible GBP release, so that we could test it before the final release. So when kilo-gbp-3 is out, please update the package in rawhide and ping either apevec or I for RHEL/CentOS builds. > Can you explain what exactly it means to "move openstack-neutron-gbp > development under delorean roof"? Does this mean that the Fedora packages > would be generated from the RDO packages? I think that would be fine. If you > will be at the OpenStack summit next week, we can discuss this there if you > like. > > -Bob > > Nope, Delorean is our platform to build packages from master which allows us to find out and then fix early packaging issues before final releases. If you're going to Vancouver, please consider attending the RDO meetup ! https://etherpad.openstack.org/p/RDO_Vancouver If you have questions about Delorean, feel free to ask us, Derek -Delorean lead- will be there in Vancouver too. Regards, H. 
From jcoufal at redhat.com Fri May 15 09:56:28 2015 From: jcoufal at redhat.com (Jaromir Coufal) Date: Fri, 15 May 2015 11:56:28 +0200 Subject: [Rdo-list] [RDO-Manager] Testing Day for RDO-Manager - Thursday, May 14 In-Reply-To: <55536C43.3060206@redhat.com> References: <55536C43.3060206@redhat.com> Message-ID: <5555C2CC.4020500@redhat.com> Hi All, I would love to thank everybody who participated (despite the fact that we had slight complications in the morning) and helped with testing RDO-Manager deploying RDO Kilo. Cheers -- Jarda On 13/05/15 17:22, Jaromir Coufal wrote: > Dear All, > > as we indicated last week, we would like to invite you all to > the RDO-Manager test day which is going to happen tomorrow. > > RDO-Manager Kilo Test Day - Part1: > * May 14, 2015 * > > https://www.rdoproject.org/RDO-Manager_test_day_Kilo > > We apologize for the late announcement. Given this situation we are going to > have one test day tomorrow (sort of a smaller event) and you can expect > a bigger test day after the OpenStack Summit, in the beginning of June. > > If you can find some time to run through the deployment flow, please > come tomorrow and help us get RDO-Manager as solid as possible. Thanks! > > See you tomorrow > -- Jarda > > > Quick Links: > * Test Day Wiki: https://www.rdoproject.org/RDO-Manager_test_day_Kilo > * Notes: https://etherpad.openstack.org/p/rdo-manager_kilo_test_day > * RDO-Manager Website: http://www.rdoproject.org/RDO-Manager > * RDO-Manager Docs: http://docs.rdoproject.org/rdo-manager/master > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From apevec at gmail.com Fri May 15 11:50:57 2015 From: apevec at gmail.com (Alan Pevec) Date: Fri, 15 May 2015 13:50:57 +0200 Subject: [Rdo-list] openstack-neutron-gbp for delorean In-Reply-To: References: <5553300D.4050101@redhat.com> <5554F9DD.60008@theep.net> Message-ID: > If you have questions about Delorean, feel free to ask us, Derek > -Delorean lead- will > be there in Vancouver too. And we have docs! Please, before asking Derek, read https://www.rdoproject.org/packaging/rdo-packaging.html#master-pkg-guide and let us know or even send a PR if something is missing or unclear. Cheers, Alan From whayutin at redhat.com Fri May 15 13:00:19 2015 From: whayutin at redhat.com (whayutin) Date: Fri, 15 May 2015 09:00:19 -0400 Subject: [Rdo-list] [CI] [Khaleesi] Agenda for rdo meeting at summit Message-ID: <1431694819.2791.24.camel@redhat.com> Greetings, I've added an agenda item to the RDO meeting for developing community governance around khaleesi and OpenStack CI as it relates to RDO and OSP. https://etherpad.openstack.org/p/RDO_Vancouver * RDO/OSP Openstack CI [khaleesi] Governance * develop and adopt community rules for submissions and review * develop and adopt a set of best practices. * develop public documentation Anyone involved in CI should do their best to attend. Thanks!
From ihrachys at redhat.com Fri May 15 14:33:34 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Fri, 15 May 2015 16:33:34 +0200 Subject: [Rdo-list] openstack-neutron-gbp for delorean In-Reply-To: <5554F9DD.60008@theep.net> References: <5553300D.4050101@redhat.com> <5554F9DD.60008@theep.net> Message-ID: <555603BE.8080902@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 05/14/2015 09:39 PM, Robert Kukura wrote: > Hi Ihar, > > GBP development currently trails the main OpenStack development > cycle a bit. Hopefully this lag will shrink substantially for > Liberty. We expect to have a kilo-gbp-3 release out any day that > will make sense to package for RDO and Fedora Kilo versions, and > the final Kilo GBP release should follow soon after. Unfortunately, > whats currently packaged still requires Juno. > > Can you explain what exactly it means to "move > openstack-neutron-gbp development under delorean roof"? Does this > mean that the Fedora packages would be generated from the RDO > packages? I think that would be fine. If you will be at the > OpenStack summit next week, we can discuss this there if you like. It's not yet clear what will be the story of official Fedora packages if/when we exclusively switch to Delorean. It may end up as service packages (like neutron) being dropped from Fedora. (While Fedora RDO repos will still be available from RDO project.) Delorean does not exactly build releases but tracks branches (though I think there should be a way to generate a repo with particular release based on a tag; Derek?..) I will be in Vancouver, so we can chat there, and I will do my best to explain what Delorean is about. Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQEcBAEBCAAGBQJVVgO+AAoJEC5aWaUY1u57oWkH/RvdpU17egd6pC1W3AlmFAvF 6biOZPhXU+wZ+aseF7RJlAm7NFSYtDMNXwMv7HWhnP1oZXxxXGKCorWfX55V3kgK K7SILSftvmM0JYGfFVtEoqK13zXcI8O0FfRO41S22ZNX+lsIwq9sF3Ngbl+8D7T8 JIGn2MJ8ES3HHOKcydH6Xk7sqsezoKnOfLmYHRWY319Lfloy+lD5Z3cnEUT0h7mA MLCNqqg+ksd47ZlQfsNN+d3GibjfxDmBy5dtfr9EuiFp0Lz77Um8kozKekKPD1YV 64kyFmWXXWBaSaOmlrn5fvw7CF8k/VFl+ZVXHGnIM+Y4deMjHTWeP6K730TuUXM= =Peir -----END PGP SIGNATURE----- From apevec at gmail.com Fri May 15 16:40:37 2015 From: apevec at gmail.com (Alan Pevec) Date: Fri, 15 May 2015 18:40:37 +0200 Subject: [Rdo-list] openstack-neutron-gbp for delorean In-Reply-To: <555603BE.8080902@redhat.com> References: <5553300D.4050101@redhat.com> <5554F9DD.60008@theep.net> <555603BE.8080902@redhat.com> Message-ID: > It's not yet clear what will be the story of official Fedora packages > if/when we exclusively switch to Delorean. It may end up as service > packages (like neutron) being dropped from Fedora. (While Fedora RDO > repos will still be available from RDO project.) That's premature, let's keep this discussion in separate thread (I plan to update with +/- points for possible solutions) For now, dist-git for official RDO builds is in Fedora, GBP packages are already there and need to be updated in Fedora master branch to Kilo. > Delorean does not exactly build releases but tracks branches (though I > think there should be a way to generate a repo with particular release > based on a tag; Derek?..) Nope. I had locally patched Delorean to build from exactly RC tags but that doesn't make sense with Delorean design, which is continuous builds from a branch. 
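To illustrate the design point, the difference is just what gets checked out before each build (a sketch, not actual Delorean code, and the release tag shown is made up):

    # Delorean: always the moving branch tip; the resulting NVR embeds date+commit
    git checkout master && git pull
    # a release build: one fixed, reproducible point
    git checkout 1.0.0rc2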
Cheers, Alan From marius at remote-lab.net Sat May 16 12:34:57 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sat, 16 May 2015 14:34:57 +0200 Subject: [Rdo-list] Upgrade from Juno to Kilo Message-ID: Hi all, Are there any docs that describe steps for upgrading from Juno to Kilo ? Thanks, Marius From bderzhavets at hotmail.com Sat May 16 14:06:20 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sat, 16 May 2015 10:06:20 -0400 Subject: [Rdo-list] Upgrade from Juno to Kilo In-Reply-To: References: Message-ID: You are not the first person raising up this question, you may view https://ask.openstack.org/en/question/66214/migration-from-older-releases-to-kilo/ Personally , I share alfredcs at yahoo.com feed (3) Boris. > From: marius at remote-lab.net > Date: Sat, 16 May 2015 14:34:57 +0200 > To: rdo-list at redhat.com > Subject: [Rdo-list] Upgrade from Juno to Kilo > > Hi all, > > Are there any docs that describe steps for upgrading from Juno to Kilo ? > > Thanks, > Marius > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Sat May 16 15:29:04 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Sat, 16 May 2015 17:29:04 +0200 Subject: [Rdo-list] Upgrade from Juno to Kilo In-Reply-To: References: Message-ID: There isn't but anyone's welcome to start one on the wiki ! Regards, H. From hguemar at fedoraproject.org Mon May 18 15:00:03 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 18 May 2015 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting Message-ID: <20150518150003.3738660A94F1@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO packaging meeting on 2015-05-20 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging)) Every week on #rdo on freenode Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From rdo-info at redhat.com Mon May 18 21:00:38 2015 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 18 May 2015 21:00:38 +0000 Subject: [Rdo-list] [RDO] RDO blog roundup, week of May 18, 2016 Message-ID: <0000014d68d412d2-f3687d34-2573-4c31-a17c-18d474856ce1-000000@email.amazonses.com> rbowen started a discussion. RDO blog roundup, week of May 18, 2016 --- Follow the link below to check it out: https://www.rdoproject.org/forum/discussion/1016/rdo-blog-roundup-week-of-may-18-2016 Have a great day! From ichi.sara at gmail.com Tue May 19 07:48:58 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Tue, 19 May 2015 09:48:58 +0200 Subject: [Rdo-list] [Neutron] router can't ping external gateway Message-ID: Hey people, I have an issue with my networking. I connected my openstack to an external network I did all the changes required. But still my router can't reach the external gateway. 
=====ifcfg-br-ex DEVICE=br-ex DEVICETYPE=ovs TYPE=OVSBridge BOOTPROTO=static IPADDR=192.168.5.33 NETMASK=255.255.255.0 ONBOOT=yes GATEWAY=192.168.5.1 DNS1=8.8.8.8 DNS2=192.168.5.1 ====ifcfg-eth0 DEVICE=eth0 HWADDR=00:0c:29:a2:b1:b9 ONBOOT=yes TYPE=OVSPort NM_CONTROLLED=yes DEVICETYPE=ovs OVS_BRIDGE=br-ex ======[root at localhost ~(keystone_admin)]# ovs-vsctl show 19de58db-509d-4de8-bd88-9222019b13f1 Bridge br-int fail_mode: secure Port "tap8652132e-b8" tag: 1 Interface "tap8652132e-b8" type: internal Port br-int Interface br-int type: internal Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Bridge br-ex Port "qg-5f8ebe30-40" Interface "qg-5f8ebe30-40" type: internal Port "eth0" Interface "eth0" Port br-ex Interface br-ex type: internal Bridge br-tun Port "vxlan-c0a80520" Interface "vxlan-c0a80520" type: vxlan options: {df_default="true", in_key=flow, local_ip="192.168.5.33", out_key=flow, remote_ip="192.168.5.32"} Port br-tun Interface br-tun type: internal Port patch-int Interface patch-int type: patch options: {peer=patch-tun} ovs_version: "2.3.1" =====[root at localhost ~(keystone_admin)]# ping 192.168.5.1 PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. 64 bytes from 192.168.5.1: icmp_seq=1 ttl=64 time=1.76 ms 64 bytes from 192.168.5.1: icmp_seq=2 ttl=64 time=1.88 ms 64 bytes from 192.168.5.1: icmp_seq=3 ttl=64 time=1.45 ms ^C --- 192.168.5.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2002ms rtt min/avg/max/mdev = 1.452/1.699/1.880/0.187 ms [root at localhost ~(keystone_admin)]# ======[root at localhost ~(keystone_admin)]# ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 14: qg-5f8ebe30-40: mtu 1500 qdisc noqueue state UNKNOWN link/ether fa:16:3e:c2:1b:5e brd ff:ff:ff:ff:ff:ff inet 192.168.5.70/24 brd 192.168.5.255 scope global qg-5f8ebe30-40 valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:fec2:1b5e/64 scope link valid_lft forever preferred_lft forever [root at localhost ~(keystone_admin)]# ======[root at localhost ~(keystone_admin)]# ip r default via 192.168.5.1 dev br-ex default via 192.168.4.1 dev eth1 169.254.0.0/16 dev eth0 scope link metric 1002 169.254.0.0/16 dev eth1 scope link metric 1003 169.254.0.0/16 dev br-ex scope link metric 1005 192.168.4.0/24 dev eth1 proto kernel scope link src 192.168.4.14 192.168.5.0/24 dev br-ex proto kernel scope link src 192.168.5.33 [root at localhost ~(keystone_admin)]# ======[root at localhost ~(keystone_admin)]# ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r default via 192.168.5.1 dev qg-5f8ebe30-40 192.168.5.0/24 dev qg-5f8ebe30-40 proto kernel scope link src 192.168.5.70 [root at localhost ~(keystone_admin)]# ======[root at localhost ~(keystone_admin)]# ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. ^C --- 192.168.5.1 ping statistics --- 5 packets transmitted, 0 received, 100% packet loss, time 3999ms any hints?? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bderzhavets at hotmail.com Tue May 19 08:47:29 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Tue, 19 May 2015 04:47:29 -0400 Subject: [Rdo-list] [Neutron] router can't ping external gateway In-Reply-To: References: Message-ID: There is one thing , which I clearly see . It is qrouter-namespace misconfiguration. There is no qr-xxxxx bridge attached to br-int Picture , in general, should look like this ubuntu at ubuntu-System:~$ sudo ip netns exec qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.12.15 0.0.0.0 UG 0 0 0 qg-a753a8f5-c8 10.254.1.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-393d9f71-53 192.168.12.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-a753a8f5-c8 ubuntu at ubuntu-System:~$ sudo ip netns exec qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c ifconfig lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) qg-a753a8f5-c8 Link encap:Ethernet HWaddr fa:16:3e:a2:11:b4 inet addr:192.168.12.150 Bcast:192.168.12.255 Mask:255.255.255.0 inet6 addr: fe80::f816:3eff:fea2:11b4/64 Scope:Link UP BROADCAST RUNNING MTU:1500 Metric:1 RX packets:24504 errors:0 dropped:0 overruns:0 frame:0 TX packets:17367 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:24328699 (24.3 MB) TX bytes:1443691 (1.4 MB) qr-393d9f71-53 Link encap:Ethernet HWaddr fa:16:3e:9e:ec:01 inet addr:10.254.1.1 Bcast:10.254.1.255 Mask:255.255.255.0 inet6 addr: fe80::f816:3eff:fe9e:ec01/64 Scope:Link UP BROADCAST RUNNING MTU:1500 Metric:1 RX packets:22487 errors:0 dropped:5 overruns:0 frame:0 TX packets:24736 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2379287 (2.3 MB) TX bytes:24338711 (24.3 MB) I would also advise you to post a question also on ask.openstack.org Boris. Date: Tue, 19 May 2015 09:48:58 +0200 From: ichi.sara at gmail.com To: rdo-list at redhat.com Subject: [Rdo-list] [Neutron] router can't ping external gateway Hey people, I have an issue with my networking. I connected my openstack to an external network I did all the changes required. But still my router can't reach the external gateway. 
=====ifcfg-br-ex DEVICE=br-ex DEVICETYPE=ovs TYPE=OVSBridge BOOTPROTO=static IPADDR=192.168.5.33 NETMASK=255.255.255.0 ONBOOT=yes GATEWAY=192.168.5.1 DNS1=8.8.8.8 DNS2=192.168.5.1 ====ifcfg-eth0 DEVICE=eth0 HWADDR=00:0c:29:a2:b1:b9 ONBOOT=yes TYPE=OVSPort NM_CONTROLLED=yes DEVICETYPE=ovs OVS_BRIDGE=br-ex ======[root at localhost ~(keystone_admin)]# ovs-vsctl show 19de58db-509d-4de8-bd88-9222019b13f1 Bridge br-int fail_mode: secure Port "tap8652132e-b8" tag: 1 Interface "tap8652132e-b8" type: internal Port br-int Interface br-int type: internal Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Bridge br-ex Port "qg-5f8ebe30-40" Interface "qg-5f8ebe30-40" type: internal Port "eth0" Interface "eth0" Port br-ex Interface br-ex type: internal Bridge br-tun Port "vxlan-c0a80520" Interface "vxlan-c0a80520" type: vxlan options: {df_default="true", in_key=flow, local_ip="192.168.5.33", out_key=flow, remote_ip="192.168.5.32"} Port br-tun Interface br-tun type: internal Port patch-int Interface patch-int type: patch options: {peer=patch-tun} ovs_version: "2.3.1" =====[root at localhost ~(keystone_admin)]# ping 192.168.5.1 PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. 64 bytes from 192.168.5.1: icmp_seq=1 ttl=64 time=1.76 ms 64 bytes from 192.168.5.1: icmp_seq=2 ttl=64 time=1.88 ms 64 bytes from 192.168.5.1: icmp_seq=3 ttl=64 time=1.45 ms ^C --- 192.168.5.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2002ms rtt min/avg/max/mdev = 1.452/1.699/1.880/0.187 ms [root at localhost ~(keystone_admin)]# ======[root at localhost ~(keystone_admin)]# ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 14: qg-5f8ebe30-40: mtu 1500 qdisc noqueue state UNKNOWN link/ether fa:16:3e:c2:1b:5e brd ff:ff:ff:ff:ff:ff inet 192.168.5.70/24 brd 192.168.5.255 scope global qg-5f8ebe30-40 valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:fec2:1b5e/64 scope link valid_lft forever preferred_lft forever [root at localhost ~(keystone_admin)]# ======[root at localhost ~(keystone_admin)]# ip r default via 192.168.5.1 dev br-ex default via 192.168.4.1 dev eth1 169.254.0.0/16 dev eth0 scope link metric 1002 169.254.0.0/16 dev eth1 scope link metric 1003 169.254.0.0/16 dev br-ex scope link metric 1005 192.168.4.0/24 dev eth1 proto kernel scope link src 192.168.4.14 192.168.5.0/24 dev br-ex proto kernel scope link src 192.168.5.33 [root at localhost ~(keystone_admin)]# ======[root at localhost ~(keystone_admin)]# ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r default via 192.168.5.1 dev qg-5f8ebe30-40 192.168.5.0/24 dev qg-5f8ebe30-40 proto kernel scope link src 192.168.5.70 [root at localhost ~(keystone_admin)]# ======[root at localhost ~(keystone_admin)]# ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. ^C --- 192.168.5.1 ping statistics --- 5 packets transmitted, 0 received, 100% packet loss, time 3999ms any hints?? _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ichi.sara at gmail.com Tue May 19 08:55:25 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Tue, 19 May 2015 10:55:25 +0200 Subject: [Rdo-list] [Neutron] router can't ping external gateway In-Reply-To: References: Message-ID: You are right. I don't have any idea how I can fix this. I will post to ask.openstack.org and hope someone will help. Apparently people are busy with the openstack summit so ... 2015-05-19 10:47 GMT+02:00 Boris Derzhavets : > There is one thing , which I clearly see . It is qrouter-namespace > misconfiguration. There is no qr-xxxxx bridge attached to br-int > Picture , in general, should look like this > > ubuntu at ubuntu-System:~$ sudo ip netns exec > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c route -n > > Kernel IP routing table > Destination Gateway Genmask Flags Metric Ref Use > Iface > 0.0.0.0 192.168.12.15 0.0.0.0 UG 0 0 0 > qg-a753a8f5-c8 > 10.254.1.0 0.0.0.0 255.255.255.0 U 0 0 0 > qr-393d9f71-53 > 192.168.12.0 0.0.0.0 255.255.255.0 U 0 0 0 > qg-a753a8f5-c8 > > ubuntu at ubuntu-System:~$ sudo ip netns exec > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c ifconfig > lo Link encap:Local Loopback > inet addr:127.0.0.1 Mask:255.0.0.0 > inet6 addr: ::1/128 Scope:Host > UP LOOPBACK RUNNING MTU:65536 Metric:1 > RX packets:0 errors:0 dropped:0 overruns:0 frame:0 > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:0 > RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) > > qg-a753a8f5-c8 Link encap:Ethernet HWaddr fa:16:3e:a2:11:b4 > inet addr:192.168.12.150 Bcast:192.168.12.255 > Mask:255.255.255.0 > inet6 addr: fe80::f816:3eff:fea2:11b4/64 Scope:Link > UP BROADCAST RUNNING MTU:1500 Metric:1 > RX packets:24504 errors:0 dropped:0 overruns:0 frame:0 > TX packets:17367 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:0 > RX bytes:24328699 (24.3 MB) TX bytes:1443691 (1.4 MB) > > qr-393d9f71-53 Link encap:Ethernet HWaddr fa:16:3e:9e:ec:01 > inet addr:10.254.1.1 Bcast:10.254.1.255 Mask:255.255.255.0 > inet6 addr: fe80::f816:3eff:fe9e:ec01/64 Scope:Link > UP BROADCAST RUNNING MTU:1500 Metric:1 > RX packets:22487 errors:0 dropped:5 overruns:0 frame:0 > TX packets:24736 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:0 > RX bytes:2379287 (2.3 MB) TX bytes:24338711 (24.3 MB) > > I would also advise you to post a question also on ask.openstack.org > > Boris. > > > ------------------------------ > Date: Tue, 19 May 2015 09:48:58 +0200 > From: ichi.sara at gmail.com > To: rdo-list at redhat.com > Subject: [Rdo-list] [Neutron] router can't ping external gateway > > > Hey people, > I have an issue with my networking. I connected my openstack to an > external network I did all the changes required. But still my router can't > reach the external gateway. 
> > =====ifcfg-br-ex > DEVICE=br-ex > DEVICETYPE=ovs > TYPE=OVSBridge > BOOTPROTO=static > IPADDR=192.168.5.33 > NETMASK=255.255.255.0 > ONBOOT=yes > GATEWAY=192.168.5.1 > DNS1=8.8.8.8 > DNS2=192.168.5.1 > > > ====ifcfg-eth0 > DEVICE=eth0 > HWADDR=00:0c:29:a2:b1:b9 > ONBOOT=yes > TYPE=OVSPort > NM_CONTROLLED=yes > DEVICETYPE=ovs > OVS_BRIDGE=br-ex > > ======[root at localhost ~(keystone_admin)]# ovs-vsctl show > 19de58db-509d-4de8-bd88-9222019b13f1 > Bridge br-int > fail_mode: secure > Port "tap8652132e-b8" > tag: 1 > Interface "tap8652132e-b8" > type: internal > Port br-int > Interface br-int > type: internal > Port patch-tun > Interface patch-tun > type: patch > options: {peer=patch-int} > Bridge br-ex > Port "qg-5f8ebe30-40" > Interface "qg-5f8ebe30-40" > type: internal > Port "eth0" > Interface "eth0" > Port br-ex > Interface br-ex > type: internal > Bridge br-tun > Port "vxlan-c0a80520" > Interface "vxlan-c0a80520" > type: vxlan > options: {df_default="true", in_key=flow, > local_ip="192.168.5.33", out_key=flow, remote_ip="192.168.5.32"} > Port br-tun > Interface br-tun > type: internal > Port patch-int > Interface patch-int > type: patch > options: {peer=patch-tun} > ovs_version: "2.3.1" > > =====[root at localhost ~(keystone_admin)]# ping 192.168.5.1 > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > 64 bytes from 192.168.5.1: icmp_seq=1 ttl=64 time=1.76 ms > 64 bytes from 192.168.5.1: icmp_seq=2 ttl=64 time=1.88 ms > 64 bytes from 192.168.5.1: icmp_seq=3 ttl=64 time=1.45 ms > ^C > --- 192.168.5.1 ping statistics --- > 3 packets transmitted, 3 received, 0% packet loss, time 2002ms > rtt min/avg/max/mdev = 1.452/1.699/1.880/0.187 ms > [root at localhost ~(keystone_admin)]# > > ======[root at localhost ~(keystone_admin)]# ip netns exec > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 14: qg-5f8ebe30-40: mtu 1500 qdisc > noqueue state UNKNOWN > link/ether fa:16:3e:c2:1b:5e brd ff:ff:ff:ff:ff:ff > inet 192.168.5.70/24 brd 192.168.5.255 scope global qg-5f8ebe30-40 > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fec2:1b5e/64 scope link > valid_lft forever preferred_lft forever > [root at localhost ~(keystone_admin)]# > > > ======[root at localhost ~(keystone_admin)]# ip r > default via 192.168.5.1 dev br-ex > default via 192.168.4.1 dev eth1 > 169.254.0.0/16 dev eth0 scope link metric 1002 > 169.254.0.0/16 dev eth1 scope link metric 1003 > 169.254.0.0/16 dev br-ex scope link metric 1005 > 192.168.4.0/24 dev eth1 proto kernel scope link src 192.168.4.14 > 192.168.5.0/24 dev br-ex proto kernel scope link src 192.168.5.33 > [root at localhost ~(keystone_admin)]# > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > default via 192.168.5.1 dev qg-5f8ebe30-40 > 192.168.5.0/24 dev qg-5f8ebe30-40 proto kernel scope link src > 192.168.5.70 > [root at localhost ~(keystone_admin)]# > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > ^C > --- 192.168.5.1 ping statistics --- > 5 packets transmitted, 0 received, 100% packet loss, time 3999ms > > any hints?? 
> > > > > > _______________________________________________ Rdo-list mailing list > Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To > unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ichi.sara at gmail.com Tue May 19 09:58:23 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Tue, 19 May 2015 11:58:23 +0200 Subject: [Rdo-list] [Neutron] router can't ping external gateway In-Reply-To: References: Message-ID: can you show me your plugin.ini file? /etc/neutron/plugin.ini and the other file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini 2015-05-19 10:47 GMT+02:00 Boris Derzhavets : > There is one thing , which I clearly see . It is qrouter-namespace > misconfiguration. There is no qr-xxxxx bridge attached to br-int > Picture , in general, should look like this > > ubuntu at ubuntu-System:~$ sudo ip netns exec > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c route -n > > Kernel IP routing table > Destination Gateway Genmask Flags Metric Ref Use > Iface > 0.0.0.0 192.168.12.15 0.0.0.0 UG 0 0 0 > qg-a753a8f5-c8 > 10.254.1.0 0.0.0.0 255.255.255.0 U 0 0 0 > qr-393d9f71-53 > 192.168.12.0 0.0.0.0 255.255.255.0 U 0 0 0 > qg-a753a8f5-c8 > > ubuntu at ubuntu-System:~$ sudo ip netns exec > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c ifconfig > lo Link encap:Local Loopback > inet addr:127.0.0.1 Mask:255.0.0.0 > inet6 addr: ::1/128 Scope:Host > UP LOOPBACK RUNNING MTU:65536 Metric:1 > RX packets:0 errors:0 dropped:0 overruns:0 frame:0 > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:0 > RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) > > qg-a753a8f5-c8 Link encap:Ethernet HWaddr fa:16:3e:a2:11:b4 > inet addr:192.168.12.150 Bcast:192.168.12.255 > Mask:255.255.255.0 > inet6 addr: fe80::f816:3eff:fea2:11b4/64 Scope:Link > UP BROADCAST RUNNING MTU:1500 Metric:1 > RX packets:24504 errors:0 dropped:0 overruns:0 frame:0 > TX packets:17367 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:0 > RX bytes:24328699 (24.3 MB) TX bytes:1443691 (1.4 MB) > > qr-393d9f71-53 Link encap:Ethernet HWaddr fa:16:3e:9e:ec:01 > inet addr:10.254.1.1 Bcast:10.254.1.255 Mask:255.255.255.0 > inet6 addr: fe80::f816:3eff:fe9e:ec01/64 Scope:Link > UP BROADCAST RUNNING MTU:1500 Metric:1 > RX packets:22487 errors:0 dropped:5 overruns:0 frame:0 > TX packets:24736 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:0 > RX bytes:2379287 (2.3 MB) TX bytes:24338711 (24.3 MB) > > I would also advise you to post a question also on ask.openstack.org > > Boris. > > > ------------------------------ > Date: Tue, 19 May 2015 09:48:58 +0200 > From: ichi.sara at gmail.com > To: rdo-list at redhat.com > Subject: [Rdo-list] [Neutron] router can't ping external gateway > > > Hey people, > I have an issue with my networking. I connected my openstack to an > external network I did all the changes required. But still my router can't > reach the external gateway. 
> > =====ifcfg-br-ex > DEVICE=br-ex > DEVICETYPE=ovs > TYPE=OVSBridge > BOOTPROTO=static > IPADDR=192.168.5.33 > NETMASK=255.255.255.0 > ONBOOT=yes > GATEWAY=192.168.5.1 > DNS1=8.8.8.8 > DNS2=192.168.5.1 > > > ====ifcfg-eth0 > DEVICE=eth0 > HWADDR=00:0c:29:a2:b1:b9 > ONBOOT=yes > TYPE=OVSPort > NM_CONTROLLED=yes > DEVICETYPE=ovs > OVS_BRIDGE=br-ex > > ======[root at localhost ~(keystone_admin)]# ovs-vsctl show > 19de58db-509d-4de8-bd88-9222019b13f1 > Bridge br-int > fail_mode: secure > Port "tap8652132e-b8" > tag: 1 > Interface "tap8652132e-b8" > type: internal > Port br-int > Interface br-int > type: internal > Port patch-tun > Interface patch-tun > type: patch > options: {peer=patch-int} > Bridge br-ex > Port "qg-5f8ebe30-40" > Interface "qg-5f8ebe30-40" > type: internal > Port "eth0" > Interface "eth0" > Port br-ex > Interface br-ex > type: internal > Bridge br-tun > Port "vxlan-c0a80520" > Interface "vxlan-c0a80520" > type: vxlan > options: {df_default="true", in_key=flow, > local_ip="192.168.5.33", out_key=flow, remote_ip="192.168.5.32"} > Port br-tun > Interface br-tun > type: internal > Port patch-int > Interface patch-int > type: patch > options: {peer=patch-tun} > ovs_version: "2.3.1" > > =====[root at localhost ~(keystone_admin)]# ping 192.168.5.1 > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > 64 bytes from 192.168.5.1: icmp_seq=1 ttl=64 time=1.76 ms > 64 bytes from 192.168.5.1: icmp_seq=2 ttl=64 time=1.88 ms > 64 bytes from 192.168.5.1: icmp_seq=3 ttl=64 time=1.45 ms > ^C > --- 192.168.5.1 ping statistics --- > 3 packets transmitted, 3 received, 0% packet loss, time 2002ms > rtt min/avg/max/mdev = 1.452/1.699/1.880/0.187 ms > [root at localhost ~(keystone_admin)]# > > ======[root at localhost ~(keystone_admin)]# ip netns exec > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 14: qg-5f8ebe30-40: mtu 1500 qdisc > noqueue state UNKNOWN > link/ether fa:16:3e:c2:1b:5e brd ff:ff:ff:ff:ff:ff > inet 192.168.5.70/24 brd 192.168.5.255 scope global qg-5f8ebe30-40 > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fec2:1b5e/64 scope link > valid_lft forever preferred_lft forever > [root at localhost ~(keystone_admin)]# > > > ======[root at localhost ~(keystone_admin)]# ip r > default via 192.168.5.1 dev br-ex > default via 192.168.4.1 dev eth1 > 169.254.0.0/16 dev eth0 scope link metric 1002 > 169.254.0.0/16 dev eth1 scope link metric 1003 > 169.254.0.0/16 dev br-ex scope link metric 1005 > 192.168.4.0/24 dev eth1 proto kernel scope link src 192.168.4.14 > 192.168.5.0/24 dev br-ex proto kernel scope link src 192.168.5.33 > [root at localhost ~(keystone_admin)]# > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > default via 192.168.5.1 dev qg-5f8ebe30-40 > 192.168.5.0/24 dev qg-5f8ebe30-40 proto kernel scope link src > 192.168.5.70 > [root at localhost ~(keystone_admin)]# > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > ^C > --- 192.168.5.1 ping statistics --- > 5 packets transmitted, 0 received, 100% packet loss, time 3999ms > > any hints?? 
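For comparison while going through those two files: on a Kilo-era packstack
setup, an external flat network over br-ex is typically wired up with
settings along the following lines. The section names are the usual ones,
but the values (in particular the "extnet" label) are illustrative and only
assume the br-ex/eth0 layout shown above, so treat this as a sketch rather
than a known-good copy of this installation's config.

# /etc/neutron/plugin.ini (the ML2 config)
[ml2]
type_drivers = vxlan,flat
tenant_network_types = vxlan

[ml2_type_flat]
flat_networks = *

# /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
local_ip = 192.168.5.33
bridge_mappings = extnet:br-ex

# and an external network created against that mapping:
neutron net-create external_network --provider:network_type flat \
  --provider:physical_network extnet --router:external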
>
>
>
>
> _______________________________________________ Rdo-list mailing list
> Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To
> unsubscribe: rdo-list-unsubscribe at redhat.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From ichi.sara at gmail.com Tue May 19 10:12:30 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Tue, 19 May 2015 12:12:30 +0200
Subject: [Rdo-list] [Neutron] router can't ping external gateway
In-Reply-To: 
References: 
Message-ID: 

====updates

I have deleted my networks, rebooted my machines and configured another
network. Now I can see the qr interface mapped to the router but still
can't ping the external gateway:

====[root at localhost ~(keystone_admin)]# ip netns exec
qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r
default via 192.168.5.1 dev qg-e1b584b4-db
10.0.0.0/24 dev qr-7b330e0e-5c proto kernel scope link src 10.0.0.1
192.168.5.0/24 dev qg-e1b584b4-db proto kernel scope link src 192.168.5.70

====[root at localhost ~(keystone_admin)]# ip netns exec
qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
12: qg-e1b584b4-db: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:68:83:f8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.70/24 brd 192.168.5.255 scope global qg-e1b584b4-db
       valid_lft forever preferred_lft forever
    inet 192.168.5.73/32 brd 192.168.5.73 scope global qg-e1b584b4-db
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe68:83f8/64 scope link
       valid_lft forever preferred_lft forever
13: qr-7b330e0e-5c: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:92:9c:90 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-7b330e0e-5c
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe92:9c90/64 scope link
       valid_lft forever preferred_lft forever


=====[root at localhost ~(keystone_admin)]# ip netns exec
qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1
PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data.
From 192.168.5.70 icmp_seq=10 Destination Host Unreachable
From 192.168.5.70 icmp_seq=11 Destination Host Unreachable
From 192.168.5.70 icmp_seq=12 Destination Host Unreachable
From 192.168.5.70 icmp_seq=13 Destination Host Unreachable
From 192.168.5.70 icmp_seq=14 Destination Host Unreachable
From 192.168.5.70 icmp_seq=15 Destination Host Unreachable
From 192.168.5.70 icmp_seq=16 Destination Host Unreachable
From 192.168.5.70 icmp_seq=17 Destination Host Unreachable


=====[root at localhost ~(keystone_admin)]# ovs-vsctl show
19de58db-509d-4de8-bd88-9222019b13f1
    Bridge br-int
        fail_mode: secure
        Port "tap2decc1bc-bf"
            tag: 2
            Interface "tap2decc1bc-bf"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-7b330e0e-5c"
            tag: 2
            Interface "qr-7b330e0e-5c"
                type: internal
        Port "qvo164afbd4-0c"
            tag: 2
            Interface "qvo164afbd4-0c"
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-e1b584b4-db"
            Interface "qg-e1b584b4-db"
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-c0a80520"
            Interface "vxlan-c0a80520"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.5.33", out_key=flow, remote_ip="192.168.5.32"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.3.1"

2015-05-19 11:58 GMT+02:00 ICHIBA Sara :

> can you show me your plugin.ini file? /etc/neutron/plugin.ini and the
> other file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
>
>
> 2015-05-19 10:47 GMT+02:00 Boris Derzhavets :
>
>> There is one thing , which I clearly see . It is qrouter-namespace
>> misconfiguration. There is no qr-xxxxx bridge attached to br-int
>> Picture , in general, should look like this
>>
>> ubuntu at ubuntu-System:~$ sudo ip netns exec
>> qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c route -n
>>
>> Kernel IP routing table
>> Destination     Gateway         Genmask         Flags Metric Ref    Use
>> Iface
>> 0.0.0.0         192.168.12.15   0.0.0.0         UG    0      0        0
>> qg-a753a8f5-c8
>> 10.254.1.0      0.0.0.0         255.255.255.0   U     0      0        0
>> qr-393d9f71-53
>> 192.168.12.0    0.0.0.0         255.255.255.0   U     0      0        0
>> qg-a753a8f5-c8
>>
>> ubuntu at ubuntu-System:~$ sudo ip netns exec
>> qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c ifconfig
>> lo        Link encap:Local Loopback
>>           inet addr:127.0.0.1  Mask:255.0.0.0
>>           inet6 addr: ::1/128 Scope:Host
>>           UP LOOPBACK RUNNING  MTU:65536  Metric:1
>>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:0
>>           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>>
>> qg-a753a8f5-c8 Link encap:Ethernet  HWaddr fa:16:3e:a2:11:b4
>>           inet addr:192.168.12.150  Bcast:192.168.12.255
>> Mask:255.255.255.0
>>           inet6 addr: fe80::f816:3eff:fea2:11b4/64 Scope:Link
>>           UP BROADCAST RUNNING  MTU:1500  Metric:1
>>           RX packets:24504 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:17367 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:0
>>           RX bytes:24328699 (24.3 MB)  TX bytes:1443691 (1.4 MB)
>>
>> qr-393d9f71-53 Link encap:Ethernet  HWaddr fa:16:3e:9e:ec:01
>>           inet addr:10.254.1.1  Bcast:10.254.1.255  Mask:255.255.255.0
>>           inet6 addr: fe80::f816:3eff:fe9e:ec01/64 Scope:Link
>>           UP BROADCAST RUNNING  MTU:1500  Metric:1
>>           RX packets:22487 errors:0 dropped:5 overruns:0 frame:0
>>           TX packets:24736 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:0
>>           RX bytes:2379287 (2.3 MB)  TX bytes:24338711 (24.3 MB)
>>
>> I would also advise you to post a question also
on ask.openstack.org >> >> Boris. >> >> >> ------------------------------ >> Date: Tue, 19 May 2015 09:48:58 +0200 >> From: ichi.sara at gmail.com >> To: rdo-list at redhat.com >> Subject: [Rdo-list] [Neutron] router can't ping external gateway >> >> >> Hey people, >> I have an issue with my networking. I connected my openstack to an >> external network I did all the changes required. But still my router can't >> reach the external gateway. >> >> =====ifcfg-br-ex >> DEVICE=br-ex >> DEVICETYPE=ovs >> TYPE=OVSBridge >> BOOTPROTO=static >> IPADDR=192.168.5.33 >> NETMASK=255.255.255.0 >> ONBOOT=yes >> GATEWAY=192.168.5.1 >> DNS1=8.8.8.8 >> DNS2=192.168.5.1 >> >> >> ====ifcfg-eth0 >> DEVICE=eth0 >> HWADDR=00:0c:29:a2:b1:b9 >> ONBOOT=yes >> TYPE=OVSPort >> NM_CONTROLLED=yes >> DEVICETYPE=ovs >> OVS_BRIDGE=br-ex >> >> ======[root at localhost ~(keystone_admin)]# ovs-vsctl show >> 19de58db-509d-4de8-bd88-9222019b13f1 >> Bridge br-int >> fail_mode: secure >> Port "tap8652132e-b8" >> tag: 1 >> Interface "tap8652132e-b8" >> type: internal >> Port br-int >> Interface br-int >> type: internal >> Port patch-tun >> Interface patch-tun >> type: patch >> options: {peer=patch-int} >> Bridge br-ex >> Port "qg-5f8ebe30-40" >> Interface "qg-5f8ebe30-40" >> type: internal >> Port "eth0" >> Interface "eth0" >> Port br-ex >> Interface br-ex >> type: internal >> Bridge br-tun >> Port "vxlan-c0a80520" >> Interface "vxlan-c0a80520" >> type: vxlan >> options: {df_default="true", in_key=flow, >> local_ip="192.168.5.33", out_key=flow, remote_ip="192.168.5.32"} >> Port br-tun >> Interface br-tun >> type: internal >> Port patch-int >> Interface patch-int >> type: patch >> options: {peer=patch-tun} >> ovs_version: "2.3.1" >> >> =====[root at localhost ~(keystone_admin)]# ping 192.168.5.1 >> PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. 
>> 64 bytes from 192.168.5.1: icmp_seq=1 ttl=64 time=1.76 ms >> 64 bytes from 192.168.5.1: icmp_seq=2 ttl=64 time=1.88 ms >> 64 bytes from 192.168.5.1: icmp_seq=3 ttl=64 time=1.45 ms >> ^C >> --- 192.168.5.1 ping statistics --- >> 3 packets transmitted, 3 received, 0% packet loss, time 2002ms >> rtt min/avg/max/mdev = 1.452/1.699/1.880/0.187 ms >> [root at localhost ~(keystone_admin)]# >> >> ======[root at localhost ~(keystone_admin)]# ip netns exec >> qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a >> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >> inet 127.0.0.1/8 scope host lo >> valid_lft forever preferred_lft forever >> inet6 ::1/128 scope host >> valid_lft forever preferred_lft forever >> 14: qg-5f8ebe30-40: mtu 1500 qdisc >> noqueue state UNKNOWN >> link/ether fa:16:3e:c2:1b:5e brd ff:ff:ff:ff:ff:ff >> inet 192.168.5.70/24 brd 192.168.5.255 scope global qg-5f8ebe30-40 >> valid_lft forever preferred_lft forever >> inet6 fe80::f816:3eff:fec2:1b5e/64 scope link >> valid_lft forever preferred_lft forever >> [root at localhost ~(keystone_admin)]# >> >> >> ======[root at localhost ~(keystone_admin)]# ip r >> default via 192.168.5.1 dev br-ex >> default via 192.168.4.1 dev eth1 >> 169.254.0.0/16 dev eth0 scope link metric 1002 >> 169.254.0.0/16 dev eth1 scope link metric 1003 >> 169.254.0.0/16 dev br-ex scope link metric 1005 >> 192.168.4.0/24 dev eth1 proto kernel scope link src 192.168.4.14 >> 192.168.5.0/24 dev br-ex proto kernel scope link src 192.168.5.33 >> [root at localhost ~(keystone_admin)]# >> >> >> ======[root at localhost ~(keystone_admin)]# ip netns exec >> qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r >> default via 192.168.5.1 dev qg-5f8ebe30-40 >> 192.168.5.0/24 dev qg-5f8ebe30-40 proto kernel scope link src >> 192.168.5.70 >> [root at localhost ~(keystone_admin)]# >> >> >> ======[root at localhost ~(keystone_admin)]# ip netns exec >> qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 >> PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. >> ^C >> --- 192.168.5.1 ping statistics --- >> 5 packets transmitted, 0 received, 100% packet loss, time 3999ms >> >> any hints?? >> >> >> >> >> >> _______________________________________________ Rdo-list mailing list >> Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To >> unsubscribe: rdo-list-unsubscribe at redhat.com >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcornea at redhat.com Tue May 19 10:50:45 2015 From: mcornea at redhat.com (Marius Cornea) Date: Tue, 19 May 2015 06:50:45 -0400 (EDT) Subject: [Rdo-list] [Neutron] router can't ping external gateway In-Reply-To: References: Message-ID: <1614651306.1136287.1432032645602.JavaMail.zimbra@redhat.com> Hi, Try to see if any of the ICMP requests leave the eth0 interface like 'tcpdump -i eth0 icmp' while pinging 192.168.5.1 from the router namespace. Thanks, Marius ----- Original Message ----- > From: "ICHIBA Sara" > To: "Boris Derzhavets" , rdo-list at redhat.com > Sent: Tuesday, May 19, 2015 12:12:30 PM > Subject: Re: [Rdo-list] [Neutron] router can't ping external gateway > > ====updates > > I have deleted my networks, rebooted my machines and configured an other > network. 
Now I can see the qr bridge mapped to the router but still can't > ping the external gateway: > > ====[root at localhost ~(keystone_admin)]# ip netns exec > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > default via 192.168.5.1 dev qg-e1b584b4-db > 10.0.0.0/24 dev qr-7b330e0e-5c proto kernel scope link src 10.0.0.1 > 192.168.5.0/24 dev qg-e1b584b4-db proto kernel scope link src 192.168.5.70 > > ====[root at localhost ~(keystone_admin)]# ip netns exec > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 12: qg-e1b584b4-db: mtu 1500 qdisc noqueue > state UNKNOWN > link/ether fa:16:3e:68:83:f8 brd ff:ff:ff:ff:ff:ff > inet 192.168.5.70/24 brd 192.168.5.255 scope global qg-e1b584b4-db > valid_lft forever preferred_lft forever > inet 192.168.5.73/32 brd 192.168.5.73 scope global qg-e1b584b4-db > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fe68:83f8/64 scope link > valid_lft forever preferred_lft forever > 13: qr-7b330e0e-5c: mtu 1500 qdisc noqueue > state UNKNOWN > link/ether fa:16:3e:92:9c:90 brd ff:ff:ff:ff:ff:ff > inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-7b330e0e-5c > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fe92:9c90/64 scope link > valid_lft forever preferred_lft forever > > > =====[root at localhost ~(keystone_admin)]# ip netns exec > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > From 192.168.5.70 icmp_seq=10 Destination Host Unreachable > From 192.168.5.70 icmp_seq=11 Destination Host Unreachable > From 192.168.5.70 icmp_seq=12 Destination Host Unreachable > From 192.168.5.70 icmp_seq=13 Destination Host Unreachable > From 192.168.5.70 icmp_seq=14 Destination Host Unreachable > From 192.168.5.70 icmp_seq=15 Destination Host Unreachable > From 192.168.5.70 icmp_seq=16 Destination Host Unreachable > From 192.168.5.70 icmp_seq=17 Destination Host Unreachable > > > =====[root at localhost ~(keystone_admin)]# ovs-vsctl show > 19de58db-509d-4de8-bd88-9222019b13f1 > Bridge br-int > fail_mode: secure > Port "tap2decc1bc-bf" > tag: 2 > Interface "tap2decc1bc-bf" > type: internal > Port br-int > Interface br-int > type: internal > Port patch-tun > Interface patch-tun > type: patch > options: {peer=patch-int} > Port "qr-7b330e0e-5c" > tag: 2 > Interface "qr-7b330e0e-5c" > type: internal > Port "qvo164afbd4-0c" > tag: 2 > Interface "qvo164afbd4-0c" > Bridge br-ex > Port "eth0" > Interface "eth0" > Port br-ex > Interface br-ex > type: internal > Port "qg-e1b584b4-db" > Interface "qg-e1b584b4-db" > type: internal > Bridge br-tun > Port br-tun > Interface br-tun > type: internal > Port "vxlan-c0a80520" > Interface "vxlan-c0a80520" > type: vxlan > options: {df_default="true", in_key=flow, local_ip="192.168.5.33", > out_key=flow, remote_ip="192.168.5.32"} > Port patch-int > Interface patch-int > type: patch > options: {peer=patch-tun} > ovs_version: "2.3.1" > > > > > 2015-05-19 11:58 GMT+02:00 ICHIBA Sara < ichi.sara at gmail.com > : > > > > can you show me your plugin.ini file? /etc/neutron/plugin.ini and the other > file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini > > > 2015-05-19 10:47 GMT+02:00 Boris Derzhavets < bderzhavets at hotmail.com > : > > > > There is one thing , which I clearly see . 
It is qrouter-namespace > misconfiguration. There is no qr-xxxxx bridge attached to br-int > Picture , in general, should look like this > > ubuntu at ubuntu-System:~$ sudo ip netns exec > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c route -n > > Kernel IP routing table > Destination Gateway Genmask Flags Metric Ref Use Iface > 0.0.0.0 192.168.12.15 0.0.0.0 UG 0 0 0 qg-a753a8f5-c8 > 10.254.1.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-393d9f71-53 > 192.168.12.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-a753a8f5-c8 > > ubuntu at ubuntu-System:~$ sudo ip netns exec > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c ifconfig > lo Link encap:Local Loopback > inet addr:127.0.0.1 Mask:255.0.0.0 > inet6 addr: ::1/128 Scope:Host > UP LOOPBACK RUNNING MTU:65536 Metric:1 > RX packets:0 errors:0 dropped:0 overruns:0 frame:0 > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:0 > RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) > > qg-a753a8f5-c8 Link encap:Ethernet HWaddr fa:16:3e:a2:11:b4 > inet addr:192.168.12.150 Bcast:192.168.12.255 Mask:255.255.255.0 > inet6 addr: fe80::f816:3eff:fea2:11b4/64 Scope:Link > UP BROADCAST RUNNING MTU:1500 Metric:1 > RX packets:24504 errors:0 dropped:0 overruns:0 frame:0 > TX packets:17367 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:0 > RX bytes:24328699 (24.3 MB) TX bytes:1443691 (1.4 MB) > > qr-393d9f71-53 Link encap:Ethernet HWaddr fa:16:3e:9e:ec:01 > inet addr:10.254.1.1 Bcast:10.254.1.255 Mask:255.255.255.0 > inet6 addr: fe80::f816:3eff:fe9e:ec01/64 Scope:Link > UP BROADCAST RUNNING MTU:1500 Metric:1 > RX packets:22487 errors:0 dropped:5 overruns:0 frame:0 > TX packets:24736 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:0 > RX bytes:2379287 (2.3 MB) TX bytes:24338711 (24.3 MB) > > I would also advise you to post a question also on ask.openstack.org > > Boris. > > > > Date: Tue, 19 May 2015 09:48:58 +0200 > From: ichi.sara at gmail.com > To: rdo-list at redhat.com > Subject: [Rdo-list] [Neutron] router can't ping external gateway > > > Hey people, > I have an issue with my networking. I connected my openstack to an external > network I did all the changes required. But still my router can't reach the > external gateway. 
> > =====ifcfg-br-ex > DEVICE=br-ex > DEVICETYPE=ovs > TYPE=OVSBridge > BOOTPROTO=static > IPADDR=192.168.5.33 > NETMASK=255.255.255.0 > ONBOOT=yes > GATEWAY=192.168.5.1 > DNS1=8.8.8.8 > DNS2=192.168.5.1 > > > ====ifcfg-eth0 > DEVICE=eth0 > HWADDR=00:0c:29:a2:b1:b9 > ONBOOT=yes > TYPE=OVSPort > NM_CONTROLLED=yes > DEVICETYPE=ovs > OVS_BRIDGE=br-ex > > ======[root at localhost ~(keystone_admin)]# ovs-vsctl show > 19de58db-509d-4de8-bd88-9222019b13f1 > Bridge br-int > fail_mode: secure > Port "tap8652132e-b8" > tag: 1 > Interface "tap8652132e-b8" > type: internal > Port br-int > Interface br-int > type: internal > Port patch-tun > Interface patch-tun > type: patch > options: {peer=patch-int} > Bridge br-ex > Port "qg-5f8ebe30-40" > Interface "qg-5f8ebe30-40" > type: internal > Port "eth0" > Interface "eth0" > Port br-ex > Interface br-ex > type: internal > Bridge br-tun > Port "vxlan-c0a80520" > Interface "vxlan-c0a80520" > type: vxlan > options: {df_default="true", in_key=flow, local_ip="192.168.5.33", > out_key=flow, remote_ip="192.168.5.32"} > Port br-tun > Interface br-tun > type: internal > Port patch-int > Interface patch-int > type: patch > options: {peer=patch-tun} > ovs_version: "2.3.1" > > =====[root at localhost ~(keystone_admin)]# ping 192.168.5.1 > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > 64 bytes from 192.168.5.1 : icmp_seq=1 ttl=64 time=1.76 ms > 64 bytes from 192.168.5.1 : icmp_seq=2 ttl=64 time=1.88 ms > 64 bytes from 192.168.5.1 : icmp_seq=3 ttl=64 time=1.45 ms > ^C > --- 192.168.5.1 ping statistics --- > 3 packets transmitted, 3 received, 0% packet loss, time 2002ms > rtt min/avg/max/mdev = 1.452/1.699/1.880/0.187 ms > [root at localhost ~(keystone_admin)]# > > ======[root at localhost ~(keystone_admin)]# ip netns exec > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 14: qg-5f8ebe30-40: mtu 1500 qdisc noqueue > state UNKNOWN > link/ether fa:16:3e:c2:1b:5e brd ff:ff:ff:ff:ff:ff > inet 192.168.5.70/24 brd 192.168.5.255 scope global qg-5f8ebe30-40 > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fec2:1b5e/64 scope link > valid_lft forever preferred_lft forever > [root at localhost ~(keystone_admin)]# > > > ======[root at localhost ~(keystone_admin)]# ip r > default via 192.168.5.1 dev br-ex > default via 192.168.4.1 dev eth1 > 169.254.0.0/16 dev eth0 scope link metric 1002 > 169.254.0.0/16 dev eth1 scope link metric 1003 > 169.254.0.0/16 dev br-ex scope link metric 1005 > 192.168.4.0/24 dev eth1 proto kernel scope link src 192.168.4.14 > 192.168.5.0/24 dev br-ex proto kernel scope link src 192.168.5.33 > [root at localhost ~(keystone_admin)]# > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > default via 192.168.5.1 dev qg-5f8ebe30-40 > 192.168.5.0/24 dev qg-5f8ebe30-40 proto kernel scope link src 192.168.5.70 > [root at localhost ~(keystone_admin)]# > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > ^C > --- 192.168.5.1 ping statistics --- > 5 packets transmitted, 0 received, 100% packet loss, time 3999ms > > any hints?? 
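A sketch of that capture, using the router UUID from this thread: watching
ARP as well as ICMP is worthwhile here, because "Destination Host
Unreachable" reported from the router's own address usually means its ARP
request for 192.168.5.1 is never answered.

# terminal 1: capture on the NIC enslaved to br-ex
tcpdump -n -e -i eth0 'icmp or arp'

# terminal 2: ping the external gateway from inside the router namespace
ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping -c 3 192.168.5.1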
> > > > > > _______________________________________________ Rdo-list mailing list > Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To > unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From mcornea at redhat.com Tue May 19 11:00:13 2015 From: mcornea at redhat.com (Marius Cornea) Date: Tue, 19 May 2015 07:00:13 -0400 (EDT) Subject: [Rdo-list] [Neutron] router can't ping external gateway In-Reply-To: <1614651306.1136287.1432032645602.JavaMail.zimbra@redhat.com> References: <1614651306.1136287.1432032645602.JavaMail.zimbra@redhat.com> Message-ID: <1437726620.1139426.1432033213137.JavaMail.zimbra@redhat.com> Also, I'm seeing that you have 2 default routes on your host. I'm not sure it affects the setup but try keeping only one: e.g. 'ip route del default via 192.168.4.1' to delete the eth1 one. ======[root at localhost ~(keystone_admin)]# ip r default via 192.168.5.1 dev br-ex default via 192.168.4.1 dev eth1 ----- Original Message ----- > From: "Marius Cornea" > To: "ICHIBA Sara" > Cc: rdo-list at redhat.com > Sent: Tuesday, May 19, 2015 12:50:45 PM > Subject: Re: [Rdo-list] [Neutron] router can't ping external gateway > > Hi, > > Try to see if any of the ICMP requests leave the eth0 interface like 'tcpdump > -i eth0 icmp' while pinging 192.168.5.1 from the router namespace. > > Thanks, > Marius > > ----- Original Message ----- > > From: "ICHIBA Sara" > > To: "Boris Derzhavets" , rdo-list at redhat.com > > Sent: Tuesday, May 19, 2015 12:12:30 PM > > Subject: Re: [Rdo-list] [Neutron] router can't ping external gateway > > > > ====updates > > > > I have deleted my networks, rebooted my machines and configured an other > > network. 
Now I can see the qr bridge mapped to the router but still can't > > ping the external gateway: > > > > ====[root at localhost ~(keystone_admin)]# ip netns exec > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > > default via 192.168.5.1 dev qg-e1b584b4-db > > 10.0.0.0/24 dev qr-7b330e0e-5c proto kernel scope link src 10.0.0.1 > > 192.168.5.0/24 dev qg-e1b584b4-db proto kernel scope link src 192.168.5.70 > > > > ====[root at localhost ~(keystone_admin)]# ip netns exec > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > inet 127.0.0.1/8 scope host lo > > valid_lft forever preferred_lft forever > > inet6 ::1/128 scope host > > valid_lft forever preferred_lft forever > > 12: qg-e1b584b4-db: mtu 1500 qdisc > > noqueue > > state UNKNOWN > > link/ether fa:16:3e:68:83:f8 brd ff:ff:ff:ff:ff:ff > > inet 192.168.5.70/24 brd 192.168.5.255 scope global qg-e1b584b4-db > > valid_lft forever preferred_lft forever > > inet 192.168.5.73/32 brd 192.168.5.73 scope global qg-e1b584b4-db > > valid_lft forever preferred_lft forever > > inet6 fe80::f816:3eff:fe68:83f8/64 scope link > > valid_lft forever preferred_lft forever > > 13: qr-7b330e0e-5c: mtu 1500 qdisc > > noqueue > > state UNKNOWN > > link/ether fa:16:3e:92:9c:90 brd ff:ff:ff:ff:ff:ff > > inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-7b330e0e-5c > > valid_lft forever preferred_lft forever > > inet6 fe80::f816:3eff:fe92:9c90/64 scope link > > valid_lft forever preferred_lft forever > > > > > > =====[root at localhost ~(keystone_admin)]# ip netns exec > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > > From 192.168.5.70 icmp_seq=10 Destination Host Unreachable > > From 192.168.5.70 icmp_seq=11 Destination Host Unreachable > > From 192.168.5.70 icmp_seq=12 Destination Host Unreachable > > From 192.168.5.70 icmp_seq=13 Destination Host Unreachable > > From 192.168.5.70 icmp_seq=14 Destination Host Unreachable > > From 192.168.5.70 icmp_seq=15 Destination Host Unreachable > > From 192.168.5.70 icmp_seq=16 Destination Host Unreachable > > From 192.168.5.70 icmp_seq=17 Destination Host Unreachable > > > > > > =====[root at localhost ~(keystone_admin)]# ovs-vsctl show > > 19de58db-509d-4de8-bd88-9222019b13f1 > > Bridge br-int > > fail_mode: secure > > Port "tap2decc1bc-bf" > > tag: 2 > > Interface "tap2decc1bc-bf" > > type: internal > > Port br-int > > Interface br-int > > type: internal > > Port patch-tun > > Interface patch-tun > > type: patch > > options: {peer=patch-int} > > Port "qr-7b330e0e-5c" > > tag: 2 > > Interface "qr-7b330e0e-5c" > > type: internal > > Port "qvo164afbd4-0c" > > tag: 2 > > Interface "qvo164afbd4-0c" > > Bridge br-ex > > Port "eth0" > > Interface "eth0" > > Port br-ex > > Interface br-ex > > type: internal > > Port "qg-e1b584b4-db" > > Interface "qg-e1b584b4-db" > > type: internal > > Bridge br-tun > > Port br-tun > > Interface br-tun > > type: internal > > Port "vxlan-c0a80520" > > Interface "vxlan-c0a80520" > > type: vxlan > > options: {df_default="true", in_key=flow, local_ip="192.168.5.33", > > out_key=flow, remote_ip="192.168.5.32"} > > Port patch-int > > Interface patch-int > > type: patch > > options: {peer=patch-tun} > > ovs_version: "2.3.1" > > > > > > > > > > 2015-05-19 11:58 GMT+02:00 ICHIBA Sara < ichi.sara at gmail.com > : > > > > > > > > can you show me your plugin.ini file? 
/etc/neutron/plugin.ini and the other > > file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini > > > > > > 2015-05-19 10:47 GMT+02:00 Boris Derzhavets < bderzhavets at hotmail.com > : > > > > > > > > There is one thing , which I clearly see . It is qrouter-namespace > > misconfiguration. There is no qr-xxxxx bridge attached to br-int > > Picture , in general, should look like this > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c route -n > > > > Kernel IP routing table > > Destination Gateway Genmask Flags Metric Ref Use Iface > > 0.0.0.0 192.168.12.15 0.0.0.0 UG 0 0 0 qg-a753a8f5-c8 > > 10.254.1.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-393d9f71-53 > > 192.168.12.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-a753a8f5-c8 > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c ifconfig > > lo Link encap:Local Loopback > > inet addr:127.0.0.1 Mask:255.0.0.0 > > inet6 addr: ::1/128 Scope:Host > > UP LOOPBACK RUNNING MTU:65536 Metric:1 > > RX packets:0 errors:0 dropped:0 overruns:0 frame:0 > > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 > > collisions:0 txqueuelen:0 > > RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) > > > > qg-a753a8f5-c8 Link encap:Ethernet HWaddr fa:16:3e:a2:11:b4 > > inet addr:192.168.12.150 Bcast:192.168.12.255 Mask:255.255.255.0 > > inet6 addr: fe80::f816:3eff:fea2:11b4/64 Scope:Link > > UP BROADCAST RUNNING MTU:1500 Metric:1 > > RX packets:24504 errors:0 dropped:0 overruns:0 frame:0 > > TX packets:17367 errors:0 dropped:0 overruns:0 carrier:0 > > collisions:0 txqueuelen:0 > > RX bytes:24328699 (24.3 MB) TX bytes:1443691 (1.4 MB) > > > > qr-393d9f71-53 Link encap:Ethernet HWaddr fa:16:3e:9e:ec:01 > > inet addr:10.254.1.1 Bcast:10.254.1.255 Mask:255.255.255.0 > > inet6 addr: fe80::f816:3eff:fe9e:ec01/64 Scope:Link > > UP BROADCAST RUNNING MTU:1500 Metric:1 > > RX packets:22487 errors:0 dropped:5 overruns:0 frame:0 > > TX packets:24736 errors:0 dropped:0 overruns:0 carrier:0 > > collisions:0 txqueuelen:0 > > RX bytes:2379287 (2.3 MB) TX bytes:24338711 (24.3 MB) > > > > I would also advise you to post a question also on ask.openstack.org > > > > Boris. > > > > > > > > Date: Tue, 19 May 2015 09:48:58 +0200 > > From: ichi.sara at gmail.com > > To: rdo-list at redhat.com > > Subject: [Rdo-list] [Neutron] router can't ping external gateway > > > > > > Hey people, > > I have an issue with my networking. I connected my openstack to an external > > network I did all the changes required. But still my router can't reach the > > external gateway. 
> > > > =====ifcfg-br-ex > > DEVICE=br-ex > > DEVICETYPE=ovs > > TYPE=OVSBridge > > BOOTPROTO=static > > IPADDR=192.168.5.33 > > NETMASK=255.255.255.0 > > ONBOOT=yes > > GATEWAY=192.168.5.1 > > DNS1=8.8.8.8 > > DNS2=192.168.5.1 > > > > > > ====ifcfg-eth0 > > DEVICE=eth0 > > HWADDR=00:0c:29:a2:b1:b9 > > ONBOOT=yes > > TYPE=OVSPort > > NM_CONTROLLED=yes > > DEVICETYPE=ovs > > OVS_BRIDGE=br-ex > > > > ======[root at localhost ~(keystone_admin)]# ovs-vsctl show > > 19de58db-509d-4de8-bd88-9222019b13f1 > > Bridge br-int > > fail_mode: secure > > Port "tap8652132e-b8" > > tag: 1 > > Interface "tap8652132e-b8" > > type: internal > > Port br-int > > Interface br-int > > type: internal > > Port patch-tun > > Interface patch-tun > > type: patch > > options: {peer=patch-int} > > Bridge br-ex > > Port "qg-5f8ebe30-40" > > Interface "qg-5f8ebe30-40" > > type: internal > > Port "eth0" > > Interface "eth0" > > Port br-ex > > Interface br-ex > > type: internal > > Bridge br-tun > > Port "vxlan-c0a80520" > > Interface "vxlan-c0a80520" > > type: vxlan > > options: {df_default="true", in_key=flow, local_ip="192.168.5.33", > > out_key=flow, remote_ip="192.168.5.32"} > > Port br-tun > > Interface br-tun > > type: internal > > Port patch-int > > Interface patch-int > > type: patch > > options: {peer=patch-tun} > > ovs_version: "2.3.1" > > > > =====[root at localhost ~(keystone_admin)]# ping 192.168.5.1 > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > > 64 bytes from 192.168.5.1 : icmp_seq=1 ttl=64 time=1.76 ms > > 64 bytes from 192.168.5.1 : icmp_seq=2 ttl=64 time=1.88 ms > > 64 bytes from 192.168.5.1 : icmp_seq=3 ttl=64 time=1.45 ms > > ^C > > --- 192.168.5.1 ping statistics --- > > 3 packets transmitted, 3 received, 0% packet loss, time 2002ms > > rtt min/avg/max/mdev = 1.452/1.699/1.880/0.187 ms > > [root at localhost ~(keystone_admin)]# > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > inet 127.0.0.1/8 scope host lo > > valid_lft forever preferred_lft forever > > inet6 ::1/128 scope host > > valid_lft forever preferred_lft forever > > 14: qg-5f8ebe30-40: mtu 1500 qdisc > > noqueue > > state UNKNOWN > > link/ether fa:16:3e:c2:1b:5e brd ff:ff:ff:ff:ff:ff > > inet 192.168.5.70/24 brd 192.168.5.255 scope global qg-5f8ebe30-40 > > valid_lft forever preferred_lft forever > > inet6 fe80::f816:3eff:fec2:1b5e/64 scope link > > valid_lft forever preferred_lft forever > > [root at localhost ~(keystone_admin)]# > > > > > > ======[root at localhost ~(keystone_admin)]# ip r > > default via 192.168.5.1 dev br-ex > > default via 192.168.4.1 dev eth1 > > 169.254.0.0/16 dev eth0 scope link metric 1002 > > 169.254.0.0/16 dev eth1 scope link metric 1003 > > 169.254.0.0/16 dev br-ex scope link metric 1005 > > 192.168.4.0/24 dev eth1 proto kernel scope link src 192.168.4.14 > > 192.168.5.0/24 dev br-ex proto kernel scope link src 192.168.5.33 > > [root at localhost ~(keystone_admin)]# > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > > default via 192.168.5.1 dev qg-5f8ebe30-40 > > 192.168.5.0/24 dev qg-5f8ebe30-40 proto kernel scope link src 192.168.5.70 > > [root at localhost ~(keystone_admin)]# > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > > PING 192.168.5.1 
(192.168.5.1) 56(84) bytes of data. > > ^C > > --- 192.168.5.1 ping statistics --- > > 5 packets transmitted, 0 received, 100% packet loss, time 3999ms > > > > any hints?? > > > > > > > > > > > > _______________________________________________ Rdo-list mailing list > > Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To > > unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From ichi.sara at gmail.com Tue May 19 11:17:20 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Tue, 19 May 2015 13:17:20 +0200 Subject: [Rdo-list] [Neutron] router can't ping external gateway In-Reply-To: <1437726620.1139426.1432033213137.JavaMail.zimbra@redhat.com> References: <1614651306.1136287.1432032645602.JavaMail.zimbra@redhat.com> <1437726620.1139426.1432033213137.JavaMail.zimbra@redhat.com> Message-ID: the ICMP requests arrives to the eth0 interface [root at localhost ~]# tcpdump -i eth0 icmp tcpdump: WARNING: eth0: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 13:14:13.205573 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 1, length 64 13:14:14.205303 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 2, length 64 13:14:15.205391 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 3, length 64 13:14:16.205397 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 4, length 64 13:14:17.205408 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 5, length 64 13:14:18.205412 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 6, length 64 13:14:19.205392 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 7, length 64 13:14:20.205357 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 8, length 64 13:14:33.060267 what should I do next? P.S: My compute and controller hosts are ESXi VMs and I can ssh to both of them without a problem. 2015-05-19 13:00 GMT+02:00 Marius Cornea : > Also, I'm seeing that you have 2 default routes on your host. I'm not sure > it affects the setup but try keeping only one: e.g. 'ip route del default > via 192.168.4.1' to delete the eth1 one. > > ======[root at localhost ~(keystone_admin)]# ip r > default via 192.168.5.1 dev br-ex > default via 192.168.4.1 dev eth1 > > ----- Original Message ----- > > From: "Marius Cornea" > > To: "ICHIBA Sara" > > Cc: rdo-list at redhat.com > > Sent: Tuesday, May 19, 2015 12:50:45 PM > > Subject: Re: [Rdo-list] [Neutron] router can't ping external gateway > > > > Hi, > > > > Try to see if any of the ICMP requests leave the eth0 interface like > 'tcpdump > > -i eth0 icmp' while pinging 192.168.5.1 from the router namespace. 
> > > > Thanks, > > Marius > > > > ----- Original Message ----- > > > From: "ICHIBA Sara" > > > To: "Boris Derzhavets" , rdo-list at redhat.com > > > Sent: Tuesday, May 19, 2015 12:12:30 PM > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external gateway > > > > > > ====updates > > > > > > I have deleted my networks, rebooted my machines and configured an > other > > > network. Now I can see the qr bridge mapped to the router but still > can't > > > ping the external gateway: > > > > > > ====[root at localhost ~(keystone_admin)]# ip netns exec > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > > > default via 192.168.5.1 dev qg-e1b584b4-db > > > 10.0.0.0/24 dev qr-7b330e0e-5c proto kernel scope link src 10.0.0.1 > > > 192.168.5.0/24 dev qg-e1b584b4-db proto kernel scope link src > 192.168.5.70 > > > > > > ====[root at localhost ~(keystone_admin)]# ip netns exec > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > > inet 127.0.0.1/8 scope host lo > > > valid_lft forever preferred_lft forever > > > inet6 ::1/128 scope host > > > valid_lft forever preferred_lft forever > > > 12: qg-e1b584b4-db: mtu 1500 qdisc > > > noqueue > > > state UNKNOWN > > > link/ether fa:16:3e:68:83:f8 brd ff:ff:ff:ff:ff:ff > > > inet 192.168.5.70/24 brd 192.168.5.255 scope global qg-e1b584b4-db > > > valid_lft forever preferred_lft forever > > > inet 192.168.5.73/32 brd 192.168.5.73 scope global qg-e1b584b4-db > > > valid_lft forever preferred_lft forever > > > inet6 fe80::f816:3eff:fe68:83f8/64 scope link > > > valid_lft forever preferred_lft forever > > > 13: qr-7b330e0e-5c: mtu 1500 qdisc > > > noqueue > > > state UNKNOWN > > > link/ether fa:16:3e:92:9c:90 brd ff:ff:ff:ff:ff:ff > > > inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-7b330e0e-5c > > > valid_lft forever preferred_lft forever > > > inet6 fe80::f816:3eff:fe92:9c90/64 scope link > > > valid_lft forever preferred_lft forever > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ip netns exec > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. 
> > > From 192.168.5.70 icmp_seq=10 Destination Host Unreachable > > > From 192.168.5.70 icmp_seq=11 Destination Host Unreachable > > > From 192.168.5.70 icmp_seq=12 Destination Host Unreachable > > > From 192.168.5.70 icmp_seq=13 Destination Host Unreachable > > > From 192.168.5.70 icmp_seq=14 Destination Host Unreachable > > > From 192.168.5.70 icmp_seq=15 Destination Host Unreachable > > > From 192.168.5.70 icmp_seq=16 Destination Host Unreachable > > > From 192.168.5.70 icmp_seq=17 Destination Host Unreachable > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ovs-vsctl show > > > 19de58db-509d-4de8-bd88-9222019b13f1 > > > Bridge br-int > > > fail_mode: secure > > > Port "tap2decc1bc-bf" > > > tag: 2 > > > Interface "tap2decc1bc-bf" > > > type: internal > > > Port br-int > > > Interface br-int > > > type: internal > > > Port patch-tun > > > Interface patch-tun > > > type: patch > > > options: {peer=patch-int} > > > Port "qr-7b330e0e-5c" > > > tag: 2 > > > Interface "qr-7b330e0e-5c" > > > type: internal > > > Port "qvo164afbd4-0c" > > > tag: 2 > > > Interface "qvo164afbd4-0c" > > > Bridge br-ex > > > Port "eth0" > > > Interface "eth0" > > > Port br-ex > > > Interface br-ex > > > type: internal > > > Port "qg-e1b584b4-db" > > > Interface "qg-e1b584b4-db" > > > type: internal > > > Bridge br-tun > > > Port br-tun > > > Interface br-tun > > > type: internal > > > Port "vxlan-c0a80520" > > > Interface "vxlan-c0a80520" > > > type: vxlan > > > options: {df_default="true", in_key=flow, local_ip="192.168.5.33", > > > out_key=flow, remote_ip="192.168.5.32"} > > > Port patch-int > > > Interface patch-int > > > type: patch > > > options: {peer=patch-tun} > > > ovs_version: "2.3.1" > > > > > > > > > > > > > > > 2015-05-19 11:58 GMT+02:00 ICHIBA Sara < ichi.sara at gmail.com > : > > > > > > > > > > > > can you show me your plugin.ini file? /etc/neutron/plugin.ini and the > other > > > file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini > > > > > > > > > 2015-05-19 10:47 GMT+02:00 Boris Derzhavets < bderzhavets at hotmail.com > > : > > > > > > > > > > > > There is one thing , which I clearly see . It is qrouter-namespace > > > misconfiguration. 
There is no qr-xxxxx bridge attached to br-int > > > Picture , in general, should look like this > > > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec > > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c route -n > > > > > > Kernel IP routing table > > > Destination Gateway Genmask Flags Metric Ref Use Iface > > > 0.0.0.0 192.168.12.15 0.0.0.0 UG 0 0 0 qg-a753a8f5-c8 > > > 10.254.1.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-393d9f71-53 > > > 192.168.12.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-a753a8f5-c8 > > > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec > > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c ifconfig > > > lo Link encap:Local Loopback > > > inet addr:127.0.0.1 Mask:255.0.0.0 > > > inet6 addr: ::1/128 Scope:Host > > > UP LOOPBACK RUNNING MTU:65536 Metric:1 > > > RX packets:0 errors:0 dropped:0 overruns:0 frame:0 > > > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 > > > collisions:0 txqueuelen:0 > > > RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) > > > > > > qg-a753a8f5-c8 Link encap:Ethernet HWaddr fa:16:3e:a2:11:b4 > > > inet addr:192.168.12.150 Bcast:192.168.12.255 Mask:255.255.255.0 > > > inet6 addr: fe80::f816:3eff:fea2:11b4/64 Scope:Link > > > UP BROADCAST RUNNING MTU:1500 Metric:1 > > > RX packets:24504 errors:0 dropped:0 overruns:0 frame:0 > > > TX packets:17367 errors:0 dropped:0 overruns:0 carrier:0 > > > collisions:0 txqueuelen:0 > > > RX bytes:24328699 (24.3 MB) TX bytes:1443691 (1.4 MB) > > > > > > qr-393d9f71-53 Link encap:Ethernet HWaddr fa:16:3e:9e:ec:01 > > > inet addr:10.254.1.1 Bcast:10.254.1.255 Mask:255.255.255.0 > > > inet6 addr: fe80::f816:3eff:fe9e:ec01/64 Scope:Link > > > UP BROADCAST RUNNING MTU:1500 Metric:1 > > > RX packets:22487 errors:0 dropped:5 overruns:0 frame:0 > > > TX packets:24736 errors:0 dropped:0 overruns:0 carrier:0 > > > collisions:0 txqueuelen:0 > > > RX bytes:2379287 (2.3 MB) TX bytes:24338711 (24.3 MB) > > > > > > I would also advise you to post a question also on ask.openstack.org > > > > > > Boris. > > > > > > > > > > > > Date: Tue, 19 May 2015 09:48:58 +0200 > > > From: ichi.sara at gmail.com > > > To: rdo-list at redhat.com > > > Subject: [Rdo-list] [Neutron] router can't ping external gateway > > > > > > > > > Hey people, > > > I have an issue with my networking. I connected my openstack to an > external > > > network I did all the changes required. But still my router can't > reach the > > > external gateway. 
> > > > > > =====ifcfg-br-ex > > > DEVICE=br-ex > > > DEVICETYPE=ovs > > > TYPE=OVSBridge > > > BOOTPROTO=static > > > IPADDR=192.168.5.33 > > > NETMASK=255.255.255.0 > > > ONBOOT=yes > > > GATEWAY=192.168.5.1 > > > DNS1=8.8.8.8 > > > DNS2=192.168.5.1 > > > > > > > > > ====ifcfg-eth0 > > > DEVICE=eth0 > > > HWADDR=00:0c:29:a2:b1:b9 > > > ONBOOT=yes > > > TYPE=OVSPort > > > NM_CONTROLLED=yes > > > DEVICETYPE=ovs > > > OVS_BRIDGE=br-ex > > > > > > ======[root at localhost ~(keystone_admin)]# ovs-vsctl show > > > 19de58db-509d-4de8-bd88-9222019b13f1 > > > Bridge br-int > > > fail_mode: secure > > > Port "tap8652132e-b8" > > > tag: 1 > > > Interface "tap8652132e-b8" > > > type: internal > > > Port br-int > > > Interface br-int > > > type: internal > > > Port patch-tun > > > Interface patch-tun > > > type: patch > > > options: {peer=patch-int} > > > Bridge br-ex > > > Port "qg-5f8ebe30-40" > > > Interface "qg-5f8ebe30-40" > > > type: internal > > > Port "eth0" > > > Interface "eth0" > > > Port br-ex > > > Interface br-ex > > > type: internal > > > Bridge br-tun > > > Port "vxlan-c0a80520" > > > Interface "vxlan-c0a80520" > > > type: vxlan > > > options: {df_default="true", in_key=flow, local_ip="192.168.5.33", > > > out_key=flow, remote_ip="192.168.5.32"} > > > Port br-tun > > > Interface br-tun > > > type: internal > > > Port patch-int > > > Interface patch-int > > > type: patch > > > options: {peer=patch-tun} > > > ovs_version: "2.3.1" > > > > > > =====[root at localhost ~(keystone_admin)]# ping 192.168.5.1 > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > > > 64 bytes from 192.168.5.1 : icmp_seq=1 ttl=64 time=1.76 ms > > > 64 bytes from 192.168.5.1 : icmp_seq=2 ttl=64 time=1.88 ms > > > 64 bytes from 192.168.5.1 : icmp_seq=3 ttl=64 time=1.45 ms > > > ^C > > > --- 192.168.5.1 ping statistics --- > > > 3 packets transmitted, 3 received, 0% packet loss, time 2002ms > > > rtt min/avg/max/mdev = 1.452/1.699/1.880/0.187 ms > > > [root at localhost ~(keystone_admin)]# > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > > inet 127.0.0.1/8 scope host lo > > > valid_lft forever preferred_lft forever > > > inet6 ::1/128 scope host > > > valid_lft forever preferred_lft forever > > > 14: qg-5f8ebe30-40: mtu 1500 qdisc > > > noqueue > > > state UNKNOWN > > > link/ether fa:16:3e:c2:1b:5e brd ff:ff:ff:ff:ff:ff > > > inet 192.168.5.70/24 brd 192.168.5.255 scope global qg-5f8ebe30-40 > > > valid_lft forever preferred_lft forever > > > inet6 fe80::f816:3eff:fec2:1b5e/64 scope link > > > valid_lft forever preferred_lft forever > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip r > > > default via 192.168.5.1 dev br-ex > > > default via 192.168.4.1 dev eth1 > > > 169.254.0.0/16 dev eth0 scope link metric 1002 > > > 169.254.0.0/16 dev eth1 scope link metric 1003 > > > 169.254.0.0/16 dev br-ex scope link metric 1005 > > > 192.168.4.0/24 dev eth1 proto kernel scope link src 192.168.4.14 > > > 192.168.5.0/24 dev br-ex proto kernel scope link src 192.168.5.33 > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > > > default via 192.168.5.1 dev qg-5f8ebe30-40 > > > 192.168.5.0/24 dev qg-5f8ebe30-40 proto kernel scope link src > 
192.168.5.70
> > > [root at localhost ~(keystone_admin)]#
> > >
> > >
> > > ======[root at localhost ~(keystone_admin)]# ip netns exec
> > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1
> > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data.
> > > ^C
> > > --- 192.168.5.1 ping statistics ---
> > > 5 packets transmitted, 0 received, 100% packet loss, time 3999ms
> > >
> > > any hints??
> > >
> > >
> > >
> > >
> > >
> > > _______________________________________________ Rdo-list mailing list
> > > Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To
> > > unsubscribe: rdo-list-unsubscribe at redhat.com
> > >
> > >
> > >
> > > _______________________________________________
> > > Rdo-list mailing list
> > > Rdo-list at redhat.com
> > > https://www.redhat.com/mailman/listinfo/rdo-list
> > >
> > > To unsubscribe: rdo-list-unsubscribe at redhat.com
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com
> >
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From mcornea at redhat.com Tue May 19 11:29:52 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Tue, 19 May 2015 07:29:52 -0400 (EDT)
Subject: [Rdo-list] [Neutron] router can't ping external gateway
In-Reply-To: 
References: <1614651306.1136287.1432032645602.JavaMail.zimbra@redhat.com>
	<1437726620.1139426.1432033213137.JavaMail.zimbra@redhat.com>
Message-ID: <1846855912.1156541.1432034992884.JavaMail.zimbra@redhat.com>

Oh, ESXi... I remember that the vSwitch has some security features in
place. You can check those; I think the one you're looking for is called
Forged transmits.

Thanks,
Marius

----- Original Message -----
> From: "ICHIBA Sara"
> To: "Marius Cornea"
> Cc: rdo-list at redhat.com
> Sent: Tuesday, May 19, 2015 1:17:20 PM
> Subject: Re: [Rdo-list] [Neutron] router can't ping external gateway
>
> The ICMP requests arrive at the eth0 interface:
> [root at localhost ~]# tcpdump -i eth0 icmp
> tcpdump: WARNING: eth0: no IPv4 address assigned
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
> 13:14:13.205573 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055,
> seq 1, length 64
> 13:14:14.205303 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055,
> seq 2, length 64
> 13:14:15.205391 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055,
> seq 3, length 64
> 13:14:16.205397 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055,
> seq 4, length 64
> 13:14:17.205408 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055,
> seq 5, length 64
> 13:14:18.205412 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055,
> seq 6, length 64
> 13:14:19.205392 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055,
> seq 7, length 64
> 13:14:20.205357 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055,
> seq 8, length 64
> 13:14:33.060267
>
>
> What should I do next?
>
> P.S: My compute and controller hosts are ESXi VMs and I can ssh to both of
> them without a problem.
>
> 2015-05-19 13:00 GMT+02:00 Marius Cornea :
>
> > Also, I'm seeing that you have 2 default routes on your host. I'm not sure
> > it affects the setup but try keeping only one: e.g. 'ip route del default
> > via 192.168.4.1' to delete the eth1 one.
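If the vSwitch policy is indeed the culprit, on a standard vSwitch the
three security settings can be inspected and relaxed from the ESXi shell
with something like the commands below. vSwitch0 is an assumed name and
the flags are quoted from memory of 5.x-era esxcli, so verify them on the
host before relying on this.

# show the current security policy
esxcli network vswitch standard policy security get -v vSwitch0

# let the Neutron-created MACs (the qg-/qr- ports) send and be seen
esxcli network vswitch standard policy security set -v vSwitch0 \
  --allow-forged-transmits=true --allow-mac-change=true --allow-promiscuous=true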
----- Original Message -----
> From: "ICHIBA Sara" 
> To: "Marius Cornea" 
> Cc: rdo-list at redhat.com
> Sent: Tuesday, May 19, 2015 1:17:20 PM
> Subject: Re: [Rdo-list] [Neutron] router can't ping external gateway
>
> the ICMP requests arrive at the eth0 interface:
>
> [root at localhost ~]# tcpdump -i eth0 icmp
> tcpdump: WARNING: eth0: no IPv4 address assigned
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
> 13:14:13.205573 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 1, length 64
> 13:14:14.205303 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 2, length 64
> 13:14:15.205391 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 3, length 64
> 13:14:16.205397 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 4, length 64
> 13:14:17.205408 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 5, length 64
> 13:14:18.205412 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 6, length 64
> 13:14:19.205392 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 7, length 64
> 13:14:20.205357 IP 192.168.5.70 > 192.168.5.1: ICMP echo request, id 31055, seq 8, length 64
> 13:14:33.060267
>
> what should I do next?
>
> P.S: My compute and controller hosts are ESXi VMs and I can ssh to both of
> them without a problem.
> ...
From ichi.sara at gmail.com  Tue May 19 11:42:11 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Tue, 19 May 2015 13:42:11 +0200
Subject: [Rdo-list] Fwd: [Neutron] router can't ping external gateway
In-Reply-To: 
References: <1614651306.1136287.1432032645602.JavaMail.zimbra@redhat.com>
 <1437726620.1139426.1432033213137.JavaMail.zimbra@redhat.com>
 <1846855912.1156541.1432034992884.JavaMail.zimbra@redhat.com>
Message-ID: 

---------- Forwarded message ----------
From: ICHIBA Sara 
Date: 2015-05-19 13:41 GMT+02:00
Subject: Re: [Rdo-list] [Neutron] router can't ping external gateway
To: Marius Cornea 

Forged transmits are accepted on the vswitch. What's next?

2015-05-19 13:29 GMT+02:00 Marius Cornea :

> Oh, ESXi...I remember that the vswitch had some security features in
> place. You can check those; the one you're looking for is called forged
> transmits.
> ...
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mcornea at redhat.com  Tue May 19 12:12:45 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Tue, 19 May 2015 08:12:45 -0400 (EDT)
Subject: [Rdo-list] Fwd: [Neutron] router can't ping external gateway
In-Reply-To: 
References: <1614651306.1136287.1432032645602.JavaMail.zimbra@redhat.com>
 <1437726620.1139426.1432033213137.JavaMail.zimbra@redhat.com>
 <1846855912.1156541.1432034992884.JavaMail.zimbra@redhat.com>
Message-ID: <587390647.1184257.1432037565735.JavaMail.zimbra@redhat.com>

Is there an ARP entry for 192.168.5.1? In the router namespace, run:

ip n | grep '192.168.5.1 '

----- Original Message -----
> From: "ICHIBA Sara" 
> To: rdo-list at redhat.com
> Sent: Tuesday, May 19, 2015 1:42:11 PM
> Subject: [Rdo-list] Fwd: [Neutron] router can't ping external gateway
>
> Forged transmits are accepted on the vswitch. What's next?
> ...
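For context on reading that check: 'ip n' is short for 'ip neigh', and the
kernel neighbour table keeps a state per entry (REACHABLE, STALE, DELAY,
FAILED, ...). A sketch of the full lookup inside the qrouter namespace from
this thread:

# Dump the whole neighbour (ARP) table of the router namespace
ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip neigh show
# A FAILED or INCOMPLETE entry for 192.168.5.1 would mean ARP resolution
# toward the gateway is not completing at all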
From ichi.sara at gmail.com  Tue May 19 12:15:06 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Tue, 19 May 2015 14:15:06 +0200
Subject: [Rdo-list] Fwd: [Neutron] router can't ping external gateway
In-Reply-To: <587390647.1184257.1432037565735.JavaMail.zimbra@redhat.com>
References: <1614651306.1136287.1432032645602.JavaMail.zimbra@redhat.com>
 <1437726620.1139426.1432033213137.JavaMail.zimbra@redhat.com>
 <1846855912.1156541.1432034992884.JavaMail.zimbra@redhat.com>
 <587390647.1184257.1432037565735.JavaMail.zimbra@redhat.com>
Message-ID: 

[root at localhost ~(keystone_admin)]# ip netns exec
qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip n | grep '192.168.5.1 '
192.168.5.1 dev qg-e1b584b4-db lladdr 00:23:48:9e:85:7c STALE

2015-05-19 14:12 GMT+02:00 Marius Cornea :

> Is there an ARP entry for 192.168.5.1? In the router namespace, run:
>
> ip n | grep '192.168.5.1 '
> ...

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
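A STALE entry only means the gateway's MAC was learned at some point and
has not been revalidated recently; it does not prove the gateway currently
answers. One way to force a fresh ARP exchange is arping from the router
namespace (a sketch, assuming the iputils arping tool is installed on the
network node; qg-e1b584b4-db is the router's external port from the output
above):

# Send four ARP requests for the gateway out of the router's external port
ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 \
    arping -I qg-e1b584b4-db -c 4 192.168.5.1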
> > > > Thanks, > > Marius > > > > ----- Original Message ----- > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > To: "Marius Cornea" < mcornea at redhat.com > > > > Cc: rdo-list at redhat.com > > > Sent: Tuesday, May 19, 2015 1:17:20 PM > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external gateway > > > > > > the ICMP requests arrives to the eth0 interface > > > [root at localhost ~]# tcpdump -i eth0 icmp > > > tcpdump: WARNING: eth0: no IPv4 address assigned > > > tcpdump: verbose output suppressed, use -v or -vv for full protocol > decode > > > listening on eth0, link-type EN10MB (Ethernet), capture size 65535 > bytes > > > 13:14:13.205573 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > 31055, > > > seq 1, length 64 > > > 13:14:14.205303 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > 31055, > > > seq 2, length 64 > > > 13:14:15.205391 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > 31055, > > > seq 3, length 64 > > > 13:14:16.205397 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > 31055, > > > seq 4, length 64 > > > 13:14:17.205408 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > 31055, > > > seq 5, length 64 > > > 13:14:18.205412 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > 31055, > > > seq 6, length 64 > > > 13:14:19.205392 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > 31055, > > > seq 7, length 64 > > > 13:14:20.205357 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > 31055, > > > seq 8, length 64 > > > 13:14:33.060267 > > > > > > > > > what should I do next? > > > > > > P.S: My compute and controller hosts are ESXi VMs and I can ssh to > both of > > > them without a problem. > > > > > > 2015-05-19 13:00 GMT+02:00 Marius Cornea < mcornea at redhat.com >: > > > > > > > Also, I'm seeing that you have 2 default routes on your host. I'm not > > > > sure > > > > it affects the setup but try keeping only one: e.g. 'ip route del > default > > > > via 192.168.4.1' to delete the eth1 one. > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip r > > > > default via 192.168.5.1 dev br-ex > > > > default via 192.168.4.1 dev eth1 > > > > > > > > ----- Original Message ----- > > > > > From: "Marius Cornea" < mcornea at redhat.com > > > > > > To: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > Cc: rdo-list at redhat.com > > > > > Sent: Tuesday, May 19, 2015 12:50:45 PM > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external > gateway > > > > > > > > > > Hi, > > > > > > > > > > Try to see if any of the ICMP requests leave the eth0 interface > like > > > > 'tcpdump > > > > > -i eth0 icmp' while pinging 192.168.5.1 from the router namespace. > > > > > > > > > > Thanks, > > > > > Marius > > > > > > > > > > ----- Original Message ----- > > > > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > > To: "Boris Derzhavets" < bderzhavets at hotmail.com >, > > > > > > rdo-list at redhat.com > > > > > > Sent: Tuesday, May 19, 2015 12:12:30 PM > > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external > gateway > > > > > > > > > > > > ====updates > > > > > > > > > > > > I have deleted my networks, rebooted my machines and configured > an > > > > other > > > > > > network. 
Now I can see the qr bridge mapped to the router but > still > > > > can't > > > > > > ping the external gateway: > > > > > > > > > > > > ====[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > > > > > > default via 192.168.5.1 dev qg-e1b584b4-db > > > > > > 10.0.0.0/24 dev qr-7b330e0e-5c proto kernel scope link src > 10.0.0.1 > > > > > > 192.168.5.0/24 dev qg-e1b584b4-db proto kernel scope link src > > > > 192.168.5.70 > > > > > > > > > > > > ====[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > > > > > > 1: lo: mtu 65536 qdisc noqueue state > UNKNOWN > > > > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > > > > > inet 127.0.0.1/8 scope host lo > > > > > > valid_lft forever preferred_lft forever > > > > > > inet6 ::1/128 scope host > > > > > > valid_lft forever preferred_lft forever > > > > > > 12: qg-e1b584b4-db: mtu 1500 > qdisc > > > > > > noqueue > > > > > > state UNKNOWN > > > > > > link/ether fa:16:3e:68:83:f8 brd ff:ff:ff:ff:ff:ff > > > > > > inet 192.168.5.70/24 brd 192.168.5.255 scope global > qg-e1b584b4-db > > > > > > valid_lft forever preferred_lft forever > > > > > > inet 192.168.5.73/32 brd 192.168.5.73 scope global > qg-e1b584b4-db > > > > > > valid_lft forever preferred_lft forever > > > > > > inet6 fe80::f816:3eff:fe68:83f8/64 scope link > > > > > > valid_lft forever preferred_lft forever > > > > > > 13: qr-7b330e0e-5c: mtu 1500 > qdisc > > > > > > noqueue > > > > > > state UNKNOWN > > > > > > link/ether fa:16:3e:92:9c:90 brd ff:ff:ff:ff:ff:ff > > > > > > inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-7b330e0e-5c > > > > > > valid_lft forever preferred_lft forever > > > > > > inet6 fe80::f816:3eff:fe92:9c90/64 scope link > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. 
> > > > > > From 192.168.5.70 icmp_seq=10 Destination Host Unreachable > > > > > > From 192.168.5.70 icmp_seq=11 Destination Host Unreachable > > > > > > From 192.168.5.70 icmp_seq=12 Destination Host Unreachable > > > > > > From 192.168.5.70 icmp_seq=13 Destination Host Unreachable > > > > > > From 192.168.5.70 icmp_seq=14 Destination Host Unreachable > > > > > > From 192.168.5.70 icmp_seq=15 Destination Host Unreachable > > > > > > From 192.168.5.70 icmp_seq=16 Destination Host Unreachable > > > > > > From 192.168.5.70 icmp_seq=17 Destination Host Unreachable > > > > > > > > > > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ovs-vsctl show > > > > > > 19de58db-509d-4de8-bd88-9222019b13f1 > > > > > > Bridge br-int > > > > > > fail_mode: secure > > > > > > Port "tap2decc1bc-bf" > > > > > > tag: 2 > > > > > > Interface "tap2decc1bc-bf" > > > > > > type: internal > > > > > > Port br-int > > > > > > Interface br-int > > > > > > type: internal > > > > > > Port patch-tun > > > > > > Interface patch-tun > > > > > > type: patch > > > > > > options: {peer=patch-int} > > > > > > Port "qr-7b330e0e-5c" > > > > > > tag: 2 > > > > > > Interface "qr-7b330e0e-5c" > > > > > > type: internal > > > > > > Port "qvo164afbd4-0c" > > > > > > tag: 2 > > > > > > Interface "qvo164afbd4-0c" > > > > > > Bridge br-ex > > > > > > Port "eth0" > > > > > > Interface "eth0" > > > > > > Port br-ex > > > > > > Interface br-ex > > > > > > type: internal > > > > > > Port "qg-e1b584b4-db" > > > > > > Interface "qg-e1b584b4-db" > > > > > > type: internal > > > > > > Bridge br-tun > > > > > > Port br-tun > > > > > > Interface br-tun > > > > > > type: internal > > > > > > Port "vxlan-c0a80520" > > > > > > Interface "vxlan-c0a80520" > > > > > > type: vxlan > > > > > > options: {df_default="true", in_key=flow, > local_ip="192.168.5.33", > > > > > > out_key=flow, remote_ip="192.168.5.32"} > > > > > > Port patch-int > > > > > > Interface patch-int > > > > > > type: patch > > > > > > options: {peer=patch-tun} > > > > > > ovs_version: "2.3.1" > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 2015-05-19 11:58 GMT+02:00 ICHIBA Sara < ichi.sara at gmail.com > : > > > > > > > > > > > > > > > > > > > > > > > > can you show me your plugin.ini file? /etc/neutron/plugin.ini > and the > > > > other > > > > > > file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini > > > > > > > > > > > > > > > > > > 2015-05-19 10:47 GMT+02:00 Boris Derzhavets < > bderzhavets at hotmail.com > > > > > : > > > > > > > > > > > > > > > > > > > > > > > > There is one thing , which I clearly see . It is > qrouter-namespace > > > > > > misconfiguration. 
There is no qr-xxxxx bridge attached to br-int > > > > > > Picture , in general, should look like this > > > > > > > > > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec > > > > > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c route -n > > > > > > > > > > > > Kernel IP routing table > > > > > > Destination Gateway Genmask Flags Metric Ref Use Iface > > > > > > 0.0.0.0 192.168.12.15 0.0.0.0 UG 0 0 0 qg-a753a8f5-c8 > > > > > > 10.254.1.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-393d9f71-53 > > > > > > 192.168.12.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-a753a8f5-c8 > > > > > > > > > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec > > > > > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c ifconfig > > > > > > lo Link encap:Local Loopback > > > > > > inet addr:127.0.0.1 Mask:255.0.0.0 > > > > > > inet6 addr: ::1/128 Scope:Host > > > > > > UP LOOPBACK RUNNING MTU:65536 Metric:1 > > > > > > RX packets:0 errors:0 dropped:0 overruns:0 frame:0 > > > > > > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 > > > > > > collisions:0 txqueuelen:0 > > > > > > RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) > > > > > > > > > > > > qg-a753a8f5-c8 Link encap:Ethernet HWaddr fa:16:3e:a2:11:b4 > > > > > > inet addr:192.168.12.150 Bcast:192.168.12.255 Mask:255.255.255.0 > > > > > > inet6 addr: fe80::f816:3eff:fea2:11b4/64 Scope:Link > > > > > > UP BROADCAST RUNNING MTU:1500 Metric:1 > > > > > > RX packets:24504 errors:0 dropped:0 overruns:0 frame:0 > > > > > > TX packets:17367 errors:0 dropped:0 overruns:0 carrier:0 > > > > > > collisions:0 txqueuelen:0 > > > > > > RX bytes:24328699 (24.3 MB) TX bytes:1443691 (1.4 MB) > > > > > > > > > > > > qr-393d9f71-53 Link encap:Ethernet HWaddr fa:16:3e:9e:ec:01 > > > > > > inet addr:10.254.1.1 Bcast:10.254.1.255 Mask:255.255.255.0 > > > > > > inet6 addr: fe80::f816:3eff:fe9e:ec01/64 Scope:Link > > > > > > UP BROADCAST RUNNING MTU:1500 Metric:1 > > > > > > RX packets:22487 errors:0 dropped:5 overruns:0 frame:0 > > > > > > TX packets:24736 errors:0 dropped:0 overruns:0 carrier:0 > > > > > > collisions:0 txqueuelen:0 > > > > > > RX bytes:2379287 (2.3 MB) TX bytes:24338711 (24.3 MB) > > > > > > > > > > > > I would also advise you to post a question also on > ask.openstack.org > > > > > > > > > > > > Boris. > > > > > > > > > > > > > > > > > > > > > > > > Date: Tue, 19 May 2015 09:48:58 +0200 > > > > > > From: ichi.sara at gmail.com > > > > > > To: rdo-list at redhat.com > > > > > > Subject: [Rdo-list] [Neutron] router can't ping external gateway > > > > > > > > > > > > > > > > > > Hey people, > > > > > > I have an issue with my networking. I connected my openstack to > an > > > > external > > > > > > network I did all the changes required. But still my router can't > > > > reach the > > > > > > external gateway. 
> > > > > > > > > > > > =====ifcfg-br-ex > > > > > > DEVICE=br-ex > > > > > > DEVICETYPE=ovs > > > > > > TYPE=OVSBridge > > > > > > BOOTPROTO=static > > > > > > IPADDR=192.168.5.33 > > > > > > NETMASK=255.255.255.0 > > > > > > ONBOOT=yes > > > > > > GATEWAY=192.168.5.1 > > > > > > DNS1=8.8.8.8 > > > > > > DNS2=192.168.5.1 > > > > > > > > > > > > > > > > > > ====ifcfg-eth0 > > > > > > DEVICE=eth0 > > > > > > HWADDR=00:0c:29:a2:b1:b9 > > > > > > ONBOOT=yes > > > > > > TYPE=OVSPort > > > > > > NM_CONTROLLED=yes > > > > > > DEVICETYPE=ovs > > > > > > OVS_BRIDGE=br-ex > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ovs-vsctl show > > > > > > 19de58db-509d-4de8-bd88-9222019b13f1 > > > > > > Bridge br-int > > > > > > fail_mode: secure > > > > > > Port "tap8652132e-b8" > > > > > > tag: 1 > > > > > > Interface "tap8652132e-b8" > > > > > > type: internal > > > > > > Port br-int > > > > > > Interface br-int > > > > > > type: internal > > > > > > Port patch-tun > > > > > > Interface patch-tun > > > > > > type: patch > > > > > > options: {peer=patch-int} > > > > > > Bridge br-ex > > > > > > Port "qg-5f8ebe30-40" > > > > > > Interface "qg-5f8ebe30-40" > > > > > > type: internal > > > > > > Port "eth0" > > > > > > Interface "eth0" > > > > > > Port br-ex > > > > > > Interface br-ex > > > > > > type: internal > > > > > > Bridge br-tun > > > > > > Port "vxlan-c0a80520" > > > > > > Interface "vxlan-c0a80520" > > > > > > type: vxlan > > > > > > options: {df_default="true", in_key=flow, > local_ip="192.168.5.33", > > > > > > out_key=flow, remote_ip="192.168.5.32"} > > > > > > Port br-tun > > > > > > Interface br-tun > > > > > > type: internal > > > > > > Port patch-int > > > > > > Interface patch-int > > > > > > type: patch > > > > > > options: {peer=patch-tun} > > > > > > ovs_version: "2.3.1" > > > > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ping 192.168.5.1 > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. 
> > > > > > 64 bytes from 192.168.5.1 : icmp_seq=1 ttl=64 time=1.76 ms > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=2 ttl=64 time=1.88 ms > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=3 ttl=64 time=1.45 ms > > > > > > ^C > > > > > > --- 192.168.5.1 ping statistics --- > > > > > > 3 packets transmitted, 3 received, 0% packet loss, time 2002ms > > > > > > rtt min/avg/max/mdev = 1.452/1.699/1.880/0.187 ms > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > > > > > > 1: lo: mtu 65536 qdisc noqueue state > UNKNOWN > > > > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > > > > > inet 127.0.0.1/8 scope host lo > > > > > > valid_lft forever preferred_lft forever > > > > > > inet6 ::1/128 scope host > > > > > > valid_lft forever preferred_lft forever > > > > > > 14: qg-5f8ebe30-40: mtu 1500 > qdisc > > > > > > noqueue > > > > > > state UNKNOWN > > > > > > link/ether fa:16:3e:c2:1b:5e brd ff:ff:ff:ff:ff:ff > > > > > > inet 192.168.5.70/24 brd 192.168.5.255 scope global > qg-5f8ebe30-40 > > > > > > valid_lft forever preferred_lft forever > > > > > > inet6 fe80::f816:3eff:fec2:1b5e/64 scope link > > > > > > valid_lft forever preferred_lft forever > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip r > > > > > > default via 192.168.5.1 dev br-ex > > > > > > default via 192.168.4.1 dev eth1 > > > > > > 169.254.0.0/16 dev eth0 scope link metric 1002 > > > > > > 169.254.0.0/16 dev eth1 scope link metric 1003 > > > > > > 169.254.0.0/16 dev br-ex scope link metric 1005 > > > > > > 192.168.4.0/24 dev eth1 proto kernel scope link src 192.168.4.14 > > > > > > 192.168.5.0/24 dev br-ex proto kernel scope link src > 192.168.5.33 > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > > > > > > default via 192.168.5.1 dev qg-5f8ebe30-40 > > > > > > 192.168.5.0/24 dev qg-5f8ebe30-40 proto kernel scope link src > > > > 192.168.5.70 > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > > > > > > ^C > > > > > > --- 192.168.5.1 ping statistics --- > > > > > > 5 packets transmitted, 0 received, 100% packet loss, time 3999ms > > > > > > > > > > > > any hints?? 
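For reference, a quick way to verify the wiring Boris describes — the router namespace needs a qr-* port plugged into br-int for tenant traffic and a qg-* port plugged into br-ex for external traffic — is a sketch along these lines, reusing the router ID that appears elsewhere in this thread (port names will differ per deployment):

ip netns list                            # find the qrouter-* namespace
ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a
                                         # expect both a qg-* and a qr-* device here
ovs-vsctl list-ports br-int | grep qr-   # the qr-* port should be attached to br-int
ovs-vsctl list-ports br-ex | grep qg-    # the qg-* port should be attached to br-ex
ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 route -n
                                         # the default route should leave via the qg-* device
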
> > > > > > _______________________________________________
> > > > > > Rdo-list mailing list
> > > > > > Rdo-list at redhat.com
> > > > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > > > >
> > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mcornea at redhat.com  Tue May 19 12:32:08 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Tue, 19 May 2015 08:32:08 -0400 (EDT)
Subject: [Rdo-list] Fwd: [Neutron] router can't ping external gateway
In-Reply-To: 
References: <1437726620.1139426.1432033213137.JavaMail.zimbra@redhat.com>
 <1846855912.1156541.1432034992884.JavaMail.zimbra@redhat.com>
 <587390647.1184257.1432037565735.JavaMail.zimbra@redhat.com>
Message-ID: <1544998486.1195383.1432038728964.JavaMail.zimbra@redhat.com>

Delete the ARP entry and check whether other computers in the network receive the broadcasts:

ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip -s -s neigh flush 192.168.5.1
tcpdump -i <interface> arp   # on one of the computers in the 192.168.5.0 network
ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1

See if any ARP requests reach the computer where you run tcpdump.

I'm still thinking about some blocking happening in the vswitch, since the ICMP requests are sent out the eth0 interface and so should reach the vswitch port.

----- Original Message -----
> From: "ICHIBA Sara"
> To: "Marius Cornea"
> Cc: rdo-list at redhat.com
> Sent: Tuesday, May 19, 2015 2:15:06 PM
> Subject: Re: [Rdo-list] Fwd: [Neutron] router can't ping external gateway
>
> [root at localhost ~(keystone_admin)]# ip netns exec
> qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip n | grep '192.168.5.1 '
> 192.168.5.1 dev qg-e1b584b4-db lladdr 00:23:48:9e:85:7c STALE
>
>
> 2015-05-19 14:12 GMT+02:00 Marius Cornea :
>
> > Is there an ARP entry for 192.168.5.1 ?
> >
> > ip n | grep '192.168.5.1 ' in the router namespace
> >
> > ----- Original Message -----
> > > From: "ICHIBA Sara"
> > > To: rdo-list at redhat.com
> > > Sent: Tuesday, May 19, 2015 1:42:11 PM
> > > Subject: [Rdo-list] Fwd: [Neutron] router can't ping external gateway
> > >
> > > ---------- Forwarded message ----------
> > > From: ICHIBA Sara < ichi.sara at gmail.com >
> > > Date: 2015-05-19 13:41 GMT+02:00
> > > Subject: Re: [Rdo-list] [Neutron] router can't ping external gateway
> > > To: Marius Cornea < mcornea at redhat.com >
> > >
> > > The forged transmissions on the vswitch are accepted. What's next? 
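Pulled together, the ARP-path check Marius describes runs like this — a sketch, assuming eth0 is the NIC on the observer machine (any other host on 192.168.5.0/24) and the router ID from this thread:

# On the controller: drop the cached (possibly stale) ARP entry for the gateway.
ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip -s -s neigh flush 192.168.5.1
# On the observer host: watch for the router's ARP broadcasts.
tcpdump -n -i eth0 arp
# Back on the controller: ping the gateway so the router has to re-ARP for it.
ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping -c 5 192.168.5.1
# If no 'who-has 192.168.5.1 tell 192.168.5.70' request shows up in the capture,
# the router's broadcasts are being dropped between br-ex/eth0 and the physical network.

The point of the flush is to force a fresh ARP exchange: with the STALE entry gone, the first ping has to broadcast a who-has request, which every host on the segment should see.
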
> > > > > > 2015-05-19 13:29 GMT+02:00 Marius Cornea < mcornea at redhat.com > : > > > > > > > > > Oh, ESXi...I remember that the vswitch had some security features in > > place. > > > You can check those and I think the one that you're looking for is called > > > forged retransmits. > > > > > > Thanks, > > > Marius > > > > > > ----- Original Message ----- > > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > To: "Marius Cornea" < mcornea at redhat.com > > > > > Cc: rdo-list at redhat.com > > > > Sent: Tuesday, May 19, 2015 1:17:20 PM > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external gateway > > > > > > > > the ICMP requests arrives to the eth0 interface > > > > [root at localhost ~]# tcpdump -i eth0 icmp > > > > tcpdump: WARNING: eth0: no IPv4 address assigned > > > > tcpdump: verbose output suppressed, use -v or -vv for full protocol > > decode > > > > listening on eth0, link-type EN10MB (Ethernet), capture size 65535 > > bytes > > > > 13:14:13.205573 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > > 31055, > > > > seq 1, length 64 > > > > 13:14:14.205303 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > > 31055, > > > > seq 2, length 64 > > > > 13:14:15.205391 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > > 31055, > > > > seq 3, length 64 > > > > 13:14:16.205397 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > > 31055, > > > > seq 4, length 64 > > > > 13:14:17.205408 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > > 31055, > > > > seq 5, length 64 > > > > 13:14:18.205412 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > > 31055, > > > > seq 6, length 64 > > > > 13:14:19.205392 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > > 31055, > > > > seq 7, length 64 > > > > 13:14:20.205357 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, id > > > > 31055, > > > > seq 8, length 64 > > > > 13:14:33.060267 > > > > > > > > > > > > what should I do next? > > > > > > > > P.S: My compute and controller hosts are ESXi VMs and I can ssh to > > both of > > > > them without a problem. > > > > > > > > 2015-05-19 13:00 GMT+02:00 Marius Cornea < mcornea at redhat.com >: > > > > > > > > > Also, I'm seeing that you have 2 default routes on your host. I'm not > > > > > sure > > > > > it affects the setup but try keeping only one: e.g. 'ip route del > > default > > > > > via 192.168.4.1' to delete the eth1 one. > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip r > > > > > default via 192.168.5.1 dev br-ex > > > > > default via 192.168.4.1 dev eth1 > > > > > > > > > > ----- Original Message ----- > > > > > > From: "Marius Cornea" < mcornea at redhat.com > > > > > > > To: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > > Cc: rdo-list at redhat.com > > > > > > Sent: Tuesday, May 19, 2015 12:50:45 PM > > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external > > gateway > > > > > > > > > > > > Hi, > > > > > > > > > > > > Try to see if any of the ICMP requests leave the eth0 interface > > like > > > > > 'tcpdump > > > > > > -i eth0 icmp' while pinging 192.168.5.1 from the router namespace. 
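On the ESXi side, the vswitch security settings mentioned above can be inspected with esxcli — a sketch, assuming a standard vSwitch named vSwitch0 (an assumed name; list the real ones first). "Allow Forged Transmits" is the setting referred to here as forged retransmits, and it matters because the qg-* port sends frames with its own fa:16:3e:* MAC rather than the VM vNIC's MAC:

# vSwitch0 is an assumed name; check the actual vSwitch names first.
esxcli network vswitch standard list
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
# The output lists Allow Promiscuous, Allow MAC Address Change, and
# Allow Forged Transmits; all three typically need to be true for a
# nested OpenStack setup like this one.
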
> > > > > > > > > > > > Thanks, > > > > > > Marius > > > > > > > > > > > > ----- Original Message ----- > > > > > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > > > To: "Boris Derzhavets" < bderzhavets at hotmail.com >, > > > > > > > rdo-list at redhat.com > > > > > > > Sent: Tuesday, May 19, 2015 12:12:30 PM > > > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external > > gateway > > > > > > > > > > > > > > ====updates > > > > > > > > > > > > > > I have deleted my networks, rebooted my machines and configured > > an > > > > > other > > > > > > > network. Now I can see the qr bridge mapped to the router but > > still > > > > > can't > > > > > > > ping the external gateway: > > > > > > > > > > > > > > ====[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > > > > > > > default via 192.168.5.1 dev qg-e1b584b4-db > > > > > > > 10.0.0.0/24 dev qr-7b330e0e-5c proto kernel scope link src > > 10.0.0.1 > > > > > > > 192.168.5.0/24 dev qg-e1b584b4-db proto kernel scope link src > > > > > 192.168.5.70 > > > > > > > > > > > > > > ====[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > > > > > > > 1: lo: mtu 65536 qdisc noqueue state > > UNKNOWN > > > > > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > > > > > > inet 127.0.0.1/8 scope host lo > > > > > > > valid_lft forever preferred_lft forever > > > > > > > inet6 ::1/128 scope host > > > > > > > valid_lft forever preferred_lft forever > > > > > > > 12: qg-e1b584b4-db: mtu 1500 > > qdisc > > > > > > > noqueue > > > > > > > state UNKNOWN > > > > > > > link/ether fa:16:3e:68:83:f8 brd ff:ff:ff:ff:ff:ff > > > > > > > inet 192.168.5.70/24 brd 192.168.5.255 scope global > > qg-e1b584b4-db > > > > > > > valid_lft forever preferred_lft forever > > > > > > > inet 192.168.5.73/32 brd 192.168.5.73 scope global > > qg-e1b584b4-db > > > > > > > valid_lft forever preferred_lft forever > > > > > > > inet6 fe80::f816:3eff:fe68:83f8/64 scope link > > > > > > > valid_lft forever preferred_lft forever > > > > > > > 13: qr-7b330e0e-5c: mtu 1500 > > qdisc > > > > > > > noqueue > > > > > > > state UNKNOWN > > > > > > > link/ether fa:16:3e:92:9c:90 brd ff:ff:ff:ff:ff:ff > > > > > > > inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-7b330e0e-5c > > > > > > > valid_lft forever preferred_lft forever > > > > > > > inet6 fe80::f816:3eff:fe92:9c90/64 scope link > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > > > > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. 
> > > > > > > From 192.168.5.70 icmp_seq=10 Destination Host Unreachable > > > > > > > From 192.168.5.70 icmp_seq=11 Destination Host Unreachable > > > > > > > From 192.168.5.70 icmp_seq=12 Destination Host Unreachable > > > > > > > From 192.168.5.70 icmp_seq=13 Destination Host Unreachable > > > > > > > From 192.168.5.70 icmp_seq=14 Destination Host Unreachable > > > > > > > From 192.168.5.70 icmp_seq=15 Destination Host Unreachable > > > > > > > From 192.168.5.70 icmp_seq=16 Destination Host Unreachable > > > > > > > From 192.168.5.70 icmp_seq=17 Destination Host Unreachable > > > > > > > > > > > > > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ovs-vsctl show > > > > > > > 19de58db-509d-4de8-bd88-9222019b13f1 > > > > > > > Bridge br-int > > > > > > > fail_mode: secure > > > > > > > Port "tap2decc1bc-bf" > > > > > > > tag: 2 > > > > > > > Interface "tap2decc1bc-bf" > > > > > > > type: internal > > > > > > > Port br-int > > > > > > > Interface br-int > > > > > > > type: internal > > > > > > > Port patch-tun > > > > > > > Interface patch-tun > > > > > > > type: patch > > > > > > > options: {peer=patch-int} > > > > > > > Port "qr-7b330e0e-5c" > > > > > > > tag: 2 > > > > > > > Interface "qr-7b330e0e-5c" > > > > > > > type: internal > > > > > > > Port "qvo164afbd4-0c" > > > > > > > tag: 2 > > > > > > > Interface "qvo164afbd4-0c" > > > > > > > Bridge br-ex > > > > > > > Port "eth0" > > > > > > > Interface "eth0" > > > > > > > Port br-ex > > > > > > > Interface br-ex > > > > > > > type: internal > > > > > > > Port "qg-e1b584b4-db" > > > > > > > Interface "qg-e1b584b4-db" > > > > > > > type: internal > > > > > > > Bridge br-tun > > > > > > > Port br-tun > > > > > > > Interface br-tun > > > > > > > type: internal > > > > > > > Port "vxlan-c0a80520" > > > > > > > Interface "vxlan-c0a80520" > > > > > > > type: vxlan > > > > > > > options: {df_default="true", in_key=flow, > > local_ip="192.168.5.33", > > > > > > > out_key=flow, remote_ip="192.168.5.32"} > > > > > > > Port patch-int > > > > > > > Interface patch-int > > > > > > > type: patch > > > > > > > options: {peer=patch-tun} > > > > > > > ovs_version: "2.3.1" > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 2015-05-19 11:58 GMT+02:00 ICHIBA Sara < ichi.sara at gmail.com > : > > > > > > > > > > > > > > > > > > > > > > > > > > > > can you show me your plugin.ini file? /etc/neutron/plugin.ini > > and the > > > > > other > > > > > > > file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini > > > > > > > > > > > > > > > > > > > > > 2015-05-19 10:47 GMT+02:00 Boris Derzhavets < > > bderzhavets at hotmail.com > > > > > > : > > > > > > > > > > > > > > > > > > > > > > > > > > > > There is one thing , which I clearly see . It is > > qrouter-namespace > > > > > > > misconfiguration. 
There is no qr-xxxxx bridge attached to br-int > > > > > > > Picture , in general, should look like this > > > > > > > > > > > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec > > > > > > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c route -n > > > > > > > > > > > > > > Kernel IP routing table > > > > > > > Destination Gateway Genmask Flags Metric Ref Use Iface > > > > > > > 0.0.0.0 192.168.12.15 0.0.0.0 UG 0 0 0 qg-a753a8f5-c8 > > > > > > > 10.254.1.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-393d9f71-53 > > > > > > > 192.168.12.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-a753a8f5-c8 > > > > > > > > > > > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec > > > > > > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c ifconfig > > > > > > > lo Link encap:Local Loopback > > > > > > > inet addr:127.0.0.1 Mask:255.0.0.0 > > > > > > > inet6 addr: ::1/128 Scope:Host > > > > > > > UP LOOPBACK RUNNING MTU:65536 Metric:1 > > > > > > > RX packets:0 errors:0 dropped:0 overruns:0 frame:0 > > > > > > > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 > > > > > > > collisions:0 txqueuelen:0 > > > > > > > RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) > > > > > > > > > > > > > > qg-a753a8f5-c8 Link encap:Ethernet HWaddr fa:16:3e:a2:11:b4 > > > > > > > inet addr:192.168.12.150 Bcast:192.168.12.255 Mask:255.255.255.0 > > > > > > > inet6 addr: fe80::f816:3eff:fea2:11b4/64 Scope:Link > > > > > > > UP BROADCAST RUNNING MTU:1500 Metric:1 > > > > > > > RX packets:24504 errors:0 dropped:0 overruns:0 frame:0 > > > > > > > TX packets:17367 errors:0 dropped:0 overruns:0 carrier:0 > > > > > > > collisions:0 txqueuelen:0 > > > > > > > RX bytes:24328699 (24.3 MB) TX bytes:1443691 (1.4 MB) > > > > > > > > > > > > > > qr-393d9f71-53 Link encap:Ethernet HWaddr fa:16:3e:9e:ec:01 > > > > > > > inet addr:10.254.1.1 Bcast:10.254.1.255 Mask:255.255.255.0 > > > > > > > inet6 addr: fe80::f816:3eff:fe9e:ec01/64 Scope:Link > > > > > > > UP BROADCAST RUNNING MTU:1500 Metric:1 > > > > > > > RX packets:22487 errors:0 dropped:5 overruns:0 frame:0 > > > > > > > TX packets:24736 errors:0 dropped:0 overruns:0 carrier:0 > > > > > > > collisions:0 txqueuelen:0 > > > > > > > RX bytes:2379287 (2.3 MB) TX bytes:24338711 (24.3 MB) > > > > > > > > > > > > > > I would also advise you to post a question also on > > ask.openstack.org > > > > > > > > > > > > > > Boris. > > > > > > > > > > > > > > > > > > > > > > > > > > > > Date: Tue, 19 May 2015 09:48:58 +0200 > > > > > > > From: ichi.sara at gmail.com > > > > > > > To: rdo-list at redhat.com > > > > > > > Subject: [Rdo-list] [Neutron] router can't ping external gateway > > > > > > > > > > > > > > > > > > > > > Hey people, > > > > > > > I have an issue with my networking. I connected my openstack to > > an > > > > > external > > > > > > > network I did all the changes required. But still my router can't > > > > > reach the > > > > > > > external gateway. 
> > > > > > > > > > > > > > =====ifcfg-br-ex > > > > > > > DEVICE=br-ex > > > > > > > DEVICETYPE=ovs > > > > > > > TYPE=OVSBridge > > > > > > > BOOTPROTO=static > > > > > > > IPADDR=192.168.5.33 > > > > > > > NETMASK=255.255.255.0 > > > > > > > ONBOOT=yes > > > > > > > GATEWAY=192.168.5.1 > > > > > > > DNS1=8.8.8.8 > > > > > > > DNS2=192.168.5.1 > > > > > > > > > > > > > > > > > > > > > ====ifcfg-eth0 > > > > > > > DEVICE=eth0 > > > > > > > HWADDR=00:0c:29:a2:b1:b9 > > > > > > > ONBOOT=yes > > > > > > > TYPE=OVSPort > > > > > > > NM_CONTROLLED=yes > > > > > > > DEVICETYPE=ovs > > > > > > > OVS_BRIDGE=br-ex > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ovs-vsctl show > > > > > > > 19de58db-509d-4de8-bd88-9222019b13f1 > > > > > > > Bridge br-int > > > > > > > fail_mode: secure > > > > > > > Port "tap8652132e-b8" > > > > > > > tag: 1 > > > > > > > Interface "tap8652132e-b8" > > > > > > > type: internal > > > > > > > Port br-int > > > > > > > Interface br-int > > > > > > > type: internal > > > > > > > Port patch-tun > > > > > > > Interface patch-tun > > > > > > > type: patch > > > > > > > options: {peer=patch-int} > > > > > > > Bridge br-ex > > > > > > > Port "qg-5f8ebe30-40" > > > > > > > Interface "qg-5f8ebe30-40" > > > > > > > type: internal > > > > > > > Port "eth0" > > > > > > > Interface "eth0" > > > > > > > Port br-ex > > > > > > > Interface br-ex > > > > > > > type: internal > > > > > > > Bridge br-tun > > > > > > > Port "vxlan-c0a80520" > > > > > > > Interface "vxlan-c0a80520" > > > > > > > type: vxlan > > > > > > > options: {df_default="true", in_key=flow, > > local_ip="192.168.5.33", > > > > > > > out_key=flow, remote_ip="192.168.5.32"} > > > > > > > Port br-tun > > > > > > > Interface br-tun > > > > > > > type: internal > > > > > > > Port patch-int > > > > > > > Interface patch-int > > > > > > > type: patch > > > > > > > options: {peer=patch-tun} > > > > > > > ovs_version: "2.3.1" > > > > > > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ping 192.168.5.1 > > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. 
> > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=1 ttl=64 time=1.76 ms > > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=2 ttl=64 time=1.88 ms > > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=3 ttl=64 time=1.45 ms > > > > > > > ^C > > > > > > > --- 192.168.5.1 ping statistics --- > > > > > > > 3 packets transmitted, 3 received, 0% packet loss, time 2002ms > > > > > > > rtt min/avg/max/mdev = 1.452/1.699/1.880/0.187 ms > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > > > > > > > 1: lo: mtu 65536 qdisc noqueue state > > UNKNOWN > > > > > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > > > > > > inet 127.0.0.1/8 scope host lo > > > > > > > valid_lft forever preferred_lft forever > > > > > > > inet6 ::1/128 scope host > > > > > > > valid_lft forever preferred_lft forever > > > > > > > 14: qg-5f8ebe30-40: mtu 1500 > > qdisc > > > > > > > noqueue > > > > > > > state UNKNOWN > > > > > > > link/ether fa:16:3e:c2:1b:5e brd ff:ff:ff:ff:ff:ff > > > > > > > inet 192.168.5.70/24 brd 192.168.5.255 scope global > > qg-5f8ebe30-40 > > > > > > > valid_lft forever preferred_lft forever > > > > > > > inet6 fe80::f816:3eff:fec2:1b5e/64 scope link > > > > > > > valid_lft forever preferred_lft forever > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip r > > > > > > > default via 192.168.5.1 dev br-ex > > > > > > > default via 192.168.4.1 dev eth1 > > > > > > > 169.254.0.0/16 dev eth0 scope link metric 1002 > > > > > > > 169.254.0.0/16 dev eth1 scope link metric 1003 > > > > > > > 169.254.0.0/16 dev br-ex scope link metric 1005 > > > > > > > 192.168.4.0/24 dev eth1 proto kernel scope link src 192.168.4.14 > > > > > > > 192.168.5.0/24 dev br-ex proto kernel scope link src > > 192.168.5.33 > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > > > > > > > default via 192.168.5.1 dev qg-5f8ebe30-40 > > > > > > > 192.168.5.0/24 dev qg-5f8ebe30-40 proto kernel scope link src > > > > > 192.168.5.70 > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > > > > > > > ^C > > > > > > > --- 192.168.5.1 ping statistics --- > > > > > > > 5 packets transmitted, 0 received, 100% packet loss, time 3999ms > > > > > > > > > > > > > > any hints?? 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ Rdo-list mailing > > list > > > > > > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > To > > > > > > > unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > > Rdo-list mailing list > > > > > > > Rdo-list at redhat.com > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > _______________________________________________ > > > > > > Rdo-list mailing list > > > > > > Rdo-list at redhat.com > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > From ichi.sara at gmail.com Tue May 19 12:42:28 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Tue, 19 May 2015 14:42:28 +0200 Subject: [Rdo-list] Fwd: [Neutron] router can't ping external gateway In-Reply-To: <1544998486.1195383.1432038728964.JavaMail.zimbra@redhat.com> References: <1437726620.1139426.1432033213137.JavaMail.zimbra@redhat.com> <1846855912.1156541.1432034992884.JavaMail.zimbra@redhat.com> <587390647.1184257.1432037565735.JavaMail.zimbra@redhat.com> <1544998486.1195383.1432038728964.JavaMail.zimbra@redhat.com> Message-ID: [root at localhost ~]# ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip -s -s neigh flush 192.168.5.1 Nothing to flush. 
[root at pc20 ~]# tcpdump -i eth0 arp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 14:39:31.292222 ARP, Request who-has PC22.home (Broadcast) tell livebox.home, length 46 14:39:31.293093 ARP, Request who-has livebox.home tell PC20.home, length 28 14:39:31.293882 ARP, Reply livebox.home is-at 00:23:48:9e:85:7c (oui Unknown), length 46 14:39:32.300067 ARP, Request who-has PC22.home (Broadcast) tell livebox.home, length 46 14:39:33.310100 ARP, Request who-has PC22.home (Broadcast) tell livebox.home, length 46 14:39:34.320335 ARP, Request who-has PC22.home (Broadcast) tell livebox.home, length 46 14:39:35.330123 ARP, Request who-has PC22.home (Broadcast) tell livebox.home, length 46 14:39:36.289836 ARP, Request who-has PC20.home tell livebox.home, length 46 14:39:36.289873 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), length 28 14:39:36.340219 ARP, Request who-has PC22.home (Broadcast) tell livebox.home, length 46 14:39:51.026708 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui Unknown)) tell 192.168.5.99, length 46 14:39:51.026733 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), length 28 14:39:56.027218 ARP, Request who-has livebox.home tell PC20.home, length 28 14:39:56.027848 ARP, Reply livebox.home is-at 00:23:48:9e:85:7c (oui Unknown), length 46 14:40:01.035292 ARP, Request who-has 192.168.5.99 tell PC20.home, length 28 14:40:01.035925 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui Unknown), length 46 14:40:01.454515 ARP, Request who-has PC22.home (Broadcast) tell livebox.home, length 46 14:40:02.460552 ARP, Request who-has PC22.home (Broadcast) tell livebox.home, length 46 14:40:03.470625 ARP, Request who-has PC22.home (Broadcast) tell livebox.home, length 46 14:40:04.480937 ARP, Request who-has PC22.home (Broadcast) tell livebox.home, length 46 14:40:05.490810 ARP, Request who-has PC22.home (Broadcast) tell livebox.home, length 46 14:40:06.500671 ARP, Request who-has PC22.home (Broadcast) tell livebox.home, length 46 14:40:21.527063 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui Unknown)) tell 192.168.5.99, length 46 14:40:21.527157 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), length 28 14:40:36.747216 ARP, Request who-has 192.168.5.99 tell PC20.home, length 28 14:40:36.747765 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui Unknown), length 46 14:40:51.527605 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui Unknown)) tell 192.168.5.99, length 46 14:40:51.527638 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), length 28 14:41:01.729345 ARP, Request who-has PC20.home (Broadcast) tell livebox.home, length 46 14:41:01.729408 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), length 28 14:41:21.528760 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui Unknown)) tell 192.168.5.99, length 46 14:41:21.528792 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), length 28 14:41:26.540361 ARP, Request who-has 192.168.5.99 tell PC20.home, length 28 14:41:26.540809 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui Unknown), length 46 14:41:31.900298 ARP, Request who-has PC19.home (Broadcast) tell livebox.home, length 46 14:41:31.950399 ARP, Request who-has PC20.home (Broadcast) tell livebox.home, length 46 14:41:31.950410 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), length 28 14:41:51.529113 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui Unknown)) tell 192.168.5.99, 
length 46 14:41:51.529147 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), length 28 14:41:56.539268 ARP, Request who-has 192.168.5.99 tell PC20.home, length 28 14:41:56.539912 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui Unknown), length 46 14:42:02.102645 ARP, Request who-has PC19.home (Broadcast) tell livebox.home, length 46 2015-05-19 14:32 GMT+02:00 Marius Cornea : > Delete and check if other computers in the network are receiving > broadcasts: > > ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip -s -s neigh > flush 192.168.5.1 > tcpdump -i arp #on one of the computers in the 192.168.5.0 > network > ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > > See if any ARP requests reach the computer where you run tcpdump. > > I'm still thinking about some blocking stuff happening in the vswitch > since the ICMP requests are sent to the eth0 interface so they should reach > the vswitch port. > > ----- Original Message ----- > > From: "ICHIBA Sara" > > To: "Marius Cornea" > > Cc: rdo-list at redhat.com > > Sent: Tuesday, May 19, 2015 2:15:06 PM > > Subject: Re: [Rdo-list] Fwd: [Neutron] router can't ping external gateway > > > > [root at localhost ~(keystone_admin)]# ip netns exec > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip n | grep '192.168.5.1 ' > > 192.168.5.1 dev qg-e1b584b4-db lladdr 00:23:48:9e:85:7c STALE > > > > > > > > > > > > 2015-05-19 14:12 GMT+02:00 Marius Cornea : > > > > > Is there an ARP entry for 192.168.5.1 ? > > > > > > ip n | grep '192.168.5.1 ' in the router namespace > > > > > > > > > > > > ----- Original Message ----- > > > > From: "ICHIBA Sara" > > > > To: rdo-list at redhat.com > > > > Sent: Tuesday, May 19, 2015 1:42:11 PM > > > > Subject: [Rdo-list] Fwd: [Neutron] router can't ping external > gateway > > > > > > > > > > > > ---------- Forwarded message ---------- > > > > From: ICHIBA Sara < ichi.sara at gmail.com > > > > > Date: 2015-05-19 13:41 GMT+02:00 > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external gateway > > > > To: Marius Cornea < mcornea at redhat.com > > > > > > > > > > > > > The forged transmissions on the vswitch are accepted. What's next? > > > > > > > > 2015-05-19 13:29 GMT+02:00 Marius Cornea < mcornea at redhat.com > : > > > > > > > > > > > > Oh, ESXi...I remember that the vswitch had some security features in > > > place. > > > > You can check those and I think the one that you're looking for is > called > > > > forged retransmits. 
> > > > > > > > Thanks, > > > > Marius > > > > > > > > ----- Original Message ----- > > > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > To: "Marius Cornea" < mcornea at redhat.com > > > > > > Cc: rdo-list at redhat.com > > > > > Sent: Tuesday, May 19, 2015 1:17:20 PM > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external > gateway > > > > > > > > > > the ICMP requests arrives to the eth0 interface > > > > > [root at localhost ~]# tcpdump -i eth0 icmp > > > > > tcpdump: WARNING: eth0: no IPv4 address assigned > > > > > tcpdump: verbose output suppressed, use -v or -vv for full protocol > > > decode > > > > > listening on eth0, link-type EN10MB (Ethernet), capture size 65535 > > > bytes > > > > > 13:14:13.205573 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > id > > > > > 31055, > > > > > seq 1, length 64 > > > > > 13:14:14.205303 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > id > > > > > 31055, > > > > > seq 2, length 64 > > > > > 13:14:15.205391 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > id > > > > > 31055, > > > > > seq 3, length 64 > > > > > 13:14:16.205397 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > id > > > > > 31055, > > > > > seq 4, length 64 > > > > > 13:14:17.205408 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > id > > > > > 31055, > > > > > seq 5, length 64 > > > > > 13:14:18.205412 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > id > > > > > 31055, > > > > > seq 6, length 64 > > > > > 13:14:19.205392 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > id > > > > > 31055, > > > > > seq 7, length 64 > > > > > 13:14:20.205357 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > id > > > > > 31055, > > > > > seq 8, length 64 > > > > > 13:14:33.060267 > > > > > > > > > > > > > > > what should I do next? > > > > > > > > > > P.S: My compute and controller hosts are ESXi VMs and I can ssh to > > > both of > > > > > them without a problem. > > > > > > > > > > 2015-05-19 13:00 GMT+02:00 Marius Cornea < mcornea at redhat.com >: > > > > > > > > > > > Also, I'm seeing that you have 2 default routes on your host. > I'm not > > > > > > sure > > > > > > it affects the setup but try keeping only one: e.g. 'ip route del > > > default > > > > > > via 192.168.4.1' to delete the eth1 one. > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip r > > > > > > default via 192.168.5.1 dev br-ex > > > > > > default via 192.168.4.1 dev eth1 > > > > > > > > > > > > ----- Original Message ----- > > > > > > > From: "Marius Cornea" < mcornea at redhat.com > > > > > > > > To: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > > > Cc: rdo-list at redhat.com > > > > > > > Sent: Tuesday, May 19, 2015 12:50:45 PM > > > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external > > > gateway > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > Try to see if any of the ICMP requests leave the eth0 interface > > > like > > > > > > 'tcpdump > > > > > > > -i eth0 icmp' while pinging 192.168.5.1 from the router > namespace. 
> > > > > > > > > > > > > > Thanks, > > > > > > > Marius > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > > > > To: "Boris Derzhavets" < bderzhavets at hotmail.com >, > > > > > > > > rdo-list at redhat.com > > > > > > > > Sent: Tuesday, May 19, 2015 12:12:30 PM > > > > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external > > > gateway > > > > > > > > > > > > > > > > ====updates > > > > > > > > > > > > > > > > I have deleted my networks, rebooted my machines and > configured > > > an > > > > > > other > > > > > > > > network. Now I can see the qr bridge mapped to the router but > > > still > > > > > > can't > > > > > > > > ping the external gateway: > > > > > > > > > > > > > > > > ====[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > > > > > > > > default via 192.168.5.1 dev qg-e1b584b4-db > > > > > > > > 10.0.0.0/24 dev qr-7b330e0e-5c proto kernel scope link src > > > 10.0.0.1 > > > > > > > > 192.168.5.0/24 dev qg-e1b584b4-db proto kernel scope link > src > > > > > > 192.168.5.70 > > > > > > > > > > > > > > > > ====[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > > > > > > > > 1: lo: mtu 65536 qdisc noqueue state > > > UNKNOWN > > > > > > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > > > > > > > inet 127.0.0.1/8 scope host lo > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > inet6 ::1/128 scope host > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > 12: qg-e1b584b4-db: mtu > 1500 > > > qdisc > > > > > > > > noqueue > > > > > > > > state UNKNOWN > > > > > > > > link/ether fa:16:3e:68:83:f8 brd ff:ff:ff:ff:ff:ff > > > > > > > > inet 192.168.5.70/24 brd 192.168.5.255 scope global > > > qg-e1b584b4-db > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > inet 192.168.5.73/32 brd 192.168.5.73 scope global > > > qg-e1b584b4-db > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > inet6 fe80::f816:3eff:fe68:83f8/64 scope link > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > 13: qr-7b330e0e-5c: mtu > 1500 > > > qdisc > > > > > > > > noqueue > > > > > > > > state UNKNOWN > > > > > > > > link/ether fa:16:3e:92:9c:90 brd ff:ff:ff:ff:ff:ff > > > > > > > > inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-7b330e0e-5c > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > inet6 fe80::f816:3eff:fe92:9c90/64 scope link > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > > > > > > > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > > > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. 
> > > > > > > > From 192.168.5.70 icmp_seq=10 Destination Host Unreachable > > > > > > > > From 192.168.5.70 icmp_seq=11 Destination Host Unreachable > > > > > > > > From 192.168.5.70 icmp_seq=12 Destination Host Unreachable > > > > > > > > From 192.168.5.70 icmp_seq=13 Destination Host Unreachable > > > > > > > > From 192.168.5.70 icmp_seq=14 Destination Host Unreachable > > > > > > > > From 192.168.5.70 icmp_seq=15 Destination Host Unreachable > > > > > > > > From 192.168.5.70 icmp_seq=16 Destination Host Unreachable > > > > > > > > From 192.168.5.70 icmp_seq=17 Destination Host Unreachable > > > > > > > > > > > > > > > > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ovs-vsctl show > > > > > > > > 19de58db-509d-4de8-bd88-9222019b13f1 > > > > > > > > Bridge br-int > > > > > > > > fail_mode: secure > > > > > > > > Port "tap2decc1bc-bf" > > > > > > > > tag: 2 > > > > > > > > Interface "tap2decc1bc-bf" > > > > > > > > type: internal > > > > > > > > Port br-int > > > > > > > > Interface br-int > > > > > > > > type: internal > > > > > > > > Port patch-tun > > > > > > > > Interface patch-tun > > > > > > > > type: patch > > > > > > > > options: {peer=patch-int} > > > > > > > > Port "qr-7b330e0e-5c" > > > > > > > > tag: 2 > > > > > > > > Interface "qr-7b330e0e-5c" > > > > > > > > type: internal > > > > > > > > Port "qvo164afbd4-0c" > > > > > > > > tag: 2 > > > > > > > > Interface "qvo164afbd4-0c" > > > > > > > > Bridge br-ex > > > > > > > > Port "eth0" > > > > > > > > Interface "eth0" > > > > > > > > Port br-ex > > > > > > > > Interface br-ex > > > > > > > > type: internal > > > > > > > > Port "qg-e1b584b4-db" > > > > > > > > Interface "qg-e1b584b4-db" > > > > > > > > type: internal > > > > > > > > Bridge br-tun > > > > > > > > Port br-tun > > > > > > > > Interface br-tun > > > > > > > > type: internal > > > > > > > > Port "vxlan-c0a80520" > > > > > > > > Interface "vxlan-c0a80520" > > > > > > > > type: vxlan > > > > > > > > options: {df_default="true", in_key=flow, > > > local_ip="192.168.5.33", > > > > > > > > out_key=flow, remote_ip="192.168.5.32"} > > > > > > > > Port patch-int > > > > > > > > Interface patch-int > > > > > > > > type: patch > > > > > > > > options: {peer=patch-tun} > > > > > > > > ovs_version: "2.3.1" > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 2015-05-19 11:58 GMT+02:00 ICHIBA Sara < ichi.sara at gmail.com > > : > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > can you show me your plugin.ini file? /etc/neutron/plugin.ini > > > and the > > > > > > other > > > > > > > > file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini > > > > > > > > > > > > > > > > > > > > > > > > 2015-05-19 10:47 GMT+02:00 Boris Derzhavets < > > > bderzhavets at hotmail.com > > > > > > > : > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > There is one thing , which I clearly see . It is > > > qrouter-namespace > > > > > > > > misconfiguration. 
There is no qr-xxxxx bridge attached to > br-int > > > > > > > > Picture , in general, should look like this > > > > > > > > > > > > > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec > > > > > > > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c route -n > > > > > > > > > > > > > > > > Kernel IP routing table > > > > > > > > Destination Gateway Genmask Flags Metric Ref Use Iface > > > > > > > > 0.0.0.0 192.168.12.15 0.0.0.0 UG 0 0 0 qg-a753a8f5-c8 > > > > > > > > 10.254.1.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-393d9f71-53 > > > > > > > > 192.168.12.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-a753a8f5-c8 > > > > > > > > > > > > > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec > > > > > > > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c ifconfig > > > > > > > > lo Link encap:Local Loopback > > > > > > > > inet addr:127.0.0.1 Mask:255.0.0.0 > > > > > > > > inet6 addr: ::1/128 Scope:Host > > > > > > > > UP LOOPBACK RUNNING MTU:65536 Metric:1 > > > > > > > > RX packets:0 errors:0 dropped:0 overruns:0 frame:0 > > > > > > > > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 > > > > > > > > collisions:0 txqueuelen:0 > > > > > > > > RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) > > > > > > > > > > > > > > > > qg-a753a8f5-c8 Link encap:Ethernet HWaddr fa:16:3e:a2:11:b4 > > > > > > > > inet addr:192.168.12.150 Bcast:192.168.12.255 > Mask:255.255.255.0 > > > > > > > > inet6 addr: fe80::f816:3eff:fea2:11b4/64 Scope:Link > > > > > > > > UP BROADCAST RUNNING MTU:1500 Metric:1 > > > > > > > > RX packets:24504 errors:0 dropped:0 overruns:0 frame:0 > > > > > > > > TX packets:17367 errors:0 dropped:0 overruns:0 carrier:0 > > > > > > > > collisions:0 txqueuelen:0 > > > > > > > > RX bytes:24328699 (24.3 MB) TX bytes:1443691 (1.4 MB) > > > > > > > > > > > > > > > > qr-393d9f71-53 Link encap:Ethernet HWaddr fa:16:3e:9e:ec:01 > > > > > > > > inet addr:10.254.1.1 Bcast:10.254.1.255 Mask:255.255.255.0 > > > > > > > > inet6 addr: fe80::f816:3eff:fe9e:ec01/64 Scope:Link > > > > > > > > UP BROADCAST RUNNING MTU:1500 Metric:1 > > > > > > > > RX packets:22487 errors:0 dropped:5 overruns:0 frame:0 > > > > > > > > TX packets:24736 errors:0 dropped:0 overruns:0 carrier:0 > > > > > > > > collisions:0 txqueuelen:0 > > > > > > > > RX bytes:2379287 (2.3 MB) TX bytes:24338711 (24.3 MB) > > > > > > > > > > > > > > > > I would also advise you to post a question also on > > > ask.openstack.org > > > > > > > > > > > > > > > > Boris. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Date: Tue, 19 May 2015 09:48:58 +0200 > > > > > > > > From: ichi.sara at gmail.com > > > > > > > > To: rdo-list at redhat.com > > > > > > > > Subject: [Rdo-list] [Neutron] router can't ping external > gateway > > > > > > > > > > > > > > > > > > > > > > > > Hey people, > > > > > > > > I have an issue with my networking. I connected my openstack > to > > > an > > > > > > external > > > > > > > > network I did all the changes required. But still my router > can't > > > > > > reach the > > > > > > > > external gateway. 
> > > > > > > > > > > > > > > > =====ifcfg-br-ex > > > > > > > > DEVICE=br-ex > > > > > > > > DEVICETYPE=ovs > > > > > > > > TYPE=OVSBridge > > > > > > > > BOOTPROTO=static > > > > > > > > IPADDR=192.168.5.33 > > > > > > > > NETMASK=255.255.255.0 > > > > > > > > ONBOOT=yes > > > > > > > > GATEWAY=192.168.5.1 > > > > > > > > DNS1=8.8.8.8 > > > > > > > > DNS2=192.168.5.1 > > > > > > > > > > > > > > > > > > > > > > > > ====ifcfg-eth0 > > > > > > > > DEVICE=eth0 > > > > > > > > HWADDR=00:0c:29:a2:b1:b9 > > > > > > > > ONBOOT=yes > > > > > > > > TYPE=OVSPort > > > > > > > > NM_CONTROLLED=yes > > > > > > > > DEVICETYPE=ovs > > > > > > > > OVS_BRIDGE=br-ex > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ovs-vsctl show > > > > > > > > 19de58db-509d-4de8-bd88-9222019b13f1 > > > > > > > > Bridge br-int > > > > > > > > fail_mode: secure > > > > > > > > Port "tap8652132e-b8" > > > > > > > > tag: 1 > > > > > > > > Interface "tap8652132e-b8" > > > > > > > > type: internal > > > > > > > > Port br-int > > > > > > > > Interface br-int > > > > > > > > type: internal > > > > > > > > Port patch-tun > > > > > > > > Interface patch-tun > > > > > > > > type: patch > > > > > > > > options: {peer=patch-int} > > > > > > > > Bridge br-ex > > > > > > > > Port "qg-5f8ebe30-40" > > > > > > > > Interface "qg-5f8ebe30-40" > > > > > > > > type: internal > > > > > > > > Port "eth0" > > > > > > > > Interface "eth0" > > > > > > > > Port br-ex > > > > > > > > Interface br-ex > > > > > > > > type: internal > > > > > > > > Bridge br-tun > > > > > > > > Port "vxlan-c0a80520" > > > > > > > > Interface "vxlan-c0a80520" > > > > > > > > type: vxlan > > > > > > > > options: {df_default="true", in_key=flow, > > > local_ip="192.168.5.33", > > > > > > > > out_key=flow, remote_ip="192.168.5.32"} > > > > > > > > Port br-tun > > > > > > > > Interface br-tun > > > > > > > > type: internal > > > > > > > > Port patch-int > > > > > > > > Interface patch-int > > > > > > > > type: patch > > > > > > > > options: {peer=patch-tun} > > > > > > > > ovs_version: "2.3.1" > > > > > > > > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ping 192.168.5.1 > > > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. 
> > > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=1 ttl=64 time=1.76 ms > > > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=2 ttl=64 time=1.88 ms > > > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=3 ttl=64 time=1.45 ms > > > > > > > > ^C > > > > > > > > --- 192.168.5.1 ping statistics --- > > > > > > > > 3 packets transmitted, 3 received, 0% packet loss, time > 2002ms > > > > > > > > rtt min/avg/max/mdev = 1.452/1.699/1.880/0.187 ms > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > > > > > > > > 1: lo: mtu 65536 qdisc noqueue state > > > UNKNOWN > > > > > > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > > > > > > > inet 127.0.0.1/8 scope host lo > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > inet6 ::1/128 scope host > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > 14: qg-5f8ebe30-40: mtu > 1500 > > > qdisc > > > > > > > > noqueue > > > > > > > > state UNKNOWN > > > > > > > > link/ether fa:16:3e:c2:1b:5e brd ff:ff:ff:ff:ff:ff > > > > > > > > inet 192.168.5.70/24 brd 192.168.5.255 scope global > > > qg-5f8ebe30-40 > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > inet6 fe80::f816:3eff:fec2:1b5e/64 scope link > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip r > > > > > > > > default via 192.168.5.1 dev br-ex > > > > > > > > default via 192.168.4.1 dev eth1 > > > > > > > > 169.254.0.0/16 dev eth0 scope link metric 1002 > > > > > > > > 169.254.0.0/16 dev eth1 scope link metric 1003 > > > > > > > > 169.254.0.0/16 dev br-ex scope link metric 1005 > > > > > > > > 192.168.4.0/24 dev eth1 proto kernel scope link src > 192.168.4.14 > > > > > > > > 192.168.5.0/24 dev br-ex proto kernel scope link src > > > 192.168.5.33 > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > > > > > > > > default via 192.168.5.1 dev qg-5f8ebe30-40 > > > > > > > > 192.168.5.0/24 dev qg-5f8ebe30-40 proto kernel scope link > src > > > > > > 192.168.5.70 > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > > > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > > > > > > > > ^C > > > > > > > > --- 192.168.5.1 ping statistics --- > > > > > > > > 5 packets transmitted, 0 received, 100% packet loss, time > 3999ms > > > > > > > > > > > > > > > > any hints?? 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ Rdo-list > mailing > > > list > > > > > > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To > > > > > > > > unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > > > Rdo-list mailing list > > > > > > > > Rdo-list at redhat.com > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > _______________________________________________ > > > > > > > Rdo-list mailing list > > > > > > > Rdo-list at redhat.com > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > Rdo-list mailing list > > > > Rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcornea at redhat.com Tue May 19 13:41:05 2015 From: mcornea at redhat.com (Marius Cornea) Date: Tue, 19 May 2015 09:41:05 -0400 (EDT) Subject: [Rdo-list] Fwd: [Neutron] router can't ping external gateway In-Reply-To: References: <1846855912.1156541.1432034992884.JavaMail.zimbra@redhat.com> <587390647.1184257.1432037565735.JavaMail.zimbra@redhat.com> <1544998486.1195383.1432038728964.JavaMail.zimbra@redhat.com> Message-ID: <2145793261.1252540.1432042865630.JavaMail.zimbra@redhat.com> Hm, do you have promiscuous mode turned on for the port eth0 is connected to ? ----- Original Message ----- > From: "ICHIBA Sara" > To: "Marius Cornea" , rdo-list at redhat.com > Sent: Tuesday, May 19, 2015 2:42:28 PM > Subject: Re: [Rdo-list] Fwd: [Neutron] router can't ping external gateway > > [root at localhost ~]# ip netns exec > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip -s -s neigh flush > 192.168.5.1 > Nothing to flush. 
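If promiscuous mode turns out to be off, it can be enabled (together with the related MAC-spoofing settings) at the vSwitch level — a sketch, again assuming a standard vSwitch named vSwitch0; any port-group-level overrides would need the same treatment:

# vSwitch0 is an assumed name. Accept frames destined for MACs other than
# the vNIC's own, and allow the guest to transmit with the qg-* port's
# fa:16:3e:* source MAC:
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 \
    --allow-promiscuous=true --allow-forged-transmits=true --allow-mac-change=true
# Verify the change took effect:
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
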
> > > [root at pc20 ~]# tcpdump -i eth0 arp > tcpdump: verbose output suppressed, use -v or -vv for full protocol decode > listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes > 14:39:31.292222 ARP, Request who-has PC22.home (Broadcast) tell > livebox.home, length 46 > 14:39:31.293093 ARP, Request who-has livebox.home tell PC20.home, length 28 > 14:39:31.293882 ARP, Reply livebox.home is-at 00:23:48:9e:85:7c (oui > Unknown), length 46 > 14:39:32.300067 ARP, Request who-has PC22.home (Broadcast) tell > livebox.home, length 46 > 14:39:33.310100 ARP, Request who-has PC22.home (Broadcast) tell > livebox.home, length 46 > 14:39:34.320335 ARP, Request who-has PC22.home (Broadcast) tell > livebox.home, length 46 > 14:39:35.330123 ARP, Request who-has PC22.home (Broadcast) tell > livebox.home, length 46 > 14:39:36.289836 ARP, Request who-has PC20.home tell livebox.home, length 46 > 14:39:36.289873 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), > length 28 > 14:39:36.340219 ARP, Request who-has PC22.home (Broadcast) tell > livebox.home, length 46 > 14:39:51.026708 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui > Unknown)) tell 192.168.5.99, length 46 > 14:39:51.026733 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), > length 28 > 14:39:56.027218 ARP, Request who-has livebox.home tell PC20.home, length 28 > 14:39:56.027848 ARP, Reply livebox.home is-at 00:23:48:9e:85:7c (oui > Unknown), length 46 > 14:40:01.035292 ARP, Request who-has 192.168.5.99 tell PC20.home, length 28 > 14:40:01.035925 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui > Unknown), length 46 > 14:40:01.454515 ARP, Request who-has PC22.home (Broadcast) tell > livebox.home, length 46 > 14:40:02.460552 ARP, Request who-has PC22.home (Broadcast) tell > livebox.home, length 46 > 14:40:03.470625 ARP, Request who-has PC22.home (Broadcast) tell > livebox.home, length 46 > 14:40:04.480937 ARP, Request who-has PC22.home (Broadcast) tell > livebox.home, length 46 > 14:40:05.490810 ARP, Request who-has PC22.home (Broadcast) tell > livebox.home, length 46 > 14:40:06.500671 ARP, Request who-has PC22.home (Broadcast) tell > livebox.home, length 46 > 14:40:21.527063 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui > Unknown)) tell 192.168.5.99, length 46 > 14:40:21.527157 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), > length 28 > 14:40:36.747216 ARP, Request who-has 192.168.5.99 tell PC20.home, length 28 > 14:40:36.747765 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui > Unknown), length 46 > 14:40:51.527605 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui > Unknown)) tell 192.168.5.99, length 46 > 14:40:51.527638 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), > length 28 > 14:41:01.729345 ARP, Request who-has PC20.home (Broadcast) tell > livebox.home, length 46 > 14:41:01.729408 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), > length 28 > 14:41:21.528760 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui > Unknown)) tell 192.168.5.99, length 46 > 14:41:21.528792 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), > length 28 > 14:41:26.540361 ARP, Request who-has 192.168.5.99 tell PC20.home, length 28 > 14:41:26.540809 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui > Unknown), length 46 > 14:41:31.900298 ARP, Request who-has PC19.home (Broadcast) tell > livebox.home, length 46 > 14:41:31.950399 ARP, Request who-has PC20.home (Broadcast) tell > livebox.home, length 46 > 14:41:31.950410 ARP, Reply PC20.home is-at 
00:0c:29:9d:02:44 (oui Unknown), > length 28 > 14:41:51.529113 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui > Unknown)) tell 192.168.5.99, length 46 > 14:41:51.529147 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), > length 28 > 14:41:56.539268 ARP, Request who-has 192.168.5.99 tell PC20.home, length 28 > 14:41:56.539912 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui > Unknown), length 46 > 14:42:02.102645 ARP, Request who-has PC19.home (Broadcast) tell > livebox.home, length 46 > > > > > 2015-05-19 14:32 GMT+02:00 Marius Cornea : > > > Delete and check if other computers in the network are receiving > > broadcasts: > > > > ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip -s -s neigh > > flush 192.168.5.1 > > tcpdump -i arp #on one of the computers in the 192.168.5.0 > > network > > ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > > > > See if any ARP requests reach the computer where you run tcpdump. > > > > I'm still thinking about some blocking stuff happening in the vswitch > > since the ICMP requests are sent to the eth0 interface so they should reach > > the vswitch port. > > > > ----- Original Message ----- > > > From: "ICHIBA Sara" > > > To: "Marius Cornea" > > > Cc: rdo-list at redhat.com > > > Sent: Tuesday, May 19, 2015 2:15:06 PM > > > Subject: Re: [Rdo-list] Fwd: [Neutron] router can't ping external gateway > > > > > > [root at localhost ~(keystone_admin)]# ip netns exec > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip n | grep '192.168.5.1 ' > > > 192.168.5.1 dev qg-e1b584b4-db lladdr 00:23:48:9e:85:7c STALE > > > > > > > > > > > > > > > > > > 2015-05-19 14:12 GMT+02:00 Marius Cornea : > > > > > > > Is there an ARP entry for 192.168.5.1 ? > > > > > > > > ip n | grep '192.168.5.1 ' in the router namespace > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > From: "ICHIBA Sara" > > > > > To: rdo-list at redhat.com > > > > > Sent: Tuesday, May 19, 2015 1:42:11 PM > > > > > Subject: [Rdo-list] Fwd: [Neutron] router can't ping external > > gateway > > > > > > > > > > > > > > > ---------- Forwarded message ---------- > > > > > From: ICHIBA Sara < ichi.sara at gmail.com > > > > > > Date: 2015-05-19 13:41 GMT+02:00 > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external gateway > > > > > To: Marius Cornea < mcornea at redhat.com > > > > > > > > > > > > > > > > The forged transmissions on the vswitch are accepted. What's next? > > > > > > > > > > 2015-05-19 13:29 GMT+02:00 Marius Cornea < mcornea at redhat.com > : > > > > > > > > > > > > > > > Oh, ESXi...I remember that the vswitch had some security features in > > > > place. > > > > > You can check those and I think the one that you're looking for is > > called > > > > > forged retransmits. 
> > > > > > > > > > Thanks, > > > > > Marius > > > > > > > > > > ----- Original Message ----- > > > > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > > To: "Marius Cornea" < mcornea at redhat.com > > > > > > > Cc: rdo-list at redhat.com > > > > > > Sent: Tuesday, May 19, 2015 1:17:20 PM > > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external > > gateway > > > > > > > > > > > > the ICMP requests arrives to the eth0 interface > > > > > > [root at localhost ~]# tcpdump -i eth0 icmp > > > > > > tcpdump: WARNING: eth0: no IPv4 address assigned > > > > > > tcpdump: verbose output suppressed, use -v or -vv for full protocol > > > > decode > > > > > > listening on eth0, link-type EN10MB (Ethernet), capture size 65535 > > > > bytes > > > > > > 13:14:13.205573 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > > id > > > > > > 31055, > > > > > > seq 1, length 64 > > > > > > 13:14:14.205303 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > > id > > > > > > 31055, > > > > > > seq 2, length 64 > > > > > > 13:14:15.205391 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > > id > > > > > > 31055, > > > > > > seq 3, length 64 > > > > > > 13:14:16.205397 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > > id > > > > > > 31055, > > > > > > seq 4, length 64 > > > > > > 13:14:17.205408 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > > id > > > > > > 31055, > > > > > > seq 5, length 64 > > > > > > 13:14:18.205412 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > > id > > > > > > 31055, > > > > > > seq 6, length 64 > > > > > > 13:14:19.205392 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > > id > > > > > > 31055, > > > > > > seq 7, length 64 > > > > > > 13:14:20.205357 IP 192.168.5.70 > 192.168.5.1 : ICMP echo request, > > id > > > > > > 31055, > > > > > > seq 8, length 64 > > > > > > 13:14:33.060267 > > > > > > > > > > > > > > > > > > what should I do next? > > > > > > > > > > > > P.S: My compute and controller hosts are ESXi VMs and I can ssh to > > > > both of > > > > > > them without a problem. > > > > > > > > > > > > 2015-05-19 13:00 GMT+02:00 Marius Cornea < mcornea at redhat.com >: > > > > > > > > > > > > > Also, I'm seeing that you have 2 default routes on your host. > > I'm not > > > > > > > sure > > > > > > > it affects the setup but try keeping only one: e.g. 'ip route del > > > > default > > > > > > > via 192.168.4.1' to delete the eth1 one. > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip r > > > > > > > default via 192.168.5.1 dev br-ex > > > > > > > default via 192.168.4.1 dev eth1 > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > > > From: "Marius Cornea" < mcornea at redhat.com > > > > > > > > > To: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > > > > Cc: rdo-list at redhat.com > > > > > > > > Sent: Tuesday, May 19, 2015 12:50:45 PM > > > > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external > > > > gateway > > > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > > > Try to see if any of the ICMP requests leave the eth0 interface > > > > like > > > > > > > 'tcpdump > > > > > > > > -i eth0 icmp' while pinging 192.168.5.1 from the router > > namespace. 
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Marius
> > > > > > > >
> > > > > > > > ----- Original Message -----
> > > > > > > > > From: "ICHIBA Sara" < ichi.sara at gmail.com >
> > > > > > > > > To: "Boris Derzhavets" < bderzhavets at hotmail.com >, rdo-list at redhat.com
> > > > > > > > > Sent: Tuesday, May 19, 2015 12:12:30 PM
> > > > > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external gateway
> > > > > > > > >
> > > > > > > > > ====updates
> > > > > > > > >
> > > > > > > > > I have deleted my networks, rebooted my machines and configured another
> > > > > > > > > network. Now I can see the qr bridge mapped to the router, but still can't
> > > > > > > > > ping the external gateway:
> > > > > > > > >
> > > > > > > > > ====[root at localhost ~(keystone_admin)]# ip netns exec
> > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r
> > > > > > > > > default via 192.168.5.1 dev qg-e1b584b4-db
> > > > > > > > > 10.0.0.0/24 dev qr-7b330e0e-5c proto kernel scope link src 10.0.0.1
> > > > > > > > > 192.168.5.0/24 dev qg-e1b584b4-db proto kernel scope link src 192.168.5.70
> > > > > > > > >
> > > > > > > > > ====[root at localhost ~(keystone_admin)]# ip netns exec
> > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a
> > > > > > > > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> > > > > > > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > > > > > > inet 127.0.0.1/8 scope host lo
> > > > > > > > > valid_lft forever preferred_lft forever
> > > > > > > > > inet6 ::1/128 scope host
> > > > > > > > > valid_lft forever preferred_lft forever
> > > > > > > > > 12: qg-e1b584b4-db: mtu 1500 qdisc noqueue state UNKNOWN
> > > > > > > > > link/ether fa:16:3e:68:83:f8 brd ff:ff:ff:ff:ff:ff
> > > > > > > > > inet 192.168.5.70/24 brd 192.168.5.255 scope global qg-e1b584b4-db
> > > > > > > > > valid_lft forever preferred_lft forever
> > > > > > > > > inet 192.168.5.73/32 brd 192.168.5.73 scope global qg-e1b584b4-db
> > > > > > > > > valid_lft forever preferred_lft forever
> > > > > > > > > inet6 fe80::f816:3eff:fe68:83f8/64 scope link
> > > > > > > > > valid_lft forever preferred_lft forever
> > > > > > > > > 13: qr-7b330e0e-5c: mtu 1500 qdisc noqueue state UNKNOWN
> > > > > > > > > link/ether fa:16:3e:92:9c:90 brd ff:ff:ff:ff:ff:ff
> > > > > > > > > inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-7b330e0e-5c
> > > > > > > > > valid_lft forever preferred_lft forever
> > > > > > > > > inet6 fe80::f816:3eff:fe92:9c90/64 scope link
> > > > > > > > > valid_lft forever preferred_lft forever
> > > > > > > > >
> > > > > > > > > =====[root at localhost ~(keystone_admin)]# ip netns exec
> > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1
> > > > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data.
> > > > > > > > > From 192.168.5.70 icmp_seq=10 Destination Host Unreachable
> > > > > > > > > From 192.168.5.70 icmp_seq=11 Destination Host Unreachable
> > > > > > > > > From 192.168.5.70 icmp_seq=12 Destination Host Unreachable
> > > > > > > > > From 192.168.5.70 icmp_seq=13 Destination Host Unreachable
> > > > > > > > > From 192.168.5.70 icmp_seq=14 Destination Host Unreachable
> > > > > > > > > From 192.168.5.70 icmp_seq=15 Destination Host Unreachable
> > > > > > > > > From 192.168.5.70 icmp_seq=16 Destination Host Unreachable
> > > > > > > > > From 192.168.5.70 icmp_seq=17 Destination Host Unreachable
> > > > > > > > >
> > > > > > > > > =====[root at localhost ~(keystone_admin)]# ovs-vsctl show
> > > > > > > > > 19de58db-509d-4de8-bd88-9222019b13f1
> > > > > > > > > Bridge br-int
> > > > > > > > > fail_mode: secure
> > > > > > > > > Port "tap2decc1bc-bf"
> > > > > > > > > tag: 2
> > > > > > > > > Interface "tap2decc1bc-bf"
> > > > > > > > > type: internal
> > > > > > > > > Port br-int
> > > > > > > > > Interface br-int
> > > > > > > > > type: internal
> > > > > > > > > Port patch-tun
> > > > > > > > > Interface patch-tun
> > > > > > > > > type: patch
> > > > > > > > > options: {peer=patch-int}
> > > > > > > > > Port "qr-7b330e0e-5c"
> > > > > > > > > tag: 2
> > > > > > > > > Interface "qr-7b330e0e-5c"
> > > > > > > > > type: internal
> > > > > > > > > Port "qvo164afbd4-0c"
> > > > > > > > > tag: 2
> > > > > > > > > Interface "qvo164afbd4-0c"
> > > > > > > > > Bridge br-ex
> > > > > > > > > Port "eth0"
> > > > > > > > > Interface "eth0"
> > > > > > > > > Port br-ex
> > > > > > > > > Interface br-ex
> > > > > > > > > type: internal
> > > > > > > > > Port "qg-e1b584b4-db"
> > > > > > > > > Interface "qg-e1b584b4-db"
> > > > > > > > > type: internal
> > > > > > > > > Bridge br-tun
> > > > > > > > > Port br-tun
> > > > > > > > > Interface br-tun
> > > > > > > > > type: internal
> > > > > > > > > Port "vxlan-c0a80520"
> > > > > > > > > Interface "vxlan-c0a80520"
> > > > > > > > > type: vxlan
> > > > > > > > > options: {df_default="true", in_key=flow, local_ip="192.168.5.33", out_key=flow, remote_ip="192.168.5.32"}
> > > > > > > > > Port patch-int
> > > > > > > > > Interface patch-int
> > > > > > > > > type: patch
> > > > > > > > > options: {peer=patch-tun}
> > > > > > > > > ovs_version: "2.3.1"
> > > > > > > > >
> > > > > > > > > 2015-05-19 11:58 GMT+02:00 ICHIBA Sara < ichi.sara at gmail.com >:
> > > > > > > > >
> > > > > > > > > can you show me your plugin.ini file? /etc/neutron/plugin.ini and the other
> > > > > > > > > file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
> > > > > > > > >
> > > > > > > > > 2015-05-19 10:47 GMT+02:00 Boris Derzhavets < bderzhavets at hotmail.com >:
> > > > > > > > >
> > > > > > > > > There is one thing which I clearly see: it is a qrouter-namespace
> > > > > > > > > misconfiguration. There is no qr-xxxxx bridge attached to br-int.
> > > > > > > > > The picture, in general, should look like this:
> > > > > > > > >
> > > > > > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec
> > > > > > > > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c route -n
> > > > > > > > >
> > > > > > > > > Kernel IP routing table
> > > > > > > > > Destination Gateway Genmask Flags Metric Ref Use Iface
> > > > > > > > > 0.0.0.0 192.168.12.15 0.0.0.0 UG 0 0 0 qg-a753a8f5-c8
> > > > > > > > > 10.254.1.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-393d9f71-53
> > > > > > > > > 192.168.12.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-a753a8f5-c8
> > > > > > > > >
> > > > > > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec
> > > > > > > > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c ifconfig
> > > > > > > > > lo Link encap:Local Loopback
> > > > > > > > > inet addr:127.0.0.1 Mask:255.0.0.0
> > > > > > > > > inet6 addr: ::1/128 Scope:Host
> > > > > > > > > UP LOOPBACK RUNNING MTU:65536 Metric:1
> > > > > > > > > RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> > > > > > > > > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
> > > > > > > > > collisions:0 txqueuelen:0
> > > > > > > > > RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
> > > > > > > > >
> > > > > > > > > qg-a753a8f5-c8 Link encap:Ethernet HWaddr fa:16:3e:a2:11:b4
> > > > > > > > > inet addr:192.168.12.150 Bcast:192.168.12.255 Mask:255.255.255.0
> > > > > > > > > inet6 addr: fe80::f816:3eff:fea2:11b4/64 Scope:Link
> > > > > > > > > UP BROADCAST RUNNING MTU:1500 Metric:1
> > > > > > > > > RX packets:24504 errors:0 dropped:0 overruns:0 frame:0
> > > > > > > > > TX packets:17367 errors:0 dropped:0 overruns:0 carrier:0
> > > > > > > > > collisions:0 txqueuelen:0
> > > > > > > > > RX bytes:24328699 (24.3 MB) TX bytes:1443691 (1.4 MB)
> > > > > > > > >
> > > > > > > > > qr-393d9f71-53 Link encap:Ethernet HWaddr fa:16:3e:9e:ec:01
> > > > > > > > > inet addr:10.254.1.1 Bcast:10.254.1.255 Mask:255.255.255.0
> > > > > > > > > inet6 addr: fe80::f816:3eff:fe9e:ec01/64 Scope:Link
> > > > > > > > > UP BROADCAST RUNNING MTU:1500 Metric:1
> > > > > > > > > RX packets:22487 errors:0 dropped:5 overruns:0 frame:0
> > > > > > > > > TX packets:24736 errors:0 dropped:0 overruns:0 carrier:0
> > > > > > > > > collisions:0 txqueuelen:0
> > > > > > > > > RX bytes:2379287 (2.3 MB) TX bytes:24338711 (24.3 MB)
> > > > > > > > >
> > > > > > > > > I would also advise you to post a question on ask.openstack.org
> > > > > > > > >
> > > > > > > > > Boris.
> > > > > > > > >
> > > > > > > > > Date: Tue, 19 May 2015 09:48:58 +0200
> > > > > > > > > From: ichi.sara at gmail.com
> > > > > > > > > To: rdo-list at redhat.com
> > > > > > > > > Subject: [Rdo-list] [Neutron] router can't ping external gateway
> > > > > > > > >
> > > > > > > > > Hey people,
> > > > > > > > > I have an issue with my networking. I connected my OpenStack to an external
> > > > > > > > > network and did all the changes required. But still my router can't reach
> > > > > > > > > the external gateway.
> > > > > > > > > > > > > > > > > > =====ifcfg-br-ex > > > > > > > > > DEVICE=br-ex > > > > > > > > > DEVICETYPE=ovs > > > > > > > > > TYPE=OVSBridge > > > > > > > > > BOOTPROTO=static > > > > > > > > > IPADDR=192.168.5.33 > > > > > > > > > NETMASK=255.255.255.0 > > > > > > > > > ONBOOT=yes > > > > > > > > > GATEWAY=192.168.5.1 > > > > > > > > > DNS1=8.8.8.8 > > > > > > > > > DNS2=192.168.5.1 > > > > > > > > > > > > > > > > > > > > > > > > > > > ====ifcfg-eth0 > > > > > > > > > DEVICE=eth0 > > > > > > > > > HWADDR=00:0c:29:a2:b1:b9 > > > > > > > > > ONBOOT=yes > > > > > > > > > TYPE=OVSPort > > > > > > > > > NM_CONTROLLED=yes > > > > > > > > > DEVICETYPE=ovs > > > > > > > > > OVS_BRIDGE=br-ex > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ovs-vsctl show > > > > > > > > > 19de58db-509d-4de8-bd88-9222019b13f1 > > > > > > > > > Bridge br-int > > > > > > > > > fail_mode: secure > > > > > > > > > Port "tap8652132e-b8" > > > > > > > > > tag: 1 > > > > > > > > > Interface "tap8652132e-b8" > > > > > > > > > type: internal > > > > > > > > > Port br-int > > > > > > > > > Interface br-int > > > > > > > > > type: internal > > > > > > > > > Port patch-tun > > > > > > > > > Interface patch-tun > > > > > > > > > type: patch > > > > > > > > > options: {peer=patch-int} > > > > > > > > > Bridge br-ex > > > > > > > > > Port "qg-5f8ebe30-40" > > > > > > > > > Interface "qg-5f8ebe30-40" > > > > > > > > > type: internal > > > > > > > > > Port "eth0" > > > > > > > > > Interface "eth0" > > > > > > > > > Port br-ex > > > > > > > > > Interface br-ex > > > > > > > > > type: internal > > > > > > > > > Bridge br-tun > > > > > > > > > Port "vxlan-c0a80520" > > > > > > > > > Interface "vxlan-c0a80520" > > > > > > > > > type: vxlan > > > > > > > > > options: {df_default="true", in_key=flow, > > > > local_ip="192.168.5.33", > > > > > > > > > out_key=flow, remote_ip="192.168.5.32"} > > > > > > > > > Port br-tun > > > > > > > > > Interface br-tun > > > > > > > > > type: internal > > > > > > > > > Port patch-int > > > > > > > > > Interface patch-int > > > > > > > > > type: patch > > > > > > > > > options: {peer=patch-tun} > > > > > > > > > ovs_version: "2.3.1" > > > > > > > > > > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ping 192.168.5.1 > > > > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. 
> > > > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=1 ttl=64 time=1.76 ms > > > > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=2 ttl=64 time=1.88 ms > > > > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=3 ttl=64 time=1.45 ms > > > > > > > > > ^C > > > > > > > > > --- 192.168.5.1 ping statistics --- > > > > > > > > > 3 packets transmitted, 3 received, 0% packet loss, time > > 2002ms > > > > > > > > > rtt min/avg/max/mdev = 1.452/1.699/1.880/0.187 ms > > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > > > > > > > > > 1: lo: mtu 65536 qdisc noqueue state > > > > UNKNOWN > > > > > > > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > > > > > > > > inet 127.0.0.1/8 scope host lo > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > inet6 ::1/128 scope host > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > 14: qg-5f8ebe30-40: mtu > > 1500 > > > > qdisc > > > > > > > > > noqueue > > > > > > > > > state UNKNOWN > > > > > > > > > link/ether fa:16:3e:c2:1b:5e brd ff:ff:ff:ff:ff:ff > > > > > > > > > inet 192.168.5.70/24 brd 192.168.5.255 scope global > > > > qg-5f8ebe30-40 > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > inet6 fe80::f816:3eff:fec2:1b5e/64 scope link > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip r > > > > > > > > > default via 192.168.5.1 dev br-ex > > > > > > > > > default via 192.168.4.1 dev eth1 > > > > > > > > > 169.254.0.0/16 dev eth0 scope link metric 1002 > > > > > > > > > 169.254.0.0/16 dev eth1 scope link metric 1003 > > > > > > > > > 169.254.0.0/16 dev br-ex scope link metric 1005 > > > > > > > > > 192.168.4.0/24 dev eth1 proto kernel scope link src > > 192.168.4.14 > > > > > > > > > 192.168.5.0/24 dev br-ex proto kernel scope link src > > > > 192.168.5.33 > > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > > > > > > > > > default via 192.168.5.1 dev qg-5f8ebe30-40 > > > > > > > > > 192.168.5.0/24 dev qg-5f8ebe30-40 proto kernel scope link > > src > > > > > > > 192.168.5.70 > > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1 > > > > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > > > > > > > > > ^C > > > > > > > > > --- 192.168.5.1 ping statistics --- > > > > > > > > > 5 packets transmitted, 0 received, 100% packet loss, time > > 3999ms > > > > > > > > > > > > > > > > > > any hints?? 
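[Note: the diagnostic Marius suggests above boils down to the following
shell sketch. It assumes the router namespace from this thread
(qrouter-85fa9459-503d-4996-86f3-6042604fed74), the external gateway
192.168.5.1, and an observer host whose NIC is named eth0; adjust the
names for your own setup.

    # network node: clear any stale ARP entry for the gateway
    ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 \
        ip -s -s neigh flush 192.168.5.1

    # another host on 192.168.5.0/24: watch for the router's ARP requests
    tcpdump -n -i eth0 arp

    # network node again: ping the gateway from inside the router namespace
    ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 \
        ping -c 5 192.168.5.1

    # and, per the earlier advice, drop the duplicate default route
    ip route del default via 192.168.4.1 dev eth1

If the echo requests show up on the controller's eth0 (as in Sara's
tcpdump) but no ARP request from 192.168.5.70 ever reaches the other
hosts, the frames are being dropped between the OVS bridge and the
physical network, which is what points at the hypervisor's vswitch.]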
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ichi.sara at gmail.com  Tue May 19 13:53:20 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Tue, 19 May 2015 15:53:20 +0200
Subject: [Rdo-list] Fwd: Fwd: [Neutron] router can't ping external gateway
In-Reply-To: 
References: <1846855912.1156541.1432034992884.JavaMail.zimbra@redhat.com>
 <587390647.1184257.1432037565735.JavaMail.zimbra@redhat.com>
 <1544998486.1195383.1432038728964.JavaMail.zimbra@redhat.com>
 <2145793261.1252540.1432042865630.JavaMail.zimbra@redhat.com>
Message-ID: 

---------- Forwarded message ----------
From: ICHIBA Sara
Date: 2015-05-19 15:53 GMT+02:00
Subject: Re: [Rdo-list] Fwd: [Neutron] router can't ping external gateway
To: Marius Cornea

No, I don't:

[root at localhost ~(keystone_admin)]# ifconfig eth0 | grep -i up
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500

2015-05-19 15:41 GMT+02:00 Marius Cornea :

> Hm, do you have promiscuous mode turned on for the port eth0 is connected
> to ?
>
> ----- Original Message -----
> > From: "ICHIBA Sara"
> > To: "Marius Cornea" , rdo-list at redhat.com
> > Sent: Tuesday, May 19, 2015 2:42:28 PM
> > Subject: Re: [Rdo-list] Fwd: [Neutron] router can't ping external gateway
> >
> > [root at localhost ~]# ip netns exec
> > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip -s -s neigh flush
> > 192.168.5.1
> > Nothing to flush.
> > [snip -- the rest of the quoted thread repeats, verbatim, the messages
> > quoted earlier in this thread]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
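[Note: the "forged transmits" and promiscuous-mode settings discussed in
this thread live in the security policy of the ESXi vSwitch (or port
group) backing the VM's NIC. As a rough sketch for a standard vSwitch
named vSwitch0 -- double-check the flag names against your ESXi version
before relying on them:

    # inspect the current security policy of the vswitch
    esxcli network vswitch standard policy security get -v vSwitch0

    # allow promiscuous mode, forged transmits and MAC changes, which the
    # Neutron router's extra MAC/IP pairs generally need on the external side
    esxcli network vswitch standard policy security set -v vSwitch0 \
        --allow-promiscuous true --allow-forged-transmits true \
        --allow-mac-change true

The same toggles are exposed in the vSphere client under the vSwitch's
Security settings.]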
From mcornea at redhat.com  Tue May 19 13:57:16 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Tue, 19 May 2015 09:57:16 -0400 (EDT)
Subject: [Rdo-list] Fwd: Fwd: [Neutron] router can't ping external gateway
In-Reply-To: 
References: <587390647.1184257.1432037565735.JavaMail.zimbra@redhat.com>
 <1544998486.1195383.1432038728964.JavaMail.zimbra@redhat.com>
 <2145793261.1252540.1432042865630.JavaMail.zimbra@redhat.com>
Message-ID: <704340030.1268598.1432043836972.JavaMail.zimbra@redhat.com>

Try enabling promiscuous mode (along with forged transmits) on the vswitch
port that eth0 is connected to and see how it goes.

----- Original Message -----
> From: "ICHIBA Sara"
> To: rdo-list at redhat.com
> Sent: Tuesday, May 19, 2015 3:53:20 PM
> Subject: [Rdo-list] Fwd: Fwd: [Neutron] router can't ping external gateway
>
> ---------- Forwarded message ----------
> From: ICHIBA Sara < ichi.sara at gmail.com >
> Date: 2015-05-19 15:53 GMT+02:00
> Subject: Re: [Rdo-list] Fwd: [Neutron] router can't ping external gateway
> To: Marius Cornea < mcornea at redhat.com >
>
> No, I don't:
>
> [root at localhost ~(keystone_admin)]# ifconfig eth0 | grep -i up
> eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
>
> 2015-05-19 15:41 GMT+02:00 Marius Cornea < mcornea at redhat.com > :
>
> > Hm, do you have promiscuous mode turned on for the port eth0 is connected
> > to ?
> >
> > [snip -- the rest of the quoted thread repeats, verbatim, the messages
> > quoted earlier in this thread]
> > > > > > [root at pc20 ~]# tcpdump -i eth0 arp > > tcpdump: verbose output suppressed, use -v or -vv for full protocol decode > > listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes > > 14:39:31.292222 ARP, Request who-has PC22.home (Broadcast) tell > > livebox.home, length 46 > > 14:39:31.293093 ARP, Request who-has livebox.home tell PC20.home, length 28 > > 14:39:31.293882 ARP, Reply livebox.home is-at 00:23:48:9e:85:7c (oui > > Unknown), length 46 > > 14:39:32.300067 ARP, Request who-has PC22.home (Broadcast) tell > > livebox.home, length 46 > > 14:39:33.310100 ARP, Request who-has PC22.home (Broadcast) tell > > livebox.home, length 46 > > 14:39:34.320335 ARP, Request who-has PC22.home (Broadcast) tell > > livebox.home, length 46 > > 14:39:35.330123 ARP, Request who-has PC22.home (Broadcast) tell > > livebox.home, length 46 > > 14:39:36.289836 ARP, Request who-has PC20.home tell livebox.home, length 46 > > 14:39:36.289873 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), > > length 28 > > 14:39:36.340219 ARP, Request who-has PC22.home (Broadcast) tell > > livebox.home, length 46 > > 14:39:51.026708 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui > > Unknown)) tell 192.168.5.99, length 46 > > 14:39:51.026733 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), > > length 28 > > 14:39:56.027218 ARP, Request who-has livebox.home tell PC20.home, length 28 > > 14:39:56.027848 ARP, Reply livebox.home is-at 00:23:48:9e:85:7c (oui > > Unknown), length 46 > > 14:40:01.035292 ARP, Request who-has 192.168.5.99 tell PC20.home, length 28 > > 14:40:01.035925 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui > > Unknown), length 46 > > 14:40:01.454515 ARP, Request who-has PC22.home (Broadcast) tell > > livebox.home, length 46 > > 14:40:02.460552 ARP, Request who-has PC22.home (Broadcast) tell > > livebox.home, length 46 > > 14:40:03.470625 ARP, Request who-has PC22.home (Broadcast) tell > > livebox.home, length 46 > > 14:40:04.480937 ARP, Request who-has PC22.home (Broadcast) tell > > livebox.home, length 46 > > 14:40:05.490810 ARP, Request who-has PC22.home (Broadcast) tell > > livebox.home, length 46 > > 14:40:06.500671 ARP, Request who-has PC22.home (Broadcast) tell > > livebox.home, length 46 > > 14:40:21.527063 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui > > Unknown)) tell 192.168.5.99, length 46 > > 14:40:21.527157 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), > > length 28 > > 14:40:36.747216 ARP, Request who-has 192.168.5.99 tell PC20.home, length 28 > > 14:40:36.747765 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui > > Unknown), length 46 > > 14:40:51.527605 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui > > Unknown)) tell 192.168.5.99, length 46 > > 14:40:51.527638 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), > > length 28 > > 14:41:01.729345 ARP, Request who-has PC20.home (Broadcast) tell > > livebox.home, length 46 > > 14:41:01.729408 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), > > length 28 > > 14:41:21.528760 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui > > Unknown)) tell 192.168.5.99, length 46 > > 14:41:21.528792 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), > > length 28 > > 14:41:26.540361 ARP, Request who-has 192.168.5.99 tell PC20.home, length 28 > > 14:41:26.540809 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui > > Unknown), length 46 > > 14:41:31.900298 ARP, Request who-has PC19.home (Broadcast) tell > > livebox.home, length 46 > > 
14:41:31.950399 ARP, Request who-has PC20.home (Broadcast) tell > > livebox.home, length 46 > > 14:41:31.950410 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), > > length 28 > > 14:41:51.529113 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui > > Unknown)) tell 192.168.5.99, length 46 > > 14:41:51.529147 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), > > length 28 > > 14:41:56.539268 ARP, Request who-has 192.168.5.99 tell PC20.home, length 28 > > 14:41:56.539912 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui > > Unknown), length 46 > > 14:42:02.102645 ARP, Request who-has PC19.home (Broadcast) tell > > livebox.home, length 46 > > > > > > > > > > 2015-05-19 14:32 GMT+02:00 Marius Cornea < mcornea at redhat.com >: > > > > > Delete and check if other computers in the network are receiving > > > broadcasts: > > > > > > ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip -s -s neigh > > > flush 192.168.5.1 > > > tcpdump -i arp #on one of the computers in the 192.168.5.0 > > > network > > > ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping > > > 192.168.5.1 > > > > > > See if any ARP requests reach the computer where you run tcpdump. > > > > > > I'm still thinking about some blocking stuff happening in the vswitch > > > since the ICMP requests are sent to the eth0 interface so they should > > > reach > > > the vswitch port. > > > > > > ----- Original Message ----- > > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > To: "Marius Cornea" < mcornea at redhat.com > > > > > Cc: rdo-list at redhat.com > > > > Sent: Tuesday, May 19, 2015 2:15:06 PM > > > > Subject: Re: [Rdo-list] Fwd: [Neutron] router can't ping external > > > > gateway > > > > > > > > [root at localhost ~(keystone_admin)]# ip netns exec > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip n | grep '192.168.5.1 ' > > > > 192.168.5.1 dev qg-e1b584b4-db lladdr 00:23:48:9e:85:7c STALE > > > > > > > > > > > > > > > > > > > > > > > > 2015-05-19 14:12 GMT+02:00 Marius Cornea < mcornea at redhat.com >: > > > > > > > > > Is there an ARP entry for 192.168.5.1 ? > > > > > > > > > > ip n | grep '192.168.5.1 ' in the router namespace > > > > > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > > To: rdo-list at redhat.com > > > > > > Sent: Tuesday, May 19, 2015 1:42:11 PM > > > > > > Subject: [Rdo-list] Fwd: [Neutron] router can't ping external > > > gateway > > > > > > > > > > > > > > > > > > ---------- Forwarded message ---------- > > > > > > From: ICHIBA Sara < ichi.sara at gmail.com > > > > > > > Date: 2015-05-19 13:41 GMT+02:00 > > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external > > > > > > gateway > > > > > > To: Marius Cornea < mcornea at redhat.com > > > > > > > > > > > > > > > > > > > The forged transmissions on the vswitch are accepted. What's next? > > > > > > > > > > > > 2015-05-19 13:29 GMT+02:00 Marius Cornea < mcornea at redhat.com > : > > > > > > > > > > > > > > > > > > Oh, ESXi...I remember that the vswitch had some security features > > > > > > in > > > > > place. > > > > > > You can check those and I think the one that you're looking for is > > > called > > > > > > forged retransmits. 
> > > > > > > > > > > > Thanks, > > > > > > Marius > > > > > > > > > > > > ----- Original Message ----- > > > > > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > > > To: "Marius Cornea" < mcornea at redhat.com > > > > > > > > Cc: rdo-list at redhat.com > > > > > > > Sent: Tuesday, May 19, 2015 1:17:20 PM > > > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external > > > gateway > > > > > > > > > > > > > > the ICMP requests arrives to the eth0 interface > > > > > > > [root at localhost ~]# tcpdump -i eth0 icmp > > > > > > > tcpdump: WARNING: eth0: no IPv4 address assigned > > > > > > > tcpdump: verbose output suppressed, use -v or -vv for full > > > > > > > protocol > > > > > decode > > > > > > > listening on eth0, link-type EN10MB (Ethernet), capture size > > > > > > > 65535 > > > > > bytes > > > > > > > 13:14:13.205573 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > request, > > > id > > > > > > > 31055, > > > > > > > seq 1, length 64 > > > > > > > 13:14:14.205303 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > request, > > > id > > > > > > > 31055, > > > > > > > seq 2, length 64 > > > > > > > 13:14:15.205391 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > request, > > > id > > > > > > > 31055, > > > > > > > seq 3, length 64 > > > > > > > 13:14:16.205397 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > request, > > > id > > > > > > > 31055, > > > > > > > seq 4, length 64 > > > > > > > 13:14:17.205408 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > request, > > > id > > > > > > > 31055, > > > > > > > seq 5, length 64 > > > > > > > 13:14:18.205412 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > request, > > > id > > > > > > > 31055, > > > > > > > seq 6, length 64 > > > > > > > 13:14:19.205392 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > request, > > > id > > > > > > > 31055, > > > > > > > seq 7, length 64 > > > > > > > 13:14:20.205357 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > request, > > > id > > > > > > > 31055, > > > > > > > seq 8, length 64 > > > > > > > 13:14:33.060267 > > > > > > > > > > > > > > > > > > > > > what should I do next? > > > > > > > > > > > > > > P.S: My compute and controller hosts are ESXi VMs and I can ssh > > > > > > > to > > > > > both of > > > > > > > them without a problem. > > > > > > > > > > > > > > 2015-05-19 13:00 GMT+02:00 Marius Cornea < mcornea at redhat.com >: > > > > > > > > > > > > > > > Also, I'm seeing that you have 2 default routes on your host. > > > I'm not > > > > > > > > sure > > > > > > > > it affects the setup but try keeping only one: e.g. 'ip route > > > > > > > > del > > > > > default > > > > > > > > via 192.168.4.1' to delete the eth1 one. 
> > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip r > > > > > > > > default via 192.168.5.1 dev br-ex > > > > > > > > default via 192.168.4.1 dev eth1 > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > > > > From: "Marius Cornea" < mcornea at redhat.com > > > > > > > > > > To: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > > > > > Cc: rdo-list at redhat.com > > > > > > > > > Sent: Tuesday, May 19, 2015 12:50:45 PM > > > > > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external > > > > > gateway > > > > > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > > > > > Try to see if any of the ICMP requests leave the eth0 > > > > > > > > > interface > > > > > like > > > > > > > > 'tcpdump > > > > > > > > > -i eth0 icmp' while pinging 192.168.5.1 from the router > > > namespace. > > > > > > > > > > > > > > > > > > Thanks, > > > > > > > > > Marius > > > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > > > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > > > > > > To: "Boris Derzhavets" < bderzhavets at hotmail.com >, > > > > > > > > > > rdo-list at redhat.com > > > > > > > > > > Sent: Tuesday, May 19, 2015 12:12:30 PM > > > > > > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping > > > > > > > > > > external > > > > > gateway > > > > > > > > > > > > > > > > > > > > ====updates > > > > > > > > > > > > > > > > > > > > I have deleted my networks, rebooted my machines and > > > configured > > > > > an > > > > > > > > other > > > > > > > > > > network. Now I can see the qr bridge mapped to the router > > > > > > > > > > but > > > > > still > > > > > > > > can't > > > > > > > > > > ping the external gateway: > > > > > > > > > > > > > > > > > > > > ====[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > > > > > > > > > > default via 192.168.5.1 dev qg-e1b584b4-db > > > > > > > > > > 10.0.0.0/24 dev qr-7b330e0e-5c proto kernel scope link src > > > > > 10.0.0.1 > > > > > > > > > > 192.168.5.0/24 dev qg-e1b584b4-db proto kernel scope link > > > src > > > > > > > > 192.168.5.70 > > > > > > > > > > > > > > > > > > > > ====[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > > > > > > > > > > 1: lo: mtu 65536 qdisc noqueue state > > > > > UNKNOWN > > > > > > > > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > > > > > > > > > inet 127.0.0.1/8 scope host lo > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > inet6 ::1/128 scope host > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > 12: qg-e1b584b4-db: mtu > > > 1500 > > > > > qdisc > > > > > > > > > > noqueue > > > > > > > > > > state UNKNOWN > > > > > > > > > > link/ether fa:16:3e:68:83:f8 brd ff:ff:ff:ff:ff:ff > > > > > > > > > > inet 192.168.5.70/24 brd 192.168.5.255 scope global > > > > > qg-e1b584b4-db > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > inet 192.168.5.73/32 brd 192.168.5.73 scope global > > > > > qg-e1b584b4-db > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > inet6 fe80::f816:3eff:fe68:83f8/64 scope link > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > 13: qr-7b330e0e-5c: mtu > > > 1500 > > > > > qdisc > > > > > > > > > > noqueue > > > > > > > > > > state UNKNOWN > > > > > > > > > > link/ether fa:16:3e:92:9c:90 brd 
ff:ff:ff:ff:ff:ff > > > > > > > > > > inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-7b330e0e-5c > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > inet6 fe80::f816:3eff:fe92:9c90/64 scope link > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping > > > > > > > > > > 192.168.5.1 > > > > > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > > > > > > > > > > From 192.168.5.70 icmp_seq=10 Destination Host Unreachable > > > > > > > > > > From 192.168.5.70 icmp_seq=11 Destination Host Unreachable > > > > > > > > > > From 192.168.5.70 icmp_seq=12 Destination Host Unreachable > > > > > > > > > > From 192.168.5.70 icmp_seq=13 Destination Host Unreachable > > > > > > > > > > From 192.168.5.70 icmp_seq=14 Destination Host Unreachable > > > > > > > > > > From 192.168.5.70 icmp_seq=15 Destination Host Unreachable > > > > > > > > > > From 192.168.5.70 icmp_seq=16 Destination Host Unreachable > > > > > > > > > > From 192.168.5.70 icmp_seq=17 Destination Host Unreachable > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ovs-vsctl show > > > > > > > > > > 19de58db-509d-4de8-bd88-9222019b13f1 > > > > > > > > > > Bridge br-int > > > > > > > > > > fail_mode: secure > > > > > > > > > > Port "tap2decc1bc-bf" > > > > > > > > > > tag: 2 > > > > > > > > > > Interface "tap2decc1bc-bf" > > > > > > > > > > type: internal > > > > > > > > > > Port br-int > > > > > > > > > > Interface br-int > > > > > > > > > > type: internal > > > > > > > > > > Port patch-tun > > > > > > > > > > Interface patch-tun > > > > > > > > > > type: patch > > > > > > > > > > options: {peer=patch-int} > > > > > > > > > > Port "qr-7b330e0e-5c" > > > > > > > > > > tag: 2 > > > > > > > > > > Interface "qr-7b330e0e-5c" > > > > > > > > > > type: internal > > > > > > > > > > Port "qvo164afbd4-0c" > > > > > > > > > > tag: 2 > > > > > > > > > > Interface "qvo164afbd4-0c" > > > > > > > > > > Bridge br-ex > > > > > > > > > > Port "eth0" > > > > > > > > > > Interface "eth0" > > > > > > > > > > Port br-ex > > > > > > > > > > Interface br-ex > > > > > > > > > > type: internal > > > > > > > > > > Port "qg-e1b584b4-db" > > > > > > > > > > Interface "qg-e1b584b4-db" > > > > > > > > > > type: internal > > > > > > > > > > Bridge br-tun > > > > > > > > > > Port br-tun > > > > > > > > > > Interface br-tun > > > > > > > > > > type: internal > > > > > > > > > > Port "vxlan-c0a80520" > > > > > > > > > > Interface "vxlan-c0a80520" > > > > > > > > > > type: vxlan > > > > > > > > > > options: {df_default="true", in_key=flow, > > > > > local_ip="192.168.5.33", > > > > > > > > > > out_key=flow, remote_ip="192.168.5.32"} > > > > > > > > > > Port patch-int > > > > > > > > > > Interface patch-int > > > > > > > > > > type: patch > > > > > > > > > > options: {peer=patch-tun} > > > > > > > > > > ovs_version: "2.3.1" > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 2015-05-19 11:58 GMT+02:00 ICHIBA Sara < > > > > > > > > > > ichi.sara at gmail.com > > > > : > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > can you show me your plugin.ini file? 
> > > > > > > > > > /etc/neutron/plugin.ini > > > > > and the > > > > > > > > other > > > > > > > > > > file > > > > > > > > > > /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 2015-05-19 10:47 GMT+02:00 Boris Derzhavets < > > > > > bderzhavets at hotmail.com > > > > > > > > > : > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > There is one thing , which I clearly see . It is > > > > > qrouter-namespace > > > > > > > > > > misconfiguration. There is no qr-xxxxx bridge attached to > > > br-int > > > > > > > > > > Picture , in general, should look like this > > > > > > > > > > > > > > > > > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec > > > > > > > > > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c route -n > > > > > > > > > > > > > > > > > > > > Kernel IP routing table > > > > > > > > > > Destination Gateway Genmask Flags Metric Ref Use Iface > > > > > > > > > > 0.0.0.0 192.168.12.15 0.0.0.0 UG 0 0 0 qg-a753a8f5-c8 > > > > > > > > > > 10.254.1.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-393d9f71-53 > > > > > > > > > > 192.168.12.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-a753a8f5-c8 > > > > > > > > > > > > > > > > > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec > > > > > > > > > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c ifconfig > > > > > > > > > > lo Link encap:Local Loopback > > > > > > > > > > inet addr:127.0.0.1 Mask:255.0.0.0 > > > > > > > > > > inet6 addr: ::1/128 Scope:Host > > > > > > > > > > UP LOOPBACK RUNNING MTU:65536 Metric:1 > > > > > > > > > > RX packets:0 errors:0 dropped:0 overruns:0 frame:0 > > > > > > > > > > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 > > > > > > > > > > collisions:0 txqueuelen:0 > > > > > > > > > > RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) > > > > > > > > > > > > > > > > > > > > qg-a753a8f5-c8 Link encap:Ethernet HWaddr fa:16:3e:a2:11:b4 > > > > > > > > > > inet addr:192.168.12.150 Bcast:192.168.12.255 > > > Mask:255.255.255.0 > > > > > > > > > > inet6 addr: fe80::f816:3eff:fea2:11b4/64 Scope:Link > > > > > > > > > > UP BROADCAST RUNNING MTU:1500 Metric:1 > > > > > > > > > > RX packets:24504 errors:0 dropped:0 overruns:0 frame:0 > > > > > > > > > > TX packets:17367 errors:0 dropped:0 overruns:0 carrier:0 > > > > > > > > > > collisions:0 txqueuelen:0 > > > > > > > > > > RX bytes:24328699 (24.3 MB) TX bytes:1443691 (1.4 MB) > > > > > > > > > > > > > > > > > > > > qr-393d9f71-53 Link encap:Ethernet HWaddr fa:16:3e:9e:ec:01 > > > > > > > > > > inet addr:10.254.1.1 Bcast:10.254.1.255 Mask:255.255.255.0 > > > > > > > > > > inet6 addr: fe80::f816:3eff:fe9e:ec01/64 Scope:Link > > > > > > > > > > UP BROADCAST RUNNING MTU:1500 Metric:1 > > > > > > > > > > RX packets:22487 errors:0 dropped:5 overruns:0 frame:0 > > > > > > > > > > TX packets:24736 errors:0 dropped:0 overruns:0 carrier:0 > > > > > > > > > > collisions:0 txqueuelen:0 > > > > > > > > > > RX bytes:2379287 (2.3 MB) TX bytes:24338711 (24.3 MB) > > > > > > > > > > > > > > > > > > > > I would also advise you to post a question also on > > > > > ask.openstack.org > > > > > > > > > > > > > > > > > > > > Boris. 
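A quick way to run the same check on your own node, as a sketch -- the router
and port names below are the ones from Sara's updated output above, so
substitute your own:

    # which bridge is each router port plugged into?
    ovs-vsctl port-to-br qr-7b330e0e-5c    # should print br-int
    ovs-vsctl port-to-br qg-e1b584b4-db    # should print br-ex
    # list the interfaces the L3 agent wired into the router namespace
    ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip -o link show

If ovs-vsctl reports that no such port exists, the qr- device was never plugged
into br-int, which is the misconfiguration Boris is describing.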
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Date: Tue, 19 May 2015 09:48:58 +0200 > > > > > > > > > > From: ichi.sara at gmail.com > > > > > > > > > > To: rdo-list at redhat.com > > > > > > > > > > Subject: [Rdo-list] [Neutron] router can't ping external > > > gateway > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Hey people, > > > > > > > > > > I have an issue with my networking. I connected my > > > > > > > > > > openstack > > > to > > > > > an > > > > > > > > external > > > > > > > > > > network I did all the changes required. But still my router > > > can't > > > > > > > > reach the > > > > > > > > > > external gateway. > > > > > > > > > > > > > > > > > > > > =====ifcfg-br-ex > > > > > > > > > > DEVICE=br-ex > > > > > > > > > > DEVICETYPE=ovs > > > > > > > > > > TYPE=OVSBridge > > > > > > > > > > BOOTPROTO=static > > > > > > > > > > IPADDR=192.168.5.33 > > > > > > > > > > NETMASK=255.255.255.0 > > > > > > > > > > ONBOOT=yes > > > > > > > > > > GATEWAY=192.168.5.1 > > > > > > > > > > DNS1=8.8.8.8 > > > > > > > > > > DNS2=192.168.5.1 > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ====ifcfg-eth0 > > > > > > > > > > DEVICE=eth0 > > > > > > > > > > HWADDR=00:0c:29:a2:b1:b9 > > > > > > > > > > ONBOOT=yes > > > > > > > > > > TYPE=OVSPort > > > > > > > > > > NM_CONTROLLED=yes > > > > > > > > > > DEVICETYPE=ovs > > > > > > > > > > OVS_BRIDGE=br-ex > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ovs-vsctl show > > > > > > > > > > 19de58db-509d-4de8-bd88-9222019b13f1 > > > > > > > > > > Bridge br-int > > > > > > > > > > fail_mode: secure > > > > > > > > > > Port "tap8652132e-b8" > > > > > > > > > > tag: 1 > > > > > > > > > > Interface "tap8652132e-b8" > > > > > > > > > > type: internal > > > > > > > > > > Port br-int > > > > > > > > > > Interface br-int > > > > > > > > > > type: internal > > > > > > > > > > Port patch-tun > > > > > > > > > > Interface patch-tun > > > > > > > > > > type: patch > > > > > > > > > > options: {peer=patch-int} > > > > > > > > > > Bridge br-ex > > > > > > > > > > Port "qg-5f8ebe30-40" > > > > > > > > > > Interface "qg-5f8ebe30-40" > > > > > > > > > > type: internal > > > > > > > > > > Port "eth0" > > > > > > > > > > Interface "eth0" > > > > > > > > > > Port br-ex > > > > > > > > > > Interface br-ex > > > > > > > > > > type: internal > > > > > > > > > > Bridge br-tun > > > > > > > > > > Port "vxlan-c0a80520" > > > > > > > > > > Interface "vxlan-c0a80520" > > > > > > > > > > type: vxlan > > > > > > > > > > options: {df_default="true", in_key=flow, > > > > > local_ip="192.168.5.33", > > > > > > > > > > out_key=flow, remote_ip="192.168.5.32"} > > > > > > > > > > Port br-tun > > > > > > > > > > Interface br-tun > > > > > > > > > > type: internal > > > > > > > > > > Port patch-int > > > > > > > > > > Interface patch-int > > > > > > > > > > type: patch > > > > > > > > > > options: {peer=patch-tun} > > > > > > > > > > ovs_version: "2.3.1" > > > > > > > > > > > > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ping 192.168.5.1 > > > > > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. 
> > > > > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=1 ttl=64 time=1.76 ms > > > > > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=2 ttl=64 time=1.88 ms > > > > > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=3 ttl=64 time=1.45 ms > > > > > > > > > > ^C > > > > > > > > > > --- 192.168.5.1 ping statistics --- > > > > > > > > > > 3 packets transmitted, 3 received, 0% packet loss, time > > > 2002ms > > > > > > > > > > rtt min/avg/max/mdev = 1.452/1.699/1.880/0.187 ms > > > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > > > > > > > > > > 1: lo: mtu 65536 qdisc noqueue state > > > > > UNKNOWN > > > > > > > > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > > > > > > > > > inet 127.0.0.1/8 scope host lo > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > inet6 ::1/128 scope host > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > 14: qg-5f8ebe30-40: mtu > > > 1500 > > > > > qdisc > > > > > > > > > > noqueue > > > > > > > > > > state UNKNOWN > > > > > > > > > > link/ether fa:16:3e:c2:1b:5e brd ff:ff:ff:ff:ff:ff > > > > > > > > > > inet 192.168.5.70/24 brd 192.168.5.255 scope global > > > > > qg-5f8ebe30-40 > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > inet6 fe80::f816:3eff:fec2:1b5e/64 scope link > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip r > > > > > > > > > > default via 192.168.5.1 dev br-ex > > > > > > > > > > default via 192.168.4.1 dev eth1 > > > > > > > > > > 169.254.0.0/16 dev eth0 scope link metric 1002 > > > > > > > > > > 169.254.0.0/16 dev eth1 scope link metric 1003 > > > > > > > > > > 169.254.0.0/16 dev br-ex scope link metric 1005 > > > > > > > > > > 192.168.4.0/24 dev eth1 proto kernel scope link src > > > 192.168.4.14 > > > > > > > > > > 192.168.5.0/24 dev br-ex proto kernel scope link src > > > > > 192.168.5.33 > > > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > > > > > > > > > > default via 192.168.5.1 dev qg-5f8ebe30-40 > > > > > > > > > > 192.168.5.0/24 dev qg-5f8ebe30-40 proto kernel scope link > > > src > > > > > > > > 192.168.5.70 > > > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns exec > > > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping > > > > > > > > > > 192.168.5.1 > > > > > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > > > > > > > > > > ^C > > > > > > > > > > --- 192.168.5.1 ping statistics --- > > > > > > > > > > 5 packets transmitted, 0 received, 100% packet loss, time > > > 3999ms > > > > > > > > > > > > > > > > > > > > any hints?? 
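One more check that helps narrow this down, as a sketch: see whether br-ex ever
learns the gateway's MAC and whether replies make it back to the bridge at all.
The gateway MAC used in the comment below, 00:23:48:9e:85:7c, is the one that
shows up in the router's ARP table further down the thread:

    # MAC learning table of the external bridge; look for 00:23:48:9e:85:7c
    ovs-appctl fdb/show br-ex
    # capture on the bridge-internal port while pinging from the namespace
    tcpdump -n -i br-ex 'icmp or arp'

If the echo requests show up here but no replies ever appear, the frames are
being dropped somewhere between eth0 and the physical network rather than
inside Neutron.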
From ichi.sara at gmail.com  Tue May 19 14:12:44 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Tue, 19 May 2015 16:12:44 +0200
Subject: [Rdo-list] Fwd: Fwd: [Neutron] router can't ping external gateway
In-Reply-To: <704340030.1268598.1432043836972.JavaMail.zimbra@redhat.com>
References: <587390647.1184257.1432037565735.JavaMail.zimbra@redhat.com>
	<1544998486.1195383.1432038728964.JavaMail.zimbra@redhat.com>
	<2145793261.1252540.1432042865630.JavaMail.zimbra@redhat.com>
	<704340030.1268598.1432043836972.JavaMail.zimbra@redhat.com>
Message-ID: 

I did it and nothing changed :/

2015-05-19 15:57 GMT+02:00 Marius Cornea :

> Try enabling promiscuous mode (along with forged transmits) on the vswitch
> port that eth0 is connected to and see how it goes.
>
> ----- Original Message -----
> > From: "ICHIBA Sara"
> > To: rdo-list at redhat.com
> > Sent: Tuesday, May 19, 2015 3:53:20 PM
> > Subject: [Rdo-list] Fwd: Fwd: [Neutron] router can't ping external gateway
> >
> > ---------- Forwarded message ----------
> > From: ICHIBA Sara < ichi.sara at gmail.com >
> > Date: 2015-05-19 15:53 GMT+02:00
> > Subject: Re: [Rdo-list] Fwd: [Neutron] router can't ping external gateway
> > To: Marius Cornea < mcornea at redhat.com >
> >
> > No, I don't:
> > [root at localhost ~(keystone_admin)]# ifconfig eth0 | grep -i up
> > eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
> >
> > 2015-05-19 15:41 GMT+02:00 Marius Cornea < mcornea at redhat.com >:
> >
> > > Hm, do you have promiscuous mode turned on for the port eth0 is
> > > connected to?
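For reference, both settings live in the vSwitch security policy and, on a
standalone ESXi host, can be checked and changed from the ESXi shell. A sketch,
assuming the standard vSwitch is named vSwitch0 -- adjust to whatever
'esxcli network vswitch standard list' reports:

    # show the current security policy
    esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
    # allow promiscuous mode and forged transmits
    esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 \
        --allow-promiscuous=true --allow-forged-transmits=true

A port group can override the vSwitch-level policy, so the port group that the
VM's eth0 vNIC sits on may need the same change.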
> > > > ----- Original Message ----- > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > To: "Marius Cornea" < mcornea at redhat.com >, rdo-list at redhat.com > > > Sent: Tuesday, May 19, 2015 2:42:28 PM > > > Subject: Re: [Rdo-list] Fwd: [Neutron] router can't ping external > gateway > > > > > > [root at localhost ~]# ip netns exec > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip -s -s neigh flush > > > 192.168.5.1 > > > Nothing to flush. > > > > > > > > > [root at pc20 ~]# tcpdump -i eth0 arp > > > tcpdump: verbose output suppressed, use -v or -vv for full protocol > decode > > > listening on eth0, link-type EN10MB (Ethernet), capture size 65535 > bytes > > > 14:39:31.292222 ARP, Request who-has PC22.home (Broadcast) tell > > > livebox.home, length 46 > > > 14:39:31.293093 ARP, Request who-has livebox.home tell PC20.home, > length 28 > > > 14:39:31.293882 ARP, Reply livebox.home is-at 00:23:48:9e:85:7c (oui > > > Unknown), length 46 > > > 14:39:32.300067 ARP, Request who-has PC22.home (Broadcast) tell > > > livebox.home, length 46 > > > 14:39:33.310100 ARP, Request who-has PC22.home (Broadcast) tell > > > livebox.home, length 46 > > > 14:39:34.320335 ARP, Request who-has PC22.home (Broadcast) tell > > > livebox.home, length 46 > > > 14:39:35.330123 ARP, Request who-has PC22.home (Broadcast) tell > > > livebox.home, length 46 > > > 14:39:36.289836 ARP, Request who-has PC20.home tell livebox.home, > length 46 > > > 14:39:36.289873 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui > Unknown), > > > length 28 > > > 14:39:36.340219 ARP, Request who-has PC22.home (Broadcast) tell > > > livebox.home, length 46 > > > 14:39:51.026708 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui > > > Unknown)) tell 192.168.5.99, length 46 > > > 14:39:51.026733 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui > Unknown), > > > length 28 > > > 14:39:56.027218 ARP, Request who-has livebox.home tell PC20.home, > length 28 > > > 14:39:56.027848 ARP, Reply livebox.home is-at 00:23:48:9e:85:7c (oui > > > Unknown), length 46 > > > 14:40:01.035292 ARP, Request who-has 192.168.5.99 tell PC20.home, > length 28 > > > 14:40:01.035925 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui > > > Unknown), length 46 > > > 14:40:01.454515 ARP, Request who-has PC22.home (Broadcast) tell > > > livebox.home, length 46 > > > 14:40:02.460552 ARP, Request who-has PC22.home (Broadcast) tell > > > livebox.home, length 46 > > > 14:40:03.470625 ARP, Request who-has PC22.home (Broadcast) tell > > > livebox.home, length 46 > > > 14:40:04.480937 ARP, Request who-has PC22.home (Broadcast) tell > > > livebox.home, length 46 > > > 14:40:05.490810 ARP, Request who-has PC22.home (Broadcast) tell > > > livebox.home, length 46 > > > 14:40:06.500671 ARP, Request who-has PC22.home (Broadcast) tell > > > livebox.home, length 46 > > > 14:40:21.527063 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui > > > Unknown)) tell 192.168.5.99, length 46 > > > 14:40:21.527157 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui > Unknown), > > > length 28 > > > 14:40:36.747216 ARP, Request who-has 192.168.5.99 tell PC20.home, > length 28 > > > 14:40:36.747765 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui > > > Unknown), length 46 > > > 14:40:51.527605 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui > > > Unknown)) tell 192.168.5.99, length 46 > > > 14:40:51.527638 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui > Unknown), > > > length 28 > > > 14:41:01.729345 ARP, Request who-has PC20.home (Broadcast) tell > > > 
livebox.home, length 46 > > > 14:41:01.729408 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui > Unknown), > > > length 28 > > > 14:41:21.528760 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui > > > Unknown)) tell 192.168.5.99, length 46 > > > 14:41:21.528792 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui > Unknown), > > > length 28 > > > 14:41:26.540361 ARP, Request who-has 192.168.5.99 tell PC20.home, > length 28 > > > 14:41:26.540809 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui > > > Unknown), length 46 > > > 14:41:31.900298 ARP, Request who-has PC19.home (Broadcast) tell > > > livebox.home, length 46 > > > 14:41:31.950399 ARP, Request who-has PC20.home (Broadcast) tell > > > livebox.home, length 46 > > > 14:41:31.950410 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui > Unknown), > > > length 28 > > > 14:41:51.529113 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui > > > Unknown)) tell 192.168.5.99, length 46 > > > 14:41:51.529147 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui > Unknown), > > > length 28 > > > 14:41:56.539268 ARP, Request who-has 192.168.5.99 tell PC20.home, > length 28 > > > 14:41:56.539912 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui > > > Unknown), length 46 > > > 14:42:02.102645 ARP, Request who-has PC19.home (Broadcast) tell > > > livebox.home, length 46 > > > > > > > > > > > > > > > 2015-05-19 14:32 GMT+02:00 Marius Cornea < mcornea at redhat.com >: > > > > > > > Delete and check if other computers in the network are receiving > > > > broadcasts: > > > > > > > > ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip -s -s > neigh > > > > flush 192.168.5.1 > > > > tcpdump -i arp #on one of the computers in the 192.168.5.0 > > > > network > > > > ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping > > > > 192.168.5.1 > > > > > > > > See if any ARP requests reach the computer where you run tcpdump. > > > > > > > > I'm still thinking about some blocking stuff happening in the vswitch > > > > since the ICMP requests are sent to the eth0 interface so they should > > > > reach > > > > the vswitch port. > > > > > > > > ----- Original Message ----- > > > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > To: "Marius Cornea" < mcornea at redhat.com > > > > > > Cc: rdo-list at redhat.com > > > > > Sent: Tuesday, May 19, 2015 2:15:06 PM > > > > > Subject: Re: [Rdo-list] Fwd: [Neutron] router can't ping external > > > > > gateway > > > > > > > > > > [root at localhost ~(keystone_admin)]# ip netns exec > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip n | grep > '192.168.5.1 ' > > > > > 192.168.5.1 dev qg-e1b584b4-db lladdr 00:23:48:9e:85:7c STALE > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 2015-05-19 14:12 GMT+02:00 Marius Cornea < mcornea at redhat.com >: > > > > > > > > > > > Is there an ARP entry for 192.168.5.1 ? 
> > > > > > > > > > > > ip n | grep '192.168.5.1 ' in the router namespace > > > > > > > > > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > > > To: rdo-list at redhat.com > > > > > > > Sent: Tuesday, May 19, 2015 1:42:11 PM > > > > > > > Subject: [Rdo-list] Fwd: [Neutron] router can't ping external > > > > gateway > > > > > > > > > > > > > > > > > > > > > ---------- Forwarded message ---------- > > > > > > > From: ICHIBA Sara < ichi.sara at gmail.com > > > > > > > > Date: 2015-05-19 13:41 GMT+02:00 > > > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external > > > > > > > gateway > > > > > > > To: Marius Cornea < mcornea at redhat.com > > > > > > > > > > > > > > > > > > > > > > The forged transmissions on the vswitch are accepted. What's > next? > > > > > > > > > > > > > > 2015-05-19 13:29 GMT+02:00 Marius Cornea < mcornea at redhat.com > > : > > > > > > > > > > > > > > > > > > > > > Oh, ESXi...I remember that the vswitch had some security > features > > > > > > > in > > > > > > place. > > > > > > > You can check those and I think the one that you're looking > for is > > > > called > > > > > > > forged retransmits. > > > > > > > > > > > > > > Thanks, > > > > > > > Marius > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > > > > To: "Marius Cornea" < mcornea at redhat.com > > > > > > > > > Cc: rdo-list at redhat.com > > > > > > > > Sent: Tuesday, May 19, 2015 1:17:20 PM > > > > > > > > Subject: Re: [Rdo-list] [Neutron] router can't ping external > > > > gateway > > > > > > > > > > > > > > > > the ICMP requests arrives to the eth0 interface > > > > > > > > [root at localhost ~]# tcpdump -i eth0 icmp > > > > > > > > tcpdump: WARNING: eth0: no IPv4 address assigned > > > > > > > > tcpdump: verbose output suppressed, use -v or -vv for full > > > > > > > > protocol > > > > > > decode > > > > > > > > listening on eth0, link-type EN10MB (Ethernet), capture size > > > > > > > > 65535 > > > > > > bytes > > > > > > > > 13:14:13.205573 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > > request, > > > > id > > > > > > > > 31055, > > > > > > > > seq 1, length 64 > > > > > > > > 13:14:14.205303 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > > request, > > > > id > > > > > > > > 31055, > > > > > > > > seq 2, length 64 > > > > > > > > 13:14:15.205391 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > > request, > > > > id > > > > > > > > 31055, > > > > > > > > seq 3, length 64 > > > > > > > > 13:14:16.205397 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > > request, > > > > id > > > > > > > > 31055, > > > > > > > > seq 4, length 64 > > > > > > > > 13:14:17.205408 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > > request, > > > > id > > > > > > > > 31055, > > > > > > > > seq 5, length 64 > > > > > > > > 13:14:18.205412 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > > request, > > > > id > > > > > > > > 31055, > > > > > > > > seq 6, length 64 > > > > > > > > 13:14:19.205392 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > > request, > > > > id > > > > > > > > 31055, > > > > > > > > seq 7, length 64 > > > > > > > > 13:14:20.205357 IP 192.168.5.70 > 192.168.5.1 : ICMP echo > > > > > > > > request, > > > > id > > > > > > > > 31055, > > > > > > > > seq 8, length 64 > > > > > > > > 13:14:33.060267 > > > > > > > > > > > > > > > > > > > > > > > > what should I do next? 
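Spelled out, the ARP test Marius suggests above would look like the following.
The interface name after 'tcpdump -i' appears to have been eaten by the list's
HTML scrubbing, so eth0 here is an assumption -- use whatever NIC faces
192.168.5.0/24 on the machine doing the capture:

    # on the network node: forget the gateway's ARP entry
    ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 \
        ip -s -s neigh flush 192.168.5.1
    # on another machine in 192.168.5.0/24: watch for the router's ARP requests
    tcpdump -n -i eth0 arp
    # back on the network node: ping the gateway from the router namespace
    ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping -c 5 192.168.5.1

In the capture shown above, requests from 192.168.5.70 never turn up on the
other machine, which points at the frames being dropped before they reach the
physical LAN.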
From mcornea at redhat.com  Tue May 19 16:06:27 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Tue, 19 May 2015 12:06:27 -0400 (EDT)
Subject: [Rdo-list] Fwd: Fwd: [Neutron] router can't ping external gateway
In-Reply-To: 
References: <1544998486.1195383.1432038728964.JavaMail.zimbra@redhat.com>
	<2145793261.1252540.1432042865630.JavaMail.zimbra@redhat.com>
	<704340030.1268598.1432043836972.JavaMail.zimbra@redhat.com>
Message-ID: <773598767.1378537.1432051587099.JavaMail.zimbra@redhat.com>

I'm out of suggestions then. You should double-check those two settings
(promiscuous mode and forged transmits), though: it looks like the ESXi
vswitch drops all traffic that doesn't originate from the VMs' own NICs,
which I believe is your case.

----- Original Message -----
> From: "ICHIBA Sara"
> To: "Marius Cornea"
> Cc: rdo-list at redhat.com
> Sent: Tuesday, May 19, 2015 4:12:44 PM
> Subject: Re: [Rdo-list] Fwd: Fwd: [Neutron] router can't ping external gateway
>
> I did it and nothing changed :/
>
> 2015-05-19 15:57 GMT+02:00 Marius Cornea :
>
> > Try enabling promiscuous mode (along with forged transmits) on the vswitch
> > port that eth0 is connected to and see how it goes.
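To see why those two settings matter here, compare the source MAC of the pings
with the MAC of the VM's own adapter; a sketch using the names from this
thread:

    # MAC the echo requests are sourced from: the Neutron gateway port
    ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 \
        ip link show qg-e1b584b4-db
    # MAC of the VM's physical adapter that br-ex uses
    ip link show eth0

The two will differ -- the qg- port has its own fa:16:3e:... address -- and
with forged transmits rejected, the ESXi vSwitch drops any frame whose source
MAC is not the vNIC's, which matches the symptoms seen above.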
Port patch-int > > > > > > > > > > > > Interface patch-int > > > > > > > > > > > > type: patch > > > > > > > > > > > > options: {peer=patch-tun} > > > > > > > > > > > > ovs_version: "2.3.1" > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 2015-05-19 11:58 GMT+02:00 ICHIBA Sara < > > > > > > > > > > > > ichi.sara at gmail.com > > > > > > : > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > can you show me your plugin.ini file? > > > > > > > > > > > > /etc/neutron/plugin.ini > > > > > > > and the > > > > > > > > > > other > > > > > > > > > > > > file > > > > > > > > > > > > /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 2015-05-19 10:47 GMT+02:00 Boris Derzhavets < > > > > > > > bderzhavets at hotmail.com > > > > > > > > > > > : > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > There is one thing , which I clearly see . It is > > > > > > > qrouter-namespace > > > > > > > > > > > > misconfiguration. There is no qr-xxxxx bridge attached > > to > > > > > br-int > > > > > > > > > > > > Picture , in general, should look like this > > > > > > > > > > > > > > > > > > > > > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec > > > > > > > > > > > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c route -n > > > > > > > > > > > > > > > > > > > > > > > > Kernel IP routing table > > > > > > > > > > > > Destination Gateway Genmask Flags Metric Ref Use Iface > > > > > > > > > > > > 0.0.0.0 192.168.12.15 0.0.0.0 UG 0 0 0 qg-a753a8f5-c8 > > > > > > > > > > > > 10.254.1.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-393d9f71-53 > > > > > > > > > > > > 192.168.12.0 0.0.0.0 255.255.255.0 U 0 0 0 > > qg-a753a8f5-c8 > > > > > > > > > > > > > > > > > > > > > > > > ubuntu at ubuntu-System:~$ sudo ip netns exec > > > > > > > > > > > > qrouter-6cb93ddd-2637-449d-8b10-7c07da49ee8c ifconfig > > > > > > > > > > > > lo Link encap:Local Loopback > > > > > > > > > > > > inet addr:127.0.0.1 Mask:255.0.0.0 > > > > > > > > > > > > inet6 addr: ::1/128 Scope:Host > > > > > > > > > > > > UP LOOPBACK RUNNING MTU:65536 Metric:1 > > > > > > > > > > > > RX packets:0 errors:0 dropped:0 overruns:0 frame:0 > > > > > > > > > > > > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 > > > > > > > > > > > > collisions:0 txqueuelen:0 > > > > > > > > > > > > RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) > > > > > > > > > > > > > > > > > > > > > > > > qg-a753a8f5-c8 Link encap:Ethernet HWaddr > > fa:16:3e:a2:11:b4 > > > > > > > > > > > > inet addr:192.168.12.150 Bcast:192.168.12.255 > > > > > Mask:255.255.255.0 > > > > > > > > > > > > inet6 addr: fe80::f816:3eff:fea2:11b4/64 Scope:Link > > > > > > > > > > > > UP BROADCAST RUNNING MTU:1500 Metric:1 > > > > > > > > > > > > RX packets:24504 errors:0 dropped:0 overruns:0 frame:0 > > > > > > > > > > > > TX packets:17367 errors:0 dropped:0 overruns:0 > > carrier:0 > > > > > > > > > > > > collisions:0 txqueuelen:0 > > > > > > > > > > > > RX bytes:24328699 (24.3 MB) TX bytes:1443691 (1.4 MB) > > > > > > > > > > > > > > > > > > > > > > > > qr-393d9f71-53 Link encap:Ethernet HWaddr > > fa:16:3e:9e:ec:01 > > > > > > > > > > > > inet addr:10.254.1.1 Bcast:10.254.1.255 > > Mask:255.255.255.0 > > > > > > > > > > > > inet6 addr: fe80::f816:3eff:fe9e:ec01/64 Scope:Link > > > > > > > > > > > > UP BROADCAST RUNNING MTU:1500 Metric:1 > > > > > > > > > > > > RX packets:22487 errors:0 dropped:5 
overruns:0 frame:0 > > > > > > > > > > > > TX packets:24736 errors:0 dropped:0 overruns:0 > > carrier:0 > > > > > > > > > > > > collisions:0 txqueuelen:0 > > > > > > > > > > > > RX bytes:2379287 (2.3 MB) TX bytes:24338711 (24.3 MB) > > > > > > > > > > > > > > > > > > > > > > > > I would also advise you to post a question also on > > > > > > > ask.openstack.org > > > > > > > > > > > > > > > > > > > > > > > > Boris. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Date: Tue, 19 May 2015 09:48:58 +0200 > > > > > > > > > > > > From: ichi.sara at gmail.com > > > > > > > > > > > > To: rdo-list at redhat.com > > > > > > > > > > > > Subject: [Rdo-list] [Neutron] router can't ping > > external > > > > > gateway > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Hey people, > > > > > > > > > > > > I have an issue with my networking. I connected my > > > > > > > > > > > > openstack > > > > > to > > > > > > > an > > > > > > > > > > external > > > > > > > > > > > > network I did all the changes required. But still my > > router > > > > > can't > > > > > > > > > > reach the > > > > > > > > > > > > external gateway. > > > > > > > > > > > > > > > > > > > > > > > > =====ifcfg-br-ex > > > > > > > > > > > > DEVICE=br-ex > > > > > > > > > > > > DEVICETYPE=ovs > > > > > > > > > > > > TYPE=OVSBridge > > > > > > > > > > > > BOOTPROTO=static > > > > > > > > > > > > IPADDR=192.168.5.33 > > > > > > > > > > > > NETMASK=255.255.255.0 > > > > > > > > > > > > ONBOOT=yes > > > > > > > > > > > > GATEWAY=192.168.5.1 > > > > > > > > > > > > DNS1=8.8.8.8 > > > > > > > > > > > > DNS2=192.168.5.1 > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ====ifcfg-eth0 > > > > > > > > > > > > DEVICE=eth0 > > > > > > > > > > > > HWADDR=00:0c:29:a2:b1:b9 > > > > > > > > > > > > ONBOOT=yes > > > > > > > > > > > > TYPE=OVSPort > > > > > > > > > > > > NM_CONTROLLED=yes > > > > > > > > > > > > DEVICETYPE=ovs > > > > > > > > > > > > OVS_BRIDGE=br-ex > > > > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ovs-vsctl > > show > > > > > > > > > > > > 19de58db-509d-4de8-bd88-9222019b13f1 > > > > > > > > > > > > Bridge br-int > > > > > > > > > > > > fail_mode: secure > > > > > > > > > > > > Port "tap8652132e-b8" > > > > > > > > > > > > tag: 1 > > > > > > > > > > > > Interface "tap8652132e-b8" > > > > > > > > > > > > type: internal > > > > > > > > > > > > Port br-int > > > > > > > > > > > > Interface br-int > > > > > > > > > > > > type: internal > > > > > > > > > > > > Port patch-tun > > > > > > > > > > > > Interface patch-tun > > > > > > > > > > > > type: patch > > > > > > > > > > > > options: {peer=patch-int} > > > > > > > > > > > > Bridge br-ex > > > > > > > > > > > > Port "qg-5f8ebe30-40" > > > > > > > > > > > > Interface "qg-5f8ebe30-40" > > > > > > > > > > > > type: internal > > > > > > > > > > > > Port "eth0" > > > > > > > > > > > > Interface "eth0" > > > > > > > > > > > > Port br-ex > > > > > > > > > > > > Interface br-ex > > > > > > > > > > > > type: internal > > > > > > > > > > > > Bridge br-tun > > > > > > > > > > > > Port "vxlan-c0a80520" > > > > > > > > > > > > Interface "vxlan-c0a80520" > > > > > > > > > > > > type: vxlan > > > > > > > > > > > > options: {df_default="true", in_key=flow, > > > > > > > local_ip="192.168.5.33", > > > > > > > > > > > > out_key=flow, remote_ip="192.168.5.32"} > > > > > > > > > > > > Port br-tun > > > > > > > > > > > > Interface br-tun > > > > > > > > > > > > type: internal > > > > 
> > > > > > > > Port patch-int > > > > > > > > > > > > Interface patch-int > > > > > > > > > > > > type: patch > > > > > > > > > > > > options: {peer=patch-tun} > > > > > > > > > > > > ovs_version: "2.3.1" > > > > > > > > > > > > > > > > > > > > > > > > =====[root at localhost ~(keystone_admin)]# ping > > 192.168.5.1 > > > > > > > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data. > > > > > > > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=1 ttl=64 > > time=1.76 ms > > > > > > > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=2 ttl=64 > > time=1.88 ms > > > > > > > > > > > > 64 bytes from 192.168.5.1 : icmp_seq=3 ttl=64 > > time=1.45 ms > > > > > > > > > > > > ^C > > > > > > > > > > > > --- 192.168.5.1 ping statistics --- > > > > > > > > > > > > 3 packets transmitted, 3 received, 0% packet loss, time > > > > > 2002ms > > > > > > > > > > > > rtt min/avg/max/mdev = 1.452/1.699/1.880/0.187 ms > > > > > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns > > exec > > > > > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip a > > > > > > > > > > > > 1: lo: mtu 65536 qdisc noqueue > > state > > > > > > > UNKNOWN > > > > > > > > > > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > > > > > > > > > > > inet 127.0.0.1/8 scope host lo > > > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > > > inet6 ::1/128 scope host > > > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > > > 14: qg-5f8ebe30-40: > > mtu > > > > > 1500 > > > > > > > qdisc > > > > > > > > > > > > noqueue > > > > > > > > > > > > state UNKNOWN > > > > > > > > > > > > link/ether fa:16:3e:c2:1b:5e brd ff:ff:ff:ff:ff:ff > > > > > > > > > > > > inet 192.168.5.70/24 brd 192.168.5.255 scope global > > > > > > > qg-5f8ebe30-40 > > > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > > > inet6 fe80::f816:3eff:fec2:1b5e/64 scope link > > > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip r > > > > > > > > > > > > default via 192.168.5.1 dev br-ex > > > > > > > > > > > > default via 192.168.4.1 dev eth1 > > > > > > > > > > > > 169.254.0.0/16 dev eth0 scope link metric 1002 > > > > > > > > > > > > 169.254.0.0/16 dev eth1 scope link metric 1003 > > > > > > > > > > > > 169.254.0.0/16 dev br-ex scope link metric 1005 > > > > > > > > > > > > 192.168.4.0/24 dev eth1 proto kernel scope link src > > > > > 192.168.4.14 > > > > > > > > > > > > 192.168.5.0/24 dev br-ex proto kernel scope link src > > > > > > > 192.168.5.33 > > > > > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns > > exec > > > > > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip r > > > > > > > > > > > > default via 192.168.5.1 dev qg-5f8ebe30-40 > > > > > > > > > > > > 192.168.5.0/24 dev qg-5f8ebe30-40 proto kernel scope > > link > > > > > src > > > > > > > > > > 192.168.5.70 > > > > > > > > > > > > [root at localhost ~(keystone_admin)]# > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ======[root at localhost ~(keystone_admin)]# ip netns > > exec > > > > > > > > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 
ping 192.168.5.1
> > > > > > > > > > > > PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data.
> > > > > > > > > > > > ^C
> > > > > > > > > > > > --- 192.168.5.1 ping statistics ---
> > > > > > > > > > > > 5 packets transmitted, 0 received, 100% packet loss, time 3999ms
> > > > > > > > > > > >
> > > > > > > > > > > > any hints??
> > >
> > > _______________________________________________
> > > Rdo-list mailing list
> > > Rdo-list at redhat.com
> > > https://www.redhat.com/mailman/listinfo/rdo-list
> > >
> > > To unsubscribe: rdo-list-unsubscribe at redhat.com

From ichi.sara at gmail.com  Tue May 19 16:26:49 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Tue, 19 May 2015 18:26:49 +0200
Subject: [Rdo-list] Fwd: Fwd: [Neutron] router can't ping external gateway
In-Reply-To: <773598767.1378537.1432051587099.JavaMail.zimbra@redhat.com>
References: <1544998486.1195383.1432038728964.JavaMail.zimbra@redhat.com>
	<2145793261.1252540.1432042865630.JavaMail.zimbra@redhat.com>
	<704340030.1268598.1432043836972.JavaMail.zimbra@redhat.com>
	<773598767.1378537.1432051587099.JavaMail.zimbra@redhat.com>
Message-ID:

Ok, thank you for all your responses. I'll try to figure it out by
tomorrow as I'm out of the office right now. I already checked those two
and found nothing suspicious, but I'll check them again tomorrow.

2015-05-19 18:06 GMT+02:00 Marius Cornea :

> I'm out of suggestions then. You should double-check those 2 settings
> (promiscuous and forged transmits) though, as it looks like the ESXi
> vswitch drops all traffic not originated specifically from the VMs' NICs,
> which I believe is your case.
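For readers hitting the same symptom: both settings can be inspected and
changed from the ESXi host shell as well as from the vSphere client. A
minimal sketch, assuming an ESXi 5.x host and a standard vSwitch named
vSwitch0 (the switch name is an assumption; substitute your own, and verify
the option names against your esxcli build):

    # show the current security policy of the vSwitch
    esxcli network vswitch standard policy security get --vswitch-name=vSwitch0

    # allow promiscuous mode and forged transmits, so frames sourced from
    # the Neutron router's qg- MAC (unknown to the vSwitch) are not dropped
    esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 \
        --allow-promiscuous=true --allow-forged-transmits=true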
> > ----- Original Message ----- > > From: "ICHIBA Sara" > > To: "Marius Cornea" > > Cc: rdo-list at redhat.com > > Sent: Tuesday, May 19, 2015 4:12:44 PM > > Subject: Re: [Rdo-list] Fwd: Fwd: [Neutron] router can't ping external > gateway > > > > I did it and nothing changed :/ > > > > > > 2015-05-19 15:57 GMT+02:00 Marius Cornea : > > > > > Try enabling promiscuous mode (along with forged transmits) on the > vswitch > > > port that eth0 is connected to and see how it goes. > > > > > > ----- Original Message ----- > > > > From: "ICHIBA Sara" > > > > To: rdo-list at redhat.com > > > > Sent: Tuesday, May 19, 2015 3:53:20 PM > > > > Subject: [Rdo-list] Fwd: Fwd: [Neutron] router can't ping external > > > gateway > > > > > > > > > > > > ---------- Forwarded message ---------- > > > > From: ICHIBA Sara < ichi.sara at gmail.com > > > > > Date: 2015-05-19 15:53 GMT+02:00 > > > > Subject: Re: [Rdo-list] Fwd: [Neutron] router can't ping external > gateway > > > > To: Marius Cornea < mcornea at redhat.com > > > > > > > > > > > > > no i don't. > > > > [root at localhost ~(keystone_admin)]# ifconfig eth0 | grep -i up > > > > eth0: flags=4163 mtu 1500 > > > > > > > > > > > > > > > > 2015-05-19 15:41 GMT+02:00 Marius Cornea < mcornea at redhat.com > : > > > > > > > > > > > > Hm, do you have promiscuous mode turned on for the port eth0 is > > > connected to > > > > ? > > > > > > > > ----- Original Message ----- > > > > > From: "ICHIBA Sara" < ichi.sara at gmail.com > > > > > > To: "Marius Cornea" < mcornea at redhat.com >, rdo-list at redhat.com > > > > > Sent: Tuesday, May 19, 2015 2:42:28 PM > > > > > Subject: Re: [Rdo-list] Fwd: [Neutron] router can't ping external > > > gateway > > > > > > > > > > [root at localhost ~]# ip netns exec > > > > > qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip -s -s neigh flush > > > > > 192.168.5.1 > > > > > Nothing to flush. 
> > > > > > > > > > > > > > > [root at pc20 ~]# tcpdump -i eth0 arp > > > > > tcpdump: verbose output suppressed, use -v or -vv for full protocol > > > decode > > > > > listening on eth0, link-type EN10MB (Ethernet), capture size 65535 > > > bytes > > > > > 14:39:31.292222 ARP, Request who-has PC22.home (Broadcast) tell > > > > > livebox.home, length 46 > > > > > 14:39:31.293093 ARP, Request who-has livebox.home tell PC20.home, > > > length 28 > > > > > 14:39:31.293882 ARP, Reply livebox.home is-at 00:23:48:9e:85:7c > (oui > > > > > Unknown), length 46 > > > > > 14:39:32.300067 ARP, Request who-has PC22.home (Broadcast) tell > > > > > livebox.home, length 46 > > > > > 14:39:33.310100 ARP, Request who-has PC22.home (Broadcast) tell > > > > > livebox.home, length 46 > > > > > 14:39:34.320335 ARP, Request who-has PC22.home (Broadcast) tell > > > > > livebox.home, length 46 > > > > > 14:39:35.330123 ARP, Request who-has PC22.home (Broadcast) tell > > > > > livebox.home, length 46 > > > > > 14:39:36.289836 ARP, Request who-has PC20.home tell livebox.home, > > > length 46 > > > > > 14:39:36.289873 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui > > > Unknown), > > > > > length 28 > > > > > 14:39:36.340219 ARP, Request who-has PC22.home (Broadcast) tell > > > > > livebox.home, length 46 > > > > > 14:39:51.026708 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 > (oui > > > > > Unknown)) tell 192.168.5.99, length 46 > > > > > 14:39:51.026733 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui > > > Unknown), > > > > > length 28 > > > > > 14:39:56.027218 ARP, Request who-has livebox.home tell PC20.home, > > > length 28 > > > > > 14:39:56.027848 ARP, Reply livebox.home is-at 00:23:48:9e:85:7c > (oui > > > > > Unknown), length 46 > > > > > 14:40:01.035292 ARP, Request who-has 192.168.5.99 tell PC20.home, > > > length 28 > > > > > 14:40:01.035925 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 > (oui > > > > > Unknown), length 46 > > > > > 14:40:01.454515 ARP, Request who-has PC22.home (Broadcast) tell > > > > > livebox.home, length 46 > > > > > 14:40:02.460552 ARP, Request who-has PC22.home (Broadcast) tell > > > > > livebox.home, length 46 > > > > > 14:40:03.470625 ARP, Request who-has PC22.home (Broadcast) tell > > > > > livebox.home, length 46 > > > > > 14:40:04.480937 ARP, Request who-has PC22.home (Broadcast) tell > > > > > livebox.home, length 46 > > > > > 14:40:05.490810 ARP, Request who-has PC22.home (Broadcast) tell > > > > > livebox.home, length 46 > > > > > 14:40:06.500671 ARP, Request who-has PC22.home (Broadcast) tell > > > > > livebox.home, length 46 > > > > > 14:40:21.527063 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 > (oui > > > > > Unknown)) tell 192.168.5.99, length 46 > > > > > 14:40:21.527157 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui > > > Unknown), > > > > > length 28 > > > > > 14:40:36.747216 ARP, Request who-has 192.168.5.99 tell PC20.home, > > > length 28 > > > > > 14:40:36.747765 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 > (oui > > > > > Unknown), length 46 > > > > > 14:40:51.527605 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 > (oui > > > > > Unknown)) tell 192.168.5.99, length 46 > > > > > 14:40:51.527638 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui > > > Unknown), > > > > > length 28 > > > > > 14:41:01.729345 ARP, Request who-has PC20.home (Broadcast) tell > > > > > livebox.home, length 46 > > > > > 14:41:01.729408 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui > > > Unknown), > > > > > length 28 > > > > > 14:41:21.528760 ARP, Request 
who-has PC20.home (00:0c:29:9d:02:44 (oui Unknown)) tell 192.168.5.99, length 46
> > > > > 14:41:21.528792 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), length 28
> > > > > 14:41:26.540361 ARP, Request who-has 192.168.5.99 tell PC20.home, length 28
> > > > > 14:41:26.540809 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui Unknown), length 46
> > > > > 14:41:31.900298 ARP, Request who-has PC19.home (Broadcast) tell livebox.home, length 46
> > > > > 14:41:31.950399 ARP, Request who-has PC20.home (Broadcast) tell livebox.home, length 46
> > > > > 14:41:31.950410 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), length 28
> > > > > 14:41:51.529113 ARP, Request who-has PC20.home (00:0c:29:9d:02:44 (oui Unknown)) tell 192.168.5.99, length 46
> > > > > 14:41:51.529147 ARP, Reply PC20.home is-at 00:0c:29:9d:02:44 (oui Unknown), length 28
> > > > > 14:41:56.539268 ARP, Request who-has 192.168.5.99 tell PC20.home, length 28
> > > > > 14:41:56.539912 ARP, Reply 192.168.5.99 is-at 74:46:a0:9e:ff:a5 (oui Unknown), length 46
> > > > > 14:42:02.102645 ARP, Request who-has PC19.home (Broadcast) tell livebox.home, length 46
> > > > >
> > > > > 2015-05-19 14:32 GMT+02:00 Marius Cornea < mcornea at redhat.com >:
> > > > > > Delete and check if other computers in the network are receiving
> > > > > > broadcasts:
> > > > > >
> > > > > > ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip -s -s neigh flush 192.168.5.1
> > > > > > tcpdump -i <interface> arp    # on one of the computers in the 192.168.5.0 network
> > > > > > ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ping 192.168.5.1
> > > > > >
> > > > > > See if any ARP requests reach the computer where you run tcpdump.
> > > > > >
> > > > > > I'm still thinking about some blocking stuff happening in the
> > > > > > vswitch, since the ICMP requests are sent to the eth0 interface,
> > > > > > so they should reach the vswitch port.
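An equivalent, slightly more targeted probe is to force fresh ARP who-has
frames out of the router namespace and watch for them on another host in
192.168.5.0/24. A minimal sketch, assuming the iputils arping binary is
available; the interface name and MAC are taken from the outputs earlier in
the thread:

    # flush the stale neighbour entry, then ARP-probe the gateway from qg-
    ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 ip -s -s neigh flush 192.168.5.1
    ip netns exec qrouter-85fa9459-503d-4996-86f3-6042604fed74 arping -c 4 -I qg-e1b584b4-db 192.168.5.1

    # on a second 192.168.5.0/24 host: the requests should arrive as broadcasts
    tcpdump -n -e -i eth0 arp and ether src fa:16:3e:68:83:f8

If nothing shows up on the second host, the frames are dying before the
physical wire, which points back at the vswitch security policy.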
From outbackdingo at gmail.com  Wed May 20 00:37:53 2015
From: outbackdingo at gmail.com (Outback Dingo)
Date: Wed, 20 May 2015 10:37:53 +1000
Subject: [Rdo-list] RDO on Fedora 22
Message-ID:

I installed Fedora 22 workstation last night and tried to get an RDO
allinone going, but it fails with a conflict:

Linux localhost.localdomain 4.0.3-300.fc22.x86_64 #1 SMP Wed May 13
18:43:52 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Error: Transaction check error:
  file /usr/share/openstack-puppet/modules/remote/lib/puppet/provider/remote_database/mysql.rb
  from install of openstack-packstack-puppet-2014.2-0.16.dev1447.g6f4d34b.fc22.noarch
  conflicts with file from package openstack-puppet-modules-2015.1.2-1.fc23.noarch
  file /usr/share/openstack-puppet/modules/remote/lib/puppet/type/remote_database_user.rb
  from install of openstack-packstack-puppet-2014.2-0.16.dev1447.g6f4d34b.fc22.noarch
  conflicts with file from package openstack-puppet-modules-2015.1.2-1.fc23.noarch
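A quick way to see which enabled repo each side of the conflict comes from
is to query the metadata directly. A minimal sketch using repoquery from
yum-utils; the grep fields are illustrative and the package names are taken
from the error above:

    yum repolist enabled
    yum install -y yum-utils    # provides repoquery
    repoquery -i openstack-packstack-puppet openstack-puppet-modules | \
        grep -E '^(Name|Version|Release|Repository)'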
URL: From mohammed.arafa at gmail.com Wed May 20 00:46:33 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Tue, 19 May 2015 20:46:33 -0400 Subject: [Rdo-list] RDO on Fedora 22 In-Reply-To: References: Message-ID: Hi Just out of curiousity, why are you installing it on Fedora? And your conflict appears to be due to similar packages in 2 different repos. If you want to continue down this path, you would have to do some yum magic with excludes On Tue, May 19, 2015 at 8:37 PM, Outback Dingo wrote: > I installed Fedora 22 workstation last night and tried to get an RDO > allinone going but it fails with a conflict > > Linux localhost.localdomain 4.0.3-300.fc22.x86_64 #1 SMP Wed May 13 > 18:43:52 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux > > Error: Transaction check error: > file > /usr/share/openstack-puppet/modules/remote/lib/puppet/provider/remote_database/mysql.rb > from install of > openstack-packstack-puppet-2014.2-0.16.dev1447.g6f4d34b.fc22.noarch > conflicts with file from package > openstack-puppet-modules-2015.1.2-1.fc23.noarch > file > /usr/share/openstack-puppet/modules/remote/lib/puppet/type/remote_database_user.rb > from install of > openstack-packstack-puppet-2014.2-0.16.dev1447.g6f4d34b.fc22.noarch > conflicts with file from package open > stack-puppet-modules-2015.1.2-1.fc23.noarch > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From outbackdingo at gmail.com Wed May 20 00:54:35 2015 From: outbackdingo at gmail.com (Outback Dingo) Date: Wed, 20 May 2015 10:54:35 +1000 Subject: [Rdo-list] RDO on Fedora 22 In-Reply-To: References: Message-ID: Fedora for a mobile desktop development platform while travelling the next few weeks, nd its installed fine on Fedora before. Seems maybe the latest packages are conflicting, so ill simply remove the offending "repo" On Wed, May 20, 2015 at 10:46 AM, Mohammed Arafa wrote: > Hi > > Just out of curiousity, why are you installing it on Fedora? > > And your conflict appears to be due to similar packages in 2 different > repos. 
From Yaniv.Kaul at emc.com  Wed May 20 07:32:50 2015
From: Yaniv.Kaul at emc.com (Kaul, Yaniv)
Date: Wed, 20 May 2015 03:32:50 -0400
Subject: [Rdo-list] Is openstack-config working with Tempest (on Kilo)?
Message-ID: <648473255763364B961A02AC3BE1060D03D0DCE76A@MX19A.corp.emc.com>

I'm getting:

openstack-config --set $TEMPEST_CONF volume backend1_name XtremIO-ISCSI-Backend
[Errno 2] No such file or directory: '/usr/share/opensack-tempest-kilo/.tempest.conf.crudini.lck'

[root at lgdrm432 openstack-tempest-kilo(keystone_admin)]# echo $TEMPEST_CONF
/usr/share/opensack-tempest-kilo/tempest.conf
[root at lgdrm432 openstack-tempest-kilo(keystone_admin)]# rpm -q --whatprovides `which openstack-config`
openstack-utils-2014.2-1.el7.noarch
[root at lgdrm432 openstack-tempest-kilo(keystone_admin)]# rpm -q openstack-tempest
openstack-tempest-kilo-20150507.2.el7.noarch

From Yaniv.Kaul at emc.com  Wed May 20 07:56:06 2015
From: Yaniv.Kaul at emc.com (Kaul, Yaniv)
Date: Wed, 20 May 2015 03:56:06 -0400
Subject: Re: [Rdo-list] Is openstack-config working with Tempest (on Kilo)?
Message-ID: <648473255763364B961A02AC3BE1060D03D0DCE777@MX19A.corp.emc.com>

My fault (partially): $TEMPEST_CONF should be
/usr/share/opensack-tempest-kilo/etc/tempest.conf, not the path below.
Still, the error message could have been less cryptic.
Y.

> I'm getting:
> openstack-config --set $TEMPEST_CONF volume backend1_name XtremIO-ISCSI-Backend
> [Errno 2] No such file or directory: '/usr/share/opensack-tempest-kilo/.tempest.conf.crudini.lck'
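openstack-config is a thin wrapper around crudini, and the lock-file error
above just means the directory of the target file doesn't exist. A minimal
sanity check before setting values — paths copied from the messages above,
including their original spelling:

    TEMPEST_CONF=/usr/share/opensack-tempest-kilo/etc/tempest.conf
    test -f "$TEMPEST_CONF" || echo "no such file: $TEMPEST_CONF"
    openstack-config --set "$TEMPEST_CONF" volume backend1_name XtremIO-ISCSI-Backend
    crudini --get "$TEMPEST_CONF" volume backend1_name   # verify the write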
From apevec at gmail.com  Wed May 20 07:56:36 2015
From: apevec at gmail.com (Alan Pevec)
Date: Wed, 20 May 2015 09:56:36 +0200
Subject: Re: [Rdo-list] RDO on Fedora 22

2015-05-20 2:37 GMT+02:00 Outback Dingo:
> openstack-packstack-puppet-2014.2-0.16.dev1447.g6f4d34b.fc22.noarch
> conflicts with file from package
> openstack-puppet-modules-2015.1.2-1.fc23.noarch

Not sure why that happens, 2015.1.2 should win. Please pastebin your
enabled repos, I'd like to have a look.

Cheers,
Alan

From apevec at gmail.com  Wed May 20 07:59:56 2015
From: apevec at gmail.com (Alan Pevec)
Date: Wed, 20 May 2015 09:59:56 +0200
Subject: Re: [Rdo-list] RDO on Fedora 22

2015-05-20 9:56 GMT+02:00 Alan Pevec:
> Not sure why that happens, 2015.1.2 should win. Please pastebin your
> enabled repos, I'd like to have a look.

nm, I see now: you have both the Juno and Kilo repos enabled. Please use
EL7 until I announce the F22 RDO Kilo repo is ready.

Cheers,
Alan

From ichi.sara at gmail.com  Wed May 20 12:11:50 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Wed, 20 May 2015 14:11:50 +0200
Subject: [Rdo-list] [heat] can't access the console of instances created by heat

Hey,

I created an instance via heat, but I can't access its console. When I
create a VM the classic way (with Horizon) with exactly the same
parameters, I don't get this problem. Do you have any idea what the
problem might be? Please find my template below.

heat_template_version: 2014-10-16

description: A simple server.

parameters:
  key_name:
    type: string
    description: name of a keypair
    default: userkey

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.tiny
      key_name: {get_param: key_name}
      networks:
        - network: 7512b12e-b9fd-4d64-8496-c6d8fb9ec1bc

The console log says the instance booted successfully:

info: initramfs: up at 2.64
GROWROOT: CHANGED: partition=1 start=16065 old: size=64260 end=80325 new: size=2072385,end=2088450
info: initramfs loading root from /dev/vda1
info: /etc/init.d/rc.sysinit: up at 4.27
info: container: none
Starting logging: OK
Initializing random number generator... done.
Starting acpid: OK
cirros-ds 'local' up at 6.06
no results found for mode=local. up 6.49. searched: nocloud configdrive ec2
Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending select for 10.0.0.7...
Lease of 10.0.0.7 obtained, lease time 86400
cirros-ds 'net' up at 7.95
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 8.27. request failed
successful after 2/20 tries: up 20.57. iid=i-0000002c
found datasource (ec2, net)
Starting dropbear sshd: generating rsa key...
generating dsa key... OK
/run/cirros/datasource/data/user-data was not '#!' or executable

=== system information ===
Platform: Fedora Project OpenStack Nova
Container: none
Arch: x86_64
CPU(s): 1 @ 2533.399 MHz
Cores/Sockets/Threads: 1/1/1
Virt-type: AMD-V
RAM Size: 491MB
Disks:
NAME MAJ:MIN       SIZE LABEL         MOUNTPOINT
vda  253:0   1073741824
vda1 253:1   1061061120 cirros-rootfs /

=== sshd host keys ===
-----BEGIN SSH HOST KEY KEYS-----
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgwCOvtNAdh55hdE03kVs8q+9jLXCCtjlj9NRzCjsOUaPzO9THmfkhJS7BsKObf/GsMo/OiupcjC1ofDfESzXH5mlx4W8CMdp7XNxuhX1K3/jp/Fj6vimSgiEiVQ84FDZKUT+RlIpk3c05E4YAQcItBTZCfwbJ6UvjcUIY9iD/mTeT7X3 root at single-instance-server-2ujbywfyvu6w
ssh-dss AAAAB3NzaC1kc3MAAACBAMz8T2VGHpc6RryB2IHhsWhGEG+hMfLgC1dxfMz/x8s6etEp8SMGlXQJ1blzHluLoZBqPubOGzbgtGYXdxAExD2cAeA8BwcRinIlA3AAVJbwo4mgqH5+sWmEWUxUrbhkiPUnq41KVyd9Js1KTgW5FgdpWQvmuKL5xo7wLBvP4DnFAAAAFQDdUErqwKR4DBwBqaeh+ROwCdqFIwAAAIEAkhVkypxvmuL+KHMy6wA7lLJ7RpOrJ3k0RgGGqh5tTYvXpaRYi/ju0p3xI0P3S+PtnXkVonOPpmnybPMarWolDT5AmtFWwug6YRybMPl0LMFUz9J6RZNwvewoytEhJBN2Zr9eJohlFU3mKN01mPedWCvZLazOuUwUZlpFLQ9NfzkAAACAZwTICpU+vyH+/eCkPR+XJDVTmaeymA9uYiBhtbiANIrq6wLO3SHrOtRZLbUCZRXs3sW5K8kMaF9teEHsdQ0zNx+I7ULaRGFZjZBDum7hiAOwGf0pwq6Xu82sxjxbwlQJGuO1qW4N0sv5y8mekFm+CiiT/37wmvLbDcRcd0KEnB0= root at single-instance-server-2ujbywfyvu6w
-----END SSH HOST KEY KEYS-----

=== network info ===
if-info: lo,up,127.0.0.1,8,::1
if-info: eth0,up,10.0.0.7,24,fe80::f816:3eff:fec4:c0b4
ip-route:default via 10.0.0.1 dev eth0
ip-route:10.0.0.0/24 dev eth0 src 10.0.0.7

=== datasource: ec2 net ===
instance-id: i-0000002c
name: N/A
availability-zone: nova
local-hostname: single-instance-server-2ujbywfyvu6w.novalocal
launch-index: 0

=== cirros: current=0.3.3 uptime=42.75 ===
  ____               ____  ____
 / __/ __ ____ ____ / __ \/ __/
/ /__ / // __// __// /_/ /\ \
\___//_//_/  /_/   \____/___/
   http://cirros-cloud.net

login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
single-instance-server-2ujbywfyvu6w login:
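Since the guest clearly boots, a useful next step is to compare what the
console endpoint hands out for the heat-created server versus a
Horizon-created one — a quick sketch; the server name is taken from the log
above and the stack name is a placeholder:

    heat resource-list <stack-name>    # confirm the OS::Nova::Server resource is CREATE_COMPLETE
    nova get-vnc-console single-instance-server-2ujbywfyvu6w novnc

If the returned URL differs (host, port, token) from the one a
Horizon-created VM gets, that points at the novncproxy configuration rather
than at heat itself.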
From apevec at gmail.com  Thu May 21 09:42:09 2015
From: apevec at gmail.com (Alan Pevec)
Date: Thu, 21 May 2015 11:42:09 +0200
Subject: [Rdo-list] [meeting] RDO packaging meeting minutes (2015-05-20)

========================================
#rdo: RDO packaging meeting (2015-05-20)
========================================

Meeting started by apevec at 15:06:11 UTC. The full logs are available at
http://meetbot.fedoraproject.org/rdo/2015-05-20/rdo.2015-05-20-15.06.log.html

Meeting summary
---------------
* roll call (apevec, 15:06:34)
* review https://trello.com/b/HhXlqdiu/rdo (apevec, 15:08:20)
* LINK: https://github.com/openstack-packages/delorean/issues
  0 issues - bug free :) (apevec, 15:10:51)
* LINK: http://buildlogs.centos.org/centos/7/cloud/x86_64/openstack-kilo/ (apevec, 15:18:26)
* EL6 Juno packages (apevec, 15:22:43)
* Add GBP in RDO Kilo (apevec, 15:26:34)
* LINK: https://admin.fedoraproject.org/pkgdb/packages/*gbp*/ (apevec, 15:28:58)
* tools for reqs (apevec, 15:30:17)
* LINK: https://github.com/redhat-openstack/rdoinfo/commit/edb300dffd108c3e2c8c54fe8bf4a70045b4bf66 (apevec, 15:34:42)
* (Delorean) CI job (apevec, 15:37:30)
* https://trello.com/c/rsZEENKI/57-update-rdo-packaging-doc-for-kilo (apevec, 15:44:42)
* LINK: https://trello.com/c/SY7bc0yP/58-creating-selinux-policy-module-for-delorean (apevec, 15:46:15)
* https://etherpad.openstack.org/p/RDO_Vancouver (apevec, 15:47:30)
* open floor (apevec, 15:58:27)

Meeting ended at 16:02:29 UTC.

Action Items
------------

Action Items, by person
-----------------------
* **UNASSIGNED**
  * (none)

People Present (lines said)
---------------------------
* apevec (109)
* number80 (32)
* gchamoul (26)
* jruzicka (25)
* eggmaster (16)
* zodbot (5)
* trown (2)
* social (1)

Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot

From outbackdingo at gmail.com  Thu May 21 10:32:20 2015
From: outbackdingo at gmail.com (Outback Dingo)
Date: Thu, 21 May 2015 20:32:20 +1000
Subject: [Rdo-list] CentOS 7 continuous error

Okay, so we switched to CentOS 7 and installed an OpenStack packstack
all-in-one. It was working fine initially, but after a reboot the
dashboard now throws this error on login:

Something went wrong!
An unexpected error has occurred. Try refreshing the page. If that
doesn't help, contact your local administrator.

From apevec at gmail.com  Thu May 21 12:54:59 2015
From: apevec at gmail.com (Alan Pevec)
Date: Thu, 21 May 2015 14:54:59 +0200
Subject: Re: [Rdo-list] CentOS 7 continuous error

> help, contact your local administrator.

Have your local admin look at the httpd logs and post the relevant error
message and backtrace here :)

From apevec at gmail.com  Thu May 21 12:55:56 2015
From: apevec at gmail.com (Alan Pevec)
Date: Thu, 21 May 2015 14:55:56 +0200
Subject: Re: [Rdo-list] CentOS 7 continuous error

Also the output of the openstack-status command.

From outbackdingo at gmail.com  Thu May 21 13:43:15 2015
From: outbackdingo at gmail.com (Outback Dingo)
Date: Thu, 21 May 2015 23:43:15 +1000
Subject: Re: [Rdo-list] CentOS 7 continuous error

I am the admin, it's on a laptop locally....

http://pastebin.com/PtBwMa9A

On Thu, May 21, 2015 at 10:55 PM, Alan Pevec wrote:
> Also the output of the openstack-status command.
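Alan's two requests boil down to roughly the following on a packstack
all-in-one CentOS 7 box — a sketch using the default log locations, which
can differ if Horizon's vhost is configured otherwise:

    openstack-status                      # summary of service states, from openstack-utils
    tail -n 50 /var/log/httpd/error_log   # mod_wsgi traceback behind the dashboard error
    tail -n 50 /var/log/horizon/horizon.log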
From rbowen at redhat.com  Thu May 21 15:31:39 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Thu, 21 May 2015 08:31:39 -0700
Subject: [Rdo-list] [release] Kilo EL7 and Juno EL6 repos available from CentOS Cloud SIG
Message-ID: <555DFA5B.1070408@redhat.com>

The CentOS Cloud SIG is pleased to announce the availability of OpenStack
Kilo package repositories for CentOS 7, and Juno repositories for CentOS 6.

These are the result of the last few months of work by the Cloud SIG
membership, and, of course, we owe a great deal of gratitude to the
upstream OpenStack community as well.

The CentOS 7 Kilo repository may be found at
http://mirror.centos.org/centos/7/cloud/x86_64/

The Juno CentOS 6 repository may be found at
http://mirror.centos.org/centos/6/cloud/x86_64/

The actual -release files will reside in Extras, so that you can

yum install centos-release-openstack-kilo

for Kilo and

yum install centos-release-openstack-juno

for Juno, without needing to mess with repo configurations.

See also the Juno EL6 RDO QuickStart at
http://wiki.centos.org/Cloud/OpenStack/JunoEL6QuickStart

CentOS cares about OpenStack. We test all of our cloud images against
OpenStack, in the CentOS 5, 6, and 7 branches. The CentOS Cloud SIG is
very keen on facilitating community efforts at CentOS, and we have
resources available for CI, repos, and other needs, which the community
can use. We welcome your participation in this effort.

We're dedicated to ensuring that CentOS is a solid, dependable platform
for deploying OpenStack, and that all versions of OpenStack are thoroughly
tested against CentOS, and vice versa.

You can find out more about the CentOS Cloud SIG, and how to get involved,
at http://wiki.centos.org/SpecialInterestGroup/Cloud and about the RDO
project at http://rdoproject.org/

--
Rich Bowen - rbowen at redhat.com
http://rdoproject.org/

From rdo-info at redhat.com  Thu May 21 15:40:01 2015
From: rdo-info at redhat.com (RDO Forum)
Date: Thu, 21 May 2015 15:40:01 +0000
Subject: [Rdo-list] [RDO] CentOS Cloud SIG announces Kilo and Juno package repos
Message-ID: <0000014d77219e1d-929991fa-64d8-4937-8f78-215812b4c479-000000@email.amazonses.com>

rbowen started a discussion: CentOS Cloud SIG announces Kilo and Juno
package repos.

Follow the link below to check it out:
https://www.rdoproject.org/forum/discussion/1017/centos-cloud-sig-announces-kilo-and-juno-package-repos

Have a great day!

From mrunge at redhat.com  Thu May 21 17:02:26 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Thu, 21 May 2015 19:02:26 +0200
Subject: Re: [Rdo-list] CentOS 7 continuous error
Message-ID: <555E0FA2.1080501@redhat.com>

On 21/05/15 15:43, Outback Dingo wrote:
> I am the admin, it's on a laptop locally....
>
> http://pastebin.com/PtBwMa9A

I suspect this is bug
https://bugzilla.redhat.com/show_bug.cgi?id=1218894

I just submitted a patch (and a build for Fedora), as I can't build on
CentOS.

Alan (or Haikel), would you be able to rebuild
https://kojipkgs.fedoraproject.org//packages/python-django-horizon/2015.1.0/6.fc23/src/python-django-horizon-2015.1.0-6.fc23.src.rpm
on CentOS?

Thanks.
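For anyone who wants to test the fix before it reaches the repos, a generic
local rebuild of the linked SRPM would look roughly like this — assuming the
build dependencies resolve from the enabled repos; this is a sketch, not the
CBS build Alan is being asked for:

    yum install rpm-build yum-utils
    yum-builddep python-django-horizon-2015.1.0-6.fc23.src.rpm
    rpmbuild --rebuild python-django-horizon-2015.1.0-6.fc23.src.rpm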
From outbackdingo at gmail.com  Thu May 21 17:06:32 2015
From: outbackdingo at gmail.com (Outback Dingo)
Date: Fri, 22 May 2015 03:06:32 +1000
Subject: Re: [Rdo-list] CentOS 7 continuous error

On Fri, May 22, 2015 at 3:02 AM, Matthias Runge wrote:
> I suspect this is bug
> https://bugzilla.redhat.com/show_bug.cgi?id=1218894
>
> I just submitted a patch (and a build for Fedora), as I can't build on
> CentOS.

Hrmmm, a build for Fedora? You have OpenStack running on Fedora? I tried
on F22 Beta 3 and it failed miserably.

> Alan (or Haikel), would you be able to rebuild
> https://kojipkgs.fedoraproject.org//packages/python-django-horizon/2015.1.0/6.fc23/src/python-django-horizon-2015.1.0-6.fc23.src.rpm
> on CentOS?

From apevec at gmail.com  Thu May 21 18:06:57 2015
From: apevec at gmail.com (Alan Pevec)
Date: Thu, 21 May 2015 20:06:57 +0200
Subject: Re: [Rdo-list] CentOS 7 continuous error

>> I just submitted a patch (and a build for Fedora), as I can't build on
>> CentOS.

CBS build done: https://cbs.centos.org/koji/buildinfo?buildID=1256
I'll create an rdopkg update.

> Hrmmm, a build for Fedora? You have OpenStack running on Fedora? I tried
> on F22 Beta 3 and it failed miserably.

RDO Kilo Fedora packages are Fedora Rawhide (fc23) builds:
http://koji.fedoraproject.org/koji/buildinfo?buildID=638394

Cheers,
Alan

From mrunge at redhat.com  Thu May 21 18:58:05 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Thu, 21 May 2015 20:58:05 +0200
Subject: Re: [Rdo-list] CentOS 7 continuous error
Message-ID: <555E2ABD.9040507@redhat.com>

On 21/05/15 19:06, Outback Dingo wrote:
> Hrmmm, a build for Fedora? You have OpenStack running on Fedora? I tried
> on F22 Beta 3 and it failed miserably.

Was that a question? Yes, of course I have it running on Fedora. But you
need to pull packages from Rawhide, as Horizon requires many of the
python clients, which have been on Fedora in the right version (at least
the last time I checked).

Matthias

From erwan at erwan.com  Thu May 21 19:04:32 2015
From: erwan at erwan.com (Erwan Gallen)
Date: Thu, 21 May 2015 12:04:32 -0700
Subject: Re: [Rdo-list] [CI] [Khaleesi] Agenda for rdo meeting at summit
In-Reply-To: <1431694819.2791.24.camel@redhat.com>
References: <1431694819.2791.24.camel@redhat.com>

Great RDO meeting at the Vancouver OpenStack Summit. You can find the
minutes here:
https://etherpad.openstack.org/p/RDO_Vancouver

For those who could not join, you can find some pictures here:

- Rich Bowen
  http://erwan.com/ressource/rdo/r1.jpg
- Perry Myers
  http://erwan.com/ressource/rdo/p1.jpg
  http://erwan.com/ressource/rdo/p2.jpg
- Derek Higgins
  http://erwan.com/ressource/rdo/d1.jpg
  http://erwan.com/ressource/rdo/d2.jpg
- Jaromir Coufal
  http://erwan.com/ressource/rdo/j1.jpg
  http://erwan.com/ressource/rdo/j2.jpg

Cheers,
Erwan

On 15 May 2015, at 06:00, whayutin wrote:

> Greetings,
> I've added an agenda item to the rdo meeting for developing community
> governance around khaleesi and openstack ci as it relates to rdo and
> osp.
>
> https://etherpad.openstack.org/p/RDO_Vancouver
>
> * RDO/OSP Openstack CI [khaleesi] Governance
>   * develop and adopt community rules for submissions and review
>   * develop and adopt a set of best practices.
>   * develop public documentation
>
> Anyone involved in CI should do their best to attend.
> Thanks!
From bderzhavets at hotmail.com  Fri May 22 05:58:20 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Fri, 22 May 2015 01:58:20 -0400
Subject: [Rdo-list] Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1

Per https://trello.com/c/sfDsedeI/29-rdo-kilo-ga :-

The Fedora testing repo is now available at
rdoproject.org/repos/openstack-kilo/testing/f22/

To enable it:

yum install https://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm
yum-config-manager --enable openstack-kilo-testing --disable openstack-kilo

I manually switched to the testing repo to install openstack-packstack.

# setenforce 0
# packstack --allinone

fails immediately while running the prescript puppet:

/sbin/chkconfig --add iptables
No such directory

Boris.

From apevec at gmail.com  Fri May 22 07:19:39 2015
From: apevec at gmail.com (Alan Pevec)
Date: Fri, 22 May 2015 09:19:39 +0200
Subject: Re: [Rdo-list] Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1

> fails immediately while running the prescript puppet:
> /sbin/chkconfig --add iptables
> No such directory

Please try on F21; on F22 there are still puppet issues.

Cheers,
Alan

From hguemar at fedoraproject.org  Fri May 22 07:32:14 2015
From: hguemar at fedoraproject.org (Haïkel)
Date: Fri, 22 May 2015 09:32:14 +0200
Subject: Re: [Rdo-list] [CI] [Khaleesi] Agenda for rdo meeting at summit

Thanks for the pictures, Erwan.

Compared to Paris, I count more non-redhatters attending, which is a good
indicator of growth. This is a positive outcome, with some actions I
wished to happen, that will take RDO to the next level: the undisputed
"Reference Distribution of OpenStack" ;)

Regards,
H.

From bderzhavets at hotmail.com  Fri May 22 08:45:00 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Fri, 22 May 2015 04:45:00 -0400
Subject: Re: [Rdo-list] Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1

An AIO packstack install on F21 works fine, except I was again forced to
switch to the testing repo manually.

Thanks.
Boris.

From bderzhavets at hotmail.com  Fri May 22 08:55:15 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Fri, 22 May 2015 04:55:15 -0400
Subject: [Rdo-list] RE(2): Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1

Sorry, there is an issue with openstack-nova-novncproxy.service again; the
rest seems to be OK (connected to the VNC console via virt-manager).

[root at ip-192-169-142-57 ~(keystone_admin)]# systemctl status openstack-nova-novncproxy.service -l
● openstack-nova-novncproxy.service - OpenStack Nova NoVNC Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled)
   Active: failed (Result: exit-code) since Fri 2015-05-22 11:47:08 MSK; 41s ago
  Process: 19781 ExecStart=/usr/bin/nova-novncproxy --web /usr/share/novnc/ $OPTIONS (code=exited, status=1/FAILURE)
 Main PID: 19781 (code=exited, status=1/FAILURE)

May 22 11:47:08 ip-192-169-142-57.ip.secureserver.net nova-novncproxy[19781]: File "/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py", line 25, in <module>
May 22 11:47:08 ip-192-169-142-57.ip.secureserver.net nova-novncproxy[19781]: from nova.cmd import baseproxy
May 22 11:47:08 ip-192-169-142-57.ip.secureserver.net nova-novncproxy[19781]: File "/usr/lib/python2.7/site-packages/nova/cmd/baseproxy.py", line 26, in <module>
May 22 11:47:08 ip-192-169-142-57.ip.secureserver.net nova-novncproxy[19781]: from nova.console import websocketproxy
May 22 11:47:08 ip-192-169-142-57.ip.secureserver.net nova-novncproxy[19781]: File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 154, in <module>
May 22 11:47:08 ip-192-169-142-57.ip.secureserver.net nova-novncproxy[19781]: websockify.ProxyRequestHandler):
May 22 11:47:08 ip-192-169-142-57.ip.secureserver.net nova-novncproxy[19781]: AttributeError: 'module' object has no attribute 'ProxyRequestHandler'
May 22 11:47:08 ip-192-169-142-57.ip.secureserver.net systemd[1]: openstack-nova-novncproxy.service: main process exited, code=exited, status=1/FAILURE
May 22 11:47:08 ip-192-169-142-57.ip.secureserver.net systemd[1]: Unit openstack-nova-novncproxy.service entered failed state.
May 22 11:47:08 ip-192-169-142-57.ip.secureserver.net systemd[1]: openstack-nova-novncproxy.service failed.

From apevec at gmail.com  Fri May 22 12:41:37 2015
From: apevec at gmail.com (Alan Pevec)
Date: Fri, 22 May 2015 14:41:37 +0200
Subject: Re: [Rdo-list] RE(2): Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1

2015-05-22 10:55 GMT+02:00 Boris Derzhavets:
> Sorry, there is an issue with openstack-nova-novncproxy.service again; the
> rest seems to be OK (connected to the VNC console via virt-manager).
> nova-novncproxy[19781]: websockify.ProxyRequestHandler):
> nova-novncproxy[19781]: AttributeError: 'module' object has no attribute 'ProxyRequestHandler'

Solly, Pádraig: python-websockify-0.6.0-2.fc21 was not pushed to
f21-updates, please create a Bodhi update. We'll carry it in the RDO Kilo
Fedora repo until it reaches stable updates.
Cheers,
Alan

From ichi.sara at gmail.com  Fri May 22 13:09:28 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Fri, 22 May 2015 15:09:28 +0200
Subject: [Rdo-list] [horizon] can't access the dashboard

Hello there,

I recently installed OpenStack Kilo with packstack. The dashboard was
working fine until I rebooted my machine; after that I could no longer
access the dashboard, and I got the error that says to contact the
administrator for more details.

I googled my issue with Horizon and found a suggestion that the problem is
related to OPENSTACK_KEYSTONE_DEFAULT_ROLE, with advice to define it as
admin. I did, but my problem persists. I enabled debug mode, and this is
the error I get in the browser after a failed attempt to log in to my
dashboard.

In advance, thanks for your response.
Sara

ValidationError at /auth/login/

[u'La valeur \xab\xa0cdf7ff7189174b64983c9bd3e128099c\xa0\xbb doit \xeatre un nombre entier.']
[English: "The value 'cdf7ff7189174b64983c9bd3e128099c' must be an integer."]

Request Method: POST
Request URL: http://192.168.5.34/dashboard/auth/login/
Django Version: 1.8.1
Exception Type: ValidationError
Exception Value: [u'La valeur \xab\xa0cdf7ff7189174b64983c9bd3e128099c\xa0\xbb doit \xeatre un nombre entier.']
Exception Location: /usr/lib/python2.7/site-packages/django/db/models/fields/__init__.py in to_python, line 969
Python Executable: /usr/bin/python
Python Version: 2.7.5
Python Path:
['/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../..',
 '/usr/lib64/python27.zip',
 '/usr/lib64/python2.7',
 '/usr/lib64/python2.7/plat-linux2',
 '/usr/lib64/python2.7/lib-tk',
 '/usr/lib64/python2.7/lib-old',
 '/usr/lib64/python2.7/lib-dynload',
 '/usr/lib64/python2.7/site-packages',
 '/usr/lib/python2.7/site-packages',
 '/usr/share/openstack-dashboard/openstack_dashboard']

Server time: Fri, 22 May 2015 12:52:22 +0000

From apevec at gmail.com  Fri May 22 13:14:02 2015
From: apevec at gmail.com (Alan Pevec)
Date: Fri, 22 May 2015 15:14:02 +0200
Subject: Re: [Rdo-list] Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1

2015-05-22 10:45 GMT+02:00 Boris Derzhavets:
> An AIO packstack install on F21 works fine, except I was again forced to
> switch to the testing repo manually.

Yeah, there's a packstack change under review,
https://review.openstack.org/183351 , which should fix this.

Alan

From apevec at gmail.com  Fri May 22 13:20:12 2015
From: apevec at gmail.com (Alan Pevec)
Date: Fri, 22 May 2015 15:20:12 +0200
Subject: Re: [Rdo-list] Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1
In-Reply-To: <555F2AC9.3010608@draigBrady.com>

> We can't do that, as it would break the Icehouse packages in F21.
> I.e. Icehouse is incompatible with websockify-0.6.0 as per:
> https://bugzilla.redhat.com/show_bug.cgi?id=1220081

Heh, Icehouse, I forgot about it...

> I'm not keen on backporting changes to Icehouse packages
> to handle the incompatibility, and hopefully putting the
> appropriate versions in the RDO repos will suffice.
>
> In that regard I also refer to the opposite issue on EL7 and:
> https://review.gerrithub.io/#/c/232925/

I commented there; I think backporting would be better, but you're the
Nova maintainer. I did put 0.6.0 in RDO Kilo Fedora in the f21-compat
subfolder and will push the 0.5.1 downgrade to RDO Icehouse EL7.

Cheers,
Alan
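For F21 users hitting the novncproxy AttributeError from earlier in this
thread, one way to pick up the compat build Alan mentions (the full URL is
the one he posts below) is simply:

    yum install https://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/f22/f21-compat/python-websockify-0.6.0-2.fc21.noarch.rpm
    systemctl restart openstack-nova-novncproxy.service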
From pbrady at redhat.com  Fri May 22 13:22:17 2015
From: pbrady at redhat.com (Pádraig Brady)
Date: Fri, 22 May 2015 14:22:17 +0100
Subject: Re: [Rdo-list] Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1
Message-ID: <555F2D89.7020908@redhat.com>

On 22/05/15 14:10, Pádraig Brady wrote:
> We can't do that, as it would break the Icehouse packages in F21.
> I.e. Icehouse is incompatible with websockify-0.6.0 as per:
> https://bugzilla.redhat.com/show_bug.cgi?id=1220081

I should have also mentioned that F22 was in the $subject, and
https://admin.fedoraproject.org/updates/FEDORA-2015-7274/python-websockify-0.6.0-2.fc22
should already be available. Perhaps a `dnf update` is required?

By right we should have a

  Requires: python-websockify >= 0.6

in the nova spec file to auto update.

cheers,
Pádraig.

From apevec at gmail.com  Fri May 22 14:21:27 2015
From: apevec at gmail.com (Alan Pevec)
Date: Fri, 22 May 2015 16:21:27 +0200
Subject: Re: [Rdo-list] Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1
In-Reply-To: <555F2D89.7020908@redhat.com>

> I should have also mentioned that F22 was in the $subject

That was the first attempt, which failed due to a puppet issue on F22
(puppet 4, which is compatible with the F22 ruby, is in progress).
F21 should work now with
https://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/f22/f21-compat/python-websockify-0.6.0-2.fc21.noarch.rpm

> and https://admin.fedoraproject.org/updates/FEDORA-2015-7274/python-websockify-0.6.0-2.fc22
> should already be available.

Yep, f22 is fine.

Alan
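The spec change Pádraig suggests is a one-liner; exactly where it lands in
the nova spec (main package vs the novncproxy subpackage) is an assumption
here:

    # in openstack-nova.spec, next to the existing Requires of the novncproxy subpackage:
    Requires: python-websockify >= 0.6

With that in place, a plain yum/dnf update of the proxy package would pull
in a matching websockify instead of leaving a stale 0.5.x installed.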
From pgsousa at gmail.com  Fri May 22 14:35:02 2015
From: pgsousa at gmail.com (Pedro Sousa)
Date: Fri, 22 May 2015 15:35:02 +0100
Subject: [Rdo-list] RDO Manager overcloud VNC Deployment doesn't work

Hi all,

I've deployed the overcloud and launched a VM, but I cannot VNC to the
console. In the logs I see this:

2015-05-22 14:27:57.573 1140 TRACE nova   File "/usr/lib64/python2.7/socket.py", line 224, in meth
2015-05-22 14:27:57.573 1140 TRACE nova     return getattr(self._sock,name)(*args)
2015-05-22 14:27:57.573 1140 TRACE nova error: [Errno 98] Address already in use
2015-05-22 14:27:57.573 1140 TRACE nova

My guess is that nova.conf on the controller nodes should have:

novncproxy_base_url=http://VIP:6080/vnc_auto.html
novncproxy_host=VIP

instead of:

novncproxy_base_url=http://0.0.0.0:6080/vnc_auto.html
novncproxy_host=0.0.0.0

Regards,
Pedro Sousa

From P at draigbrady.com  Fri May 22 13:10:33 2015
From: P at draigbrady.com (Pádraig Brady)
Date: Fri, 22 May 2015 14:10:33 +0100
Subject: Re: [Rdo-list] Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1
Message-ID: <555F2AC9.3010608@draigBrady.com>

On 22/05/15 13:41, Alan Pevec wrote:
> Solly, Pádraig: python-websockify-0.6.0-2.fc21 was not pushed to
> f21-updates, please create a Bodhi update. We'll carry it in the RDO Kilo
> Fedora repo until it reaches stable updates.

We can't do that, as it would break the Icehouse packages in F21.
I.e. Icehouse is incompatible with websockify-0.6.0 as per:
https://bugzilla.redhat.com/show_bug.cgi?id=1220081

I'm not keen on backporting changes to Icehouse packages to handle the
incompatibility, and hopefully putting the appropriate versions in the
RDO repos will suffice.

In that regard I also refer to the opposite issue on EL7 and:
https://review.gerrithub.io/#/c/232925/

cheers,
Pádraig

From marius at remote-lab.net  Fri May 22 15:25:00 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Fri, 22 May 2015 18:25:00 +0300
Subject: Re: [Rdo-list] RDO Manager overcloud VNC Deployment doesn't work

Hi Pedro,

There's a BZ open for this:
https://bugzilla.redhat.com/show_bug.cgi?id=1221986

On Fri, May 22, 2015 at 5:35 PM, Pedro Sousa wrote:
> I've deployed the overcloud and launched a VM, but I cannot VNC to the
> console. [...]
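Until that BZ is resolved, a manual workaround along the lines of Pedro's
guess would be roughly the following on each controller — the VIP value is a
placeholder, and this is only a sketch, not the eventual fix from the bug:

    VIP=192.0.2.10   # replace with your controller VIP
    crudini --set /etc/nova/nova.conf DEFAULT novncproxy_host "$VIP"
    crudini --set /etc/nova/nova.conf DEFAULT novncproxy_base_url "http://$VIP:6080/vnc_auto.html"
    systemctl restart openstack-nova-novncproxy.service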
From roxenham at redhat.com  Fri May 22 16:20:38 2015
From: roxenham at redhat.com (Rhys Oxenham)
Date: Fri, 22 May 2015 09:20:38 -0700
Subject: Re: [Rdo-list] [horizon] can't access the dashboard
Message-ID: <1B69E72A-7E3B-42DE-839E-A6BA896C3772@redhat.com>

Hi Sara,

To confirm it's also the issue that I'm seeing: does the error go away
when you access the page via a private browser window (or by clearing
your cookies)?

Cheers
Rhys

> On 22 May 2015, at 06:09, ICHIBA Sara wrote:
>
> I recently installed OpenStack Kilo with packstack. The dashboard was
> working fine until I rebooted my machine; after that I could no longer
> access the dashboard. [ValidationError details as in the original post]
From jpena at redhat.com  Fri May 22 16:26:55 2015
From: jpena at redhat.com (Javier Peña)
Date: Fri, 22 May 2015 18:26:55 +0200
Subject: [Rdo-list] HA document using application-native tools and keepalived, updated for Kilo
Message-ID: <1432312015.16976.5.camel@redhat.com>

Dear all,

Following the RDO Kilo release, I have updated the document describing an
architecture for a highly available OpenStack setup using
application-native options and keepalived. As part of the update,
instructions for Sahara and Trove have been added, and several smaller
details have been fixed.

You can find the updated document at
https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md .
Of course, the Juno version is still available in a separate branch.

Feedback (and PRs) are welcome. I expect to add detailed scenario files
using phd (https://github.com/davidvossel/phd) in the future, but that is
currently work in progress.

Thanks,
Javier

From mrunge at redhat.com  Fri May 22 17:06:55 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Fri, 22 May 2015 19:06:55 +0200
Subject: Re: [Rdo-list] [horizon] can't access the dashboard
Message-ID: <555F622F.8020208@redhat.com>

On 22/05/15 18:20, Rhys Oxenham wrote:
> To confirm it's also the issue that I'm seeing: does the error go away
> when you access the page via a private browser window (or by clearing
> your cookies)?

This is something I fixed yesterday. The bug number is (still)
https://bugzilla.redhat.com/show_bug.cgi?id=1218894

I must admit I have no idea when the fix will be merged into the repos.
Matthias

From sgordon at redhat.com  Fri May 22 21:25:07 2015
From: sgordon at redhat.com (Steve Gordon)
Date: Fri, 22 May 2015 17:25:07 -0400 (EDT)
Subject: [Rdo-list] No /usr/share/keystone/keystone-paste.ini in openstack-keystone-2015.1.0-1.el7.noarch.rpm
Message-ID: <551062873.3775683.1432329907100.JavaMail.zimbra@redhat.com>

Hi all,

It's been pointed out to me that in our Kilo openstack-keystone package we
ship a /usr/share/keystone/keystone-dist-paste.ini file but no
/usr/share/keystone/keystone-paste.ini. Is this intentional? Asking in the
context of https://review.openstack.org/#/c/185120

Thanks,
Steve

From sgordon at redhat.com  Sat May 23 13:40:36 2015
From: sgordon at redhat.com (Steve Gordon)
Date: Sat, 23 May 2015 09:40:36 -0400 (EDT)
Subject: Re: [Rdo-list] No /usr/share/keystone/keystone-paste.ini in openstack-keystone-2015.1.0-1.el7.noarch.rpm
Message-ID: <893481042.3850660.1432388436190.JavaMail.zimbra@redhat.com>

> It's been pointed out to me that in our Kilo openstack-keystone package we
> ship a /usr/share/keystone/keystone-dist-paste.ini file but no
> /usr/share/keystone/keystone-paste.ini. Is this intentional?

Sorry, I meant there is no /etc/keystone/keystone-paste.ini.

Thanks,
Steve

From outbackdingo at gmail.com  Sat May 23 17:07:26 2015
From: outbackdingo at gmail.com (Outback Dingo)
Date: Sun, 24 May 2015 03:07:26 +1000
Subject: Re: [Rdo-list] RE(2): Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1
From Yaniv.Kaul at emc.com Sat May 23 19:56:51 2015 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Sat, 23 May 2015 15:56:51 -0400 Subject: [Rdo-list] [RDO] CentOS Cloud SIG announces Kilo and Juno package repos In-Reply-To: <0000014d77219e1d-929991fa-64d8-4937-8f78-215812b4c479-000000@email.amazonses.com> References: <0000014d77219e1d-929991fa-64d8-4937-8f78-215812b4c479-000000@email.amazonses.com> Message-ID: <648473255763364B961A02AC3BE1060D03D0F2F033@MX19A.corp.emc.com> What's the relationship between those packages and those available @ https://repos.fedorapeople.org/repos/openstack/openstack-kilo/el7/ ? Y. > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > Behalf Of RDO Forum > Sent: Thursday, May 21, 2015 6:40 PM > To: rdo-list > Subject: [Rdo-list] [RDO] CentOS Cloud SIG announces Kilo and Juno package > repos > > rbowen started a discussion. > > CentOS Cloud SIG announces Kilo and Juno package repos > > --- > Follow the link below to check it out: > https://www.rdoproject.org/forum/discussion/1017/centos-cloud-sig- > announces-kilo-and-juno-package-repos > > Have a great day! > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From hguemar at fedoraproject.org Sat May 23 20:21:44 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Sat, 23 May 2015 22:21:44 +0200 Subject: [Rdo-list] [RDO] CentOS Cloud SIG announces Kilo and Juno package repos In-Reply-To: <648473255763364B961A02AC3BE1060D03D0F2F033@MX19A.corp.emc.com> References: <0000014d77219e1d-929991fa-64d8-4937-8f78-215812b4c479-000000@email.amazonses.com> <648473255763364B961A02AC3BE1060D03D0F2F033@MX19A.corp.emc.com> Message-ID: 2015-05-23 21:56 GMT+02:00 Kaul, Yaniv : > What's the relationship between those packages and those available @ https://repos.fedorapeople.org/repos/openstack/openstack-kilo/el7/ ? > Y. > RDO repos are currently re-shipping builds from CentOS for EL7. We're in the process of switching to CentOS repositories, for instance, there's work to automate the publication process with sanity checks on repo level. H. From sgordon at redhat.com Sat May 23 23:53:22 2015 From: sgordon at redhat.com (Steve Gordon) Date: Sat, 23 May 2015 19:53:22 -0400 (EDT) Subject: [Rdo-list] [RDO] CentOS Cloud SIG announces Kilo and Juno package repos In-Reply-To: References: <0000014d77219e1d-929991fa-64d8-4937-8f78-215812b4c479-000000@email.amazonses.com> <648473255763364B961A02AC3BE1060D03D0F2F033@MX19A.corp.emc.com> Message-ID: <1109203220.3878867.1432425202027.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Ha?kel" > To: "Yaniv Kaul" > > 2015-05-23 21:56 GMT+02:00 Kaul, Yaniv : > > What's the relationship between those packages and those available @ > > https://repos.fedorapeople.org/repos/openstack/openstack-kilo/el7/ ? > > Y. > > > > RDO repos are currently re-shipping builds from CentOS for EL7. > We're in the process of switching to CentOS repositories, for instance, > there's work to automate the publication process with sanity checks on > repo level. > > H. Tangential question, where can I find the corresponding SRPMs? 
Thanks, Steve From hguemar at fedoraproject.org Sun May 24 13:21:25 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Sun, 24 May 2015 15:21:25 +0200 Subject: [Rdo-list] [RDO] CentOS Cloud SIG announces Kilo and Juno package repos In-Reply-To: <1109203220.3878867.1432425202027.JavaMail.zimbra@redhat.com> References: <0000014d77219e1d-929991fa-64d8-4937-8f78-215812b4c479-000000@email.amazonses.com> <648473255763364B961A02AC3BE1060D03D0F2F033@MX19A.corp.emc.com> <1109203220.3878867.1432425202027.JavaMail.zimbra@redhat.com> Message-ID: 2015-05-24 1:53 GMT+02:00 Steve Gordon : > > Tangential question, where can I find the corresponding SRPMs? > > Thanks, > > Steve Good question, I'm CC'ing Alan and KB as we need to address this topic. At the moment, the only way to retrieve source packages is through CBS: http://cbs.centos.org/repos/cloud7-openstack-common-testing/source/SRPMS/ http://cbs.centos.org/repos/cloud7-openstack-kilo-testing/source/SRPMS/ I advise against publicizing too much these repos as they are managed by Koji, they're *not* suitable for GA nor reliable. Moreover, we still have to improve how we handle sources between Fedora's dist-git, github and make it easier to find versioned sources. Regards, H. From hguemar at fedoraproject.org Sun May 24 13:28:09 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Sun, 24 May 2015 15:28:09 +0200 Subject: [Rdo-list] [RDO] CentOS Cloud SIG announces Kilo and Juno package repos In-Reply-To: <1109203220.3878867.1432425202027.JavaMail.zimbra@redhat.com> References: <0000014d77219e1d-929991fa-64d8-4937-8f78-215812b4c479-000000@email.amazonses.com> <648473255763364B961A02AC3BE1060D03D0F2F033@MX19A.corp.emc.com> <1109203220.3878867.1432425202027.JavaMail.zimbra@redhat.com> Message-ID: 2015-05-24 1:53 GMT+02:00 Steve Gordon : > > Tangential question, where can I find the corresponding SRPMs? > > Thanks, > > Steve Good question, I'm CC'ing Alan and KB as we need to address this topic. At the moment, the only way to retrieve source packages is through CBS: http://cbs.centos.org/repos/cloud7-openstack-common-testing/source/SRPMS/ http://cbs.centos.org/repos/cloud7-openstack-kilo-testing/source/SRPMS/ I advise against publicizing too much these repos as they are managed by Koji, they're *not* suitable for GA nor reliable. Moreover, we still have to improve how we handle sources between Fedora's dist-git, github and make it easier to find versioned sources. Regards, H. From bderzhavets at hotmail.com Sun May 24 19:05:52 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sun, 24 May 2015 15:05:52 -0400 Subject: [Rdo-list] RE(3) : Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1 In-Reply-To: References: Message-ID: /sbin/chkconfig --add service doesn't work on Fedora 22 RC1 Workaround for for iptables service iptables save systemctl stop firewalld systemctl disable firewalld systemctl start iptables systemctl enable iptables Restart packstack. Failures will follow one by one chkconfig --add openstack-name returns 1 systemctl start openstack-name systemctl enable openstack-name Restart packstack.and so on .. and so on .. 
I quit procedure after nova-api was done 192.169.142.57_prescript.pp: [ DONE ] Applying 192.169.142.57_amqp.pp Applying 192.169.142.57_mariadb.pp 192.169.142.57_amqp.pp: [ DONE ] 192.169.142.57_mariadb.pp: [ DONE ] Applying 192.169.142.57_keystone.pp Applying 192.169.142.57_glance.pp Applying 192.169.142.57_cinder.pp 192.169.142.57_keystone.pp: [ DONE ] 192.169.142.57_cinder.pp: [ DONE ] 192.169.142.57_glance.pp: [ DONE ] Applying 192.169.142.57_api_nova.pp 192.169.142.57_api_nova.pp: [ DONE ] Applying 192.169.142.57_nova.pp 192.169.142.57_nova.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.169.142.57_nova.pp Error: Could not enable openstack-ceilometer-compute: Execution of '/sbin/chkconfig --add openstack-ceilometer-compute' returned 1: error reading information on service openstack-ceilometer-compute: No such file or directory Next fix is :- systemctl start openstack-ceilometer-compute Restart packstack Boris. -------------------------------------------------------------------------------------------------------------------------- From: bderzhavets at hotmail.com To: apevec at gmail.com Date: Fri, 22 May 2015 01:58:20 -0400 CC: rdo-list at redhat.com Subject: [Rdo-list] Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1 Per https://trello.com/c/sfDsedeI/29-rdo-kilo-ga :- Fedora testing repo is now available at rdoproject.org/repos/openstack-kilo/testing/f22/ To enable it: yum install https://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm; yum-config-manager --enable openstack-kilo-testing --disable openstack-kilo Manually switched to testing repo to install openstack-packstack. # setenforce 0 # packstack --allinone fails immediately during running prescript puppet /sbin/chkconfig --add iptables No such directory Boris. _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Mon May 25 08:10:57 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 25 May 2015 10:10:57 +0200 Subject: [Rdo-list] No /usr/share/keystone/keystone-paste.ini in openstack-keystone-2015.1.0-1.el7.noarch.rpm In-Reply-To: <893481042.3850660.1432388436190.JavaMail.zimbra@redhat.com> References: <551062873.3775683.1432329907100.JavaMail.zimbra@redhat.com> <893481042.3850660.1432388436190.JavaMail.zimbra@redhat.com> Message-ID: <20150525081057.GB15019@tesla> On Sat, May 23, 2015 at 09:40:36AM -0400, Steve Gordon wrote: > ----- Original Message ----- > > From: "Steve Gordon" > > To: "rdo-list" > > > > Hi all, > > > > It's been pointed out to me that in our Kilo openstack-keystone package we > > ship a /usr/share/keystone/keystone-dist-paste.ini file but no > > /usr/share/keystone/keystone-paste.ini. Is this intentional? Asking in the > > context of https://review.openstack.org/#/c/185120 > > > > Thanks, > > > > Steve > > Sorry, I meant there is no /etc/keystone/keystone-paste.ini. Extracting the Keystone SRPM, and looking at 'keystone-dist.conf', it is referring to /usr/share: $ cat keystone-dist.conf [DEFAULT] log_file=/var/log/keystone/keystone.log use_stderr = False [database] connection=mysql://keystone:keystone at localhost/keystone [paste_deploy] config_file=/usr/share/keystone/keystone-dist-paste.ini So, RDO just seems to be sticking with upstream behavior. 
-- /kashyap From apevec at gmail.com Mon May 25 08:36:33 2015 From: apevec at gmail.com (Alan Pevec) Date: Mon, 25 May 2015 10:36:33 +0200 Subject: [Rdo-list] No /usr/share/keystone/keystone-paste.ini in openstack-keystone-2015.1.0-1.el7.noarch.rpm In-Reply-To: <20150525081057.GB15019@tesla> References: <551062873.3775683.1432329907100.JavaMail.zimbra@redhat.com> <893481042.3850660.1432388436190.JavaMail.zimbra@redhat.com> <20150525081057.GB15019@tesla> Message-ID: > So, RDO just seems to be sticking with upstream behavior. dist.conf is not upstream, it was introduced in the initial Fedora packaging specifically to enforce distribution defaults. Regarding paste.ini, the thinking was that this is NOT an user configurable file and should be treated as _code_ because it has full code paths for filters which could change between releases so RPM packaging is doing the right thing by keeping distro default out of /etc and overwriting it on upgrades. There's still an option for users to create paste.ini in /etc or wherever and point to it in /etc/keystone.conf [paste_deploy] section but this is problematic from the support perspective. Keystone project could consider providing something more supportable like Glance with paste_deploy.flavor instead of documenting editing of paste.ini. Cheers, Alan From apevec at gmail.com Mon May 25 08:44:35 2015 From: apevec at gmail.com (Alan Pevec) Date: Mon, 25 May 2015 10:44:35 +0200 Subject: [Rdo-list] RE(2): Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1 In-Reply-To: References: Message-ID: 2015-05-23 20:10 GMT+02:00 Ha?kel : > I submitted the update with a karma threshold of 1 (I was able to > reproduce the issue). > https://admin.fedoraproject.org/updates/python-websockify-0.6.0-2.fc21 Please revoke, or RDO Kilo on F21 compatibility we have it in https://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/f22/f21-compat/python-websockify-0.6.0-2.fc21.noarch.rpm Cheers, Alan From apevec at gmail.com Mon May 25 08:48:24 2015 From: apevec at gmail.com (Alan Pevec) Date: Mon, 25 May 2015 10:48:24 +0200 Subject: [Rdo-list] RE(2): Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1 In-Reply-To: References: Message-ID: 2015-05-23 19:07 GMT+02:00 Outback Dingo : > So can someoone clarify for me what repo is successful on Fedora 22 that i > dshould use for a fresh laptop websockify issue discussed is only for F21 and solved by websockify update in RDO Kilo repo. There are known issues on Fedora 22 (see the other thread here on rdo-list) which will require puppet module changes, please use Fedora 21 for now: yum install https://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm yum-config-manager --disable openstack-kilo yum-config-manager --enable openstack-kilo-testing setenforce 0 From kchamart at redhat.com Mon May 25 09:00:43 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 25 May 2015 11:00:43 +0200 Subject: [Rdo-list] No /usr/share/keystone/keystone-paste.ini in openstack-keystone-2015.1.0-1.el7.noarch.rpm In-Reply-To: References: <551062873.3775683.1432329907100.JavaMail.zimbra@redhat.com> <893481042.3850660.1432388436190.JavaMail.zimbra@redhat.com> <20150525081057.GB15019@tesla> Message-ID: <20150525090043.GC15019@tesla> On Mon, May 25, 2015 at 10:36:33AM +0200, Alan Pevec wrote: > > So, RDO just seems to be sticking with upstream behavior. > > dist.conf is not upstream, it was introduced in the initial Fedora > packaging specifically to enforce distribution defaults. 
> Regarding paste.ini,

Ah, sorry -- was about to check Fedora's dist-git, but made the wrong
conclusion, thanks for correcting.

> the thinking was that this is NOT a user-configurable file and should
> be treated as _code_, because it has full code paths for filters which
> could change between releases, so RPM packaging is doing the right
> thing by keeping the distro default out of /etc and overwriting it on
> upgrades.

Yep, that sounds sensible.

> There's still an option for users to create paste.ini in /etc or
> wherever and point to it in the /etc/keystone.conf [paste_deploy]
> section, but this is problematic from the support perspective. The
> Keystone project could consider providing something more supportable,
> like Glance's paste_deploy.flavor, instead of documenting editing of
> paste.ini.
>
> Cheers, Alan

--
/kashyap

From apevec at gmail.com  Mon May 25 10:23:21 2015
From: apevec at gmail.com (Alan Pevec)
Date: Mon, 25 May 2015 12:23:21 +0200
Subject: [Rdo-list] [RDO] CentOS Cloud SIG announces Kilo and Juno package
	repos
In-Reply-To: References: <0000014d77219e1d-929991fa-64d8-4937-8f78-215812b4c479-000000@email.amazonses.com>
	<648473255763364B961A02AC3BE1060D03D0F2F033@MX19A.corp.emc.com>
	<1109203220.3878867.1432425202027.JavaMail.zimbra@redhat.com>
Message-ID:

>> Tangential question, where can I find the corresponding SRPMs?
> Good question, I'm CC'ing Alan and KB as we need to address this topic.

Do we really need to publish SRPMs? Wouldn't it be enough to point to
dist-git, which is http://pkgs.fedoraproject.org/cgit/?q=openstack- for
now?

Cheers,
Alan

From pgsousa at gmail.com  Mon May 25 11:09:18 2015
From: pgsousa at gmail.com (Pedro Sousa)
Date: Mon, 25 May 2015 12:09:18 +0100
Subject: [Rdo-list] RDO-Manager overcloud change existing plan, how?
Message-ID:

Hi all,

I've deployed rdo-manager in a virt env and everything is working fine,
except the vnc console, for which there is already an open bug.

Now I would like to change some parameters on my deployment: let's say I
want to disable NeutronTunneling, I want to use VLAN for tenants, and I
want to use 1500 MTU on dnsmasq.

So I downloaded the plan:

#tuskar plan-templates -O /tmp uuid

changed plan.yaml, environment.yaml, provider-Controller-1.yaml,
provider-Compute-1.yaml, then I ran the stack:

# heat stack-create -f tmp/plan.yaml -e tmp/environment.yaml overcloud

The overcloud is deployed fine but the values aren't changed. What am I
missing here?

Thanks,
Pedro Sousa
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gfidente at redhat.com  Mon May 25 11:36:31 2015
From: gfidente at redhat.com (Giulio Fidente)
Date: Mon, 25 May 2015 13:36:31 +0200
Subject: [Rdo-list] RDO-Manager overcloud change existing plan, how?
In-Reply-To: References: Message-ID: <5563093F.9000206@redhat.com>

On 05/25/2015 01:09 PM, Pedro Sousa wrote:
> Hi all,
>
> I've deployed rdo-manager in a virt env and everything is working fine,
> except the vnc console, for which there is already an open bug.
>
> Now I would like to change some parameters on my deployment: let's say I
> want to disable NeutronTunneling, I want to use VLAN for tenants, and I
> want to use 1500 MTU on dnsmasq.
>
> So I downloaded the plan:
>
> #tuskar plan-templates -O /tmp uuid
>
> changed plan.yaml, environment.yaml, provider-Controller-1.yaml,
> provider-Compute-1.yaml, then I ran the stack:
>
> # heat stack-create -f tmp/plan.yaml -e tmp/environment.yaml overcloud
>
> The overcloud is deployed fine but the values aren't changed. What am I
> missing here?
hi,

if you launch stack-create manually, the newly created overcloud is not
reprovisioned with the initial keystone endpoints/users/roles ... to get
a usable overcloud you should launch instack-deploy-overcloud again

so you can change the defaults for the various params by patching the
tuskar plan with 'tuskar plan-update', see [1]

yet some of these are automatically parsed from ENV vars, like
NEUTRON_TUNNEL_TYPES and NEUTRON_NETWORK_TYPE, see [2]

the NeutronDnsmasqOptions param instead is not parsed from any ENV var,
so you're forced to use 'tuskar plan-update'

I'm adding a couple of guys on CC who might help, but let us know how it
goes!

1. https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L274

2. https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L205-L208
--
Giulio Fidente
GPG KEY: 08D733BA

From sgordon at redhat.com  Mon May 25 12:29:11 2015
From: sgordon at redhat.com (Steve Gordon)
Date: Mon, 25 May 2015 08:29:11 -0400 (EDT)
Subject: [Rdo-list] No /usr/share/keystone/keystone-paste.ini in
	openstack-keystone-2015.1.0-1.el7.noarch.rpm
In-Reply-To: References: <551062873.3775683.1432329907100.JavaMail.zimbra@redhat.com>
	<893481042.3850660.1432388436190.JavaMail.zimbra@redhat.com>
	<20150525081057.GB15019@tesla>
Message-ID: <2028993815.4263576.1432556951633.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Alan Pevec"
> To: "Kashyap Chamarthy" , "Adam Young"
>
> > So, RDO just seems to be sticking with upstream behavior.
>
> dist.conf is not upstream; it was introduced in the initial Fedora
> packaging specifically to enforce distribution defaults.
> Regarding paste.ini, the thinking was that this is NOT a
> user-configurable file and should be treated as _code_, because it has
> full code paths for filters which could change between releases, so RPM
> packaging is doing the right thing by keeping the distro default out of
> /etc and overwriting it on upgrades. There's still an option for users
> to create paste.ini in /etc or wherever and point to it in the
> /etc/keystone.conf [paste_deploy] section, but this is problematic from
> the support perspective.
> The Keystone project could consider providing something more
> supportable, like Glance's paste_deploy.flavor, instead of documenting
> editing of paste.ini.
>
> Cheers,
> Alan

OK, but in this case the keystone-paste.ini we are using in RDO includes
the admin_token_auth directives that the documentation patch is
endeavoring to remove [1], and the RHEL-OSP 7 beta packages I took a
look at don't. Which is correct?

Thanks,

Steve

[1] https://review.openstack.org/#/c/185120/1/doc/install-guide/section_keystone-verify.xml

From hbrock at redhat.com  Mon May 25 12:55:57 2015
From: hbrock at redhat.com (Hugh O. Brock)
Date: Mon, 25 May 2015 14:55:57 +0200
Subject: [Rdo-list] reconsidering midstream repos
Message-ID: <20150525125555.GK4035@redhat.com>

Seems like the midstream repos are causing us a lot of pain with little
gain, at least in some cases. (For example, it appears the t-h-t
midstream exists to carry a single patch that enables mongodb on
CentOS.) Is it worth discussing whether we can eliminate some of these,
especially for upstreams like t-h-t that aren't tightly tied to the
OpenStack release schedule?
/me ducks flying bricks --Hugh -- == Hugh Brock, hbrock at redhat.com == == Senior Engineering Manager, Cloud Engineering == == RDO Manager: Install, configure, and scale OpenStack == == http://rdoproject.org == "I know that you believe you understand what you think I said, but I?m not sure you realize that what you heard is not what I meant." --Robert McCloskey From hguemar at fedoraproject.org Mon May 25 15:00:03 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 25 May 2015 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting Message-ID: <20150525150003.BBB6560A94F1@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO packaging meeting on 2015-05-27 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging)) Every week on #rdo on freenode Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From dtantsur at redhat.com Mon May 25 18:07:20 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 25 May 2015 20:07:20 +0200 Subject: [Rdo-list] FYI: ironic-discoverd undergoing renaming and official adoption Message-ID: <556364D8.1000700@redhat.com> Hi folks! Vancouver summit was extremely productive for the Ironic team, and this is one of the consequences: Good news is that ironic-discoverd was approved (by the TC) to be included into the Baremetal project. \o/ The complication is that upstream requested its rename. The reason is that word "discovery" is overloaded and causes confusions. According to upstream, the process we do is "inspection" or "introspection". "Discovery" refers to finding new nodes on network, and is considered out of scope for Ironic for now. ironic-discoverd will be split into 2 new packages: - ironic-inspector (python module ironic_inspector) - python-ironic-inspector-client (python module ironic_inspector_client) There will be no changes for RDO Kilo/RHOSP 7. For the next version we should be using new RPM packages (to be created and approved for Fedora): openstack-ironic-inspector and python-ironic-inspector-client. I'm still to learn the action items for the rename, but I will start it ASAP. Cheers, Dmitry From dtantsur at redhat.com Mon May 25 18:10:30 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 25 May 2015 20:10:30 +0200 Subject: [Rdo-list] reconsidering midstream repos In-Reply-To: <20150525125555.GK4035@redhat.com> References: <20150525125555.GK4035@redhat.com> Message-ID: <55636596.303@redhat.com> On 05/25/2015 02:55 PM, Hugh O. Brock wrote: > Seems like the midstream repos are causing us a lot of pain with little > gain, at least in some cases. (For example it appears the t-h-t > midstream exists to carry a single patch that enables mongodb on > Centos.) Is it worth discussing whether we can eliminate some of these, > especially for upstreams like t-h-t that aren't tightly tied to the > OpenStack release schedule? And probably for clients. I've heard from upstream folks that stable/kilo on clients is not designed for distributions, only for CI. So we should probably consider moving forward (e.g. with ironicclient). 
> > /me ducks flying bricks > > --Hugh > From apevec at gmail.com Mon May 25 21:46:07 2015 From: apevec at gmail.com (Alan Pevec) Date: Mon, 25 May 2015 23:46:07 +0200 Subject: [Rdo-list] reconsidering midstream repos In-Reply-To: <55636596.303@redhat.com> References: <20150525125555.GK4035@redhat.com> <55636596.303@redhat.com> Message-ID: > And probably for clients. I've heard from upstream folks that stable/kilo on > clients is not designed for distributions, only for CI. Was this recorded in any design summit etherpad? If so, link please! I assume this is related to https://review.openstack.org/182672 and one issue discussed there is new dependencies which client projects could pick up on master, so clients intended to work on older branches must keep working with old dependencies. Cheers, Alan From apevec at gmail.com Mon May 25 22:17:57 2015 From: apevec at gmail.com (Alan Pevec) Date: Tue, 26 May 2015 00:17:57 +0200 Subject: [Rdo-list] No /usr/share/keystone/keystone-paste.ini in openstack-keystone-2015.1.0-1.el7.noarch.rpm In-Reply-To: <2028993815.4263576.1432556951633.JavaMail.zimbra@redhat.com> References: <551062873.3775683.1432329907100.JavaMail.zimbra@redhat.com> <893481042.3850660.1432388436190.JavaMail.zimbra@redhat.com> <20150525081057.GB15019@tesla> <2028993815.4263576.1432556951633.JavaMail.zimbra@redhat.com> Message-ID: 2015-05-25 14:29 GMT+02:00 Steve Gordon : > OK, but in this case the keystone-paste.ini we are using in RDO includes the admin_token_auth directives that the documentation patch is endeavoring to remove [1] and the RHEL-OSP 7 beta packages I took a look at don't. Which is correct? It is an upstream bug and potential security issue which should be fixed instead of working around in documentation and deployment tools: https://review.openstack.org/185464 Cheers, Alan > [1] https://review.openstack.org/#/c/185120/1/doc/install-guide/section_keystone-verify.xml From sgordon at redhat.com Mon May 25 22:31:21 2015 From: sgordon at redhat.com (Steve Gordon) Date: Mon, 25 May 2015 18:31:21 -0400 (EDT) Subject: [Rdo-list] No /usr/share/keystone/keystone-paste.ini in openstack-keystone-2015.1.0-1.el7.noarch.rpm In-Reply-To: References: <551062873.3775683.1432329907100.JavaMail.zimbra@redhat.com> <893481042.3850660.1432388436190.JavaMail.zimbra@redhat.com> <20150525081057.GB15019@tesla> <2028993815.4263576.1432556951633.JavaMail.zimbra@redhat.com> Message-ID: <54542652.4393465.1432593081626.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Alan Pevec" > To: "Steve Gordon" , nkinder at redhat.com > > 2015-05-25 14:29 GMT+02:00 Steve Gordon : > > OK, but in this case the keystone-paste.ini we are using in RDO includes > > the admin_token_auth directives that the documentation patch is > > endeavoring to remove [1] and the RHEL-OSP 7 beta packages I took a look > > at don't. Which is correct? > > It is an upstream bug and potential security issue which should be > fixed instead of working around in documentation and deployment tools: > https://review.openstack.org/185464 > > Cheers, > Alan Thanks Alan, Just to be explicit you are saying it is enough if the documentation is updated to set admin_token = in the /etc/keystone/keystone.conf and the fact that /usr/share/keystone/keystone-dist-paste.ini contains admin_token_auth directives is irrelevant if this is the case? 
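For concreteness, something like this untested sketch -- i.e., the only
required change being in keystone.conf, not in the paste file (the sed
one-liner below is illustrative, not from any documentation):

# disable admin-token auth by commenting out the token itself,
# even though admin_token_auth stays in keystone-dist-paste.ini
sed -i 's/^admin_token *=.*/#&/' /etc/keystone/keystone.conf
systemctl restart openstack-keystone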
Thanks,

Steve

From apevec at gmail.com  Mon May 25 22:54:47 2015
From: apevec at gmail.com (Alan Pevec)
Date: Tue, 26 May 2015 00:54:47 +0200
Subject: [Rdo-list] No /usr/share/keystone/keystone-paste.ini in
	openstack-keystone-2015.1.0-1.el7.noarch.rpm
In-Reply-To: <54542652.4393465.1432593081626.JavaMail.zimbra@redhat.com>
References: <551062873.3775683.1432329907100.JavaMail.zimbra@redhat.com>
	<893481042.3850660.1432388436190.JavaMail.zimbra@redhat.com>
	<20150525081057.GB15019@tesla>
	<2028993815.4263576.1432556951633.JavaMail.zimbra@redhat.com>
	<54542652.4393465.1432593081626.JavaMail.zimbra@redhat.com>
Message-ID:

> Just to be explicit you are saying it is enough if the documentation is
> updated to set admin_token = in the /etc/keystone/keystone.conf and the
> fact that /usr/share/keystone/keystone-dist-paste.ini contains
> admin_token_auth directives is irrelevant if this is the case?

Once my proposed patch is merged, it will be enough to remove or comment
out admin_token in keystone.conf to disable it, and this will also be
the default. Right now the default admin_token is ADMIN, and this is a
security issue which is worked around by the documentation you're
pointing out.

Cheers,
Alan

From bderzhavets at hotmail.com  Mon May 25 23:32:30 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Mon, 25 May 2015 19:32:30 -0400
Subject: [Rdo-list] RE(4): RE(2): Attempt to setup RDO Kilo per recent
	Trello instructions on F22 RC1
In-Reply-To: References: Message-ID:

You may install RDO Kilo on F22; I spent 4-5 hr to complete the install.
As soon as you get "chkconfig --add service" returning 1, respond with
`systemctl enable service` && packstack --answer-file=./answer-file-xxx-yyyy.txt.
Moreover (for instance), when IP_nova.pp crashes, check
`systemctl | grep nova` && enable all running nova services. The same
hook works for glance, cinder, neutron, swift. It's a simple shell
script accepting the service (nova, neutron, ...) as a command line
parameter. I took per single service care for
iptables, httpd, mariadb, mongod, openvswitch, memcached.
It improves one's knowledge of the services being activated during a
packstack run. I am not kidding. I realize that it will be fixed pretty
soon; I just wanted to get RDO Kilo running on F22, and got it.

Boris.

Date: Sun, 24 May 2015 03:07:26 +1000
Subject: Re: [Rdo-list] RE(2): Attempt to setup RDO Kilo per recent Trello
	instructions on F22 RC1
From: outbackdingo at gmail.com
To: apevec at gmail.com
CC: bderzhavets at hotmail.com; P at draigbrady.com; sross at redhat.com; rdo-list at redhat.com

So can someone clarify for me what repo is successful on Fedora 22 that I
should use for a fresh laptop

On Fri, May 22, 2015 at 10:41 PM, Alan Pevec wrote:

2015-05-22 10:55 GMT+02:00 Boris Derzhavets :
> Sorry, issue with openstack-nova-novncproxy.service again, the rest seems
> to be OK.
> ( connected to VNC console via virt-manager)
> nova-novncproxy[19781]: websockify.ProxyRequestHandler):
> nova-novncproxy[19781]: AttributeError: 'module' object has no attribute 'ProxyRequestHandler'

Solly, Pádraig, python-websockify-0.6.0-2.fc21 was not pushed to
f21-updates, please create a Bodhi update. We'll carry it in the RDO
Kilo Fedora repo until it reaches stable updates.

Cheers,
Alan

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From dtantsur at redhat.com Tue May 26 04:09:53 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 26 May 2015 06:09:53 +0200 Subject: [Rdo-list] reconsidering midstream repos In-Reply-To: References: <20150525125555.GK4035@redhat.com> <55636596.303@redhat.com> Message-ID: <5563F211.4000605@redhat.com> On 05/25/2015 11:46 PM, Alan Pevec wrote: >> And probably for clients. I've heard from upstream folks that stable/kilo on >> clients is not designed for distributions, only for CI. > > Was this recorded in any design summit etherpad? If so, link please! > I assume this is related to https://review.openstack.org/182672 and > one issue discussed there is new dependencies which client projects > could pick up on master, so clients intended to work on older branches > must keep working with old dependencies. Nothing recorded, just got it from private conversations e.g. with Devananda. > > Cheers, > Alan > From ichi.sara at gmail.com Tue May 26 07:15:42 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Tue, 26 May 2015 09:15:42 +0200 Subject: [Rdo-list] [horizon] can't access the dashboard In-Reply-To: <1B69E72A-7E3B-42DE-839E-A6BA896C3772@redhat.com> References: <1B69E72A-7E3B-42DE-839E-A6BA896C3772@redhat.com> Message-ID: You're right. I removed the cookies associated with my dashboard and the error went away. Thanks 2015-05-22 18:20 GMT+02:00 Rhys Oxenham : > Hi Sara, > > To confirm it?s also an issue that I?m seeing, does the error go away when > you access the page via a private browser window (or by clearing your > cookies)? > > Cheers > Rhys > > > On 22 May 2015, at 06:09, ICHIBA Sara wrote: > > > > Hello there, > > > > I recently installed openstack kilo with packstack. The dashboard was > working fine until I rebooted My machine, here I couldn't any more access > the dashboard and I had the error that says to contact the administrator > for more details. > > > > I googled my issue with horizon and found somewhere on that the problem > is related to OPENSTACK-KEYSTONE_DEFAULT_ROLE and they suggested to define > it as admin. I did, but my problem is still persisting. I enabled debug > mode and here I got this errors in the browser after a failed attemp to > login to my dashboard. > > > > > > In advance, thanks for your response. 
> > Sara > > > > ValidationError at /auth/login/ > > > > [u'La valeur \xab\xa0cdf7ff7189174b64983c9bd3e128099c\xa0\xbb doit > \xeatre un nombre entier.'] > > Request Method: POST > > Request URL: http://192.168.5.34/dashboard/auth/login/ > > Django Version: 1.8.1 > > Exception Type: ValidationError > > Exception Value: > > [u'La valeur \xab\xa0cdf7ff7189174b64983c9bd3e128099c\xa0\xbb doit > \xeatre un nombre entier.'] > > Exception Location: > /usr/lib/python2.7/site-packages/django/db/models/fields/__init__.py in > to_python, line 969 > > Python Executable: /usr/bin/python > > Python Version: 2.7.5 > > Python Path: > > ['/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../..', > > '/usr/lib64/python27.zip', > > '/usr/lib64/python2.7', > > '/usr/lib64/python2.7/plat-linux2', > > '/usr/lib64/python2.7/lib-tk', > > '/usr/lib64/python2.7/lib-old', > > '/usr/lib64/python2.7/lib-dynload', > > '/usr/lib64/python2.7/site-packages', > > '/usr/lib/python2.7/site-packages', > > '/usr/share/openstack-dashboard/openstack_dashboard'] > > > > Server time: ven, 22 Mai 2015 12:52:22 +0000 > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ichi.sara at gmail.com Tue May 26 12:09:28 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Tue, 26 May 2015 14:09:28 +0200 Subject: [Rdo-list] [heat] Authorization failed Message-ID: Hey there, I installed the latest version of openstack with RDO kilo. and did all the necessary configuration to make heat work. I used this command to generate new credentials for heat as I had found the error authorization failed in my heat-engine.log : $ heat-keystone-setup-domain \ --stack-user-domain-name heat_user_domain \ --stack-domain-admin heat_domain_admin \ --stack-domain-admin-password heat_domain_password My problem is that I still have the same problem even if I had updated my heat.conf with these information and rebooted my openstack controller and compute. any ideas? 2015-05-26 13:48:40.380 5046 DEBUG keystoneclient.session [-] Request returned failure status: 401 request /usr/lib/python2.7/site-packages/keystoneclient/session.py:396 2015-05-26 13:48:40.380 5046 DEBUG keystoneclient.v3.client [-] Authorization failed. get_raw_token_from_identity_service /usr/lib/python2.7/site-packages/keystoneclient/v3/client.py:279 2015-05-26 13:48:40.381 5046 ERROR heat.common.keystoneclient [-] Domain admin client authentication failed 2015-05-26 13:48:40.405 5046 INFO heat.engine.stack [-] Stack CREATE FAILED (scaleup_down): Authorization failed. 2015-05-26 13:48:40.406 5046 INFO heat.engine.service [-] Stack create failed, status FAILED 2015-05-26 13:48:40.408 5046 DEBUG heat.engine.stack_lock [-] Engine eb952064-9ddf-498a-b60a-5bd725fdbbef released lock on stack c4dfd3f5-405e-439c-89ba-b65f8add7e74 release /usr/lib/python2.7/site-packages/heat/engine/stack_lock.py:132 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ichi.sara at gmail.com Tue May 26 13:08:51 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Tue, 26 May 2015 15:08:51 +0200 Subject: [Rdo-list] [horizon] can't access the dashboard In-Reply-To: References: <1B69E72A-7E3B-42DE-839E-A6BA896C3772@redhat.com> Message-ID: this is quite annoying. I have to remove my cookies every time I wanna login. 
Do you know any other tricks to get rid of this error for once? 2015-05-26 9:15 GMT+02:00 ICHIBA Sara : > You're right. I removed the cookies associated with my dashboard and the > error went away. Thanks > > > 2015-05-22 18:20 GMT+02:00 Rhys Oxenham : > >> Hi Sara, >> >> To confirm it?s also an issue that I?m seeing, does the error go away >> when you access the page via a private browser window (or by clearing your >> cookies)? >> >> Cheers >> Rhys >> >> > On 22 May 2015, at 06:09, ICHIBA Sara wrote: >> > >> > Hello there, >> > >> > I recently installed openstack kilo with packstack. The dashboard was >> working fine until I rebooted My machine, here I couldn't any more access >> the dashboard and I had the error that says to contact the administrator >> for more details. >> > >> > I googled my issue with horizon and found somewhere on that the problem >> is related to OPENSTACK-KEYSTONE_DEFAULT_ROLE and they suggested to define >> it as admin. I did, but my problem is still persisting. I enabled debug >> mode and here I got this errors in the browser after a failed attemp to >> login to my dashboard. >> > >> > >> > In advance, thanks for your response. >> > Sara >> > >> > ValidationError at /auth/login/ >> > >> > [u'La valeur \xab\xa0cdf7ff7189174b64983c9bd3e128099c\xa0\xbb doit >> \xeatre un nombre entier.'] >> > Request Method: POST >> > Request URL: http://192.168.5.34/dashboard/auth/login/ >> > Django Version: 1.8.1 >> > Exception Type: ValidationError >> > Exception Value: >> > [u'La valeur \xab\xa0cdf7ff7189174b64983c9bd3e128099c\xa0\xbb doit >> \xeatre un nombre entier.'] >> > Exception Location: >> /usr/lib/python2.7/site-packages/django/db/models/fields/__init__.py in >> to_python, line 969 >> > Python Executable: /usr/bin/python >> > Python Version: 2.7.5 >> > Python Path: >> > ['/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../..', >> > '/usr/lib64/python27.zip', >> > '/usr/lib64/python2.7', >> > '/usr/lib64/python2.7/plat-linux2', >> > '/usr/lib64/python2.7/lib-tk', >> > '/usr/lib64/python2.7/lib-old', >> > '/usr/lib64/python2.7/lib-dynload', >> > '/usr/lib64/python2.7/site-packages', >> > '/usr/lib/python2.7/site-packages', >> > '/usr/share/openstack-dashboard/openstack_dashboard'] >> > >> > Server time: ven, 22 Mai 2015 12:52:22 +0000 >> > >> > >> > _______________________________________________ >> > Rdo-list mailing list >> > Rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekuric at redhat.com Tue May 26 13:14:34 2015 From: ekuric at redhat.com (Elvir Kuric) Date: Tue, 26 May 2015 15:14:34 +0200 Subject: [Rdo-list] [horizon] can't access the dashboard In-Reply-To: References: <1B69E72A-7E3B-42DE-839E-A6BA896C3772@redhat.com> Message-ID: <556471BA.3080507@redhat.com> On 05/26/2015 03:08 PM, ICHIBA Sara wrote: > this is quite annoying. I have to remove my cookies every time I wanna > login. Do you know any other tricks to get rid of this error for once? in https://bugzilla.redhat.com/show_bug.cgi?id=1218894 comment #9 seems to be an option. hth Elvir > > 2015-05-26 9:15 GMT+02:00 ICHIBA Sara >: > > You're right. I removed the cookies associated with my dashboard and > the error went away. 
Thanks
[...]

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

From ichi.sara at gmail.com  Tue May 26 13:46:09 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Tue, 26 May 2015 15:46:09 +0200
Subject: [Rdo-list] Rdo-list Digest, Vol 26, Issue 76
In-Reply-To: References: Message-ID:

I never fetched a package. What are the commands I need to do it?

2015-05-26 15:14 GMT+02:00 :
> Send Rdo-list mailing list submissions to
> 	rdo-list at redhat.com
>
> To subscribe or unsubscribe via the World Wide Web, visit
> 	https://www.redhat.com/mailman/listinfo/rdo-list
> or, via email, send a message with subject or body 'help' to
> 	rdo-list-request at redhat.com
>
> You can reach the person managing the list at
> 	rdo-list-owner at redhat.com
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Rdo-list digest..."
>
>
> Today's Topics:
>
>    1. Re: [horizon] can't access the dashboard (ICHIBA Sara)
>    2. [heat] Authorization failed (ICHIBA Sara)
>    3. Re: [horizon] can't access the dashboard (ICHIBA Sara)
>    4. Re: [horizon] can't access the dashboard (Elvir Kuric)
> [...]
>
> ------------------------------
>
> End of Rdo-list Digest, Vol 26, Issue 76
> ****************************************
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tsedovic at redhat.com  Tue May 26 13:54:36 2015
From: tsedovic at redhat.com (Tomas Sedovic)
Date: Tue, 26 May 2015 15:54:36 +0200
Subject: [Rdo-list] RDO + floating IPs
Message-ID: <55647B1C.9040208@redhat.com>

Hey everyone,

I tried to get RDO set up with floating IP addresses, but I'm running
into problems I'm not sure how to debug (not that familiar with
networking and Neutron).

I followed these guides on a clean Fedora 21 x86_64 server:

https://www.rdoproject.org/Quickstart
https://www.rdoproject.org/Floating_IP_range

This is on a bare-metal box in a network outside of my control, but in
which there are 5 extra IP addresses assigned to me. It's quite possible
that I'm missing something or doing something wrong, so here's exactly
what I did:

yum update -y && reboot
---
systemctl disable NetworkManager
systemctl enable network
systemctl stop NetworkManager && ifdown enp0s25 && systemctl start network

yum install -y https://rdo.fedorapeople.org/rdo-release.rpm
yum install -y openstack-packstack
yum-config-manager --disable openstack-kilo
yum-config-manager --enable openstack-kilo-testing
setenforce 0

packstack --allinone --provision-all-in-one-ovs-bridge=n
--os-heat-install=y --os-ceilometer-install=y

That succeeded, so I added a keypair and made sure the "default"
security group allows all traffic.
So next onto the neutron stuff:

source /root/keystonerc_admin
neutron net-create mynetwork --router:external

(got a new network with type vxlan and id
4a30e2cc-b9f3-4ebd-b7ab-c35e38c3c75c)

For the subnet I need the CIDR, gateway and the floating IP range. I'm
not sure if I'm getting these right:

* CIDR: `ip addr`, find my network interface (enp0s25), get its inet
  value: 10.40.128.44/20
* Gateway: `ip route | grep default` -> default via 10.40.143.254 dev
  enp0s25
* And these 5 IP addresses are assigned to me: 10.40.128.80-10.40.128.84

neutron subnet-create mynetwork 10.40.128.44/20 --name mysubnet
--enable_dhcp=False --allocation_pool
start=10.40.128.80,end=10.40.128.84 --gateway 10.40.143.254

(this creates a subnet with id 2e3d6966-a659-454c-8c38-d98ed3f105e5)

neutron router-create myrouter

(this creates a router with id fced82e6-9917-4053-a286-02838d0325fc)

neutron router-gateway-set fced82e6-9917-4053-a286-02838d0325fc
4a30e2cc-b9f3-4ebd-b7ab-c35e38c3c75c

(myrouter id and mynetwork id)

neutron floatingip-create mynetwork

With all that out of the way, I created another network+subnet for my
VMs and added a myrouter interface to it. Then I launched a cirros VM
into that network and associated a floating IP from "mynetwork".

The VM had some trouble with cloud-init:

Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending select for 192.168.0.4...
Lease of 192.168.0.4 obtained, lease time 86400
cirros-ds 'net' up at 0.66
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 0.66. request failed
failed 2/20: up 5.67. request failed
failed 3/20: up 8.67. request failed
failed 4/20: up 11.67. request failed
...
> > I followed these guides on a clean Fedora 21 x86_64 server: > > https://www.rdoproject.org/Quickstart > https://www.rdoproject.org/Floating_IP_range > > This on a baremetal box in a network outside of my control but in which > there are 5 extra IP addresses assigned to me. It's quite possible that > I'm missing something or doing something wrong, so here's exactly what I > did: > > yum update -y && reboot > --- > systemctl disable NetworkManager > systemctl enable network > systemctl stop NetworkManager && ifdown enp0s25 && systemctl start network > > yum install -y https://rdo.fedorapeople.org/rdo-release.rpm > yum install -y openstack-packstack > yum-config-manager --disable openstack-kilo > yum-config-manager --enable openstack-kilo-testing > setenforce 0 > > packstack --allinone --provision-all-in-one-ovs-bridge=n --os-heat-install=y --os-ceilometer-install=y > > That succeeded, so I added a keypair and made sure the "default" > security group allows all traffic. > > So next onto the neutron stuff: > > source /root/keystonerc_admin > neutron net-create mynetwork --router:external > > (got a new network with type vxlan and id > 4a30e2cc-b9f3-4ebd-b7ab-c35e38c3c75c) > > For the subnet I need the CIDR, gateway and the floating IP range. I'm > not sure if I'm getting these right: > > * CIDR: `ip addr`, find my network interface (enp0s25), get it's inet > value: 10.40.128.44/20 > * Gateway: `ip route | grep default` -> default via 10.40.143.254 dev > enp0s25 > * And these 5 IP addresses are assigned to me: 10.40.128.80-10.40.128.84 > > neutron subnet-create mynetwork 10.40.128.44/20 --name mysubnet --enable_dhcp=False --allocation_pool start=10.40.128.80,end=10.40.128.84 --gateway 10.40.143.254 I'm not certain if this is the root cause or not, but I believe the subnet should be created as 10.40.128.0/20 rather than 10.40.128.44/20. > (this creates a subnet with id 2e3d6966-a659-454c-8c38-d98ed3f105e5) > > neutron router-create myrouter > > (this creates a router with id fced82e6-9917-4053-a286-02838d0325fc) > > neutron router-gateway-set fced82e6-9917-4053-a286-02838d0325fc > 4a30e2cc-b9f3-4ebd-b7ab-c35e38c3c75c > > (myrouter id and mynetwork id) > > neutron floatingip-create mynetwork > > > With all that out of the way, I created another network+subnet for my > VMs and added a myrouter's interface to it. Then I launched a cirros VM > into that network and associated a floating IP from "mynetwork". > > The VM had some trouble with cloud-init: > > Starting network... > udhcpc (v1.20.1) started > Sending discover... > Sending select for 192.168.0.4... > Lease of 192.168.0.4 obtained, lease time 86400 > cirros-ds 'net' up at 0.66 > checking http://169.254.169.254/2009-04-04/instance-id > failed 1/20: up 0.66. request failed > failed 2/20: up 5.67. request failed > failed 3/20: up 8.67. request failed > failed 4/20: up 11.67. request failed > ... > > once all 20 requests failed, it got to a login screen, but I could not > ping or SSH into it: > > # ping 10.40.128.81 > PING 10.40.128.81 (10.40.128.81) 56(84) bytes of data. > From 10.40.128.44 icmp_seq=1 Destination Host Unreachable > From 10.40.128.44 icmp_seq=2 Destination Host Unreachable > From 10.40.128.44 icmp_seq=3 Destination Host Unreachable > From 10.40.128.44 icmp_seq=4 Destination Host Unreachable > > # ssh cirros at 10.40.128.81 > ssh: connect to host 10.40.128.81 port 22: No route to host > > > Does anyone have any idea what I may be doing wrong? 
> > Thanks, > Tomas > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From bderzhavets at hotmail.com Tue May 26 14:34:34 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Tue, 26 May 2015 10:34:34 -0400 Subject: [Rdo-list] RE(4): RE(2): Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1 In-Reply-To: References: , , , , , , , , , , , Message-ID: Excuse my English 1. I took per single service care of iptables, httpd, rabbitmq-server, mariadb, mongod, openvswitch, memcached. 2. I realize that it will be fixed pretty soon, just wanted to get RDO Kilo running on F22 and got it. Boris From: bderzhavets at hotmail.com To: outbackdingo at gmail.com Date: Mon, 25 May 2015 19:32:30 -0400 CC: rdo-list at redhat.com; p at draigbrady.com Subject: [Rdo-list] RE(4): RE(2): Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1 You may install RDO Kilo on F22, I spent 4-5 hr to complete install. As soon as you get "chkconfig --add service" returns 1 Respond `systemctl enable service` && packstack --answer-file=./answer-file-xxx-yyyy.txt. More over ( for instance) when IP_nova.pp crashes , check `systemctl | grep nova` && enable all running nova services. Same hook works for glance,cinder,neutron,swift. It's a simple shell script accepting via command line parameter (nova,neutron,...). I took per single service care for iptables,httpd,mariadb,mongod,openvswitch,memcached It improves a knowledge of services been activated during packstack run. I am not kidding. I release that it will be fixed pretty soon, just wanted to get RDO Kilo running on F22 and got it. Boris. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifarkas at redhat.com Tue May 26 14:42:15 2015 From: ifarkas at redhat.com (Imre Farkas) Date: Tue, 26 May 2015 16:42:15 +0200 Subject: [Rdo-list] introspection and raid configuration Message-ID: <55648647.3030000@redhat.com> Hi all, I would like to gather some feedback on the current worfklow regarding the subject. RAID configuration on DRAC machines works as follows (it assumes there's no RAID volume on the machine): 1. user defines the deployment profile and the target raid configuration for each profile 2. ironic-discoverd introspects the node to gather facts 3. ach-match picks a deployment profile based on the gathered facts 4. instack triggers the raid configuration based on the selected profile A bug[1] has been discovered regarding step 2: ironic-discoverd fails because it tries to figure out the size of the local disk but it can't find any as no volume exists on the RAID controller yet. This is a chicken-and-egg problem because ironic-discoverd doesn't work if the RAID volume(s) has not been configured but the RAID configuration can't be triggered if ironic-discoverd hasn't gathered the facts about the node. Few possible workarounds: #1: make saving the local disk size optional in the standard plugin in ironic-discoverd. The downside is that the standard plugin is supposed to enroll nodes in ironic with all attributes necessary for scheduling. This assumption might fail with this solution. #2: run discovery multiple times with different set of plugins. The run before RAID configuration would exclude the standard plugin while the run afterwards could potentially exclude others. 
The parameters passed by the user to ironic-discoverd for each run need to be properly documented. It would slow down the installation because each run requires a reboot which takes a lot of time on bare metal. #3: name your suggestion! Any thoughts/preference on the above described workarounds? Thanks, Imre [1] https://bugzilla.redhat.com/show_bug.cgi?id=1222124 From kchamart at redhat.com Tue May 26 15:16:54 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 26 May 2015 17:16:54 +0200 Subject: [Rdo-list] RDO + floating IPs In-Reply-To: <55647B1C.9040208@redhat.com> References: <55647B1C.9040208@redhat.com> Message-ID: <20150526151654.GI15019@tesla> On Tue, May 26, 2015 at 03:54:36PM +0200, Tomas Sedovic wrote: > Hey everyone, > > I tried to get RDO set up with floating IP addresses, but I'm running into > problems I'm not sure how to debug (not that familiar with networking and > Neutron). > > I followed these guides on a clean Fedora 21 x86_64 server: > > https://www.rdoproject.org/Quickstart > https://www.rdoproject.org/Floating_IP_range > [. . .] > once all 20 requests failed, it got to a login screen, but I could not ping > or SSH into it: > > # ping 10.40.128.81 > PING 10.40.128.81 (10.40.128.81) 56(84) bytes of data. > From 10.40.128.44 icmp_seq=1 Destination Host Unreachable > From 10.40.128.44 icmp_seq=2 Destination Host Unreachable > From 10.40.128.44 icmp_seq=3 Destination Host Unreachable > From 10.40.128.44 icmp_seq=4 Destination Host Unreachable > > # ssh cirros at 10.40.128.81 > ssh: connect to host 10.40.128.81 port 22: No route to host It could be any no. of reasons, as I don't know what's going on in your network. But, your steps sound reasonably correct. Just for comparision, that's what I normally do: # Create new private network: $ neutron net-create $privnetname # Create a subnet neutron subnet-create $privnetname \ $subnetspace/24 \ --name $privsubnetname # Create a router neutron router-create $routername # Associate the router to the external network by setting its gateway # NOTE: This assumes the external network name is 'ext' export EXT_NET=$(neutron net-list | grep ext | awk '{print $2;}') export PRIV_NET=$(neutron subnet-list | grep $privsubnetname | awk '{print $2;}') export ROUTER_ID=$(neutron router-list | grep $routername | awk '{print $2;}' neutron router-gateway-set \ $ROUTER_ID $EXT_NET_ID neutron router-interface-add \ $ROUTER_ID $PRIV_NET_ID # Add Neutron security groups for this test tenant neutron security-group-rule-create \ --protocol icmp \ --direction ingress \ --remote-ip-prefix 0.0.0.0/0 \ default neutron security-group-rule-create \ --protocol tcp \ --port-range-min 22 \ --port-range-max 22 \ --direction ingress \ --remote-ip-prefix 0.0.0.0/0 \ default On a related note, all the above, inlcuding creating the Keystone tenant, user, etc is put together in this trivial script[1], which allows me to create tenant networks this way: $ ./create-new-tenant-network.sh \ demoten1 tuser1 \ 14.0.0.0 trouter1 \ priv-net1 priv-subnet1 It assumes your external network is named as "ext", but you can modify the script trivially to change that. 
[1] https://github.com/kashyapc/ostack-misc/blob/master/create-new-tenant-network.sh -- /kashyap From dtantsur at redhat.com Tue May 26 15:43:54 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 26 May 2015 17:43:54 +0200 Subject: [Rdo-list] introspection and raid configuration In-Reply-To: <55648647.3030000@redhat.com> References: <55648647.3030000@redhat.com> Message-ID: <556494BA.6020208@redhat.com> On 05/26/2015 04:42 PM, Imre Farkas wrote: > Hi all, > > I would like to gather some feedback on the current worfklow regarding > the subject. RAID configuration on DRAC machines works as follows (it > assumes there's no RAID volume on the machine): > 1. user defines the deployment profile and the target raid configuration > for each profile > 2. ironic-discoverd introspects the node to gather facts > 3. ach-match picks a deployment profile based on the gathered facts > 4. instack triggers the raid configuration based on the selected profile > > A bug[1] has been discovered regarding step 2: ironic-discoverd fails > because it tries to figure out the size of the local disk but it can't > find any as no volume exists on the RAID controller yet. This is a > chicken-and-egg problem because ironic-discoverd doesn't work if the > RAID volume(s) has not been configured but the RAID configuration can't > be triggered if ironic-discoverd hasn't gathered the facts about the node. > > Few possible workarounds: > #1: make saving the local disk size optional in the standard plugin in > ironic-discoverd. The downside is that the standard plugin is supposed > to enroll nodes in ironic with all attributes necessary for scheduling. > This assumption might fail with this solution. -1 will never get upstream for the reasons you stated > > #2: run discovery multiple times with different set of plugins. The run > before RAID configuration would exclude the standard plugin while the > run afterwards could potentially exclude others. The parameters passed > by the user to ironic-discoverd for each run need to be properly > documented. It would slow down the installation because each run > requires a reboot which takes a lot of time on bare metal. Possible, but better (IMO) idea below. > > #3: name your suggestion! #3. modify your existing root_device_hint plugin to insert a fake local_gb value (with issuing a warning), and then put it to the beginning of the plugin pipeline. WDYT? > > Any thoughts/preference on the above described workarounds? > > Thanks, > Imre > > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1222124 From rbowen at redhat.com Tue May 26 16:05:32 2015 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 26 May 2015 12:05:32 -0400 Subject: [Rdo-list] OpenStack meetups, week of May 26th Message-ID: <556499CC.3080105@redhat.com> The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/Events If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered. 
--Rich

* Wed May 27 in Johannesburg, ZA: Tech Meetup #6 - OpenStack Cloud Computing & Modern Computerization of Processes - http://www.meetup.com/Software-Arkitex-Tech-Hub/events/222136965/

* Wed May 27 in Hong Kong, HK: OpenStack Happy Hour - http://www.meetup.com/Hong-Kong-OpenStack-User-Group/events/222469689/

* Wed May 27 in Philadelphia, PA, US: Canonical Ubuntu OpenStack On Autopilot And OpenStack Networking With PLUMgrid - http://www.meetup.com/Philly-OpenStack-Meetup-Group/events/222031821/

* Wed May 27 in Palo Alto, CA, US: Kubernetes Deep Dive with focus on Networking - http://www.meetup.com/Docker-Networking/events/222326433/

* Wed May 27 in Antwerpen, BE: Post-Kilo Release / Vancouver Summit update - http://www.meetup.com/OpenStack-Belgium-Meetup/events/222066330/

* Thu May 28 in Prague, CZ: OpenStack Howto part 3 - Compute - http://www.meetup.com/OpenStack-Czech-User-Group-Meetup/events/221143233/

* Thu May 28 in New York, NY, US: Canonical Ubuntu OpenStack On Autopilot And OpenStack Networking With PLUMgrid - http://www.meetup.com/OpenStack-New-York-Meetup/events/222031689/

* Thu May 28 in Austin, TX, US: Living in the worlds of VMware and OpenStack - http://www.meetup.com/OpenStack-Austin/events/222011721/

* Thu May 28 in Henrico, VA, US: OpenStack Richmond Meetup #2 - http://www.meetup.com/OpenStack-Richmond/events/222172145/

* Thu May 28 in Littleton, CO, US: Discuss and Learn about OpenStack - http://www.meetup.com/OpenStack-Denver/events/221977399/

* Thu May 28 in Cluj-Napoca, RO: OpenStack Kilo Release Overview - http://www.meetup.com/OpenStack-Cluj/events/221754372/

* Thu May 28 in Herriman, UT, US: SUSE Studio: Your Software Everywhere - http://www.meetup.com/openstack-utah/events/221751122/

* Thu May 28 in Pasadena, CA, US: Magnum - OpenStack Containers Service. The May OpenStack LA Meetup. - http://www.meetup.com/OpenStack-LA/events/222333079/

* Thu May 28 in Boston, MA, US: Cinder squared - Let's meetup in Boston! - http://www.meetup.com/Openstack-Boston/events/221102308/

* Thu May 28 in Santa Monica, CA, US: X-POST: Magnum - OpenStack Containers Service. The May OpenStack LA Meetup. - http://www.meetup.com/Docker-Los-Angeles/events/222742621/

* Sat May 30 in Bangalore, IN: OpenStack Meetup, New Delhi - http://www.meetup.com/Indian-OpenStack-User-Group/events/222413310/

* Sat May 30 in Beijing, CN: ??????? - http://www.meetup.com/China-OpenStack-User-Group/events/222640745/

* Mon Jun 1 in Porto Alegre, BR: FIT OpenStack Meetup Pós Summit 2015 - http://www.meetup.com/Openstack-Brasil/events/198832642/

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From rdo-info at redhat.com  Tue May 26 17:21:13 2015
From: rdo-info at redhat.com (RDO Forum)
Date: Tue, 26 May 2015 17:21:13 +0000
Subject: [Rdo-list] [RDO] RDO Meetup, OpenStack Summit Vancouver
Message-ID: <0000014d913e0f27-7e478825-60ce-4bf5-a780-4bd6de4ca318-000000@email.amazonses.com>

rbowen started a discussion.

RDO Meetup, OpenStack Summit Vancouver

---
Follow the link below to check it out:
https://www.rdoproject.org/forum/discussion/1018/rdo-meetup-openstack-summit-vancouver

Have a great day!

From pgsousa at gmail.com  Tue May 26 17:28:03 2015
From: pgsousa at gmail.com (Pedro Sousa)
Date: Tue, 26 May 2015 18:28:03 +0100
Subject: [Rdo-list] RDO-Manager ovecloud change existing plan, how?
In-Reply-To: <5563093F.9000206@redhat.com>
References: <5563093F.9000206@redhat.com>
Message-ID: 

Hi all,

thanks to Giulio's recommendations in #rdo I've managed to change some parameters:

#heat stack-delete overcloud
#export NEUTRON_TUNNEL_TYPES=vxlan
#export NEUTRON_TUNNEL_TYPE=vxlan
#export NEUTRON_NETWORK_TYPE=vxlan
#instack-deploy-overcloud --tuskar

This works for TUSKAR_PARAMETERS contained in the instack-deploy-overcloud script (please correct me if I'm wrong).

My question is whether it's possible to use VLAN for tenants, using a VLAN range and disable GRE/VXLAN tunneling.

Thanks,
Pedro Sousa

On Mon, May 25, 2015 at 12:36 PM, Giulio Fidente wrote:

> On 05/25/2015 01:09 PM, Pedro Sousa wrote:
>
>> Hi all,
>>
>> I've deployed rdo-manager in a virt env and everything is working fine
>> except the vnc console which is alreday an open bug for that.
>>
>> Now I would like to change some parameters on my deployment, let's say I
>> wan't to disable NeutronTunneling, I wan't to use VLAN for tenants and
>> use 1500 MTU on dnsmasq.
>>
>> So I downloaded the plan:
>>
>> #tuskar plan-templates -O /tmp uuid
>>
>> changed plan.yaml, environment.yaml, provider-Controller-1.yaml,
>> provider-Compute-1.yaml.
>>
>> than I ran the stack:
>>
>> # heat stack-create -f tmp/plan.yaml -e tmp/environment.yaml overcloud
>>
>> The overcloud is deployed fine but the values aren't changed. What I'm
>> missing here?
>>
>
> hi,
>
> if you launch stack-create manually the newly created overcloud is not
> reprovisioned with the initial keystone endpoints/users/roles ... to get an
> usable overcloud you should launch instack-deploy-overcloud again
>
> so you can change the defaults for the various params by patching the
> tuskar plan with 'tuskar plan-update' see [1]
>
> yet some of these are automatically parsed from ENV vars, like
> NEUTRON_TUNNEL_TYPES and NEUTRON_NETWORK_TYPE see [2]
>
> the NeutronDnsmasqOptions param instead is not parsed from any ENV var, so
> you're forced to use 'tuskar plan-update'
>
> I'm adding a couple of guys on CC who migh help but, let us know how it
> goes!
>
> 1.
> https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L274
>
> 2.
> https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L205-L208
> --
> Giulio Fidente
> GPG KEY: 08D733BA
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hbrock at redhat.com  Tue May 26 17:39:09 2015
From: hbrock at redhat.com (Hugh Brock)
Date: Tue, 26 May 2015 13:39:09 -0400 (EDT)
Subject: [Rdo-list] RDO-Manager ovecloud change existing plan, how?
In-Reply-To: 
References: <5563093F.9000206@redhat.com>
Message-ID: <0899b727-24aa-45c0-a206-b42df66afd89@redhat.com>

It should be... But we haven't really tested it yet, to my knowledge. It's an important configuration that we want to support.

If you are able to sort it out and post your results here, that would be great!

-Hugh

Sent from my mobile, please pardon the top posting.

From: Pedro Sousa
Sent: May 26, 2015 7:28 PM
To: Giulio Fidente
Cc: Marios Andreou;rdo-list at redhat.com;Jason Dobies
Subject: Re: [Rdo-list] RDO-Manager ovecloud change existing plan, how?
Hi all, thanks to Giulio recommendations in #rdo I've managed to change some parameters: #heat stack-delete overcloud #export NEUTRON_TUNNEL_TYPES=vxlan #export NEUTRON_TUNNEL_TYPE=vxlan #export NEUTRON_NETWORK_TYPE=vxlan #instack-deploy-overcloud --tuskar This works for TUSKAR_PARAMETERS contained in the instack-deploy-overcloud script (please correct me if I'm wrong). My question is if it's possible to use VLAN for tenants, using a VLAN range and disable GRE/VXLAN tunneling. Thanks, Pedro Sousa On Mon, May 25, 2015 at 12:36 PM, Giulio Fidente wrote: > On 05/25/2015 01:09 PM, Pedro Sousa wrote: > >> Hi all, >> >> I've deployed rdo-manager in a virt env and everything is working fine >> except the vnc console which is alreday an open bug for that. >> >> Now I would like to change some parameters on my deployment, let's say I >> wan't to disable NeutronTunneling, I wan't to use VLAN for tenants and >> use 1500 MTU on dnsmasq. >> >> So I downloaded the plan: >> >> #tuskar plan-templates -O /tmp uuid >> >> changed plan.yaml, environment.yaml, provider-Controller-1.yaml, >> provider-Compute-1.yaml. >> >> than I ran the stack: >> >> # heat stack-create -f tmp/plan.yaml -e tmp/environment.yaml overcloud >> >> The overcloud is deployed fine but the values aren't changed. What I'm >> missing here? >> > > hi, > > if you launch stack-create manually the newly created overcloud is not > reprovisioned with the initial keystone endpoints/users/roles ... to get an > usable overcloud you should launch instack-deploy-overcloud again > > so you can change the defaults for the various params by patching the > tuskar plan with 'tuskar plan-update' see [1] > > yet some of these are automatically parsed from ENV vars, like > NEUTRON_TUNNEL_TYPES and NEUTRON_NETWORK_TYPE see [2] > > the NeutronDnsmasqOptions param instead is not parsed from any ENV var, so > you're forced to use 'tuskar plan-update' > > I'm adding a couple of guys on CC who migh help but, let us know how it > goes! > > 1. > https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L274 > > 2. > https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L205-L208 > -- > Giulio Fidente > GPG KEY: 08D733BA > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com From pgsousa at gmail.com Tue May 26 18:19:24 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Tue, 26 May 2015 19:19:24 +0100 Subject: [Rdo-list] RDO-Manager ovecloud change existing plan, how? In-Reply-To: <0899b727-24aa-45c0-a206-b42df66afd89@redhat.com> References: <5563093F.9000206@redhat.com> <0899b727-24aa-45c0-a206-b42df66afd89@redhat.com> Message-ID: Hi Hugh, I've tried to change the plan: # tuskar plan-update -A Compute-1::NeutronEnableTunnelling=False c91b13a2-afd6-4eb2-9a78-46335190519d # tuskar plan-update -A Controller-1::NeutronEnableTunnelling=False c91b13a2-afd6-4eb2-9a78-46335190519d # export NEUTRON_NETWORK_TYPE=vlan But the stack failed, I also see that plan-update doesn't work: [stack at instack ~]$ heat stack-show 4dd74e83-e90f-437f-b8b5-ac45d6ada9db | grep Tunnel | | "Controller-1::NeutronEnableTunnelling": "True", Regards Pedro Sousa On Tue, May 26, 2015 at 6:39 PM, Hugh Brock wrote: > It should be... 
But we haven't really tested it yet, to my knowledge. It's > an important configuration that we want to support. > > If you are able to sort it out and past your results here, that would be > great! > > -Hugh > Sent from my mobile, please pardon the top posting. > > *From:* Pedro Sousa > *Sent:* May 26, 2015 7:28 PM > *To:* Giulio Fidente > *Cc:* Marios Andreou;rdo-list at redhat.com;Jason Dobies > *Subject:* Re: [Rdo-list] RDO-Manager ovecloud change existing plan, how? > > > Hi all, > > thanks to Giulio recommendations in #rdo I've managed to change some > parameters: > > #heat stack-delete overcloud > #export NEUTRON_TUNNEL_TYPES=vxlan > #export NEUTRON_TUNNEL_TYPE=vxlan > #export NEUTRON_NETWORK_TYPE=vxlan > #instack-deploy-overcloud --tuskar > > This works for TUSKAR_PARAMETERS contained in the > instack-deploy-overcloud script (please correct me if I'm wrong). > > My question is if it's possible to use VLAN for tenants, using a VLAN > range and disable GRE/VXLAN tunneling. > > Thanks, > Pedro Sousa > > > On Mon, May 25, 2015 at 12:36 PM, Giulio Fidente > wrote: > >> On 05/25/2015 01:09 PM, Pedro Sousa wrote: >> >>> Hi all, >>> >>> I've deployed rdo-manager in a virt env and everything is working fine >>> except the vnc console which is alreday an open bug for that. >>> >>> Now I would like to change some parameters on my deployment, let's say I >>> wan't to disable NeutronTunneling, I wan't to use VLAN for tenants and >>> use 1500 MTU on dnsmasq. >>> >>> So I downloaded the plan: >>> >>> #tuskar plan-templates -O /tmp uuid >>> >>> changed plan.yaml, environment.yaml, provider-Controller-1.yaml, >>> provider-Compute-1.yaml. >>> >>> than I ran the stack: >>> >>> # heat stack-create -f tmp/plan.yaml -e tmp/environment.yaml overcloud >>> >>> The overcloud is deployed fine but the values aren't changed. What I'm >>> missing here? >>> >> >> hi, >> >> if you launch stack-create manually the newly created overcloud is not >> reprovisioned with the initial keystone endpoints/users/roles ... to get an >> usable overcloud you should launch instack-deploy-overcloud again >> >> so you can change the defaults for the various params by patching the >> tuskar plan with 'tuskar plan-update' see [1] >> >> yet some of these are automatically parsed from ENV vars, like >> NEUTRON_TUNNEL_TYPES and NEUTRON_NETWORK_TYPE see [2] >> >> the NeutronDnsmasqOptions param instead is not parsed from any ENV var, >> so you're forced to use 'tuskar plan-update' >> >> I'm adding a couple of guys on CC who migh help but, let us know how it >> goes! >> >> 1. >> https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L274 >> >> 2. >> https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L205-L208 >> -- >> Giulio Fidente >> GPG KEY: 08D733BA >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jslagle at redhat.com Tue May 26 19:06:24 2015 From: jslagle at redhat.com (James Slagle) Date: Tue, 26 May 2015 15:06:24 -0400 Subject: [Rdo-list] reconsidering midstream repos In-Reply-To: <20150525125555.GK4035@redhat.com> References: <20150525125555.GK4035@redhat.com> Message-ID: <20150526190624.GC4975@teletran-1> On Mon, May 25, 2015 at 02:55:57PM +0200, Hugh O. Brock wrote: > Seems like the midstream repos are causing us a lot of pain with little > gain, at least in some cases. (For example it appears the t-h-t > midstream exists to carry a single patch that enables mongodb on > Centos.) 
Is it worth discussing whether we can eliminate some of these,
> especially for upstreams like t-h-t that aren't tightly tied to the
> OpenStack release schedule?
>
> /me ducks flying bricks

No flying bricks from me anyway :)

I agree with what you're saying, but would like to agree on what we're actually meaning.

I think it's great that pretty much exactly what we said would happen has happened: as Kilo nears completion, the patches that we're having to carry for RDO-Manager are approaching 0.

But before we say, let's kill the midstream repos...let's consider why we even have them to begin with.

We didn't come up with the idea to fork a bunch of projects because it would make our *development* lives for RDO-Manager easier. In fact, it's exactly the opposite, they cause pain. So why do we have them?

Quite simply it's because of the demand that we show, commit to, and support features in RDO-Manager prior to them necessarily merging in upstream OpenStack projects. If we can lift that requirement, we could likely do away with the midstream repos entirely.

I know this is rdo-list, but in the spirit of being open, if we don't have midstream repos for RDO-Manager, then we shouldn't have them for OSP either, and we should only be committing to features as they land in upstream OpenStack. It doesn't make sense to me to drop the midstream repos for RDO-Manager, but then turn around and set them up privately for OSP. At least with them set up for RDO, we're doing the development of the management product in the open. In fact if we were to split like this, I think the pain would only intensify.

It's pragmatic to say that anyone looking to productize OpenStack is going to have their own bug fixes/etc that they need, and maybe even some features. But setting up private forks as a general development principle to drive OpenStack based product development should be avoided at all costs.

So yes, I'm +1 on dropping the midstream repos if they're no longer needed. But I'm -1 on then turning around and setting them up privately so that we can commit code to private forks because that code hasn't landed in upstream OpenStack yet. If we can agree that this isn't needed, then I think we can do without the midstream. Otherwise, if we're going to insist on forks to "land" features prior to them merging upstream, let's at least keep them public.

There's a couple other facets to the discussion as well:

The midstream repos are a place where we can preview where the management product is headed. For example, the Heat breakpoint work was available from the midstream repos much earlier than it ended up merging into Heat/heatclient. Some devs/users might find this sort of thing really useful, and could provide some early feedback for RDO-Manager.

The midstream repos are also a place where we can quickly land reverts to unblock work. The Neutron regression that broke provisioning via Ironic this past cycle immediately comes to mind...it took a couple of weeks before the upstream revert even landed in Neutron. While that was broken, we carried the proposed revert in the midstream repos so that people could install a Neutron that actually worked for RDO-Manager.

>
> --Hugh
>
> --
> == Hugh Brock, hbrock at redhat.com ==
> == Senior Engineering Manager, Cloud Engineering ==
> == RDO Manager: Install, configure, and scale OpenStack ==
> == http://rdoproject.org ==
>
> "I know that you believe you understand what you think I said, but I'm
> --Robert McCloskey > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- -- James Slagle -- From jason.dobies at redhat.com Tue May 26 19:07:37 2015 From: jason.dobies at redhat.com (Jay Dobies) Date: Tue, 26 May 2015 15:07:37 -0400 Subject: [Rdo-list] RDO-Manager ovecloud change existing plan, how? In-Reply-To: References: <5563093F.9000206@redhat.com> <0899b727-24aa-45c0-a206-b42df66afd89@redhat.com> Message-ID: <5564C479.2030402@redhat.com> On 05/26/2015 02:19 PM, Pedro Sousa wrote: > Hi Hugh, > > I've tried to change the plan: > > # tuskar plan-update -A Compute-1::NeutronEnableTunnelling=False > c91b13a2-afd6-4eb2-9a78-46335190519d > # tuskar plan-update -A Controller-1::NeutronEnableTunnelling=False > c91b13a2-afd6-4eb2-9a78-46335190519d > # export NEUTRON_NETWORK_TYPE=vlan > > But the stack failed, I also see that plan-update doesn't work: It depends on what you did between the lines above and the line below. If you're making the updates above and then running instack-deploy-overcloud, it's not going to work. That script deletes the plan and recreates it, losing your updates in the process. That logic (role addition and plan create) is being moved out of instack-deploy-overcloud to an installation-time step to enable this sort of thing (not fully sure the state of that, but the UI needs the plan create to be done during install as well). > [stack at instack ~]$ heat stack-show 4dd74e83-e90f-437f-b8b5-ac45d6ada9db > | grep Tunnel > | | "Controller-1::NeutronEnableTunnelling": > "True", > > Regards > Pedro Sousa > > > > > On Tue, May 26, 2015 at 6:39 PM, Hugh Brock > wrote: > > It should be... But we haven't really tested it yet, to my > knowledge. It's an important configuration that we want to support. > > If you are able to sort it out and past your results here, that > would be great! > > -Hugh > > Sent from my mobile, please pardon the top posting. > > *From:* Pedro Sousa > > *Sent:* May 26, 2015 7:28 PM > *To:* Giulio Fidente > *Cc:* Marios Andreou;rdo-list at redhat.com > ;Jason Dobies > *Subject:* Re: [Rdo-list] RDO-Manager ovecloud change existing plan, > how? > > > Hi all, > > thanks to Giulio recommendations in #rdo I've managed to change some > parameters: > > #heat stack-delete overcloud > #export NEUTRON_TUNNEL_TYPES=vxlan > #export NEUTRON_TUNNEL_TYPE=vxlan > #export NEUTRON_NETWORK_TYPE=vxlan > #instack-deploy-overcloud --tuskar > > This works for TUSKAR_PARAMETERS contained in the > instack-deploy-overcloud script (please correct me if I'm wrong). > > My question is if it's possible to use VLAN for tenants, using a > VLAN range and disable GRE/VXLAN tunneling. > > Thanks, > Pedro Sousa > > > On Mon, May 25, 2015 at 12:36 PM, Giulio Fidente > > wrote: > > On 05/25/2015 01:09 PM, Pedro Sousa wrote: > > Hi all, > > I've deployed rdo-manager in a virt env and everything is > working fine > except the vnc console which is alreday an open bug for that. > > Now I would like to change some parameters on my deployment, > let's say I > wan't to disable NeutronTunneling, I wan't to use VLAN for > tenants and > use 1500 MTU on dnsmasq. > > So I downloaded the plan: > > #tuskar plan-templates -O /tmp uuid > > changed plan.yaml, environment.yaml, provider-Controller-1.yaml, > provider-Compute-1.yaml. 
> > than I ran the stack: > > # heat stack-create -f tmp/plan.yaml -e tmp/environment.yaml > overcloud > > The overcloud is deployed fine but the values aren't > changed. What I'm > missing here? > > > hi, > > if you launch stack-create manually the newly created overcloud > is not reprovisioned with the initial keystone > endpoints/users/roles ... to get an usable overcloud you should > launch instack-deploy-overcloud again > > so you can change the defaults for the various params by > patching the tuskar plan with 'tuskar plan-update' see [1] > > yet some of these are automatically parsed from ENV vars, like > NEUTRON_TUNNEL_TYPES and NEUTRON_NETWORK_TYPE see [2] > > the NeutronDnsmasqOptions param instead is not parsed from any > ENV var, so you're forced to use 'tuskar plan-update' > > I'm adding a couple of guys on CC who migh help but, let us know > how it goes! > > 1. > https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L274 > > 2. > https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L205-L208 > -- > Giulio Fidente > GPG KEY: 08D733BA > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From jason.dobies at redhat.com Tue May 26 19:23:15 2015 From: jason.dobies at redhat.com (Jay Dobies) Date: Tue, 26 May 2015 15:23:15 -0400 Subject: [Rdo-list] reconsidering midstream repos In-Reply-To: <20150525125555.GK4035@redhat.com> References: <20150525125555.GK4035@redhat.com> Message-ID: <5564C823.9080104@redhat.com> On 05/25/2015 08:55 AM, Hugh O. Brock wrote: > Seems like the midstream repos are causing us a lot of pain with little > gain, at least in some cases. (For example it appears the t-h-t > midstream exists to carry a single patch that enables mongodb on > Centos.) It's not just a single patch for THT (in fact, I think that's one of the lesser examples of what you're getting at, whereas I don't think we've maintained a single Tuskar cherry pick for longer than a few days at most over the course of the release). At very least, there's another commit that's carried midstream that hasn't landed upstream to enable post deploy hooks for RHEL registration: https://review.openstack.org/#/c/172065/ That was first submitted on April 9th and hasn't yet landed. I'm not saying there isn't a problem in the length of that turn around time (that's an entirely separate discussion), but since we were under a time-based deadline to deliver the Satellite integration, it was cherry picked midstream. You could even argue that the presence of the midstream repos allowed that patch to sit around in limbo since we cherry picked and kinda forgot about it, which would be a +1 against them. You also can't really look at the situation today and the number of non-upstream commits and necessarily draw the conclusion that they aren't needed. Over the course of the past few sprints, the THT repo has carried more than one upstream cherry pick at a time. Arguably, it should be carrying the pacemaker and network stuff if they weren't split across so many different commits which made waiting on a rebase simpler (those are, however, interesting experience stories when considering the midstream repos). Slagle put it really well when he describes why we added them in the first place. 
Our scrum approach hasn't played well with the upstream community and trying to juggle both concurrently has led to, well, the midstream repos, among other things :) Hindsight and lessons learned and what not, but something we need to think about and correct for OSP 8.

This might be airing too much of our dirty laundry on rdo-list, but I think discussing the midstream repos is secondary to figuring out a better approach to our scrum, strongly time-based deliverables v. upstream. Once we pick a direction there, I think the question of whether they are needed will be more clear.

> Is it worth discussing whether we can eliminate some of these,
> especially for upstreams like t-h-t that aren't tightly tied to the
> OpenStack release schedule?
>
> /me ducks flying bricks
>
> --Hugh
>

From Kevin.Fox at pnnl.gov  Tue May 26 19:23:43 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 26 May 2015 19:23:43 +0000
Subject: [Rdo-list] reconsidering midstream repos
In-Reply-To: <20150526190624.GC4975@teletran-1>
References: <20150525125555.GK4035@redhat.com>, <20150526190624.GC4975@teletran-1>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2126C1@EX10MBOX03.pnnl.gov>

My 2 cents as an operator:

I upgraded a production cloud to Juno. Juno had an unexpected regression...
https://bugs.launchpad.net/neutron/+bug/1422476

Bug filed on: 2015-02-16
I proposed a fix, tested and working on our production cloud: Feb 18, 2015 2:01 PM
The patch hit upstream Juno: 2015-05-07 12:48:47

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Upstream cares about the long term care and feeding of the code base more than the stability of it. If you follow the review, no one really commented on the fix, only nitpicked the unit tests. I'm all for the long term unit testing code being rock solid, but the fix should have gone in right away, and then care could have been spent on making nice unit tests. The project got what it needed, but the ops guys suffered in the meantime.

If you don't provide a way to apply fixes before they get upstreamed, you're going to get some very unhappy operators I think, since upstream doesn't care. :/

Thanks,
Kevin
________________________________________
From: rdo-list-bounces at redhat.com [rdo-list-bounces at redhat.com] on behalf of James Slagle [jslagle at redhat.com]
Sent: Tuesday, May 26, 2015 12:06 PM
To: Hugh O. Brock
Cc: rdo-list at redhat.com
Subject: Re: [Rdo-list] reconsidering midstream repos

On Mon, May 25, 2015 at 02:55:57PM +0200, Hugh O. Brock wrote:
> Seems like the midstream repos are causing us a lot of pain with little
> gain, at least in some cases. (For example it appears the t-h-t
> midstream exists to carry a single patch that enables mongodb on
> Centos.) Is it worth discussing whether we can eliminate some of these,
> especially for upstreams like t-h-t that aren't tightly tied to the
> OpenStack release schedule?
>
> /me ducks flying bricks

No flying bricks from me anyway :)

I agree with what you're saying, but would like to agree on what we're actually meaning.

I think it's great that pretty much exactly what we said would happen has happened: as Kilo nears completion, the patches that we're having to carry for RDO-Manager is approaching 0.

But before we say, let's kill the midstream repos...let's consider why we even have them to begin with.

We didn't come up with the idea to fork a bunch of projects because it would make our *development* lives for RDO-Manager easier. In fact, it's exactly the opposite, they cause pain. So why do we have them?
Quite simply it's because of the demand that we show, commit to, and support features in RDO-Manager prior to them necessarily merging in upstream OpenStack projects. If we can lift that requirement, we could likely do away with the midstream repos entirely. I know this is rdo-list, but in the spirit of being open, if we don't have midstream repos for RDO-Manager, then we shouldn't have them for OSP either, and we should only be committing to features as they land in upstream OpenStack. It doesn't make sense to me to drop the midstream repos for RDO-Manager, but then turn around and set them up privately for OSP. At least with them setup for RDO, we're doing the development of the management product in the open. In fact if we were to split like this, I think the pain would only intensify. It's pragmatic to say that anyone looking to productize OpenStack is going to have their own bug fixes/etc that they need, and maybe even some features. But setting up private forks as a general development principal to drive OpenStack based product development should be avoided at all costs. So yes, I'm +1 on dropping the midstream repos if they're no longer needed. But I'm -1 on then turning around and setting them up privately so that we can commit code to private forks because that code hasn't landed in upstream OpenStack yet. If we can agree that this isn't needed, then I think we can do without the midstream. Otherwise, if we're going to insist on forks to "land" features prior to them merging upstream, let's at least keep them public. There's a couple other facets to the discussion as well: The midstream repos is a place where we can preview where the management product is headed. For example, the Heat breakpoint work was available from the midstream repos much earlier than it ended up merging into Heat/heatclient. Some devs/users might find this sort of thing really useful, and could provide some early feedback for RDO-Manager. The midstream repos are also a place where we can quickly land reverts to unblock work. The Neutron regression that broke provisioning via Ironic this past cycle immediately comes to mind...it took a couple of weeks before the upstream revert even landed in Neutron. While that was broken, we carried the proposed revert in the midstream repos so that people could install a Neutron that actually worked for RDO-Manager. > > --Hugh > > -- > == Hugh Brock, hbrock at redhat.com == > == Senior Engineering Manager, Cloud Engineering == > == RDO Manager: Install, configure, and scale OpenStack == > == http://rdoproject.org == > > "I know that you believe you understand what you think I said, but I?m > not sure you realize that what you heard is not what I meant." > --Robert McCloskey > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- -- James Slagle -- _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com From rdo-info at redhat.com Tue May 26 20:20:26 2015 From: rdo-info at redhat.com (RDO Forum) Date: Tue, 26 May 2015 20:20:26 +0000 Subject: [Rdo-list] [RDO] RDO blog roundup, week of May 26, 2016 Message-ID: <0000014d91e22366-597cded1-09d6-412c-8495-51efa0a86a66-000000@email.amazonses.com> rbowen started a discussion. 
RDO blog roundup, week of May 26, 2016 --- Follow the link below to check it out: https://www.rdoproject.org/forum/discussion/1019/rdo-blog-roundup-week-of-may-26-2016 Have a great day! From sgordon at redhat.com Tue May 26 21:23:53 2015 From: sgordon at redhat.com (Steve Gordon) Date: Tue, 26 May 2015 17:23:53 -0400 (EDT) Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: <1145142356.5019791.1432675274623.JavaMail.zimbra@redhat.com> Message-ID: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> Hi all, At the community meetup, which we held in a somewhat lightning talk focused format due to time constraints, we touched on the subject of packaging the big tent [1] and said that if something was under OpenStack governance we (as a community, not we as in Red Hat) would be willing to accept it into RDO assuming somebody was willing to package/maintain it. Now packaging isn't really my end of things so I have to admit I haven't been paying exhaustive attention to the discussion about opening up the packaging infrastructure to external contributions, but I have been approached by one or two people who would be interested in packaging projects that have recently been added to the OpenStack namespace and they either develop or maintain a key interest in. Is there a quickstart I can point such potential contributors at? Thanks, Steve [1] https://etherpad.openstack.org/p/RDO_Vancouver From sbaker at redhat.com Tue May 26 21:41:07 2015 From: sbaker at redhat.com (Steve Baker) Date: Wed, 27 May 2015 09:41:07 +1200 Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> Message-ID: <5564E873.4000209@redhat.com> On 27/05/15 09:23, Steve Gordon wrote: > Hi all, > > At the community meetup, which we held in a somewhat lightning talk focused format due to time constraints, we touched on the subject of packaging the big tent [1] and said that if something was under OpenStack governance we (as a community, not we as in Red Hat) would be willing to accept it into RDO assuming somebody was willing to package/maintain it. > > Now packaging isn't really my end of things so I have to admit I haven't been paying exhaustive attention to the discussion about opening up the packaging infrastructure to external contributions, but I have been approached by one or two people who would be interested in packaging projects that have recently been added to the OpenStack namespace and they either develop or maintain a key interest in. Is there a quickstart I can point such potential contributors at? > > Thanks, > > Steve > > [1] https://etherpad.openstack.org/p/RDO_Vancouver > Heat had a design summit session which resulted in agreeing to remove our contrib resources and bringing big-tent resources into the main heat tree. The flow on from this is that Liberty Heat will depend on many new python-*client projects that may not yet be packaged. We do have criteria for these resources coming in-tree, such as being in the openstack namespace, and being included in global-requirements.txt, but we should have some consideration for the impact this has on downstream packaging. So either we just insist that downstream package all these clients, or we come up with some further criteria for the in-tree resources for when their client imports should be optional. Any opinions from the RDO community would be most welcome. 
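Tacking on a practical note: a rough way to see the size of that gap is to compare the client libraries named in Heat's requirements against what the enabled repos actually carry. This is only a sketch; the requirements.txt location (a Heat source checkout) and the usual python-FOOclient RPM naming are assumptions here:

# List the python-*client entries in Heat's requirements.txt and
# check each one against the enabled yum repos
for c in $(grep -o '^python-[a-z]*client' requirements.txt); do
    yum info "$c" > /dev/null 2>&1 || echo "$c: not packaged yet"
done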
From hguemar at fedoraproject.org  Tue May 26 23:42:51 2015
From: hguemar at fedoraproject.org (Haïkel)
Date: Wed, 27 May 2015 01:42:51 +0200
Subject: [Rdo-list] Packaging the big tent (or at least part of it)
In-Reply-To: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com>
References: <1145142356.5019791.1432675274623.JavaMail.zimbra@redhat.com> <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com>
Message-ID: 

2015-05-26 23:23 GMT+02:00 Steve Gordon :
> Hi all,
>
> At the community meetup, which we held in a somewhat lightning talk focused format due to time constraints, we touched on the subject of packaging the big tent [1] and said that if something was under OpenStack governance we (as a community, not we as in Red Hat) would be willing to accept it into RDO assuming somebody was willing to package/maintain it.
>
> Now packaging isn't really my end of things so I have to admit I haven't been paying exhaustive attention to the discussion about opening up the packaging infrastructure to external contributions, but I have been approached by one or two people who would be interested in packaging projects that have recently been added to the OpenStack namespace and they either develop or maintain a key interest in. Is there a quickstart I can point such potential contributors at?
>

Hi Steve,

we have excellent documentation (thanks to Jakub!) that explains our workflow.
https://www.rdoproject.org/packaging/rdo-packaging.html

As for external contributors in packaging, you may address them directly to Alan and/or me. Our RDO packaging meetings every Wednesday are a good starting point.

Regards,
H.

> Thanks,
>
> Steve
>
> [1] https://etherpad.openstack.org/p/RDO_Vancouver
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From stdake at cisco.com  Wed May 27 00:12:50 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Wed, 27 May 2015 00:12:50 +0000
Subject: [Rdo-list] Packaging the big tent (or at least part of it)
In-Reply-To: <5564E873.4000209@redhat.com>
References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com>
Message-ID: 

On 5/26/15, 2:41 PM, "Steve Baker" wrote:

> On 27/05/15 09:23, Steve Gordon wrote:
>> Hi all,
>>
>> At the community meetup, which we held in a somewhat lightning talk
>> focused format due to time constraints, we touched on the subject of
>> packaging the big tent [1] and said that if something was under
>> OpenStack governance we (as a community, not we as in Red Hat) would be
>> willing to accept it into RDO assuming somebody was willing to
>> package/maintain it.
>>
>> Now packaging isn't really my end of things so I have to admit I
>> haven't been paying exhaustive attention to the discussion about opening
>> up the packaging infrastructure to external contributions, but I have
>> been approached by one or two people who would be interested in
>> packaging projects that have recently been added to the OpenStack
>> namespace and they either develop or maintain a key interest in. Is
>> there a quickstart I can point such potential contributors at?
>>
>> Thanks,
>>
>> Steve
>>
>> [1] https://etherpad.openstack.org/p/RDO_Vancouver
>>
> Heat had a design summit session which resulted in agreeing to remove
> our contrib resources and bringing big-tent resources into the main heat
> tree.
> The flow on from this is that Liberty Heat will depend on many new
> python-*client projects that may not yet be packaged.
>
> We do have criteria for these resources coming in-tree, such as being in
> the openstack namespace, and being included in global-requirements.txt,
> but we should have some consideration for the impact this has on
> downstream packaging.
>
> So either we just insist that downstream package all these clients, or
> we come up with some further criteria for the in-tree resources for when
> their client imports should be optional.
>
> Any opinions from the RDO community would be most welcome.

Steve,

Packaging a client library is at most a 1 hour job. Testing it is another matter however :) The only downside I see is there has to be someone willing to do the packaging, meaning someone has to care about the project from an RDO perspective. I'm happy to take on maintainership of the Magnum packages for RDO (not to include puppetizing them, because I am not learning puppet;), but for your proposal to work well, we need maintainers for all the things.

If puppet isn't a requirement for inclusion in RDO, then it should be fairly easy to find volunteers in the upstream communities to do the job.

Regards
-steve

>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From ggillies at redhat.com  Wed May 27 00:17:50 2015
From: ggillies at redhat.com (Graeme Gillies)
Date: Wed, 27 May 2015 10:17:50 +1000
Subject: [Rdo-list] Packaging the big tent (or at least part of it)
In-Reply-To: 
References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com>
Message-ID: <55650D2E.1040201@redhat.com>

On 05/27/2015 10:12 AM, Steven Dake (stdake) wrote:
>
>
> On 5/26/15, 2:41 PM, "Steve Baker" wrote:
>
>> On 27/05/15 09:23, Steve Gordon wrote:
>>> Hi all,
>>>
>>> At the community meetup, which we held in a somewhat lightning talk
>>> focused format due to time constraints, we touched on the subject of
>>> packaging the big tent [1] and said that if something was under
>>> OpenStack governance we (as a community, not we as in Red Hat) would be
>>> willing to accept it into RDO assuming somebody was willing to
>>> package/maintain it.
>>>
>>> Now packaging isn't really my end of things so I have to admit I
>>> haven't been paying exhaustive attention to the discussion about opening
>>> up the packaging infrastructure to external contributions, but I have
>>> been approached by one or two people who would be interested in
>>> packaging projects that have recently been added to the OpenStack
>>> namespace and they either develop or maintain a key interest in. Is
>>> there a quickstart I can point such potential contributors at?
>>>
>>> Thanks,
>>>
>>> Steve
>>>
>>> [1] https://etherpad.openstack.org/p/RDO_Vancouver
>>>
>> Heat had a design summit session which resulted in agreeing to remove
>> our contrib resources and bringing big-tent resources into the main heat
>> tree. The flow on from this is that Liberty Heat will depend on many new
>> python-*client projects that may not yet be packaged.
>>
>> We do have criteria for these resources coming in-tree, such as being in
>> the openstack namespace, and being included in global-requirements.txt,
>> but we should have some consideration for the impact this has on
>> downstream packaging.
>> >> So either we just insist that downstream package all these clients, or >> we come up with some further criteria for the in-tree resources for when >> their client imports should be optional. >> >> Any opinions from the RDO community would be most welcome. > > Steve, > > Packaging a client library is at most a 1 hour job. Testing it is another > matter however :) The only downside I see is there has to be someone > willing to do the packaging, meaning someone has to care about the project > from an RDO perspective. I?m happy to take on maintainership of the > Magnum packages for RDO (not to include puppettizing them, because I am > not learning puppet;), but for your proposal to work well, we need > maintainers for all the things. This is an interesting idea. As we start to get more projects under big tent and into RDO (and the projects themselves might be in various states in terms of maturity and stability), it might be worthwhile having some documentation somewhere on the RDO website on what big tent projects are in RDO, and which one or more people have taken the ownership of packaging and maintaining them. This might be at a level higher than RPMs itself. Essentially some form of process/governance to officially identify what's in RDO (to avoid duplicates) and give people a way of identifying who's responsible if patches/work is needed (and who to contact to offer help assisting maintenance). Does this make sense? > > If puppet isn?t a requirement for inclusion in RDO, then it should be > fairly easy to find volunteers in thee upstream communities to do the job. > > Regards > -steve > > > >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From pmyers at redhat.com Wed May 27 00:30:45 2015 From: pmyers at redhat.com (Perry Myers) Date: Tue, 26 May 2015 20:30:45 -0400 Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: <55650D2E.1040201@redhat.com> References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> Message-ID: <55651035.7000802@redhat.com> On 05/26/2015 08:17 PM, Graeme Gillies wrote: > On 05/27/2015 10:12 AM, Steven Dake (stdake) wrote: >> >> >> On 5/26/15, 2:41 PM, "Steve Baker" wrote: >> >>> On 27/05/15 09:23, Steve Gordon wrote: >>>> Hi all, >>>> >>>> At the community meetup, which we held in a somewhat lightning talk >>>> focused format due to time constraints, we touched on the subject of >>>> packaging the big tent [1] and said that if something was under >>>> OpenStack governance we (as a community, not we as in Red Hat) would be >>>> willing to accept it into RDO assuming somebody was willing to >>>> package/maintain it. 
>>>> >>>> Now packaging isn't really my end of things so I have to admit I >>>> haven't been paying exhaustive attention to the discussion about opening >>>> up the packaging infrastructure to external contributions, but I have >>>> been approached by one or two people who would be interested in >>>> packaging projects that have recently been added to the OpenStack >>>> namespace and they either develop or maintain a key interest in. Is >>>> there a quickstart I can point such potential contributors at? >>>> >>>> Thanks, >>>> >>>> Steve >>>> >>>> [1] https://etherpad.openstack.org/p/RDO_Vancouver >>>> >>> Heat had a design summit session which resulted in agreeing to remove >>> our contrib resources and bringing big-tent resources into the main heat >>> tree. The flow on from this is that Liberty Heat will depend on many new >>> python-*client projects that may not yet be packaged. >>> >>> We do have criteria for these resources coming in-tree, such as being in >>> the openstack namespace, and being included in global-requirements.txt, >>> but we should have some consideration for the impact this has on >>> downstream packaging. >>> >>> So either we just insist that downstream package all these clients, or >>> we come up with some further criteria for the in-tree resources for when >>> their client imports should be optional. >>> >>> Any opinions from the RDO community would be most welcome. >> >> Steve, >> >> Packaging a client library is at most a 1 hour job. Testing it is another >> matter however :) The only downside I see is there has to be someone >> willing to do the packaging, meaning someone has to care about the project >> from an RDO perspective. I?m happy to take on maintainership of the >> Magnum packages for RDO (not to include puppettizing them, because I am >> not learning puppet;), but for your proposal to work well, we need >> maintainers for all the things. > > This is an interesting idea. As we start to get more projects under big > tent and into RDO (and the projects themselves might be in various > states in terms of maturity and stability), it might be worthwhile > having some documentation somewhere on the RDO website on what big tent > projects are in RDO, and which one or more people have taken the > ownership of packaging and maintaining them. This might be at a level > higher than RPMs itself. Essentially some form of process/governance to > officially identify what's in RDO (to avoid duplicates) and give people > a way of identifying who's responsible if patches/work is needed (and > who to contact to offer help assisting maintenance). > > Does this make sense? +1 From mattdm at fedoraproject.org Wed May 27 00:42:29 2015 From: mattdm at fedoraproject.org (Matthew Miller) Date: Tue, 26 May 2015 20:42:29 -0400 Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: <55650D2E.1040201@redhat.com> References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> Message-ID: <20150527004228.GA14433@mattdm.org> On Wed, May 27, 2015 at 10:17:50AM +1000, Graeme Gillies wrote: > This is an interesting idea. As we start to get more projects under big > tent and into RDO (and the projects themselves might be in various > states in terms of maturity and stability), it might be worthwhile > having some documentation somewhere on the RDO website on what big tent > projects are in RDO, and which one or more people have taken the > ownership of packaging and maintaining them. 
This might be at a level > higher than RPMs itself. Essentially some form of process/governance to > officially identify what's in RDO (to avoid duplicates) and give people > a way of identifying who's responsible if patches/work is needed (and > who to contact to offer help assisting maintenance). Possibly this is too heavyweight, but it may be that pkdb2 could be of help here. It's a webapp (open source, naturally) for both documenting this and managing the associated ACLs. See for example -- Matthew Miller Fedora Project Leader From idezebi at gmail.com Wed May 27 00:56:57 2015 From: idezebi at gmail.com (idzzy) Date: Wed, 27 May 2015 09:56:57 +0900 Subject: [Rdo-list] No ironic pkg during packsack installation Message-ID: Hello, I'm trying to setup Kilo in three nodes (controller, compute, network). During packstack installation, following error message was output. --------------------------------- 10.32.37.47_ironic.pp: ? ? ? ? ? ? ? ? ? ? ? ? ? ?[ ERROR ] Applying Puppet manifests ? ? ? ? ? ? ? ? ? ? ? ? [ ERROR ] ERROR : Error appeared during Puppet run: 10.32.37.47_ironic.pp Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list openstack-ironic-conductor' returned 1: Error: No matching Packages to list You will find full trace in log /var/tmp/packstack/20150526-164515-bTdbtG/manifests/10.32.37.47_ironic.pp.log Please check log file /var/tmp/packstack/20150526-164515-bTdbtG/openstack-setup.log for more information --------------------------------- Please see these logs from this link. /var/tmp/packstack/20150526-164515-bTdbtG/manifests/10.32.37.47_ironic.pp.log ? ? => ? ?https://www.dropbox.com/s/bdnu7aujwom97zg/10.32.37.47_ironic.pp.log?dl=0 /var/tmp/packstack/20150526-164515-bTdbtG/openstack-setup.log ? ? => ? ?https://www.dropbox.com/s/aysqstzjdlpweam/openstack-setup.log?dl=0 openstack-kilo and epel repo has been already added. --------------------------------- # yum repolist | awk '{print $1}' base/7/x86_64 epel/x86_64 extras/7/x86_64 openstack-kilo updates/7/x86_64 --------------------------------- But seems to not find openstack-ironic-* pkg --------------------------------- # yum search openstack-ironic Warning: No matches found for: openstack-ironic No matches found --------------------------------- Relevant bug (already fixed) https://bugzilla.redhat.com/show_bug.cgi?id=1187343 There are ironic puppet modules. --------------------------------- ?# ll /usr/share/openstack-puppet/modules/ironic/ total 40 drwxr-xr-x 2 root root ?4096 May 26 16:01 examples -rw-r--r-- 1 root root ? 758 May 16 00:36 Gemfile drwxr-xr-x 3 root root ?4096 May 26 16:01 lib -rw-r--r-- 1 root root 10143 May 16 00:36 LICENSE drwxr-xr-x 5 root root ?4096 May 26 16:01 manifests -rw-r--r-- 1 root root ?1312 May 16 00:36 metadata.json -rw-r--r-- 1 root root ? 303 May 16 00:36 Rakefile -rw-r--r-- 1 root root ?1751 May 16 00:36 README.md --------------------------------- How fix this issue? Any ides would be helpful for me. I?m also asking this to ask.openstack.org. https://ask.openstack.org/en/question/67465/rdo-kilo-no-pkg-of-ironic-conductor/ Thank you. ? idzzy From idezebi at gmail.com Wed May 27 03:30:55 2015 From: idezebi at gmail.com (idzzy) Date: Wed, 27 May 2015 12:30:55 +0900 Subject: [Rdo-list] No ironic pkg during packsack installation In-Reply-To: References: Message-ID: Hello, Resolved. Sorry confusing. If enable ironic, It seems to setup rdo-manager-repo manually before run packstack. 
curl -o /etc/yum.repos.d/rdo-manager-release.repo https://raw.githubusercontent.com/rdo-management/rdo-manager-release/master/rdo-manager-release.repo Thanks. ? idzzy On May 27, 2015 at 9:57:01 AM, idzzy (idezebi at gmail.com) wrote: > Hello, > > I'm trying to setup Kilo in three nodes (controller, compute, network). > During packstack installation, following error message was output. > > --------------------------------- > 10.32.37.47_ironic.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 10.32.37.47_ironic.pp > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list openstack-ironic-conductor' returned > 1: Error: No matching Packages to list > You will find full trace in log /var/tmp/packstack/20150526-164515-bTdbtG/manifests/10.32.37.47_ironic.pp.log > Please check log file /var/tmp/packstack/20150526-164515-bTdbtG/openstack-setup.log > for more information > --------------------------------- > > Please see these logs from this link. > /var/tmp/packstack/20150526-164515-bTdbtG/manifests/10.32.37.47_ironic.pp.log > => https://www.dropbox.com/s/bdnu7aujwom97zg/10.32.37.47_ironic.pp.log?dl=0 > /var/tmp/packstack/20150526-164515-bTdbtG/openstack-setup.log > => https://www.dropbox.com/s/aysqstzjdlpweam/openstack-setup.log?dl=0 > > > openstack-kilo and epel repo has been already added. > --------------------------------- > # yum repolist | awk '{print $1}' > base/7/x86_64 > epel/x86_64 > extras/7/x86_64 > openstack-kilo > updates/7/x86_64 > --------------------------------- > > But seems to not find openstack-ironic-* pkg > --------------------------------- > # yum search openstack-ironic > Warning: No matches found for: openstack-ironic > No matches found > --------------------------------- > > Relevant bug (already fixed) https://bugzilla.redhat.com/show_bug.cgi?id=1187343 > There are ironic puppet modules. > --------------------------------- > # ll /usr/share/openstack-puppet/modules/ironic/ > total 40 > drwxr-xr-x 2 root root 4096 May 26 16:01 examples > -rw-r--r-- 1 root root 758 May 16 00:36 Gemfile > drwxr-xr-x 3 root root 4096 May 26 16:01 lib > -rw-r--r-- 1 root root 10143 May 16 00:36 LICENSE > drwxr-xr-x 5 root root 4096 May 26 16:01 manifests > -rw-r--r-- 1 root root 1312 May 16 00:36 metadata.json > -rw-r--r-- 1 root root 303 May 16 00:36 Rakefile > -rw-r--r-- 1 root root 1751 May 16 00:36 README.md > --------------------------------- > > How fix this issue? Any ides would be helpful for me. > > I?m also asking this to ask.openstack.org. > https://ask.openstack.org/en/question/67465/rdo-kilo-no-pkg-of-ironic-conductor/ > > Thank you. > > ? > idzzy > From mrunge at redhat.com Wed May 27 06:49:07 2015 From: mrunge at redhat.com (Matthias Runge) Date: Wed, 27 May 2015 08:49:07 +0200 Subject: [Rdo-list] [horizon] can't access the dashboard In-Reply-To: <556471BA.3080507@redhat.com> References: <1B69E72A-7E3B-42DE-839E-A6BA896C3772@redhat.com> <556471BA.3080507@redhat.com> Message-ID: <556568E3.7000209@redhat.com> On 26/05/15 15:14, Elvir Kuric wrote: > On 05/26/2015 03:08 PM, ICHIBA Sara wrote: >> this is quite annoying. I have to remove my cookies every time I wanna >> login. Do you know any other tricks to get rid of this error for once? > in https://bugzilla.redhat.com/show_bug.cgi?id=1218894 comment #9 seems > to be an option. > hth > Elvir >> I have been told, all requirements hit the repos now. yum update && systemctl restart httpd should fix login issues after keystone token timeout in horizon. 
Matthias From ifarkas at redhat.com Wed May 27 07:38:41 2015 From: ifarkas at redhat.com (Imre Farkas) Date: Wed, 27 May 2015 09:38:41 +0200 Subject: [Rdo-list] introspection and raid configuration In-Reply-To: <556494BA.6020208@redhat.com> References: <55648647.3030000@redhat.com> <556494BA.6020208@redhat.com> Message-ID: <55657481.5010407@redhat.com> On 05/26/2015 05:43 PM, Dmitry Tantsur wrote: > On 05/26/2015 04:42 PM, Imre Farkas wrote: >> Hi all, >> >> I would like to gather some feedback on the current worfklow regarding >> the subject. RAID configuration on DRAC machines works as follows (it >> assumes there's no RAID volume on the machine): >> 1. user defines the deployment profile and the target raid configuration >> for each profile >> 2. ironic-discoverd introspects the node to gather facts >> 3. ach-match picks a deployment profile based on the gathered facts >> 4. instack triggers the raid configuration based on the selected profile >> >> A bug[1] has been discovered regarding step 2: ironic-discoverd fails >> because it tries to figure out the size of the local disk but it can't >> find any as no volume exists on the RAID controller yet. This is a >> chicken-and-egg problem because ironic-discoverd doesn't work if the >> RAID volume(s) has not been configured but the RAID configuration can't >> be triggered if ironic-discoverd hasn't gathered the facts about the >> node. >> >> Few possible workarounds: >> #1: make saving the local disk size optional in the standard plugin in >> ironic-discoverd. The downside is that the standard plugin is supposed >> to enroll nodes in ironic with all attributes necessary for scheduling. >> This assumption might fail with this solution. > > -1 will never get upstream for the reasons you stated > >> >> #2: run discovery multiple times with different set of plugins. The run >> before RAID configuration would exclude the standard plugin while the >> run afterwards could potentially exclude others. The parameters passed >> by the user to ironic-discoverd for each run need to be properly >> documented. It would slow down the installation because each run >> requires a reboot which takes a lot of time on bare metal. > > Possible, but better (IMO) idea below. > >> >> #3: name your suggestion! > > #3. modify your existing root_device_hint plugin to insert a fake > local_gb value (with issuing a warning), and then put it to the > beginning of the plugin pipeline. WDYT? > I would avoid that for the very same reason as we rejected option #1: it would do things quite differently than it is expected. However, building on top of this idea, we could introduce a fake_local_gb plugin and enable it as default in instack. Thoughts? Imre >> >> Any thoughts/preference on the above described workarounds? >> >> Thanks, >> Imre >> >> >> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1222124 > From dtantsur at redhat.com Wed May 27 07:50:46 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 27 May 2015 09:50:46 +0200 Subject: [Rdo-list] introspection and raid configuration In-Reply-To: <55657481.5010407@redhat.com> References: <55648647.3030000@redhat.com> <556494BA.6020208@redhat.com> <55657481.5010407@redhat.com> Message-ID: <55657756.7020601@redhat.com> On 05/27/2015 09:38 AM, Imre Farkas wrote: > On 05/26/2015 05:43 PM, Dmitry Tantsur wrote: >> On 05/26/2015 04:42 PM, Imre Farkas wrote: >>> Hi all, >>> >>> I would like to gather some feedback on the current worfklow regarding >>> the subject. 
RAID configuration on DRAC machines works as follows (it >>> assumes there's no RAID volume on the machine): >>> 1. user defines the deployment profile and the target raid configuration >>> for each profile >>> 2. ironic-discoverd introspects the node to gather facts >>> 3. ach-match picks a deployment profile based on the gathered facts >>> 4. instack triggers the raid configuration based on the selected profile >>> >>> A bug[1] has been discovered regarding step 2: ironic-discoverd fails >>> because it tries to figure out the size of the local disk but it can't >>> find any as no volume exists on the RAID controller yet. This is a >>> chicken-and-egg problem because ironic-discoverd doesn't work if the >>> RAID volume(s) has not been configured but the RAID configuration can't >>> be triggered if ironic-discoverd hasn't gathered the facts about the >>> node. >>> >>> Few possible workarounds: >>> #1: make saving the local disk size optional in the standard plugin in >>> ironic-discoverd. The downside is that the standard plugin is supposed >>> to enroll nodes in ironic with all attributes necessary for scheduling. >>> This assumption might fail with this solution. >> >> -1 will never get upstream for the reasons you stated >> >>> >>> #2: run discovery multiple times with different set of plugins. The run >>> before RAID configuration would exclude the standard plugin while the >>> run afterwards could potentially exclude others. The parameters passed >>> by the user to ironic-discoverd for each run need to be properly >>> documented. It would slow down the installation because each run >>> requires a reboot which takes a lot of time on bare metal. >> >> Possible, but better (IMO) idea below. >> >>> >>> #3: name your suggestion! >> >> #3. modify your existing root_device_hint plugin to insert a fake >> local_gb value (with issuing a warning), and then put it to the >> beginning of the plugin pipeline. WDYT? >> > > I would avoid that for the very same reason as we rejected option #1: it > would do things quite differently than it is expected. However, building > on top of this idea, we could introduce a fake_local_gb plugin and > enable it as default in instack. Thoughts? ETOOMANYPLUGINS :D I would say it's fine for root_device_hint plugin. It's not enabled by default upstream, and it's a normal flow involving it. > > Imre > > >>> >>> Any thoughts/preference on the above described workarounds? >>> >>> Thanks, >>> Imre >>> >>> >>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1222124 >> > From hguemar at fedoraproject.org Wed May 27 08:51:45 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 27 May 2015 10:51:45 +0200 Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> Message-ID: 2015-05-27 2:12 GMT+02:00 Steven Dake (stdake) : > > > Steve, > > Packaging a client library is at most a 1 hour job. Testing it is another > matter however :) The only downside I see is there has to be someone > willing to do the packaging, meaning someone has to care about the project > from an RDO perspective. I?m happy to take on maintainership of the > Magnum packages for RDO (not to include puppettizing them, because I am > not learning puppet;), but for your proposal to work well, we need > maintainers for all the things. 
> You're welcome to take ownership for Magnum packages :) +1 for having identified Point of Contacts for every components, it won't work otherwise. > If puppet isn?t a requirement for inclusion in RDO, then it should be > fairly easy to find volunteers in thee upstream communities to do the job. > It's not a hard requirement, we're already including community contributed components (ie: GBP in Juno) without Puppet/packstack support. As a reviewer, I will enforce that default package configuration makes it easy to run and test services. Regards, H. > Regards > -steve > From flavio at redhat.com Wed May 27 09:14:29 2015 From: flavio at redhat.com (Flavio Percoco) Date: Wed, 27 May 2015 11:14:29 +0200 Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> References: <1145142356.5019791.1432675274623.JavaMail.zimbra@redhat.com> <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> Message-ID: <20150527091428.GS15762@redhat.com> On 26/05/15 17:23 -0400, Steve Gordon wrote: >Hi all, > >At the community meetup, which we held in a somewhat lightning talk focused format due to time constraints, we touched on the subject of packaging the big tent [1] and said that if something was under OpenStack governance we (as a community, not we as in Red Hat) would be willing to accept it into RDO assuming somebody was willing to package/maintain it. > >Now packaging isn't really my end of things so I have to admit I haven't been paying exhaustive attention to the discussion about opening up the packaging infrastructure to external contributions, but I have been approached by one or two people who would be interested in packaging projects that have recently been added to the OpenStack namespace and they either develop or maintain a key interest in. Is there a quickstart I can point such potential contributors at? > >Thanks, > >Steve > >[1] https://etherpad.openstack.org/p/RDO_Vancouver I had a hallway talk with Zigo about this. He proposed a governance[0] changed and he was encouraged to communicate things to the mailing list[1]. I thought to bring these to this thread as a general note and to invite openstack specific discussions to happen in os-dev. Cheers, Flavio [0] https://review.openstack.org/#/c/185187/ [1] http://lists.openstack.org/pipermail/openstack-dev/2015-May/064848.html > >_______________________________________________ >Rdo-list mailing list >Rdo-list at redhat.com >https://www.redhat.com/mailman/listinfo/rdo-list > >To unsubscribe: rdo-list-unsubscribe at redhat.com -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From hguemar at fedoraproject.org Wed May 27 09:22:56 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 27 May 2015 11:22:56 +0200 Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: <20150527004228.GA14433@mattdm.org> References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> Message-ID: 2015-05-27 2:42 GMT+02:00 Matthew Miller : > On Wed, May 27, 2015 at 10:17:50AM +1000, Graeme Gillies wrote: >> This is an interesting idea. 
As we start to get more projects under big >> tent and into RDO (and the projects themselves might be in various >> states in terms of maturity and stability), it might be worthwhile >> having some documentation somewhere on the RDO website on what big tent >> projects are in RDO, and which one or more people have taken the >> ownership of packaging and maintaining them. This might be at a level >> higher than RPMs itself. Essentially some form of process/governance to >> officially identify what's in RDO (to avoid duplicates) and give people >> a way of identifying who's responsible if patches/work is needed (and >> who to contact to offer help assisting maintenance). > > Possibly this is too heavyweight, but it may be that pkdb2 > could be of help here. It's a > webapp (open source, naturally) for both documenting this and managing > the associated ACLs. See for example > > Feature-wise, pkgdb2 does the job but one issue is that RDO is a layered product over two distros (CentOS, Fedora) and various releases. That's an unsupported scenario by pkgdb2, and since the CentOS rise, we're starting to have more and more cross-distro layered products. AFAIK, we don't have any better tool than pkgdb2 that integrates with dist-git and Koji. A single entry point would be a huge help for us, though. H. > -- > Matthew Miller > > Fedora Project Leader > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From ifarkas at redhat.com Wed May 27 09:23:25 2015 From: ifarkas at redhat.com (Imre Farkas) Date: Wed, 27 May 2015 11:23:25 +0200 Subject: [Rdo-list] introspection and raid configuration In-Reply-To: <55657756.7020601@redhat.com> References: <55648647.3030000@redhat.com> <556494BA.6020208@redhat.com> <55657481.5010407@redhat.com> <55657756.7020601@redhat.com> Message-ID: <55658D0D.9010904@redhat.com> On 05/27/2015 09:50 AM, Dmitry Tantsur wrote: > On 05/27/2015 09:38 AM, Imre Farkas wrote: >> On 05/26/2015 05:43 PM, Dmitry Tantsur wrote: >>> On 05/26/2015 04:42 PM, Imre Farkas wrote: >>>> Hi all, >>>> >>>> I would like to gather some feedback on the current worfklow regarding >>>> the subject. RAID configuration on DRAC machines works as follows (it >>>> assumes there's no RAID volume on the machine): >>>> 1. user defines the deployment profile and the target raid >>>> configuration >>>> for each profile >>>> 2. ironic-discoverd introspects the node to gather facts >>>> 3. ach-match picks a deployment profile based on the gathered facts >>>> 4. instack triggers the raid configuration based on the selected >>>> profile >>>> >>>> A bug[1] has been discovered regarding step 2: ironic-discoverd fails >>>> because it tries to figure out the size of the local disk but it can't >>>> find any as no volume exists on the RAID controller yet. This is a >>>> chicken-and-egg problem because ironic-discoverd doesn't work if the >>>> RAID volume(s) has not been configured but the RAID configuration can't >>>> be triggered if ironic-discoverd hasn't gathered the facts about the >>>> node. >>>> >>>> Few possible workarounds: >>>> #1: make saving the local disk size optional in the standard plugin in >>>> ironic-discoverd. The downside is that the standard plugin is supposed >>>> to enroll nodes in ironic with all attributes necessary for scheduling. >>>> This assumption might fail with this solution. 
>>> >>> -1 will never get upstream for the reasons you stated >>> >>>> >>>> #2: run discovery multiple times with different set of plugins. The run >>>> before RAID configuration would exclude the standard plugin while the >>>> run afterwards could potentially exclude others. The parameters passed >>>> by the user to ironic-discoverd for each run need to be properly >>>> documented. It would slow down the installation because each run >>>> requires a reboot which takes a lot of time on bare metal. >>> >>> Possible, but better (IMO) idea below. >>> >>>> >>>> #3: name your suggestion! >>> >>> #3. modify your existing root_device_hint plugin to insert a fake >>> local_gb value (with issuing a warning), and then put it to the >>> beginning of the plugin pipeline. WDYT? >>> >> >> I would avoid that for the very same reason as we rejected option #1: it >> would do things quite differently than it is expected. However, building >> on top of this idea, we could introduce a fake_local_gb plugin and >> enable it as default in instack. Thoughts? > > ETOOMANYPLUGINS :D > > I would say it's fine for root_device_hint plugin. It's not enabled by > default upstream, and it's a normal flow involving it. > Patch proposed: https://review.openstack.org/#/c/185896/ >> >> Imre >> >> >>>> >>>> Any thoughts/preference on the above described workarounds? >>>> >>>> Thanks, >>>> Imre >>>> >>>> >>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1222124 >>> >> > From dtantsur at redhat.com Wed May 27 09:38:33 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 27 May 2015 11:38:33 +0200 Subject: [Rdo-list] introspection and raid configuration In-Reply-To: <55658D0D.9010904@redhat.com> References: <55648647.3030000@redhat.com> <556494BA.6020208@redhat.com> <55657481.5010407@redhat.com> <55657756.7020601@redhat.com> <55658D0D.9010904@redhat.com> Message-ID: <55659099.1020901@redhat.com> On 05/27/2015 11:23 AM, Imre Farkas wrote: > On 05/27/2015 09:50 AM, Dmitry Tantsur wrote: >> On 05/27/2015 09:38 AM, Imre Farkas wrote: >>> On 05/26/2015 05:43 PM, Dmitry Tantsur wrote: >>>> On 05/26/2015 04:42 PM, Imre Farkas wrote: >>>>> Hi all, >>>>> >>>>> I would like to gather some feedback on the current worfklow regarding >>>>> the subject. RAID configuration on DRAC machines works as follows (it >>>>> assumes there's no RAID volume on the machine): >>>>> 1. user defines the deployment profile and the target raid >>>>> configuration >>>>> for each profile >>>>> 2. ironic-discoverd introspects the node to gather facts >>>>> 3. ach-match picks a deployment profile based on the gathered facts >>>>> 4. instack triggers the raid configuration based on the selected >>>>> profile >>>>> >>>>> A bug[1] has been discovered regarding step 2: ironic-discoverd fails >>>>> because it tries to figure out the size of the local disk but it can't >>>>> find any as no volume exists on the RAID controller yet. This is a >>>>> chicken-and-egg problem because ironic-discoverd doesn't work if the >>>>> RAID volume(s) has not been configured but the RAID configuration >>>>> can't >>>>> be triggered if ironic-discoverd hasn't gathered the facts about the >>>>> node. >>>>> >>>>> Few possible workarounds: >>>>> #1: make saving the local disk size optional in the standard plugin in >>>>> ironic-discoverd. The downside is that the standard plugin is supposed >>>>> to enroll nodes in ironic with all attributes necessary for >>>>> scheduling. >>>>> This assumption might fail with this solution. 
>>>> >>>> -1 will never get upstream for the reasons you stated >>>> >>>>> >>>>> #2: run discovery multiple times with different set of plugins. The >>>>> run >>>>> before RAID configuration would exclude the standard plugin while the >>>>> run afterwards could potentially exclude others. The parameters passed >>>>> by the user to ironic-discoverd for each run need to be properly >>>>> documented. It would slow down the installation because each run >>>>> requires a reboot which takes a lot of time on bare metal. >>>> >>>> Possible, but better (IMO) idea below. >>>> >>>>> >>>>> #3: name your suggestion! >>>> >>>> #3. modify your existing root_device_hint plugin to insert a fake >>>> local_gb value (with issuing a warning), and then put it to the >>>> beginning of the plugin pipeline. WDYT? >>>> >>> >>> I would avoid that for the very same reason as we rejected option #1: it >>> would do things quite differently than it is expected. However, building >>> on top of this idea, we could introduce a fake_local_gb plugin and >>> enable it as default in instack. Thoughts? >> >> ETOOMANYPLUGINS :D >> >> I would say it's fine for root_device_hint plugin. It's not enabled by >> default upstream, and it's a normal flow involving it. >> > > Patch proposed: https://review.openstack.org/#/c/185896/ Is it fine if we carry it downstream for some time? During rename I don't want to land anything upstream. I hope to be finished in a couple of days. > >>> >>> Imre >>> >>> >>>>> >>>>> Any thoughts/preference on the above described workarounds? >>>>> >>>>> Thanks, >>>>> Imre >>>>> >>>>> >>>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1222124 >>>> >>> >> > From pgsousa at gmail.com Wed May 27 10:01:17 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Wed, 27 May 2015 11:01:17 +0100 Subject: [Rdo-list] RDO-Manager ovecloud change existing plan, how? In-Reply-To: <5564C479.2030402@redhat.com> References: <5563093F.9000206@redhat.com> <0899b727-24aa-45c0-a206-b42df66afd89@redhat.com> <5564C479.2030402@redhat.com> Message-ID: Hi Jay, thank you for you answer, in fact I was running instack-deploy-overcloud after those commands, I didn't realize it deleted the plan, so a couple of questions: - Should I run something like "heat stack-create -f tuskar_templates/plan.yaml -e tuskar_templates/environment.yaml" instead? Will it work? - Or should I wait for that to be ready in UI, if I understood correctly, and test it from there? Thanks, Pedro Sousa On Tue, May 26, 2015 at 8:07 PM, Jay Dobies wrote: > > > On 05/26/2015 02:19 PM, Pedro Sousa wrote: > >> Hi Hugh, >> >> I've tried to change the plan: >> >> # tuskar plan-update -A Compute-1::NeutronEnableTunnelling=False >> c91b13a2-afd6-4eb2-9a78-46335190519d >> # tuskar plan-update -A Controller-1::NeutronEnableTunnelling=False >> c91b13a2-afd6-4eb2-9a78-46335190519d >> # export NEUTRON_NETWORK_TYPE=vlan >> >> But the stack failed, I also see that plan-update doesn't work: >> > > It depends on what you did between the lines above and the line below. > > If you're making the updates above and then running > instack-deploy-overcloud, it's not going to work. That script deletes the > plan and recreates it, losing your updates in the process. > > That logic (role addition and plan create) is being moved out of > instack-deploy-overcloud to an installation-time step to enable this sort > of thing (not fully sure the state of that, but the UI needs the plan > create to be done during install as well). 
> > [stack at instack ~]$ heat stack-show 4dd74e83-e90f-437f-b8b5-ac45d6ada9db >> | grep Tunnel >> | | "Controller-1::NeutronEnableTunnelling": >> "True", >> >> Regards >> Pedro Sousa >> >> >> >> >> On Tue, May 26, 2015 at 6:39 PM, Hugh Brock > > wrote: >> >> It should be... But we haven't really tested it yet, to my >> knowledge. It's an important configuration that we want to support. >> >> If you are able to sort it out and past your results here, that >> would be great! >> >> -Hugh >> >> Sent from my mobile, please pardon the top posting. >> >> *From:* Pedro Sousa > >> *Sent:* May 26, 2015 7:28 PM >> *To:* Giulio Fidente >> *Cc:* Marios Andreou;rdo-list at redhat.com >> ;Jason Dobies >> *Subject:* Re: [Rdo-list] RDO-Manager ovecloud change existing plan, >> how? >> >> >> Hi all, >> >> thanks to Giulio recommendations in #rdo I've managed to change some >> parameters: >> >> #heat stack-delete overcloud >> #export NEUTRON_TUNNEL_TYPES=vxlan >> #export NEUTRON_TUNNEL_TYPE=vxlan >> #export NEUTRON_NETWORK_TYPE=vxlan >> #instack-deploy-overcloud --tuskar >> >> This works for TUSKAR_PARAMETERS contained in the >> instack-deploy-overcloud script (please correct me if I'm wrong). >> >> My question is if it's possible to use VLAN for tenants, using a >> VLAN range and disable GRE/VXLAN tunneling. >> >> Thanks, >> Pedro Sousa >> >> >> On Mon, May 25, 2015 at 12:36 PM, Giulio Fidente >> > wrote: >> >> On 05/25/2015 01:09 PM, Pedro Sousa wrote: >> >> Hi all, >> >> I've deployed rdo-manager in a virt env and everything is >> working fine >> except the vnc console which is alreday an open bug for that. >> >> Now I would like to change some parameters on my deployment, >> let's say I >> wan't to disable NeutronTunneling, I wan't to use VLAN for >> tenants and >> use 1500 MTU on dnsmasq. >> >> So I downloaded the plan: >> >> #tuskar plan-templates -O /tmp uuid >> >> changed plan.yaml, environment.yaml, >> provider-Controller-1.yaml, >> provider-Compute-1.yaml. >> >> than I ran the stack: >> >> # heat stack-create -f tmp/plan.yaml -e tmp/environment.yaml >> overcloud >> >> The overcloud is deployed fine but the values aren't >> changed. What I'm >> missing here? >> >> >> hi, >> >> if you launch stack-create manually the newly created overcloud >> is not reprovisioned with the initial keystone >> endpoints/users/roles ... to get an usable overcloud you should >> launch instack-deploy-overcloud again >> >> so you can change the defaults for the various params by >> patching the tuskar plan with 'tuskar plan-update' see [1] >> >> yet some of these are automatically parsed from ENV vars, like >> NEUTRON_TUNNEL_TYPES and NEUTRON_NETWORK_TYPE see [2] >> >> the NeutronDnsmasqOptions param instead is not parsed from any >> ENV var, so you're forced to use 'tuskar plan-update' >> >> I'm adding a couple of guys on CC who migh help but, let us know >> how it goes! >> >> 1. >> >> https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L274 >> >> 2. 
>> >> https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L205-L208 >> -- >> Giulio Fidente >> GPG KEY: 08D733BA >> >> >> >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsedovic at redhat.com Wed May 27 10:13:04 2015 From: tsedovic at redhat.com (Tomas Sedovic) Date: Wed, 27 May 2015 12:13:04 +0200 Subject: [Rdo-list] RDO + floating IPs In-Reply-To: <1226319266.4318549.1432650442756.JavaMail.zimbra@redhat.com> References: <55647B1C.9040208@redhat.com> <1226319266.4318549.1432650442756.JavaMail.zimbra@redhat.com> Message-ID: <556598B0.90400@redhat.com> On 05/26/2015 04:27 PM, Jon Jozwiak wrote: >> >> ----- Original Message ----- >> From: "Tomas Sedovic" >> To: rdo-list at redhat.com >> Sent: Tuesday, May 26, 2015 8:54:36 AM >> Subject: [Rdo-list] RDO + floating IPs >> For the subnet I need the CIDR, gateway and the floating IP range. I'm >> not sure if I'm getting these right: >> >> * CIDR: `ip addr`, find my network interface (enp0s25), get it's inet >> value: 10.40.128.44/20 >> * Gateway: `ip route | grep default` -> default via 10.40.143.254 dev >> enp0s25 >> * And these 5 IP addresses are assigned to me: 10.40.128.80-10.40.128.84 >> >> neutron subnet-create mynetwork 10.40.128.44/20 --name mysubnet > --enable_dhcp=False --allocation_pool > start=10.40.128.80,end=10.40.128.84 --gateway 10.40.143.254 > > I'm not certain if this is the root cause or not, but I believe the subnet should be created as 10.40.128.0/20 rather than 10.40.128.44/20. Thanks. It didn't seem to have any effect but I'll keep using it in the future. > > > From tsedovic at redhat.com Wed May 27 10:13:08 2015 From: tsedovic at redhat.com (Tomas Sedovic) Date: Wed, 27 May 2015 12:13:08 +0200 Subject: [Rdo-list] RDO + floating IPs In-Reply-To: <20150526151654.GI15019@tesla> References: <55647B1C.9040208@redhat.com> <20150526151654.GI15019@tesla> Message-ID: <556598B4.5030705@redhat.com> On 05/26/2015 05:16 PM, Kashyap Chamarthy wrote: > On Tue, May 26, 2015 at 03:54:36PM +0200, Tomas Sedovic wrote: >> Hey everyone, >> >> I tried to get RDO set up with floating IP addresses, but I'm running into >> problems I'm not sure how to debug (not that familiar with networking and >> Neutron). >> >> I followed these guides on a clean Fedora 21 x86_64 server: >> >> https://www.rdoproject.org/Quickstart >> https://www.rdoproject.org/Floating_IP_range >> > [. . .] > >> once all 20 requests failed, it got to a login screen, but I could not ping >> or SSH into it: >> >> # ping 10.40.128.81 >> PING 10.40.128.81 (10.40.128.81) 56(84) bytes of data. >> From 10.40.128.44 icmp_seq=1 Destination Host Unreachable >> From 10.40.128.44 icmp_seq=2 Destination Host Unreachable >> From 10.40.128.44 icmp_seq=3 Destination Host Unreachable >> From 10.40.128.44 icmp_seq=4 Destination Host Unreachable >> >> # ssh cirros at 10.40.128.81 >> ssh: connect to host 10.40.128.81 port 22: No route to host > > It could be any no. of reasons, as I don't know what's going on in your > network. But, your steps sound reasonably correct. 
Just for comparision, > that's what I normally do: > > # Create new private network: > $ neutron net-create $privnetname > > # Create a subnet > neutron subnet-create $privnetname \ > $subnetspace/24 \ > --name $privsubnetname > > # Create a router > neutron router-create $routername > > # Associate the router to the external network by setting its gateway > # NOTE: This assumes the external network name is 'ext' > > export EXT_NET=$(neutron net-list | grep ext | awk '{print $2;}') > export PRIV_NET=$(neutron subnet-list | grep $privsubnetname | awk '{print $2;}') > export ROUTER_ID=$(neutron router-list | grep $routername | awk '{print $2;}' > > neutron router-gateway-set \ > $ROUTER_ID $EXT_NET_ID > > neutron router-interface-add \ > $ROUTER_ID $PRIV_NET_ID > > > # Add Neutron security groups for this test tenant > neutron security-group-rule-create \ > --protocol icmp \ > --direction ingress \ > --remote-ip-prefix 0.0.0.0/0 \ > default > > neutron security-group-rule-create \ > --protocol tcp \ > --port-range-min 22 \ > --port-range-max 22 \ > --direction ingress \ > --remote-ip-prefix 0.0.0.0/0 \ > default > > > On a related note, all the above, inlcuding creating the Keystone > tenant, user, etc is put together in this trivial script[1], which > allows me to create tenant networks this way: > > $ ./create-new-tenant-network.sh \ > demoten1 tuser1 \ > 14.0.0.0 trouter1 \ > priv-net1 priv-subnet1 > > It assumes your external network is named as "ext", but you can modify > the script trivially to change that. > > > [1] https://github.com/kashyapc/ostack-misc/blob/master/create-new-tenant-network.sh Thanks Kashyab, much appreciated. I've tried all this out, but the result seems to be the same (timeouts in cloud-init, the VM is unreachable). When I switched the router's gateway from "ext" to "public" (a network created by packstack) and booted the VM in my private network, it got to the login screen immediately and the floating IP was pingable through `ip netns exec`. Changing the gateway back to "ext", I got the timeouts again. That seems to indicate that the issue is related to "ext" rather then the way I set up a private network or boot the VM. 
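(For the record, the namespace check I'm doing is along these lines;
a sketch, where <router-uuid> is a placeholder for the ID that
neutron router-list reports:

ip netns list | grep qrouter
ip netns exec qrouter-<router-uuid> ip addr
ip netns exec qrouter-<router-uuid> ping -c 3 10.40.143.254

i.e. list the router namespaces, then ping the upstream gateway from
inside the router namespace to see whether the external side is wired
up at all.)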
There doesn't seem to be a significant difference between "ext" and "public" networks and their subnets: # neutron net-show public +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | True | | id | 5d2a0846-4244-4d3b-ad68-033a18224459 | | mtu | 0 | | name | public | | provider:network_type | vxlan | | provider:physical_network | | | provider:segmentation_id | 10 | | router:external | True | | shared | True | | status | ACTIVE | | subnets | 5285ff33-1bed-449b-b629-8ecc5ec0f642 | | tenant_id | 3c7799abd0af430696428247d377ceaf | +---------------------------+--------------------------------------+ # neutron net-show ext +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | True | | id | 376e6c88-4752-476b-8feb-ae3346a98006 | | mtu | 0 | | name | ext | | provider:network_type | vxlan | | provider:physical_network | | | provider:segmentation_id | 12 | | router:external | True | | shared | False | | status | ACTIVE | | subnets | db336afd-8d41-4938-97ac-39ec912597df | | tenant_id | 3c7799abd0af430696428247d377ceaf | +---------------------------+--------------------------------------+ # neutron subnet-show public_subnet +-------------------+--------------------------------------------------+ | Field | Value | +-------------------+--------------------------------------------------+ | allocation_pools | {"start": "172.24.4.226", "end": "172.24.4.238"} | | cidr | 172.24.4.224/28 | | dns_nameservers | | | enable_dhcp | False | | gateway_ip | 172.24.4.225 | | host_routes | | | id | 5285ff33-1bed-449b-b629-8ecc5ec0f642 | | ip_version | 4 | | ipv6_address_mode | | | ipv6_ra_mode | | | name | public_subnet | | network_id | 5d2a0846-4244-4d3b-ad68-033a18224459 | | subnetpool_id | | | tenant_id | 3c7799abd0af430696428247d377ceaf | +-------------------+--------------------------------------------------+ # neutron subnet-show ext_subnet +-------------------+--------------------------------------------------+ | Field | Value | +-------------------+--------------------------------------------------+ | allocation_pools | {"start": "10.40.128.80", "end": "10.40.128.84"} | | cidr | 10.40.128.0/20 | | dns_nameservers | | | enable_dhcp | False | | gateway_ip | 10.40.143.254 | | host_routes | | | id | db336afd-8d41-4938-97ac-39ec912597df | | ip_version | 4 | | ipv6_address_mode | | | ipv6_ra_mode | | | name | ext_subnet | | network_id | 376e6c88-4752-476b-8feb-ae3346a98006 | | subnetpool_id | | | tenant_id | 3c7799abd0af430696428247d377ceaf | +-------------------+--------------------------------------------------+ I've also seen this: https://www.rdoproject.org/Neutron_with_existing_external_network Tried to follow it some time ago, but whenever I got to the `service network restart`, I got disconnected from my box and it was unreachable even after reboot. Is there anything else that jumps at you? Or do you have any ideas how to investigate this further? I was also thinking I could change "public"'s subnet to the floating IP range I have available, but I worry that may screw everything up. Is it worth a try? 
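If rebuilding "ext" as a provider network turns out to be the way to
go, I imagine it would look roughly like this (untested sketch; the
"extnet" physical network name and its mapping to br-ex are my
guesses, following the existing-external-network page above):

neutron net-create ext --router:external \
  --provider:network_type flat --provider:physical_network extnet
neutron subnet-create ext 10.40.128.0/20 --name ext_subnet \
  --enable_dhcp=False --gateway 10.40.143.254 \
  --allocation_pool start=10.40.128.80,end=10.40.128.84

plus bridge_mappings = extnet:br-ex in the OVS agent configuration on
the network node.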
Thanks, Tomas > From roxenham at redhat.com Wed May 27 10:28:03 2015 From: roxenham at redhat.com (Rhys Oxenham) Date: Wed, 27 May 2015 11:28:03 +0100 Subject: [Rdo-list] RDO + floating IPs In-Reply-To: <556598B4.5030705@redhat.com> References: <55647B1C.9040208@redhat.com> <20150526151654.GI15019@tesla> <556598B4.5030705@redhat.com> Message-ID: <99964837-F33F-4A83-9BD1-391131CB46B6@redhat.com> > On 27 May 2015, at 11:13, Tomas Sedovic wrote: > > On 05/26/2015 05:16 PM, Kashyap Chamarthy wrote: >> On Tue, May 26, 2015 at 03:54:36PM +0200, Tomas Sedovic wrote: >>> Hey everyone, >>> >>> I tried to get RDO set up with floating IP addresses, but I'm running into >>> problems I'm not sure how to debug (not that familiar with networking and >>> Neutron). >>> >>> I followed these guides on a clean Fedora 21 x86_64 server: >>> >>> https://www.rdoproject.org/Quickstart >>> https://www.rdoproject.org/Floating_IP_range >>> >> [. . .] >> >>> once all 20 requests failed, it got to a login screen, but I could not ping >>> or SSH into it: >>> >>> # ping 10.40.128.81 >>> PING 10.40.128.81 (10.40.128.81) 56(84) bytes of data. >>> From 10.40.128.44 icmp_seq=1 Destination Host Unreachable >>> From 10.40.128.44 icmp_seq=2 Destination Host Unreachable >>> From 10.40.128.44 icmp_seq=3 Destination Host Unreachable >>> From 10.40.128.44 icmp_seq=4 Destination Host Unreachable >>> >>> # ssh cirros at 10.40.128.81 >>> ssh: connect to host 10.40.128.81 port 22: No route to host >> >> It could be any no. of reasons, as I don't know what's going on in your >> network. But, your steps sound reasonably correct. Just for comparision, >> that's what I normally do: >> >> # Create new private network: >> $ neutron net-create $privnetname >> >> # Create a subnet >> neutron subnet-create $privnetname \ >> $subnetspace/24 \ >> --name $privsubnetname >> >> # Create a router >> neutron router-create $routername >> >> # Associate the router to the external network by setting its gateway >> # NOTE: This assumes the external network name is 'ext' >> >> export EXT_NET=$(neutron net-list | grep ext | awk '{print $2;}') >> export PRIV_NET=$(neutron subnet-list | grep $privsubnetname | awk '{print $2;}') >> export ROUTER_ID=$(neutron router-list | grep $routername | awk '{print $2;}' >> >> neutron router-gateway-set \ >> $ROUTER_ID $EXT_NET_ID >> >> neutron router-interface-add \ >> $ROUTER_ID $PRIV_NET_ID >> >> >> # Add Neutron security groups for this test tenant >> neutron security-group-rule-create \ >> --protocol icmp \ >> --direction ingress \ >> --remote-ip-prefix 0.0.0.0/0 \ >> default >> >> neutron security-group-rule-create \ >> --protocol tcp \ >> --port-range-min 22 \ >> --port-range-max 22 \ >> --direction ingress \ >> --remote-ip-prefix 0.0.0.0/0 \ >> default >> >> >> On a related note, all the above, inlcuding creating the Keystone >> tenant, user, etc is put together in this trivial script[1], which >> allows me to create tenant networks this way: >> >> $ ./create-new-tenant-network.sh \ >> demoten1 tuser1 \ >> 14.0.0.0 trouter1 \ >> priv-net1 priv-subnet1 >> >> It assumes your external network is named as "ext", but you can modify >> the script trivially to change that. >> >> >> [1] https://github.com/kashyapc/ostack-misc/blob/master/create-new-tenant-network.sh > > Thanks Kashyab, much appreciated. I've tried all this out, but the result seems to be the same (timeouts in cloud-init, the VM is unreachable). 
So it seems there are two problems here, correct me if I'm wrong:

1) VMs getting access to the metadata service

2) VMs being accessible via their floating IPs from the outside

I would say that (2) is the more important one we need to fix right now. If it's pingable from the namespace, then your overlays (or inter-node communication) and DHCP are working OK. That means it's likely the link between the external network bridge and the outside. The output below suggests that you've defined your external network as a non-provider network. Therefore, you have to tell the L3 agent the specific bridge it needs to use to route traffic.

In your L3 agent configuration file you'll have the 'external_network_bridge' option. This will need to be set to 'br-ex' (or the name of your external bridge) for the flows to be set up correctly. If this is blank, you'll need to recreate your external network as a provider network, and ensure that you have the correct bridge mappings enabled on your Neutron network node.

So I guess my question is this: what is 'external_network_bridge' set to in /etc/neutron/l3_agent.ini?

[root at stack-node1 ~]# grep external_network_bridge /etc/neutron/l3_agent.ini
# When external_network_bridge is set, each L3 agent can be associated
# networks, both the external_network_bridge and gateway_external_network_id
# external_network_bridge = br-ex
external_network_bridge = br-ex

Cheers
Rhys

>
>
> When I switched the router's gateway from "ext" to "public" (a network created by packstack) and booted the VM in my private network, it got to the login screen immediately and the floating IP was pingable through `ip netns exec`. Changing the gateway back to "ext", I got the timeouts again. That seems to indicate that the issue is related to "ext" rather than the way I set up a private network or boot the VM.
> > There doesn't seem to be a significant difference between "ext" and "public" networks and their subnets: > > # neutron net-show public > +---------------------------+--------------------------------------+ > | Field | Value | > +---------------------------+--------------------------------------+ > | admin_state_up | True | > | id | 5d2a0846-4244-4d3b-ad68-033a18224459 | > | mtu | 0 | > | name | public | > | provider:network_type | vxlan | > | provider:physical_network | | > | provider:segmentation_id | 10 | > | router:external | True | > | shared | True | > | status | ACTIVE | > | subnets | 5285ff33-1bed-449b-b629-8ecc5ec0f642 | > | tenant_id | 3c7799abd0af430696428247d377ceaf | > +---------------------------+--------------------------------------+ > # neutron net-show ext > +---------------------------+--------------------------------------+ > | Field | Value | > +---------------------------+--------------------------------------+ > | admin_state_up | True | > | id | 376e6c88-4752-476b-8feb-ae3346a98006 | > | mtu | 0 | > | name | ext | > | provider:network_type | vxlan | > | provider:physical_network | | > | provider:segmentation_id | 12 | > | router:external | True | > | shared | False | > | status | ACTIVE | > | subnets | db336afd-8d41-4938-97ac-39ec912597df | > | tenant_id | 3c7799abd0af430696428247d377ceaf | > +---------------------------+--------------------------------------+ > # neutron subnet-show public_subnet > +-------------------+--------------------------------------------------+ > | Field | Value | > +-------------------+--------------------------------------------------+ > | allocation_pools | {"start": "172.24.4.226", "end": "172.24.4.238"} | > | cidr | 172.24.4.224/28 | > | dns_nameservers | | > | enable_dhcp | False | > | gateway_ip | 172.24.4.225 | > | host_routes | | > | id | 5285ff33-1bed-449b-b629-8ecc5ec0f642 | > | ip_version | 4 | > | ipv6_address_mode | | > | ipv6_ra_mode | | > | name | public_subnet | > | network_id | 5d2a0846-4244-4d3b-ad68-033a18224459 | > | subnetpool_id | | > | tenant_id | 3c7799abd0af430696428247d377ceaf | > +-------------------+--------------------------------------------------+ > # neutron subnet-show ext_subnet > +-------------------+--------------------------------------------------+ > | Field | Value | > +-------------------+--------------------------------------------------+ > | allocation_pools | {"start": "10.40.128.80", "end": "10.40.128.84"} | > | cidr | 10.40.128.0/20 | > | dns_nameservers | | > | enable_dhcp | False | > | gateway_ip | 10.40.143.254 | > | host_routes | | > | id | db336afd-8d41-4938-97ac-39ec912597df | > | ip_version | 4 | > | ipv6_address_mode | | > | ipv6_ra_mode | | > | name | ext_subnet | > | network_id | 376e6c88-4752-476b-8feb-ae3346a98006 | > | subnetpool_id | | > | tenant_id | 3c7799abd0af430696428247d377ceaf | > +-------------------+--------------------------------------------------+ > > > I've also seen this: https://www.rdoproject.org/Neutron_with_existing_external_network > > Tried to follow it some time ago, but whenever I got to the `service network restart`, I got disconnected from my box and it was unreachable even after reboot. > > > Is there anything else that jumps at you? Or do you have any ideas how to investigate this further? > > I was also thinking I could change "public"'s subnet to the floating IP range I have available, but I worry that may screw everything up. Is it worth a try? 
> > Thanks, > Tomas > > >> > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From ifarkas at redhat.com Wed May 27 10:30:13 2015 From: ifarkas at redhat.com (Imre Farkas) Date: Wed, 27 May 2015 12:30:13 +0200 Subject: [Rdo-list] introspection and raid configuration In-Reply-To: <55659099.1020901@redhat.com> References: <55648647.3030000@redhat.com> <556494BA.6020208@redhat.com> <55657481.5010407@redhat.com> <55657756.7020601@redhat.com> <55658D0D.9010904@redhat.com> <55659099.1020901@redhat.com> Message-ID: <55659CB5.8040501@redhat.com> On 05/27/2015 11:38 AM, Dmitry Tantsur wrote: > On 05/27/2015 11:23 AM, Imre Farkas wrote: >> On 05/27/2015 09:50 AM, Dmitry Tantsur wrote: >>> On 05/27/2015 09:38 AM, Imre Farkas wrote: >>>> On 05/26/2015 05:43 PM, Dmitry Tantsur wrote: >>>>> On 05/26/2015 04:42 PM, Imre Farkas wrote: >>>>>> Hi all, >>>>>> >>>>>> I would like to gather some feedback on the current worfklow >>>>>> regarding >>>>>> the subject. RAID configuration on DRAC machines works as follows (it >>>>>> assumes there's no RAID volume on the machine): >>>>>> 1. user defines the deployment profile and the target raid >>>>>> configuration >>>>>> for each profile >>>>>> 2. ironic-discoverd introspects the node to gather facts >>>>>> 3. ach-match picks a deployment profile based on the gathered facts >>>>>> 4. instack triggers the raid configuration based on the selected >>>>>> profile >>>>>> >>>>>> A bug[1] has been discovered regarding step 2: ironic-discoverd fails >>>>>> because it tries to figure out the size of the local disk but it >>>>>> can't >>>>>> find any as no volume exists on the RAID controller yet. This is a >>>>>> chicken-and-egg problem because ironic-discoverd doesn't work if the >>>>>> RAID volume(s) has not been configured but the RAID configuration >>>>>> can't >>>>>> be triggered if ironic-discoverd hasn't gathered the facts about the >>>>>> node. >>>>>> >>>>>> Few possible workarounds: >>>>>> #1: make saving the local disk size optional in the standard >>>>>> plugin in >>>>>> ironic-discoverd. The downside is that the standard plugin is >>>>>> supposed >>>>>> to enroll nodes in ironic with all attributes necessary for >>>>>> scheduling. >>>>>> This assumption might fail with this solution. >>>>> >>>>> -1 will never get upstream for the reasons you stated >>>>> >>>>>> >>>>>> #2: run discovery multiple times with different set of plugins. The >>>>>> run >>>>>> before RAID configuration would exclude the standard plugin while the >>>>>> run afterwards could potentially exclude others. The parameters >>>>>> passed >>>>>> by the user to ironic-discoverd for each run need to be properly >>>>>> documented. It would slow down the installation because each run >>>>>> requires a reboot which takes a lot of time on bare metal. >>>>> >>>>> Possible, but better (IMO) idea below. >>>>> >>>>>> >>>>>> #3: name your suggestion! >>>>> >>>>> #3. modify your existing root_device_hint plugin to insert a fake >>>>> local_gb value (with issuing a warning), and then put it to the >>>>> beginning of the plugin pipeline. WDYT? >>>>> >>>> >>>> I would avoid that for the very same reason as we rejected option >>>> #1: it >>>> would do things quite differently than it is expected. However, >>>> building >>>> on top of this idea, we could introduce a fake_local_gb plugin and >>>> enable it as default in instack. Thoughts? 
>>> >>> ETOOMANYPLUGINS :D >>> >>> I would say it's fine for root_device_hint plugin. It's not enabled by >>> default upstream, and it's a normal flow involving it. >>> >> >> Patch proposed: https://review.openstack.org/#/c/185896/ > > Is it fine if we carry it downstream for some time? During rename I > don't want to land anything upstream. I hope to be finished in a couple > of days. > Yes, that was my thinking as well. >> >>>> >>>> Imre >>>> >>>> >>>>>> >>>>>> Any thoughts/preference on the above described workarounds? >>>>>> >>>>>> Thanks, >>>>>> Imre >>>>>> >>>>>> >>>>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1222124 >>>>> >>>> >>> >> > From tsedovic at redhat.com Wed May 27 10:42:35 2015 From: tsedovic at redhat.com (Tomas Sedovic) Date: Wed, 27 May 2015 12:42:35 +0200 Subject: [Rdo-list] RDO + floating IPs In-Reply-To: <99964837-F33F-4A83-9BD1-391131CB46B6@redhat.com> References: <55647B1C.9040208@redhat.com> <20150526151654.GI15019@tesla> <556598B4.5030705@redhat.com> <99964837-F33F-4A83-9BD1-391131CB46B6@redhat.com> Message-ID: <55659F9B.7020202@redhat.com> On 05/27/2015 12:28 PM, Rhys Oxenham wrote: > >> On 27 May 2015, at 11:13, Tomas Sedovic wrote: >> >> On 05/26/2015 05:16 PM, Kashyap Chamarthy wrote: >>> On Tue, May 26, 2015 at 03:54:36PM +0200, Tomas Sedovic wrote: >>>> Hey everyone, >>>> >>>> I tried to get RDO set up with floating IP addresses, but I'm running into >>>> problems I'm not sure how to debug (not that familiar with networking and >>>> Neutron). >>>> >>>> I followed these guides on a clean Fedora 21 x86_64 server: >>>> >>>> https://www.rdoproject.org/Quickstart >>>> https://www.rdoproject.org/Floating_IP_range >>>> >>> [. . .] >>> >>>> once all 20 requests failed, it got to a login screen, but I could not ping >>>> or SSH into it: >>>> >>>> # ping 10.40.128.81 >>>> PING 10.40.128.81 (10.40.128.81) 56(84) bytes of data. >>>> From 10.40.128.44 icmp_seq=1 Destination Host Unreachable >>>> From 10.40.128.44 icmp_seq=2 Destination Host Unreachable >>>> From 10.40.128.44 icmp_seq=3 Destination Host Unreachable >>>> From 10.40.128.44 icmp_seq=4 Destination Host Unreachable >>>> >>>> # ssh cirros at 10.40.128.81 >>>> ssh: connect to host 10.40.128.81 port 22: No route to host >>> >>> It could be any no. of reasons, as I don't know what's going on in your >>> network. But, your steps sound reasonably correct. 
Just for comparision, >>> that's what I normally do: >>> >>> # Create new private network: >>> $ neutron net-create $privnetname >>> >>> # Create a subnet >>> neutron subnet-create $privnetname \ >>> $subnetspace/24 \ >>> --name $privsubnetname >>> >>> # Create a router >>> neutron router-create $routername >>> >>> # Associate the router to the external network by setting its gateway >>> # NOTE: This assumes the external network name is 'ext' >>> >>> export EXT_NET=$(neutron net-list | grep ext | awk '{print $2;}') >>> export PRIV_NET=$(neutron subnet-list | grep $privsubnetname | awk '{print $2;}') >>> export ROUTER_ID=$(neutron router-list | grep $routername | awk '{print $2;}' >>> >>> neutron router-gateway-set \ >>> $ROUTER_ID $EXT_NET_ID >>> >>> neutron router-interface-add \ >>> $ROUTER_ID $PRIV_NET_ID >>> >>> >>> # Add Neutron security groups for this test tenant >>> neutron security-group-rule-create \ >>> --protocol icmp \ >>> --direction ingress \ >>> --remote-ip-prefix 0.0.0.0/0 \ >>> default >>> >>> neutron security-group-rule-create \ >>> --protocol tcp \ >>> --port-range-min 22 \ >>> --port-range-max 22 \ >>> --direction ingress \ >>> --remote-ip-prefix 0.0.0.0/0 \ >>> default >>> >>> >>> On a related note, all the above, inlcuding creating the Keystone >>> tenant, user, etc is put together in this trivial script[1], which >>> allows me to create tenant networks this way: >>> >>> $ ./create-new-tenant-network.sh \ >>> demoten1 tuser1 \ >>> 14.0.0.0 trouter1 \ >>> priv-net1 priv-subnet1 >>> >>> It assumes your external network is named as "ext", but you can modify >>> the script trivially to change that. >>> >>> >>> [1] https://github.com/kashyapc/ostack-misc/blob/master/create-new-tenant-network.sh >> >> Thanks Kashyab, much appreciated. I've tried all this out, but the result seems to be the same (timeouts in cloud-init, the VM is unreachable). > > So it seems there?s two problems here, correct me if I?m wrong - > > 1) VM?s getting access to the metadata service > > 2) VM?s accessible via their floating IP?s from the outside > > I would say that (2) is the more important one we need to fix right now. If it?s pingable from the namespace then your overlays (or inter-node communication) and DHCP is working OK. That means that it?s likely the link between the external network bridge and the outside. From the output below, it suggests that you?ve defined your external network as a non-provider network. Therefore, you have to tell the L3 agent the specific bridge it needs to use to route traffic. Thanks. For the record, it's only pingable when I route my private network to "public". When the router's gateway is set to "ext", the floating IP isn't pingable at all (including from the namespace): ip netns exec qrouter-f2bfd294-c90c-4c98-9b6d-b33e28b7c9ef ping 10.40.128.84 connect: Network is unreachable (f2bfd...9ef is the ID of the router between the private and external network, 10.40.128.84 is the floting IP). > > In your L3 agent configuration file you?ll have ?external_network_bridge? option. This will need to be set to ?br-ex? (or the name of your external bridge) for the flows to be set up correctly. If this is blank, you?ll need to recreate your external network as a provider network, and ensure that you have the correct bridge mappings enabled on your Neutron network node. > > So I guess my question is this? what is ?external_network_bridge? set to in /etc/neutron/l3-agent.ini? 
> > [root at stack-node1 ~]# grep external_network_bridge /etc/neutron/l3_agent.ini > # When external_network_bridge is set, each L3 agent can be associated > # networks, both the external_network_bridge and gateway_external_network_id > # external_network_bridge = br-ex > external_network_bridge = br-ex It is indeed set to br-ex. > > Cheers > Rhys > >> >> >> When I switched the router's gateway from "ext" to "public" (a network created by packstack) and booted the VM in my private network, it got to the login screen immediately and the floating IP was pingable through `ip netns exec`. Changing the gateway back to "ext", I got the timeouts again. That seems to indicate that the issue is related to "ext" rather then the way I set up a private network or boot the VM. >> >> There doesn't seem to be a significant difference between "ext" and "public" networks and their subnets: >> >> # neutron net-show public >> +---------------------------+--------------------------------------+ >> | Field | Value | >> +---------------------------+--------------------------------------+ >> | admin_state_up | True | >> | id | 5d2a0846-4244-4d3b-ad68-033a18224459 | >> | mtu | 0 | >> | name | public | >> | provider:network_type | vxlan | >> | provider:physical_network | | >> | provider:segmentation_id | 10 | >> | router:external | True | >> | shared | True | >> | status | ACTIVE | >> | subnets | 5285ff33-1bed-449b-b629-8ecc5ec0f642 | >> | tenant_id | 3c7799abd0af430696428247d377ceaf | >> +---------------------------+--------------------------------------+ >> # neutron net-show ext >> +---------------------------+--------------------------------------+ >> | Field | Value | >> +---------------------------+--------------------------------------+ >> | admin_state_up | True | >> | id | 376e6c88-4752-476b-8feb-ae3346a98006 | >> | mtu | 0 | >> | name | ext | >> | provider:network_type | vxlan | >> | provider:physical_network | | >> | provider:segmentation_id | 12 | >> | router:external | True | >> | shared | False | >> | status | ACTIVE | >> | subnets | db336afd-8d41-4938-97ac-39ec912597df | >> | tenant_id | 3c7799abd0af430696428247d377ceaf | >> +---------------------------+--------------------------------------+ >> # neutron subnet-show public_subnet >> +-------------------+--------------------------------------------------+ >> | Field | Value | >> +-------------------+--------------------------------------------------+ >> | allocation_pools | {"start": "172.24.4.226", "end": "172.24.4.238"} | >> | cidr | 172.24.4.224/28 | >> | dns_nameservers | | >> | enable_dhcp | False | >> | gateway_ip | 172.24.4.225 | >> | host_routes | | >> | id | 5285ff33-1bed-449b-b629-8ecc5ec0f642 | >> | ip_version | 4 | >> | ipv6_address_mode | | >> | ipv6_ra_mode | | >> | name | public_subnet | >> | network_id | 5d2a0846-4244-4d3b-ad68-033a18224459 | >> | subnetpool_id | | >> | tenant_id | 3c7799abd0af430696428247d377ceaf | >> +-------------------+--------------------------------------------------+ >> # neutron subnet-show ext_subnet >> +-------------------+--------------------------------------------------+ >> | Field | Value | >> +-------------------+--------------------------------------------------+ >> | allocation_pools | {"start": "10.40.128.80", "end": "10.40.128.84"} | >> | cidr | 10.40.128.0/20 | >> | dns_nameservers | | >> | enable_dhcp | False | >> | gateway_ip | 10.40.143.254 | >> | host_routes | | >> | id | db336afd-8d41-4938-97ac-39ec912597df | >> | ip_version | 4 | >> | ipv6_address_mode | | >> | ipv6_ra_mode | | >> | name | 
ext_subnet | >> | network_id | 376e6c88-4752-476b-8feb-ae3346a98006 | >> | subnetpool_id | | >> | tenant_id | 3c7799abd0af430696428247d377ceaf | >> +-------------------+--------------------------------------------------+ >> >> >> I've also seen this: https://www.rdoproject.org/Neutron_with_existing_external_network >> >> Tried to follow it some time ago, but whenever I got to the `service network restart`, I got disconnected from my box and it was unreachable even after reboot. >> >> >> Is there anything else that jumps at you? Or do you have any ideas how to investigate this further? >> >> I was also thinking I could change "public"'s subnet to the floating IP range I have available, but I worry that may screw everything up. Is it worth a try? >> >> Thanks, >> Tomas >> >> >>> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > From apevec at gmail.com Wed May 27 11:41:27 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 27 May 2015 13:41:27 +0200 Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting - CANCEL today Message-ID: > RDO packaging meeting on 2015-05-27 from 15:00:00 to 16:00:00 UTC > At rdo at irc.freenode.net > > The meeting will be about: > RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging)) > > Every week on #rdo on freenode I'd like to cancel today's meeting unless there's an urgent topic to discuss (if so please add it to above etherpad now) I'm still trying to digest summit discussions and collect info from folks who attended in order to produce meaningful agenda for the discussion next week! Cheers, Alan From jason.dobies at redhat.com Wed May 27 13:10:01 2015 From: jason.dobies at redhat.com (Jay Dobies) Date: Wed, 27 May 2015 09:10:01 -0400 Subject: [Rdo-list] RDO-Manager ovecloud change existing plan, how? In-Reply-To: References: <5563093F.9000206@redhat.com> <0899b727-24aa-45c0-a206-b42df66afd89@redhat.com> <5564C479.2030402@redhat.com> Message-ID: <5565C229.9000804@redhat.com> On 05/27/2015 06:01 AM, Pedro Sousa wrote: > Hi Jay, > > thank you for you answer, in fact I was running instack-deploy-overcloud > after those commands, I didn't realize it deleted the plan, so a couple > of questions: > > - Should I run something like "heat stack-create -f > tuskar_templates/plan.yaml -e tuskar_templates/environment.yaml" > instead? Will it work? Almost. You'll need to download the templates with your configuration changes from Tuskar first: tuskar plan-templates -O tuskar_templates $PLAN_ID So to recap what you're doing: - Updating the plan configuration in Tuskar (the calls in your original e-mail) - Downloading the latest copy of the plan and its configuration from Tuskar (the call I listed above) - Send that to Heat to create the stack (the call you listed above) > - Or should I wait for that to be ready in UI, if I understood > correctly, and test it from there? 
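Testing from the UI should also be possible once the plan create moves
to install time. For reference, the full CLI round trip today would
look something like this (a sketch; the plan-list parsing is only
illustrative, any way of grabbing the plan UUID is fine):

PLAN_ID=$(tuskar plan-list | awk '/overcloud/ {print $2}')
tuskar plan-update -A Controller-1::NeutronEnableTunnelling=False $PLAN_ID
tuskar plan-update -A Compute-1::NeutronEnableTunnelling=False $PLAN_ID
tuskar plan-templates -O tuskar_templates $PLAN_ID
heat stack-create -f tuskar_templates/plan.yaml \
  -e tuskar_templates/environment.yaml overcloud

Since nothing in that sequence calls instack-deploy-overcloud, nothing
recreates the plan underneath you.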
> > > > Thanks, > Pedro Sousa > > On Tue, May 26, 2015 at 8:07 PM, Jay Dobies > wrote: > > > > On 05/26/2015 02:19 PM, Pedro Sousa wrote: > > Hi Hugh, > > I've tried to change the plan: > > # tuskar plan-update -A Compute-1::NeutronEnableTunnelling=False > c91b13a2-afd6-4eb2-9a78-46335190519d > # tuskar plan-update -A Controller-1::NeutronEnableTunnelling=False > c91b13a2-afd6-4eb2-9a78-46335190519d > # export NEUTRON_NETWORK_TYPE=vlan > > But the stack failed, I also see that plan-update doesn't work: > > > It depends on what you did between the lines above and the line below. > > If you're making the updates above and then running > instack-deploy-overcloud, it's not going to work. That script > deletes the plan and recreates it, losing your updates in the process. > > That logic (role addition and plan create) is being moved out of > instack-deploy-overcloud to an installation-time step to enable this > sort of thing (not fully sure the state of that, but the UI needs > the plan create to be done during install as well). > > [stack at instack ~]$ heat stack-show > 4dd74e83-e90f-437f-b8b5-ac45d6ada9db > | grep Tunnel > | | "Controller-1::NeutronEnableTunnelling": > "True", > > Regards > Pedro Sousa > > > > > On Tue, May 26, 2015 at 6:39 PM, Hugh Brock > >> wrote: > > It should be... But we haven't really tested it yet, to my > knowledge. It's an important configuration that we want to > support. > > If you are able to sort it out and past your results here, that > would be great! > > -Hugh > > Sent from my mobile, please pardon the top posting. > > *From:* Pedro Sousa >> > *Sent:* May 26, 2015 7:28 PM > *To:* Giulio Fidente > *Cc:* Marios Andreou;rdo-list at redhat.com > > >;Jason Dobies > *Subject:* Re: [Rdo-list] RDO-Manager ovecloud change > existing plan, > how? > > > Hi all, > > thanks to Giulio recommendations in #rdo I've managed to > change some > parameters: > > #heat stack-delete overcloud > #export NEUTRON_TUNNEL_TYPES=vxlan > #export NEUTRON_TUNNEL_TYPE=vxlan > #export NEUTRON_NETWORK_TYPE=vxlan > #instack-deploy-overcloud --tuskar > > This works for TUSKAR_PARAMETERS contained in the > instack-deploy-overcloud script (please correct me if I'm > wrong). > > My question is if it's possible to use VLAN for tenants, > using a > VLAN range and disable GRE/VXLAN tunneling. > > Thanks, > Pedro Sousa > > > On Mon, May 25, 2015 at 12:36 PM, Giulio Fidente > > >> wrote: > > On 05/25/2015 01:09 PM, Pedro Sousa wrote: > > Hi all, > > I've deployed rdo-manager in a virt env and > everything is > working fine > except the vnc console which is alreday an open bug > for that. > > Now I would like to change some parameters on my > deployment, > let's say I > wan't to disable NeutronTunneling, I wan't to use > VLAN for > tenants and > use 1500 MTU on dnsmasq. > > So I downloaded the plan: > > #tuskar plan-templates -O /tmp uuid > > changed plan.yaml, environment.yaml, > provider-Controller-1.yaml, > provider-Compute-1.yaml. > > than I ran the stack: > > # heat stack-create -f tmp/plan.yaml -e > tmp/environment.yaml > overcloud > > The overcloud is deployed fine but the values aren't > changed. What I'm > missing here? > > > hi, > > if you launch stack-create manually the newly created > overcloud > is not reprovisioned with the initial keystone > endpoints/users/roles ... 
to get an usable overcloud > you should > launch instack-deploy-overcloud again > > so you can change the defaults for the various params by > patching the tuskar plan with 'tuskar plan-update' see [1] > > yet some of these are automatically parsed from ENV > vars, like > NEUTRON_TUNNEL_TYPES and NEUTRON_NETWORK_TYPE see [2] > > the NeutronDnsmasqOptions param instead is not parsed > from any > ENV var, so you're forced to use 'tuskar plan-update' > > I'm adding a couple of guys on CC who migh help but, > let us know > how it goes! > > 1. > https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L274 > > 2. > https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L205-L208 > -- > Giulio Fidente > GPG KEY: 08D733BA > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > From mattdm at fedoraproject.org Wed May 27 13:12:18 2015 From: mattdm at fedoraproject.org (Matthew Miller) Date: Wed, 27 May 2015 09:12:18 -0400 Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> Message-ID: <20150527131218.GA26054@mattdm.org> On Wed, May 27, 2015 at 11:22:56AM +0200, Ha?kel wrote: > Feature-wise, pkgdb2 does the job but one issue is that RDO is a layered > product over two distros (CentOS, Fedora) and various releases. > That's an unsupported scenario by pkgdb2, and since the CentOS rise, > we're starting to have more and more cross-distro layered products. Not sure I follow that ? it seems similar to the existing pkgdb EPEL and Fedora branches, isn't it? -- Matthew Miller Fedora Project Leader From lars at redhat.com Wed May 27 15:18:54 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 27 May 2015 11:18:54 -0400 Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: <20150527004228.GA14433@mattdm.org> References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> Message-ID: <20150527151854.GA27921@redhat.com> On Tue, May 26, 2015 at 08:42:29PM -0400, Matthew Miller wrote: > > officially identify what's in RDO (to avoid duplicates) and give people > > a way of identifying who's responsible if patches/work is needed (and > > who to contact to offer help assisting maintenance). > > Possibly this is too heavyweight, but it may be that pkdb2 > could be of help here. +1, I was just going to write and say that (a) yes, absolutely, we need that and (b) isn't there any way we can take advantage of the existing Fedora infrastructure, which does just about all of this already? :) -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From pgsousa at gmail.com Wed May 27 18:12:21 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Wed, 27 May 2015 19:12:21 +0100 Subject: [Rdo-list] RDO-Manager ovecloud change existing plan, how? In-Reply-To: <5565C229.9000804@redhat.com> References: <5563093F.9000206@redhat.com> <0899b727-24aa-45c0-a206-b42df66afd89@redhat.com> <5564C479.2030402@redhat.com> <5565C229.9000804@redhat.com> Message-ID: Hi Jay, you're right, in fact it applies the changes according to your instructions, however the stack failed. This is what I did: 1. heat stack-delete overcloud 2. tuskar plan-update -A Compute-1::NeutronEnableTunnelling=False 6f124eac-3926-4e48-9e22-06791a74651f 3. tuskar plan-update -A Controller-1::NeutronEnableTunnelling=False 6f124eac-3926-4e48-9e22-06791a74651f 4. tuskar plan-update -A Controller-1::NeutronNetworkType=vlan 6f124eac-3926-4e48-9e22-06791a74651f 5. tuskar plan-update -A Compute-1::NeutronNetworkType=vlan 6f124eac-3926-4e48-9e22-06791a74651f 6. tuskar plan-templates -O /home/stack/aprov 6f124eac-3926-4e48-9e22-06791a74651f 7. heat stack-create -f aprov/plan.yaml -e aprov/environment.yaml overcloud As I explained my idea is to use Vlans for tenants users. After the failed deployment I logged in into overcloud controller and I saw neutron installed but It didn't created /etc/neutron/plugin.ini. Also I see that I only have br-ex bridge, no br-int. What am I missing here? Do I need to changed something else? Thanks, Pedro Sousa On Wed, May 27, 2015 at 2:10 PM, Jay Dobies wrote: > > > On 05/27/2015 06:01 AM, Pedro Sousa wrote: > >> Hi Jay, >> >> thank you for you answer, in fact I was running instack-deploy-overcloud >> after those commands, I didn't realize it deleted the plan, so a couple >> of questions: >> >> - Should I run something like "heat stack-create -f >> tuskar_templates/plan.yaml -e tuskar_templates/environment.yaml" >> instead? Will it work? >> > > Almost. You'll need to download the templates with your configuration > changes from Tuskar first: > > tuskar plan-templates -O tuskar_templates $PLAN_ID > > So to recap what you're doing: > > - Updating the plan configuration in Tuskar (the calls in your original > e-mail) > - Downloading the latest copy of the plan and its configuration from > Tuskar (the call I listed above) > - Send that to Heat to create the stack (the call you listed above) > > - Or should I wait for that to be ready in UI, if I understood >> correctly, and test it from there? >> >> >> >> Thanks, >> Pedro Sousa >> >> On Tue, May 26, 2015 at 8:07 PM, Jay Dobies > > wrote: >> >> >> >> On 05/26/2015 02:19 PM, Pedro Sousa wrote: >> >> Hi Hugh, >> >> I've tried to change the plan: >> >> # tuskar plan-update -A Compute-1::NeutronEnableTunnelling=False >> c91b13a2-afd6-4eb2-9a78-46335190519d >> # tuskar plan-update -A >> Controller-1::NeutronEnableTunnelling=False >> c91b13a2-afd6-4eb2-9a78-46335190519d >> # export NEUTRON_NETWORK_TYPE=vlan >> >> But the stack failed, I also see that plan-update doesn't work: >> >> >> It depends on what you did between the lines above and the line below. >> >> If you're making the updates above and then running >> instack-deploy-overcloud, it's not going to work. That script >> deletes the plan and recreates it, losing your updates in the process. 
>> >> That logic (role addition and plan create) is being moved out of >> instack-deploy-overcloud to an installation-time step to enable this >> sort of thing (not fully sure the state of that, but the UI needs >> the plan create to be done during install as well). >> >> [stack at instack ~]$ heat stack-show >> 4dd74e83-e90f-437f-b8b5-ac45d6ada9db >> | grep Tunnel >> | | >> "Controller-1::NeutronEnableTunnelling": >> "True", >> >> Regards >> Pedro Sousa >> >> >> >> >> On Tue, May 26, 2015 at 6:39 PM, Hugh Brock > >> >> wrote: >> >> It should be... But we haven't really tested it yet, to my >> knowledge. It's an important configuration that we want to >> support. >> >> If you are able to sort it out and past your results here, >> that >> would be great! >> >> -Hugh >> >> Sent from my mobile, please pardon the top posting. >> >> *From:* Pedro Sousa > > >> >> *Sent:* May 26, 2015 7:28 PM >> *To:* Giulio Fidente >> *Cc:* Marios Andreou;rdo-list at redhat.com >> >> > >;Jason Dobies >> *Subject:* Re: [Rdo-list] RDO-Manager ovecloud change >> existing plan, >> how? >> >> >> Hi all, >> >> thanks to Giulio recommendations in #rdo I've managed to >> change some >> parameters: >> >> #heat stack-delete overcloud >> #export NEUTRON_TUNNEL_TYPES=vxlan >> #export NEUTRON_TUNNEL_TYPE=vxlan >> #export NEUTRON_NETWORK_TYPE=vxlan >> #instack-deploy-overcloud --tuskar >> >> This works for TUSKAR_PARAMETERS contained in the >> instack-deploy-overcloud script (please correct me if I'm >> wrong). >> >> My question is if it's possible to use VLAN for tenants, >> using a >> VLAN range and disable GRE/VXLAN tunneling. >> >> Thanks, >> Pedro Sousa >> >> >> On Mon, May 25, 2015 at 12:36 PM, Giulio Fidente >> >> >> wrote: >> >> On 05/25/2015 01:09 PM, Pedro Sousa wrote: >> >> Hi all, >> >> I've deployed rdo-manager in a virt env and >> everything is >> working fine >> except the vnc console which is alreday an open bug >> for that. >> >> Now I would like to change some parameters on my >> deployment, >> let's say I >> wan't to disable NeutronTunneling, I wan't to use >> VLAN for >> tenants and >> use 1500 MTU on dnsmasq. >> >> So I downloaded the plan: >> >> #tuskar plan-templates -O /tmp uuid >> >> changed plan.yaml, environment.yaml, >> provider-Controller-1.yaml, >> provider-Compute-1.yaml. >> >> than I ran the stack: >> >> # heat stack-create -f tmp/plan.yaml -e >> tmp/environment.yaml >> overcloud >> >> The overcloud is deployed fine but the values aren't >> changed. What I'm >> missing here? >> >> >> hi, >> >> if you launch stack-create manually the newly created >> overcloud >> is not reprovisioned with the initial keystone >> endpoints/users/roles ... to get an usable overcloud >> you should >> launch instack-deploy-overcloud again >> >> so you can change the defaults for the various params by >> patching the tuskar plan with 'tuskar plan-update' see >> [1] >> >> yet some of these are automatically parsed from ENV >> vars, like >> NEUTRON_TUNNEL_TYPES and NEUTRON_NETWORK_TYPE see [2] >> >> the NeutronDnsmasqOptions param instead is not parsed >> from any >> ENV var, so you're forced to use 'tuskar plan-update' >> >> I'm adding a couple of guys on CC who migh help but, >> let us know >> how it goes! >> >> 1. >> >> https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L274 >> >> 2. 
>> >> https://github.com/rdo-management/instack-undercloud/blob/master/scripts/instack-deploy-overcloud#L205-L208 >> -- >> Giulio Fidente >> GPG KEY: 08D733BA >> >> >> >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Wed May 27 19:21:12 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 27 May 2015 21:21:12 +0200 Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: <20150527131218.GA26054@mattdm.org> References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527131218.GA26054@mattdm.org> Message-ID: 2015-05-27 15:12 GMT+02:00 Matthew Miller : > > Not sure I follow that ? it seems similar to the existing pkgdb EPEL > and Fedora branches, isn't it? > We're not using EPEL anymore for many reasons. 1. we can't ship multiple versions of OpenStack on RHEL/CentOS through EPEL (we support currently 2 on EL7) 2. update policy is compelling us to maintain releases that are not supported upstream 3. we moved to CentOS Community Build System for EL7 builds. If there was a possibility to improve that setup, I'm willing to give it a shot but we'd have at best to manage 2 instances of pkgdb. > > -- > Matthew Miller > > Fedora Project Leader From hguemar at fedoraproject.org Wed May 27 19:22:19 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 27 May 2015 21:22:19 +0200 Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: <20150527151854.GA27921@redhat.com> References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527151854.GA27921@redhat.com> Message-ID: 2015-05-27 17:18 GMT+02:00 Lars Kellogg-Stedman : > On Tue, May 26, 2015 at 08:42:29PM -0400, Matthew Miller wrote: >> > officially identify what's in RDO (to avoid duplicates) and give people >> > a way of identifying who's responsible if patches/work is needed (and >> > who to contact to offer help assisting maintenance). >> >> Possibly this is too heavyweight, but it may be that pkdb2 >> could be of help here. > > +1, I was just going to write and say that (a) yes, absolutely, we > need that and (b) isn't there any way we can take advantage of the > existing Fedora infrastructure, which does just about all of this > already? :) > We actually do, RDO sources are hosted in Fedora dist-git so we already leverage that infrastructure but that wouldn't extend to CentOS CBS Koji instance. H. 
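Concretely, the split Haïkel describes looks something like this (a sketch; the CBS build target and the NVR below are made up for illustration, only the two-infrastructure shape matters):

    # Sources live in Fedora dist-git:
    fedpkg clone openstack-glance

    # ...but EL7 builds go through the separate CentOS CBS Koji
    # instance, with its own build targets (hypothetical target name):
    cbs build cloud7-openstack-kilo-el7 openstack-glance-2015.1.0-1.el7.src.rpm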
> --
> Lars Kellogg-Stedman | larsks @ {freenode,twitter,github}
> Cloud Engineering / OpenStack | http://blog.oddbit.com/
>
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From mattdm at fedoraproject.org  Wed May 27 19:24:44 2015
From: mattdm at fedoraproject.org (Matthew Miller)
Date: Wed, 27 May 2015 15:24:44 -0400
Subject: [Rdo-list] Packaging the big tent (or at least part of it)
In-Reply-To:
References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527131218.GA26054@mattdm.org>
Message-ID: <20150527192444.GA852@mattdm.org>

On Wed, May 27, 2015 at 09:21:12PM +0200, Haïkel wrote:
> > Not sure I follow that -- it seems similar to the existing pkgdb EPEL
> > and Fedora branches, isn't it?
[...]
> If there was a possibility to improve that setup, I'm willing to give
> it a shot but we'd have at best to manage 2 instances
> of pkgdb.

Oh, yeah, I was assuming a dedicated instance... On the theory that two
instances of the same thing is more desirable than one instance of this
and another instance of that.

--
Matthew Miller
Fedora Project Leader

From ggillies at redhat.com  Wed May 27 21:44:05 2015
From: ggillies at redhat.com (Graeme Gillies)
Date: Thu, 28 May 2015 07:44:05 +1000
Subject: [Rdo-list] Packaging the big tent (or at least part of it)
In-Reply-To: <20150527192444.GA852@mattdm.org>
References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527131218.GA26054@mattdm.org> <20150527192444.GA852@mattdm.org>
Message-ID: <55663AA5.5070607@redhat.com>

On 05/28/2015 05:24 AM, Matthew Miller wrote:
> On Wed, May 27, 2015 at 09:21:12PM +0200, Haïkel wrote:
>>> Not sure I follow that -- it seems similar to the existing pkgdb EPEL
>>> and Fedora branches, isn't it?
> [...]
>> If there was a possibility to improve that setup, I'm willing to give
>> it a shot but we'd have at best to manage 2 instances
>> of pkgdb.
>
> Oh, yeah, I was assuming a dedicated instance... On the theory that two
> instances of the same thing is more desirable than one instance of this
> and another instance of that.
>

While I think having an instance of pkgdb to maintain the ever growing
package base in RDO is a good thing (especially as people start needing
different acls for different packages for different projects), I still
feel it's worthwhile having something higher level (even if it's just a
wiki page) tracking the big tent projects themselves by name, and who is
responsible for them (maybe links to some of the packages in pkgdb).
Could also include a small description of the project, link to the git
repo, and maybe a link to the bugzilla bug component for it.

From a user perspective pkgdb is a bit too low level (trying to figure
out the package name for a project can be non-trivial sometimes, and
some projects are complicated and have more than one "main package").
All I want to know is "Is Designate in RDO? If not, has someone
committed to packaging it? What's its progress? If it's in RDO, who's
responsible for it? How do I flag problems?".
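As a straw man, one entry on such a tracking page (or in a small YAML file behind it) would not need to carry more than this (every value below is illustrative, not a real commitment):

    - project: designate
      description: DNS-as-a-Service for OpenStack
      upstream: https://github.com/openstack/designate
      packages: [openstack-designate]     # main package name(s)
      maintainers: [someone@example.com]  # who to contact / offer help
      status: in-review                   # e.g. packaged / in-review / wanted
      bugzilla-component: openstack-designate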
Then if someone (perhaps from a project itself) wants to get it into RDO, we can have a small workflow for getting it review, dist git branches setup, and the page officially updated to say this project is incoming to RDO (and then updated when it's officially in). I'll volunteer to help run/maintain such a page/db if people think it's a good idea Regards, Graeme -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From pmyers at redhat.com Wed May 27 21:47:00 2015 From: pmyers at redhat.com (Perry Myers) Date: Wed, 27 May 2015 17:47:00 -0400 Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: <55663AA5.5070607@redhat.com> References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527131218.GA26054@mattdm.org> <20150527192444.GA852@mattdm.org> <55663AA5.5070607@redhat.com> Message-ID: <55663B54.50301@redhat.com> On 05/27/2015 05:44 PM, Graeme Gillies wrote: > On 05/28/2015 05:24 AM, Matthew Miller wrote: >> On Wed, May 27, 2015 at 09:21:12PM +0200, Ha?kel wrote: >>>> Not sure I follow that ? it seems similar to the existing pkgdb EPEL >>>> and Fedora branches, isn't it? >> [...] >>> If there was a possibility to improve that setup, I'm willing to give >>> it a shot but we'd have at best to manage 2 instances >>> of pkgdb. >> >> Oh, yeah, I was assuming a dedicated instance... On the theory that two >> instances of the same thing is more desirable than one instance of this >> and another instance of that. >> > > While I think having an instance of pkgdb to maintain the ever growing > package base in RDO is a good thing (especially as people start needing > different acls for different packages for different projects), I still > feel it's worthwhile having something higher level (even if it's just a > wiki page) tracking the big tent projects themselves by name, and who is > responsible for them (maybe links to some of the packages in pkgdb). > Could also include a small description of the project, link to the git > repo, and maybe a link to the bugzilla bug component for it. > >>From a user perspective pkgdb is a bit too low level (trying to figure > out the package name for a project can be non-trivial sometimes, and > some projects are complicated and have more than one "main package"). > All I want to know is "Is Designate in RDO? If not, has someone > committed to packaging it? What's its progress? If it's in RDO, who's > responsible for it? How do I flag problems?". > > Then if someone (perhaps from a project itself) wants to get it into > RDO, we can have a small workflow for getting it review, dist git > branches setup, and the page officially updated to say this project is > incoming to RDO (and then updated when it's officially in). > > I'll volunteer to help run/maintain such a page/db if people think it's > a good idea +1, I have no objection to someone investigating pkgdb usage, but I agree with all of your points above. This is more about high level status and advertising what's in RDO and what's planned on getting into RDO Go ahead and start a page. 
I think this is something we'd want linked off of the main rdoproject.org page From Kevin.Fox at pnnl.gov Wed May 27 23:51:19 2015 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 27 May 2015 23:51:19 +0000 Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: <55663AA5.5070607@redhat.com> References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527131218.GA26054@mattdm.org> <20150527192444.GA852@mattdm.org>,<55663AA5.5070607@redhat.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01A2140B1@EX10MBOX03.pnnl.gov> +1. I had to ask who was working on the Barbican RPMS today. Would have saved some folks some trouble if I could have just looked it up. :) Thanks, Kevin ________________________________________ From: rdo-list-bounces at redhat.com [rdo-list-bounces at redhat.com] on behalf of Graeme Gillies [ggillies at redhat.com] Sent: Wednesday, May 27, 2015 2:44 PM To: Matthew Miller; Ha?kel Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] Packaging the big tent (or at least part of it) On 05/28/2015 05:24 AM, Matthew Miller wrote: > On Wed, May 27, 2015 at 09:21:12PM +0200, Ha?kel wrote: >>> Not sure I follow that ? it seems similar to the existing pkgdb EPEL >>> and Fedora branches, isn't it? > [...] >> If there was a possibility to improve that setup, I'm willing to give >> it a shot but we'd have at best to manage 2 instances >> of pkgdb. > > Oh, yeah, I was assuming a dedicated instance... On the theory that two > instances of the same thing is more desirable than one instance of this > and another instance of that. > While I think having an instance of pkgdb to maintain the ever growing package base in RDO is a good thing (especially as people start needing different acls for different packages for different projects), I still feel it's worthwhile having something higher level (even if it's just a wiki page) tracking the big tent projects themselves by name, and who is responsible for them (maybe links to some of the packages in pkgdb). Could also include a small description of the project, link to the git repo, and maybe a link to the bugzilla bug component for it. >From a user perspective pkgdb is a bit too low level (trying to figure out the package name for a project can be non-trivial sometimes, and some projects are complicated and have more than one "main package"). All I want to know is "Is Designate in RDO? If not, has someone committed to packaging it? What's its progress? If it's in RDO, who's responsible for it? How do I flag problems?". Then if someone (perhaps from a project itself) wants to get it into RDO, we can have a small workflow for getting it review, dist git branches setup, and the page officially updated to say this project is incoming to RDO (and then updated when it's officially in). 
I'll volunteer to help run/maintain such a page/db if people think it's a good idea Regards, Graeme -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com From kchamart at redhat.com Thu May 28 08:23:15 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 28 May 2015 10:23:15 +0200 Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: <55663AA5.5070607@redhat.com> References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527131218.GA26054@mattdm.org> <20150527192444.GA852@mattdm.org> <55663AA5.5070607@redhat.com> Message-ID: <20150528082315.GC17090@tesla.redhat.com> On Thu, May 28, 2015 at 07:44:05AM +1000, Graeme Gillies wrote: > On 05/28/2015 05:24 AM, Matthew Miller wrote: > > On Wed, May 27, 2015 at 09:21:12PM +0200, Ha?kel wrote: > >>> Not sure I follow that ? it seems similar to the existing pkgdb EPEL > >>> and Fedora branches, isn't it? > > [...] > >> If there was a possibility to improve that setup, I'm willing to give > >> it a shot but we'd have at best to manage 2 instances > >> of pkgdb. > > > > Oh, yeah, I was assuming a dedicated instance... On the theory that two > > instances of the same thing is more desirable than one instance of this > > and another instance of that. > > > > While I think having an instance of pkgdb to maintain the ever growing > package base in RDO is a good thing (especially as people start needing > different acls for different packages for different projects), I still > feel it's worthwhile having something higher level (even if it's just a > wiki page) tracking the big tent projects themselves by name, and who is > responsible for them (maybe links to some of the packages in pkgdb). > Could also include a small description of the project, link to the git > repo, and maybe a link to the bugzilla bug component for it. > > >From a user perspective pkgdb is a bit too low level (trying to figure > out the package name for a project can be non-trivial sometimes, and > some projects are complicated and have more than one "main package"). > All I want to know is "Is Designate in RDO? If not, has someone > committed to packaging it? What's its progress? If it's in RDO, who's > responsible for it? How do I flag problems?". Agreed with your points from a user perspective. But in the meantime, `pkgdb-cli` is not totally unusable though, I've found it useful to query for some of the questions you pose.: Install: $ dnf install packagedb-cli -y Enumerate all the packages in the OpenStaack namespace (but that doesn't enumerate clients though): $ pkgdb-cli list *openstack* To find out ACLs on a certain package (this also provides the point of contact details): $ pkgdb-cli acl openstack-nova To request for ACL on a certain package: $ pkgdb-cli request $NAME-OF-PACKAGE $ACTION Where $ACTION can be any of: 'watchbugzilla', 'watchcommits', 'commit', 'approveacls', 'all'. The help options of the CLI has all the details. But I agree, it's not entirely intuitive. 
-- /kashyap From dtantsur at redhat.com Thu May 28 08:34:29 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 28 May 2015 10:34:29 +0200 Subject: [Rdo-list] reconsidering midstream repos In-Reply-To: <20150526190624.GC4975@teletran-1> References: <20150525125555.GK4035@redhat.com> <20150526190624.GC4975@teletran-1> Message-ID: <5566D315.1080700@redhat.com> On 05/26/2015 09:06 PM, James Slagle wrote: > On Mon, May 25, 2015 at 02:55:57PM +0200, Hugh O. Brock wrote: >> Seems like the midstream repos are causing us a lot of pain with little >> gain, at least in some cases. (For example it appears the t-h-t >> midstream exists to carry a single patch that enables mongodb on >> Centos.) Is it worth discussing whether we can eliminate some of these, >> especially for upstreams like t-h-t that aren't tightly tied to the >> OpenStack release schedule? >> >> /me ducks flying bricks > > No flying bricks from me anyway :) > > I agree with what you're saying, but would like to agree on what we're actually > meaning. I think it's great that pretty much exactly what we said would happen > has happened: as Kilo nears completion, the patches that we're having to carry > for RDO-Manager is approaching 0. For the record: it's not the case for Ironic, Ironic client and even discoverd. Also 1 Nova patch won't go away. > > But before we say, let's kill the midstream repos...let's consider why we even > have them to begin with. > > We didn't come up with the idea to fork a bunch of projects because it would > make our *development* lives for RDO-Manager easier. In fact, it's exactly the > opposite, they cause pain. > > So why do we have them? Quite simply it's because of the demand that we show, > commit to, and support features in RDO-Manager prior to them necessarily > merging in upstream OpenStack projects. > > If we can lift that requirement, we could likely do away with the midstream > repos entirely. I know this is rdo-list, but in the spirit of being open, if > we don't have midstream repos for RDO-Manager, then we shouldn't have them for > OSP either, and we should only be committing to features as they land in > upstream OpenStack. > > It doesn't make sense to me to drop the midstream repos for RDO-Manager, but > then turn around and set them up privately for OSP. At least with them setup > for RDO, we're doing the development of the management product in the open. In > fact if we were to split like this, I think the pain would only intensify. > > It's pragmatic to say that anyone looking to productize OpenStack is going to > have their own bug fixes/etc that they need, and maybe even some features. But > setting up private forks as a general development principal to drive OpenStack > based product development should be avoided at all costs. > > So yes, I'm +1 on dropping the midstream repos if they're no longer needed. > > But I'm -1 on then turning around and setting them up privately so that we can > commit code to private forks because that code hasn't landed in upstream > OpenStack yet. If we can agree that this isn't needed, then I think we can do > without the midstream. Otherwise, if we're going to insist on forks to "land" > features prior to them merging upstream, let's at least keep them public. > > There's a couple other facets to the discussion as well: > > The midstream repos is a place where we can preview where the management > product is headed. 
For example, the Heat breakpoint work was available from the > midstream repos much earlier than it ended up merging into Heat/heatclient. > Some devs/users might find this sort of thing really useful, and could provide > some early feedback for RDO-Manager. > > The midstream repos are also a place where we can quickly land reverts to > unblock work. The Neutron regression that broke provisioning via Ironic this > past cycle immediately comes to mind...it took a couple of weeks before the > upstream revert even landed in Neutron. While that was broken, we carried the > proposed revert in the midstream repos so that people could install a Neutron > that actually worked for RDO-Manager. > >> >> --Hugh >> >> -- >> == Hugh Brock, hbrock at redhat.com == >> == Senior Engineering Manager, Cloud Engineering == >> == RDO Manager: Install, configure, and scale OpenStack == >> == http://rdoproject.org == >> >> "I know that you believe you understand what you think I said, but I?m >> not sure you realize that what you heard is not what I meant." >> --Robert McCloskey >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > -- > -- James Slagle > -- > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From apevec at gmail.com Thu May 28 08:53:44 2015 From: apevec at gmail.com (Alan Pevec) Date: Thu, 28 May 2015 10:53:44 +0200 Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: <20150528082315.GC17090@tesla.redhat.com> References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527131218.GA26054@mattdm.org> <20150527192444.GA852@mattdm.org> <55663AA5.5070607@redhat.com> <20150528082315.GC17090@tesla.redhat.com> Message-ID: > Enumerate all the packages in the OpenStaack namespace (but that > doesn't enumerate clients though): > > $ pkgdb-cli list *openstack* nitpick, quotes are required to avoid shell expansion: pkgdb-cli list '*openstack*' But this will list only packages already _in_ Fedora and use-case is to discover yet-unpackaged or in-review packages too and we might end up with out-of-Fedora packages in RDO. We actually do have RDO metadata in https://github.com/redhat-openstack/rdoinfo/blob/master/rdo.yml which we could extend to cover WIP packages, Jakub could you include this in your rdoinfo restructuring? Issue to solve is that we use rdoinfo to drive Delorean Trunk i.e. currently Delorean builds everything listed under packages: section. Maybe adding status flag would work e.g. - project: barbican review-link: https://bugzilla.redhat.com/show_bug.cgi?id=1190269 status: - fedora: no - cbs: yes - trunk: yes ... and then change Delorean to filter based on status/trunk. More suggestions welcome! Cheers, Alan From hbrock at redhat.com Thu May 28 08:53:54 2015 From: hbrock at redhat.com (Hugh O. 
Brock) Date: Thu, 28 May 2015 10:53:54 +0200 Subject: [Rdo-list] reconsidering midstream repos In-Reply-To: <5566D315.1080700@redhat.com> References: <20150525125555.GK4035@redhat.com> <20150526190624.GC4975@teletran-1> <5566D315.1080700@redhat.com> Message-ID: <20150528085353.GB6300@redhat.com> On Thu, May 28, 2015 at 10:34:29AM +0200, Dmitry Tantsur wrote: > On 05/26/2015 09:06 PM, James Slagle wrote: > >On Mon, May 25, 2015 at 02:55:57PM +0200, Hugh O. Brock wrote: > >>Seems like the midstream repos are causing us a lot of pain with little > >>gain, at least in some cases. (For example it appears the t-h-t > >>midstream exists to carry a single patch that enables mongodb on > >>Centos.) Is it worth discussing whether we can eliminate some of these, > >>especially for upstreams like t-h-t that aren't tightly tied to the > >>OpenStack release schedule? > >> > >>/me ducks flying bricks > > > >No flying bricks from me anyway :) > > > >I agree with what you're saying, but would like to agree on what we're actually > >meaning. I think it's great that pretty much exactly what we said would happen > >has happened: as Kilo nears completion, the patches that we're having to carry > >for RDO-Manager is approaching 0. > > For the record: it's not the case for Ironic, Ironic client and even > discoverd. Also 1 Nova patch won't go away. > > > > >But before we say, let's kill the midstream repos...let's consider why we even > >have them to begin with. > > > >We didn't come up with the idea to fork a bunch of projects because it would > >make our *development* lives for RDO-Manager easier. In fact, it's exactly the > >opposite, they cause pain. > > > >So why do we have them? Quite simply it's because of the demand that we show, > >commit to, and support features in RDO-Manager prior to them necessarily > >merging in upstream OpenStack projects. > > > >If we can lift that requirement, we could likely do away with the midstream > >repos entirely. I know this is rdo-list, but in the spirit of being open, if > >we don't have midstream repos for RDO-Manager, then we shouldn't have them for > >OSP either, and we should only be committing to features as they land in > >upstream OpenStack. > > > >It doesn't make sense to me to drop the midstream repos for RDO-Manager, but > >then turn around and set them up privately for OSP. At least with them setup > >for RDO, we're doing the development of the management product in the open. In > >fact if we were to split like this, I think the pain would only intensify. > > > >It's pragmatic to say that anyone looking to productize OpenStack is going to > >have their own bug fixes/etc that they need, and maybe even some features. But > >setting up private forks as a general development principal to drive OpenStack > >based product development should be avoided at all costs. > > > >So yes, I'm +1 on dropping the midstream repos if they're no longer needed. > > > >But I'm -1 on then turning around and setting them up privately so that we can > >commit code to private forks because that code hasn't landed in upstream > >OpenStack yet. If we can agree that this isn't needed, then I think we can do > >without the midstream. Otherwise, if we're going to insist on forks to "land" > >features prior to them merging upstream, let's at least keep them public. I very much agree on *not* making things worse by doing a bunch of secret-squirrel stuff in private repos. 
And you are right that the big-picture purpose of having the midstream
repos is in fact to demonstrate (and build on) features we believe will
land upstream, before they have actually landed. The "slower" the
upstream project is, the more we need to consider taking the pain of
having a midstream repo for that project.

What is giving me pause is that it seems like we have midstream repos
for projects that are not "slow" at all. The tripleo heat templates are
a great example of this; there is little or no delay for our commits on
this project, and in fact we benefit by pushing there first because of
the tripleo-CI coverage. So why have a midstream repo for it, why not
work entirely upstream? The same thing goes for the os-* tools, and a
number of other repos I believe. I think we probably do benefit from
maintaining midstreams of Heat and Ironic (I *think*), but I question
the rest of them. Let me know please if I'm way off-base here.

> >There's a couple other facets to the discussion as well:
> >
> >The midstream repos is a place where we can preview where the management
> >product is headed. For example, the Heat breakpoint work was available from the
> >midstream repos much earlier than it ended up merging into Heat/heatclient.
> >Some devs/users might find this sort of thing really useful, and could provide
> >some early feedback for RDO-Manager.
> >
> >The midstream repos are also a place where we can quickly land reverts to
> >unblock work. The Neutron regression that broke provisioning via Ironic this
> >past cycle immediately comes to mind...it took a couple of weeks before the
> >upstream revert even landed in Neutron. While that was broken, we carried the
> >proposed revert in the midstream repos so that people could install a Neutron
> >that actually worked for RDO-Manager.

Agree, this makes sense. So is it correct to say that the more unwieldy
an upstream project is, the more likely it is we need a mid-stream repo
for it?

Thanks,
--Hugh

--
== Hugh Brock, hbrock at redhat.com ==
== Senior Engineering Manager, Cloud Engineering ==
== RDO Manager: Install, configure, and scale OpenStack ==
== http://rdoproject.org ==

"I know that you believe you understand what you think I said, but I'm
not sure you realize that what you heard is not what I meant."
--Robert McCloskey

From ihrachys at redhat.com  Thu May 28 09:24:46 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Thu, 28 May 2015 11:24:46 +0200
Subject: [Rdo-list] Packaging the big tent (or at least part of it)
In-Reply-To:
References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527151854.GA27921@redhat.com>
Message-ID: <5566DEDE.1040900@redhat.com>

On 05/27/2015 09:22 PM, Haïkel wrote:
> 2015-05-27 17:18 GMT+02:00 Lars Kellogg-Stedman :
>> On Tue, May 26, 2015 at 08:42:29PM -0400, Matthew Miller wrote:
>>>> officially identify what's in RDO (to avoid duplicates) and
>>>> give people a way of identifying who's responsible if
>>>> patches/work is needed (and who to contact to offer help
>>>> assisting maintenance).
>>>
>>> Possibly this is too heavyweight, but it may be that pkdb2
>>> could be of help here.
>>
>> +1, I was just going to write and say that (a) yes, absolutely,
>> we need that and (b) isn't there any way we can take advantage of
>> the existing Fedora infrastructure, which does just about all of
>> this already? :)
>>
>
> We actually do, RDO sources are hosted in Fedora dist-git so we
> already leverage that infrastructure but that wouldn't extend to
> CentOS CBS Koji instance.

Don't we plan to drop Fedora dist-git? My understanding from RDO
meetup at the summit is that that's the plan, unless someone has clear
objections (like Fedora Infra relying on packages to run their
infrastructure; though in that case we would already have a problem,
since we dropped non-rawhide branches anyway).

Also, we have packages that are not in Fedora (openstack-designate,
openstack-neutron-*aas, python-networking-*).

Ihar

From majopela at redhat.com  Thu May 28 09:50:36 2015
From: majopela at redhat.com (=?utf-8?Q?Miguel_=C3=81ngel_Ajo?=)
Date: Thu, 28 May 2015 11:50:36 +0200
Subject: [Rdo-list] RDO + floating IPs
In-Reply-To: <55659F9B.7020202@redhat.com>
References: <55647B1C.9040208@redhat.com> <20150526151654.GI15019@tesla> <556598B4.5030705@redhat.com> <99964837-F33F-4A83-9BD1-391131CB46B6@redhat.com> <55659F9B.7020202@redhat.com>
Message-ID:

Ok, I jumped into Tomas' host to check what was going on. ifcfg-br-ex
and ifcfg-enxxxx were not properly configured, that explains a bit...

Then I stumbled into:

# neutron net-create ext_net --provider:network_type flat --provider:physical_network external1 --router:external
Invalid input for operation: network_type value 'flat' not supported.

That is the *correct* way to create an external network as flat;
otherwise the default segmentation will be used (vxlan in this case),
which, although it works because we're forcing br-ex as our external
bridge, wouldn't work at all if we were using several external
networks.

It seems that packstack is setting:

[root at dell-t5810ws-rdo-02 neutron(keystone_admin)]# grep vxlan * -R
plugin.ini:# type_drivers = local,flat,vlan,gre,vxlan
plugin.ini:type_drivers = vxlan
plugin.ini:# Example: type_drivers = flat,vlan,gre,vxlan
plugin.ini:tenant_network_types = vxlan

while it should be (in the [ml2] section):

plugin.ini:type_drivers = vxlan,flat,vlan
plugin.ini:# Example: type_drivers = flat,vlan,gre,vxlan
plugin.ini:tenant_network_types = vxlan

to allow other network types too, while still telling neutron that
tenant segmentation is vxlan by default. We need to fix this in the
packstack/quickstack/puppet modules.
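In the meantime, a quick way to test that on an affected host is to widen the type drivers by hand (a sketch; it assumes crudini is installed and that plugin.ini is the ML2 configuration file packstack wrote):

    # Allow flat and vlan provider networks next to the default vxlan
    # tenant segmentation (the real fix belongs in the puppet modules)
    crudini --set /etc/neutron/plugin.ini ml2 type_drivers vxlan,flat,vlan
    crudini --set /etc/neutron/plugin.ini ml2 tenant_network_types vxlan

    # Restart neutron-server so the extra type drivers get loaded
    systemctl restart neutron-server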
After setting that correctly I can do:

# source keystonerc_admin
# neutron net-create ext_net --provider:network_type flat --provider:physical_network external1 --router:external
# neutron subnet-create ext_net 10.40.128.44/20 --name extsubnet \
    --enable-dhcp=False --allocation_pool start=10.40.128.80,end=10.40.128.84 \
    --gateway 10.40.143.254
# source keystonerc_demo
# neutron subnet-create private --gateway 192.168.123.1 192.168.123.0/24 --name private_subnet
# neutron router-create router
# neutron router-gateway-set router ext_net
# neutron router-interface-add router private_subnet

I believe we may fix the type_drivers setting, and then we should fix
packstack to deploy the demo with ext-net as 'flat', and not the
default segmentation.

Miguel Ángel Ajo

On Wednesday, 27 de May de 2015 at 12:42, Tomas Sedovic wrote:
> On 05/27/2015 12:28 PM, Rhys Oxenham wrote:
> >
> > > On 27 May 2015, at 11:13, Tomas Sedovic wrote:
> > >
> > > On 05/26/2015 05:16 PM, Kashyap Chamarthy wrote:
> > > > On Tue, May 26, 2015 at 03:54:36PM +0200, Tomas Sedovic wrote:
> > > > > Hey everyone,
> > > > >
> > > > > I tried to get RDO set up with floating IP addresses, but I'm running into
> > > > > problems I'm not sure how to debug (not that familiar with networking and
> > > > > Neutron).
> > > > >
> > > > > I followed these guides on a clean Fedora 21 x86_64 server:
> > > > >
> > > > > https://www.rdoproject.org/Quickstart
> > > > > https://www.rdoproject.org/Floating_IP_range
> > > >
> > > > [. . .]
> > > >
> > > > > once all 20 requests failed, it got to a login screen, but I could not ping
> > > > > or SSH into it:
> > > > >
> > > > > # ping 10.40.128.81
> > > > > PING 10.40.128.81 (10.40.128.81) 56(84) bytes of data.
> > > > > From 10.40.128.44 icmp_seq=1 Destination Host Unreachable
> > > > > From 10.40.128.44 icmp_seq=2 Destination Host Unreachable
> > > > > From 10.40.128.44 icmp_seq=3 Destination Host Unreachable
> > > > > From 10.40.128.44 icmp_seq=4 Destination Host Unreachable
> > > > >
> > > > > # ssh cirros at 10.40.128.81
> > > > > ssh: connect to host 10.40.128.81 port 22: No route to host
> > > >
> > > > It could be any no. of reasons, as I don't know what's going on in your
> > > > network. But, your steps sound reasonably correct. Just for comparison,
> > > > that's what I normally do:
Just for comparision, > > > > that's what I normally do: > > > > > > > > # Create new private network: > > > > $ neutron net-create $privnetname > > > > > > > > # Create a subnet > > > > neutron subnet-create $privnetname \ > > > > $subnetspace/24 \ > > > > --name $privsubnetname > > > > > > > > # Create a router > > > > neutron router-create $routername > > > > > > > > # Associate the router to the external network by setting its gateway > > > > # NOTE: This assumes the external network name is 'ext' > > > > > > > > export EXT_NET=$(neutron net-list | grep ext | awk '{print $2;}') > > > > export PRIV_NET=$(neutron subnet-list | grep $privsubnetname | awk '{print $2;}') > > > > export ROUTER_ID=$(neutron router-list | grep $routername | awk '{print $2;}' > > > > > > > > neutron router-gateway-set \ > > > > $ROUTER_ID $EXT_NET_ID > > > > > > > > neutron router-interface-add \ > > > > $ROUTER_ID $PRIV_NET_ID > > > > > > > > > > > > # Add Neutron security groups for this test tenant > > > > neutron security-group-rule-create \ > > > > --protocol icmp \ > > > > --direction ingress \ > > > > --remote-ip-prefix 0.0.0.0/0 \ > > > > default > > > > > > > > neutron security-group-rule-create \ > > > > --protocol tcp \ > > > > --port-range-min 22 \ > > > > --port-range-max 22 \ > > > > --direction ingress \ > > > > --remote-ip-prefix 0.0.0.0/0 \ > > > > default > > > > > > > > > > > > On a related note, all the above, inlcuding creating the Keystone > > > > tenant, user, etc is put together in this trivial script[1], which > > > > allows me to create tenant networks this way: > > > > > > > > $ ./create-new-tenant-network.sh (http://create-new-tenant-network.sh) \ > > > > demoten1 tuser1 \ > > > > 14.0.0.0 trouter1 \ > > > > priv-net1 priv-subnet1 > > > > > > > > It assumes your external network is named as "ext", but you can modify > > > > the script trivially to change that. > > > > > > > > > > > > [1] https://github.com/kashyapc/ostack-misc/blob/master/create-new-tenant-network.sh > > > > > > Thanks Kashyab, much appreciated. I've tried all this out, but the result seems to be the same (timeouts in cloud-init, the VM is unreachable). > > > > So it seems there?s two problems here, correct me if I?m wrong - > > > > 1) VM?s getting access to the metadata service > > > > 2) VM?s accessible via their floating IP?s from the outside > > > > I would say that (2) is the more important one we need to fix right now. If it?s pingable from the namespace then your overlays (or inter-node communication) and DHCP is working OK. That means that it?s likely the link between the external network bridge and the outside. From the output below, it suggests that you?ve defined your external network as a non-provider network. Therefore, you have to tell the L3 agent the specific bridge it needs to use to route traffic. > > Thanks. For the record, it's only pingable when I route my private > network to "public". When the router's gateway is set to "ext", the > floating IP isn't pingable at all (including from the namespace): > > ip netns exec qrouter-f2bfd294-c90c-4c98-9b6d-b33e28b7c9ef ping 10.40.128.84 > connect: Network is unreachable > > (f2bfd...9ef is the ID of the router between the private and external > network, 10.40.128.84 is the floting IP). > > > > > > In your L3 agent configuration file you?ll have ?external_network_bridge? option. This will need to be set to ?br-ex? (or the name of your external bridge) for the flows to be set up correctly. 
If this is blank, you?ll need to recreate your external network as a provider network, and ensure that you have the correct bridge mappings enabled on your Neutron network node. > > > > So I guess my question is this? what is ?external_network_bridge? set to in /etc/neutron/l3-agent.ini? > > > > [root at stack-node1 ~]# grep external_network_bridge /etc/neutron/l3_agent.ini > > # When external_network_bridge is set, each L3 agent can be associated > > # networks, both the external_network_bridge and gateway_external_network_id > > # external_network_bridge = br-ex > > external_network_bridge = br-ex > > > > > It is indeed set to br-ex. > > > > > Cheers > > Rhys > > > > > > > > > > > When I switched the router's gateway from "ext" to "public" (a network created by packstack) and booted the VM in my private network, it got to the login screen immediately and the floating IP was pingable through `ip netns exec`. Changing the gateway back to "ext", I got the timeouts again. That seems to indicate that the issue is related to "ext" rather then the way I set up a private network or boot the VM. > > > > > > There doesn't seem to be a significant difference between "ext" and "public" networks and their subnets: > > > > > > # neutron net-show public > > > +---------------------------+--------------------------------------+ > > > | Field | Value | > > > +---------------------------+--------------------------------------+ > > > | admin_state_up | True | > > > | id | 5d2a0846-4244-4d3b-ad68-033a18224459 | > > > | mtu | 0 | > > > | name | public | > > > | provider:network_type | vxlan | > > > | provider:physical_network | | > > > | provider:segmentation_id | 10 | > > > | router:external | True | > > > | shared | True | > > > | status | ACTIVE | > > > | subnets | 5285ff33-1bed-449b-b629-8ecc5ec0f642 | > > > | tenant_id | 3c7799abd0af430696428247d377ceaf | > > > +---------------------------+--------------------------------------+ > > > # neutron net-show ext > > > +---------------------------+--------------------------------------+ > > > | Field | Value | > > > +---------------------------+--------------------------------------+ > > > | admin_state_up | True | > > > | id | 376e6c88-4752-476b-8feb-ae3346a98006 | > > > | mtu | 0 | > > > | name | ext | > > > | provider:network_type | vxlan | > > > | provider:physical_network | | > > > | provider:segmentation_id | 12 | > > > | router:external | True | > > > | shared | False | > > > | status | ACTIVE | > > > | subnets | db336afd-8d41-4938-97ac-39ec912597df | > > > | tenant_id | 3c7799abd0af430696428247d377ceaf | > > > +---------------------------+--------------------------------------+ > > > # neutron subnet-show public_subnet > > > +-------------------+--------------------------------------------------+ > > > | Field | Value | > > > +-------------------+--------------------------------------------------+ > > > | allocation_pools | {"start": "172.24.4.226", "end": "172.24.4.238"} | > > > | cidr | 172.24.4.224/28 | > > > | dns_nameservers | | > > > | enable_dhcp | False | > > > | gateway_ip | 172.24.4.225 | > > > | host_routes | | > > > | id | 5285ff33-1bed-449b-b629-8ecc5ec0f642 | > > > | ip_version | 4 | > > > | ipv6_address_mode | | > > > | ipv6_ra_mode | | > > > | name | public_subnet | > > > | network_id | 5d2a0846-4244-4d3b-ad68-033a18224459 | > > > | subnetpool_id | | > > > | tenant_id | 3c7799abd0af430696428247d377ceaf | > > > +-------------------+--------------------------------------------------+ > > > # neutron subnet-show ext_subnet > > > 
+-------------------+--------------------------------------------------+ > > > | Field | Value | > > > +-------------------+--------------------------------------------------+ > > > | allocation_pools | {"start": "10.40.128.80", "end": "10.40.128.84"} | > > > | cidr | 10.40.128.0/20 | > > > | dns_nameservers | | > > > | enable_dhcp | False | > > > | gateway_ip | 10.40.143.254 | > > > | host_routes | | > > > | id | db336afd-8d41-4938-97ac-39ec912597df | > > > | ip_version | 4 | > > > | ipv6_address_mode | | > > > | ipv6_ra_mode | | > > > | name | ext_subnet | > > > | network_id | 376e6c88-4752-476b-8feb-ae3346a98006 | > > > | subnetpool_id | | > > > | tenant_id | 3c7799abd0af430696428247d377ceaf | > > > +-------------------+--------------------------------------------------+ > > > > > > > > > I've also seen this: https://www.rdoproject.org/Neutron_with_existing_external_network > > > > > > Tried to follow it some time ago, but whenever I got to the `service network restart`, I got disconnected from my box and it was unreachable even after reboot. > > > > > > > > > Is there anything else that jumps at you? Or do you have any ideas how to investigate this further? > > > > > > I was also thinking I could change "public"'s subnet to the floating IP range I have available, but I worry that may screw everything up. Is it worth a try? > > > > > > Thanks, > > > Tomas > > > > > > > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com (mailto:Rdo-list at redhat.com) > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com (mailto:rdo-list-unsubscribe at redhat.com) > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com (mailto:Rdo-list at redhat.com) > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com (mailto:rdo-list-unsubscribe at redhat.com) -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Thu May 28 10:58:00 2015 From: apevec at gmail.com (Alan Pevec) Date: Thu, 28 May 2015 12:58:00 +0200 Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: <5566DEDE.1040900@redhat.com> References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527151854.GA27921@redhat.com> <5566DEDE.1040900@redhat.com> Message-ID: > Don't we plan to drop Fedora dist-git? My understanding from RDO > meetup at the summit is that that's the plan, unless someone has clear I was not at meetup but that's not my conclusion from the notes in etherpad https://etherpad.openstack.org/p/RDO_Vancouver Rich, do you have an usable voice recording of the meetup or more detailed notes? Before the meetup I did add few lines at the bottom of == Packaging == section regarding RDO==Delorean, but I didn't see any comments. We cannot say RDO==Delorean and github/openstack-packages rpm- are not enough to be used as dist-git. > objections (like Fedora Infra relying on packages to run their > infrastructure; though in that case we would already have a problem, > since we dropped non-rawhide branches anyway). We did not drop anything, yet, until we have alternative home for dist-git. > Also, we have packages that are not in Fedora (openstack-designate, > openstack-neutron-*aas, python-networking-*). 
Until we have dist-git ready in git.centos.org please do open Fedora
review requests so we can have dist-git for them.

Cheers,
Alan

From dtantsur at redhat.com  Thu May 28 11:32:46 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Thu, 28 May 2015 13:32:46 +0200
Subject: [Rdo-list] reconsidering midstream repos
In-Reply-To: <20150525125555.GK4035@redhat.com>
References: <20150525125555.GK4035@redhat.com>
Message-ID: <5566FCDE.5040806@redhat.com>

On 05/25/2015 02:55 PM, Hugh O. Brock wrote:
> Seems like the midstream repos are causing us a lot of pain with little
> gain, at least in some cases. (For example it appears the t-h-t
> midstream exists to carry a single patch that enables mongodb on
> Centos.) Is it worth discussing whether we can eliminate some of these,
> especially for upstreams like t-h-t that aren't tightly tied to the
> OpenStack release schedule?

I have to bring one more thing here: the way we handle patches is
error-prone and opaque. `git push -f` is rarely a good way to solve
problems. It's easy to forget a patch on rebase, it requires explicit
coordination between people doing rebases, it does not (by default)
provide history of changes to patch series, and it's hard to revert to a
previous rebase if you forget to tag it. Delorean can also be confused
by a multi-patch rebase.

Can we consider some specialized solution for handling patch stacks?
I've heard Rackspace is using Ply [1], which looks nice at first glance.
Its idea is to store patch series in a separate repo - which looks like
what we want. So we could use our fork only for the mgt-* branch and have
all patches tracked outside. Then we (maybe) could even use gerrit for
patches (which I don't like but some people dream of)!
> [1] https://github.com/rconradharris/ply

I'm all for improving our tools. It's worth mentioning that the openstack-puppet-modules maintainers (mmagr, EmilienM, et al.) are working on a better way to maintain patches against an upstream that is frequently fast-forwarded (and there is CI on the fast-forwards). At first glance the system they're working on looks like exactly what we want, although I need to know more about the details. Certainly worth looking at for rdo-manager going forward.

--Hugh

--
== Hugh Brock, hbrock at redhat.com ==
== Senior Engineering Manager, Cloud Engineering ==
== RDO Manager: Install, configure, and scale OpenStack ==
== http://rdoproject.org ==

"I know that you believe you understand what you think I said, but I'm not sure you realize that what you heard is not what I meant." --Robert McCloskey

From ichi.sara at gmail.com Thu May 28 11:57:18 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Thu, 28 May 2015 13:57:18 +0200
Subject: [Rdo-list] [Heat][ceilometer] vertical scalability
Message-ID: 

Hey there,

I know that horizontal scaling in OpenStack can be automated with heat and ceilometer. I'm wondering if we can do the same thing for vertical scaling with these two modules?

B.regards,
Sara
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rbowen at redhat.com Thu May 28 12:53:23 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Thu, 28 May 2015 08:53:23 -0400
Subject: [Rdo-list] Packaging the big tent (or at least part of it)
In-Reply-To: 
References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527151854.GA27921@redhat.com> <5566DEDE.1040900@redhat.com>
Message-ID: <55670FC3.7000201@redhat.com>

On 05/28/2015 06:58 AM, Alan Pevec wrote:
> > Don't we plan to drop Fedora dist-git? My understanding from the RDO meetup at the summit is that that's the plan, unless someone has clear
>
> I was not at the meetup, but that's not my conclusion from the notes in the etherpad https://etherpad.openstack.org/p/RDO_Vancouver
> Rich, do you have a usable voice recording of the meetup or more detailed notes?

I have the recording, but I haven't had an opportunity to edit it yet. It's on my list for today.

--Rich

> Before the meetup I did add a few lines at the bottom of the == Packaging == section regarding RDO==Delorean, but I didn't see any comments. We cannot say RDO==Delorean, and github/openstack-packages rpm- are not enough to be used as dist-git.
>
> > objections (like Fedora Infra relying on packages to run their infrastructure; though in that case we would already have a problem, since we dropped non-rawhide branches anyway).
>
> We did not drop anything, yet, until we have an alternative home for dist-git.
>
> > Also, we have packages that are not in Fedora (openstack-designate, openstack-neutron-*aas, python-networking-*).
>
> Until we have dist-git ready in git.centos.org, please do open Fedora review requests so we can have dist-git for them.
> Cheers,
> Alan

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From rbowen at redhat.com Thu May 28 12:56:21 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Thu, 28 May 2015 08:56:21 -0400
Subject: [Rdo-list] Packaging the big tent (or at least part of it)
In-Reply-To: <55650D2E.1040201@redhat.com>
References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com>
Message-ID: <55671075.1090702@redhat.com>

On 05/26/2015 08:17 PM, Graeme Gillies wrote:
> This is an interesting idea. As we start to get more projects under the big tent and into RDO (and the projects themselves might be in various states in terms of maturity and stability), it might be worthwhile having some documentation somewhere on the RDO website on which big tent projects are in RDO, and which one or more people have taken ownership of packaging and maintaining them. This might be at a level higher than the RPMs themselves. Essentially, some form of process/governance to officially identify what's in RDO (to avoid duplicates) and give people a way of identifying who's responsible if patches/work are needed (and who to contact to offer help with maintenance).
>
> Does this make sense?

Yes, definitely. It would be nice to have even now, since we do get the "what's in RDO" or "is Designate/Trove/Whatever in RDO" question fairly regularly already.

--Rich

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From pmyers at redhat.com Thu May 28 13:05:11 2015
From: pmyers at redhat.com (Perry Myers)
Date: Thu, 28 May 2015 09:05:11 -0400
Subject: [Rdo-list] Packaging the big tent (or at least part of it)
In-Reply-To: 
References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527151854.GA27921@redhat.com> <5566DEDE.1040900@redhat.com>
Message-ID: <55671287.4090900@redhat.com>

On 05/28/2015 06:58 AM, Alan Pevec wrote:
> > Don't we plan to drop Fedora dist-git? My understanding from the RDO meetup at the summit is that that's the plan, unless someone has clear
>
> I was not at the meetup, but that's not my conclusion from the notes in the etherpad https://etherpad.openstack.org/p/RDO_Vancouver
> Rich, do you have a usable voice recording of the meetup or more detailed notes?

Since I was the one talking during this part of the meeting, I'll give my recollection...

We didn't discuss Fedora dist-git specifically, as far as I recall. If getting out of Fedora and just being "on Fedora" requires us to not use Fedora dist-git, then I agree that could be problematic unless we have a proper replacement.

Ideally, we could somehow continue to use dist-git and other Fedora infrastructure for some things, but not publish the OpenStack packages directly in the official Fedora release.

To be clear, I have no issues with Fedora (I love it and use it every day), but tying a single release of OpenStack to a single point release of Fedora (like "Kilo on Fedora 22 vs. Juno on Fedora 21") doesn't make a lot of sense.

What makes more sense is to say that in RDO we focus on:

* Providing the latest stable OpenStack (Kilo right now) on top of whatever the latest Fedora is (Fedora 22 as of yesterday)
* Providing the latest in-development OpenStack (Liberty) on top of whatever the latest Fedora is (F22)

And as Fedora rolls forward to F23, so would our efforts.
And as OpenStack releases Liberty, we'd stop updating Kilo packages and focus on Liberty for stable and 'M' for development.

So... Can we use Fedora but somehow just not include the OpenStack core packages in Fedora proper?

Matt, what are your thoughts?

> Before the meetup I did add a few lines at the bottom of the == Packaging == section regarding RDO==Delorean, but I didn't see any comments. We cannot say RDO==Delorean, and github/openstack-packages rpm- are not enough to be used as dist-git.
>
> > objections (like Fedora Infra relying on packages to run their infrastructure; though in that case we would already have a problem, since we dropped non-rawhide branches anyway).
>
> We did not drop anything, yet, until we have an alternative home for dist-git.

+1 It was just a proposal to start further discussion on list... which is happening now, so that's great :)

> > Also, we have packages that are not in Fedora (openstack-designate, openstack-neutron-*aas, python-networking-*).
>
> Until we have dist-git ready in git.centos.org, please do open Fedora review requests so we can have dist-git for them.
>
> Cheers,
> Alan

From ihrachys at redhat.com Thu May 28 13:13:59 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Thu, 28 May 2015 15:13:59 +0200
Subject: [Rdo-list] Packaging the big tent (or at least part of it)
In-Reply-To: <55671287.4090900@redhat.com>
References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527151854.GA27921@redhat.com> <5566DEDE.1040900@redhat.com> <55671287.4090900@redhat.com>
Message-ID: <55671497.5070200@redhat.com>

On 05/28/2015 03:05 PM, Perry Myers wrote:
> So... Can we use Fedora but somehow just not include the OpenStack core packages in Fedora proper?

What's the intended goal of sticking to Fedora infra while not providing packages for it? For me, it does not make sense at all and sounds like an abuse of Fedora resources.

So far, I've heard that some people see value in walking through the official Fedora new-package review process. Any other reasons to stick to it?

Not that I think the new-package review argument is valid, since we can still apply the process to our reviews outside of Fedora infra. But that's at least something.
Ihar

From pmyers at redhat.com Thu May 28 13:27:15 2015
From: pmyers at redhat.com (Perry Myers)
Date: Thu, 28 May 2015 09:27:15 -0400
Subject: [Rdo-list] Packaging the big tent (or at least part of it)
In-Reply-To: <55671497.5070200@redhat.com>
References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527151854.GA27921@redhat.com> <5566DEDE.1040900@redhat.com> <55671287.4090900@redhat.com> <55671497.5070200@redhat.com>
Message-ID: <556717B3.8010007@redhat.com>

On 05/28/2015 09:13 AM, Ihar Hrachyshka wrote:
> On 05/28/2015 03:05 PM, Perry Myers wrote:
> > So... Can we use Fedora but somehow just not include the OpenStack core packages in Fedora proper?
>
> What's the intended goal of sticking to Fedora infra while not providing packages for it? For me, it does not make sense at all and sounds like an abuse of Fedora resources.
>
> So far, I've heard that some people see value in walking through the official Fedora new-package review process. Any other reasons to stick to it?

That's one thing; the other would be to show tight alignment with the Fedora community.

My reasons for thinking we should stick with Fedora infra are similar to what we're doing with CentOS.

We use CBS and a lot of CentOS infra, but RDO is not _in_ CentOS. It's in a CentOS SIG (the Cloud SIG).

I think Fedora needs the same concept as the CentOS SIGs...

> Not that I think the new-package review argument is valid, since we can still apply the process to our reviews outside of Fedora infra. But that's at least something.

From apevec at gmail.com Thu May 28 14:17:29 2015
From: apevec at gmail.com (Alan Pevec)
Date: Thu, 28 May 2015 16:17:29 +0200
Subject: [Rdo-list] Packaging the big tent (or at least part of it)
In-Reply-To: <556717B3.8010007@redhat.com>
References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <5564E873.4000209@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527151854.GA27921@redhat.com> <5566DEDE.1040900@redhat.com> <55671287.4090900@redhat.com> <55671497.5070200@redhat.com> <556717B3.8010007@redhat.com>
Message-ID: 

> My reasons for thinking we should stick with Fedora infra are similar to what we're doing with CentOS.

Fedora is also the next EL, so we get to play with and fix new stuff in advance; e.g., for systemd support before EL7, we developed the service unit files on Fedora.

> We use CBS and a lot of CentOS infra, but RDO is not _in_ CentOS. It's in a CentOS SIG (the Cloud SIG).
>
> I think Fedora needs the same concept as the CentOS SIGs...

There are Fedora SIGs, but a hard requirement is that they need to get their packages into Fedora; i.e., Fedora doesn't have a separate Koji instance for a Community Build System[*]. There's Copr, but that's not the same: it supports SRPM builds only, without dist-git.
Cheers,
Alan

[*] that's what CentOS CBS stands for; it is NOT the CentOS Build System. For the core OS there is a separate non-public legacy buildsystem, not Koji-based AFAIK.

From kchamart at redhat.com Thu May 28 14:58:03 2015
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Thu, 28 May 2015 16:58:03 +0200
Subject: [Rdo-list] Packaging the big tent (or at least part of it)
In-Reply-To: 
References: <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527151854.GA27921@redhat.com> <5566DEDE.1040900@redhat.com> <55671287.4090900@redhat.com> <55671497.5070200@redhat.com> <556717B3.8010007@redhat.com>
Message-ID: <20150528145803.GD17090@tesla.redhat.com>

On Thu, May 28, 2015 at 04:17:29PM +0200, Alan Pevec wrote:
> > My reasons for thinking we should stick with Fedora infra are similar to what we're doing with CentOS.
>
> Fedora is also the next EL, so we get to play with and fix new stuff in advance; e.g., for systemd support before EL7, we developed the service unit files on Fedora.

Exactly. Working with Fedora as it's being developed, instead of in our own silo, does have a lot of tooling benefits, especially for developers.

> > We use CBS and a lot of CentOS infra, but RDO is not _in_ CentOS. It's in a CentOS SIG (the Cloud SIG).
> >
> > I think Fedora needs the same concept as the CentOS SIGs...

There's the existing Fedora Cloud SIG. Observing other SIGs, very few maintain sustained meaningful activity after the initial phase of excitement to get started.

> There are Fedora SIGs, but a hard requirement is that they need to get their packages into Fedora; i.e., Fedora doesn't have a separate Koji instance for a Community Build System[*]. There's Copr, but that's not the same: it supports SRPM builds only, without dist-git.

--
/kashyap

From javier.pena at redhat.com Thu May 28 15:10:36 2015
From: javier.pena at redhat.com (Javier Pena)
Date: Thu, 28 May 2015 11:10:36 -0400 (EDT)
Subject: [Rdo-list] RE(4): RE(2): Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1
In-Reply-To: 
References: 
Message-ID: <1221078633.6446436.1432825836653.JavaMail.zimbra@redhat.com>

----- Original Message -----
> You may install RDO Kilo on F22; I spent 4-5 hr to complete the install.
> As soon as "chkconfig --add service" returns 1, respond with `systemctl enable service` && packstack --answer-file=./answer-file-xxx-yyyy.txt.
> Moreover (for instance), when IP_nova.pp crashes, check `systemctl | grep nova` && enable all running nova services. The same hook works for glance, cinder, neutron, swift. It's a simple shell script accepting the service via a command line parameter (nova, neutron, ...).
> I took per-single-service care of iptables, httpd, mariadb, mongod, openvswitch, memcached.
> It improves one's knowledge of the services being activated during a packstack run. I am not kidding. I realize that it will be fixed pretty soon; I just wanted to get RDO Kilo running on F22, and got it.

Hi Boris,

I have been looking at this issue, and it looks like it is a bug on the Puppet side. From https://github.com/puppetlabs/puppet/blob/3.x/lib/puppet/provider/service/systemd.rb#L10 , it seems that Puppet only uses the systemd provider for services up to Fedora 21, which leaves the new F22 with chkconfig, which creates the issue.

A quick edit to /usr/share/ruby/vendor_ruby/puppet/provider/service/systemd.rb allowed me to get going, so now we just need to fix it upstream and in the Fedora package.
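A sketch of that kind of check and edit (the grep output and the exact release list in the file are assumptions; inspect the file before changing anything):

    # see how the provider gates on Fedora releases
    grep -n -i fedora /usr/share/ruby/vendor_ruby/puppet/provider/service/systemd.rb
    # assumption: it lists major releases explicitly, ending in "21";
    # appending "22" makes Puppet pick the systemd provider on F22 again
    sed -i 's/"21"\]/"21", "22"]/' \
        /usr/share/ruby/vendor_ruby/puppet/provider/service/systemd.rb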
Regards,
Javier

> Boris.
> Date: Sun, 24 May 2015 03:07:26 +1000
> Subject: Re: [Rdo-list] RE(2): Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1
> From: outbackdingo at gmail.com
> To: apevec at gmail.com
> CC: bderzhavets at hotmail.com; P at draigbrady.com; sross at redhat.com; rdo-list at redhat.com
>
> So can someone clarify for me what repo is successful on Fedora 22 that I should use for a fresh laptop?
>
> On Fri, May 22, 2015 at 10:41 PM, Alan Pevec < apevec at gmail.com > wrote:
> > 2015-05-22 10:55 GMT+02:00 Boris Derzhavets < bderzhavets at hotmail.com >:
> > > Sorry, issue with openstack-nova-novncproxy.service again; the rest seems to be OK.
> > > (connected to VNC console via virt-manager)
> > >
> > > nova-novncproxy[19781]: websockify.ProxyRequestHandler):
> > > nova-novncproxy[19781]: AttributeError: 'module' object has no attribute 'ProxyRequestHandler'
> >
> > Solly, Pádraig, python-websockify-0.6.0-2.fc21 was not pushed to f21-updates, please create a Bodhi update.
> > We'll carry it in the RDO Kilo Fedora repo until it reaches stable updates.
> >
> > Cheers,
> > Alan
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
From hguemar at fedoraproject.org Thu May 28 17:04:15 2015
From: hguemar at fedoraproject.org (Haïkel)
Date: Thu, 28 May 2015 19:04:15 +0200
Subject: [Rdo-list] Packaging the big tent (or at least part of it)
In-Reply-To: <20150528145803.GD17090@tesla.redhat.com>
References: <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527151854.GA27921@redhat.com> <5566DEDE.1040900@redhat.com> <55671287.4090900@redhat.com> <55671497.5070200@redhat.com> <556717B3.8010007@redhat.com> <20150528145803.GD17090@tesla.redhat.com>
Message-ID: 

2015-05-28 16:58 GMT+02:00 Kashyap Chamarthy :
> On Thu, May 28, 2015 at 04:17:29PM +0200, Alan Pevec wrote:
> > > My reasons for thinking we should stick with Fedora infra are similar to what we're doing with CentOS.
> >
> > Fedora is also the next EL, so we get to play with and fix new stuff in advance; e.g., for systemd support before EL7, we developed the service unit files on Fedora.
>
> Exactly. Working with Fedora as it's being developed, instead of in our own silo, does have a lot of tooling benefits, especially for developers.

+2

> > > We use CBS and a lot of CentOS infra, but RDO is not _in_ CentOS. It's in a CentOS SIG (the Cloud SIG).
> > >
> > > I think Fedora needs the same concept as the CentOS SIGs...
>
> There's the existing Fedora Cloud SIG. Observing other SIGs, very few maintain sustained meaningful activity after the initial phase of excitement to get started.

/me as CentOS Cloud SIG and Fedora Cloud WG member

Just for the record, the CentOS Cloud SIG deals with cloud infrastructure (OpenStack, CloudStack, OpenNebula, Eucalyptus), while the Fedora Cloud WG holds stewardship of cloud images (which are an official product).

I had actively lobbied to give more power to SIGs, which ultimately led me to push for a governance change in Fedora, as it became impossible to move forward. And I think the new Council now has the power and the will to make things happen.

> > There are Fedora SIGs, but a hard requirement is that they need to get their packages into Fedora; i.e., Fedora doesn't have a separate Koji instance for a Community Build System[*]. There's Copr, but that's not the same: it supports SRPM builds only, without dist-git.

If people want to join me in lobbying Fedora to introduce layered products, I'd love to get some support! i.e., having a true RDO product based on Fedora that gets as much love as the CentOS one. Though it will have a different target (developers and early adopters), it would give us a good platform to work on integration for the next Enterprise Linux releases.

Nothing prevents copr from being able to interact with dist-git (be it the pristine one or another one), or having a separate koji instance for layered products with more flexibility for SIGs. But it won't happen without people championing this idea, and I trust the Fedora Council to be supportive if we come up with a good proposal.

H.
> --
> /kashyap
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From apevec at gmail.com Thu May 28 17:44:00 2015
From: apevec at gmail.com (Alan Pevec)
Date: Thu, 28 May 2015 19:44:00 +0200
Subject: [Rdo-list] availability of openstack-kilo/f21 and openstack-kilo/f22
In-Reply-To: <5567511B.2050109@berendt.io>
References: <5567511B.2050109@berendt.io>
Message-ID: 

> At the moment only openstack-kilo/el7/ is available, plus openstack-kilo/testing/f21 and openstack-kilo/testing/f22.

BTW f21 is a symlink to f22.

Before moving it out of testing, I wanted to look at the selinux issue (permissive is required), but I didn't have time yet. OTOH it looks like Fedora Juno has the same issue after Packstack started deploying keystone in httpd too, so it wouldn't be a regression... Has anyone opened a BZ against Fedora selinux-policy yet?

Cheers,
Alan

From rbowen at redhat.com Thu May 28 18:52:47 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Thu, 28 May 2015 14:52:47 -0400
Subject: [Rdo-list] RDO Community Meetup, Vancouver - Audio
Message-ID: <556763FF.2010803@redhat.com>

While we're in the transition to our new website, I've put the recordings from the meetup in Vancouver on my personal soundcloud account:

https://soundcloud.com/rich-bowen/sets/rdo-community-meetup-openstack

I've split it into 6 chunks, in case you only particularly care about one topic.

First, Perry gives an intro and an overview of several community issues. Next, we have Jarda talking about RDO Manager. Karsten gives an update on the Cloud SIG and the CentOS project's involvement in RDO. I talk about the website migration/upgrade plans. Wes talks about CI. And then there's a bit of open discussion about packaging and plans for Fedora, and then it trails off a bit at the end.

Thanks again to those that were there.

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From ggillies at redhat.com Thu May 28 22:53:27 2015
From: ggillies at redhat.com (Graeme Gillies)
Date: Fri, 29 May 2015 08:53:27 +1000
Subject: [Rdo-list] Slight tweak to allow easier rebuilding of RDO SRPMS for Operators
Message-ID: <55679C67.4010507@redhat.com>

Hi,

I am attempting to rebuild an SRPM from RDO (openstack-ironic) to apply a patch needed for my environment. I was simply trying to do

yumdownloader --source openstack-ironic
rpm2cpio openstack-ironic-2015.1-dev684.el7.centos.src.rpm | cpio -i
# modify spec and apply patch
rpmbuild --define '%_srcrpmdir .' --define '%_sourcedir .' -bs openstack-ironic.spec
mock openstack-ironic-2015.1-dev684.el7.centos.src.rpm

However this gives me a build failure with

/var/tmp/rpm-tmp.jyD2vL: line 35: cd: ironic-%{upstream_version}: No such file or directory

I had a feeling this was because of the workflow introduced by delorean (something I am not immediately familiar with). I started looking at this page

https://www.rdoproject.org/packaging/rdo-packaging.html

and after following it through I understand what delorean is, what it's trying to achieve, and how the macros work, etc. I was able to fix my build by adding

%{!?upstream_version: %global upstream_version 2015.1.dev684}

I'm just wondering if it is possible to tweak the rpm specs that delorean/RDO use to add such a conditional to all rpms, so that it is possible for us operators to still rebuild these SRPMS as needed using the tools we have always used.
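The conditional works because %{!?name:body} expands its body only when name is undefined, so it stays inert in builds that already define upstream_version (as the delorean pipeline presumably does) and only kicks in for a plain rebuild. A quick way to see the behaviour (the version string is just the one from this build):

    # macro undefined: the guard fires
    rpm --eval '%{!?upstream_version:guard fires}'
    # macro already defined: the guard expands to nothing
    rpm --define 'upstream_version 2015.1.dev684' --eval '%{!?upstream_version:guard fires}'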
While I can appreciate delorean and everything it does, from a pure operator's perspective we aren't fans of having to use custom tools to build packages for all the different products/infrastructure under our control.

If this could be done it would be very much appreciated!

Regards,

Graeme

--
Graeme Gillies
Principal Systems Administrator
Openstack Infrastructure
Red Hat Australia

From hguemar at fedoraproject.org Fri May 29 00:15:44 2015
From: hguemar at fedoraproject.org (Haïkel)
Date: Fri, 29 May 2015 02:15:44 +0200
Subject: [Rdo-list] Slight tweak to allow easier rebuilding of RDO SRPMS for Operators
In-Reply-To: <55679C67.4010507@redhat.com>
References: <55679C67.4010507@redhat.com>
Message-ID: 

Hi Graeme,

thanks for noticing this issue in the ironic delorean spec (I'm fixing it right now). We came up with the following fallback macro in order to make it easier to sync between delorean and dist-git specs:

%{!?upstream_version: %global upstream_version %{version}%{?milestone}}

Regards,
H.

From ichi.sara at gmail.com Fri May 29 08:34:00 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Fri, 29 May 2015 10:34:00 +0200
Subject: [Rdo-list] [heat][ceilometer] [autoscaling] WordPress demo
Message-ID: 

Hey there,

I'm trying to get the autoscaling HOT template that uses WordPress as an example to work. I'm following this tutorial but can't make it work, as I have some difficulties setting up the infrastructure needed. Unfortunately, the tutorial doesn't go into such details. So I decided to seek information here; maybe some of you have done it before. If so, please just give me the steps to follow in order to make this demo work (setting up the infrastructure).

looking forward to hearing from you,
Sara
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bderzhavets at hotmail.com Fri May 29 08:39:27 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Fri, 29 May 2015 04:39:27 -0400
Subject: [Rdo-list] RE(4): RE(2): Attempt to setup RDO Kilo per recent Trello instructions on F22 RC1
In-Reply-To: <1221078633.6446436.1432825836653.JavaMail.zimbra@redhat.com>
References: , <1221078633.6446436.1432825836653.JavaMail.zimbra@redhat.com>
Message-ID:
From >https://github.com/puppetlabs/puppet/blob/3.x/lib/puppet/provider/service/systemd.rb#L10 , it >seems that Puppet only uses the systemd provider for services up to Fedora 21, which leaves the new >F22 with chkconfig, which creates the issue. > A quick edit to /usr/share/ruby/vendor_ruby/puppet/provider/service/systemd.rb allowed me to get >going, so now we just need to fix it upstream and in the Fedora package. Thank you so much. Been advised it's not difficult to follow your directions. packstack failed just two times :- After first one - Correction mentioned file, packstack restart. After second one - `systemctl enable target; systemctl start target`, the rest went with no problems. > > Regards, > Javier > > > Boris. > > > Date: Sun, 24 May 2015 03:07:26 +1000 > > Subject: Re: [Rdo-list] RE(2): Attempt to setup RDO Kilo per recent Trello > > instructions on F22 RC1 > > From: outbackdingo at gmail.com > > To: apevec at gmail.com > > CC: bderzhavets at hotmail.com; P at draigbrady.com; sross at redhat.com; > > rdo-list at redhat.com > > > So can someoone clarify for me what repo is successful on Fedora 22 that i > > dshould use for a fresh laptop > > > On Fri, May 22, 2015 at 10:41 PM, Alan Pevec < apevec at gmail.com > wrote: > > > > 2015-05-22 10:55 GMT+02:00 Boris Derzhavets < bderzhavets at hotmail.com >: > > > > > > Sorry, issue with openstack-nova-novncproxy.service again, the rest seems > > > > > > to be OK. > > > > > > ( connected to VNC console via virt-manager) > > > > > > > nova-novncproxy[19781]: websockify.ProxyRequestHandler): > > > > > > nova-novncproxy[19781]: AttributeError: 'module' object has no attribute > > > > 'ProxyRequestHandler' > > > > > > Solly, P?draig, python-websockify-0.6.0-2.fc21 was not pushed to > > > > > f21-updates, please create Bodhi update. > > > > > We'll carry it in RDO Kilo Fedora repo until it reaches stable updates. > > > > > > Cheers, > > > > > Alan > > > > > > _______________________________________________ > > > > > Rdo-list mailing list > > > > > Rdo-list at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > Boris. > > > Date: Sun, 24 May 2015 03:07:26 +1000 > > Subject: Re: [Rdo-list] RE(2): Attempt to setup RDO Kilo per recent Trello > > instructions on F22 RC1 > > From: outbackdingo at gmail.com > > To: apevec at gmail.com > > CC: bderzhavets at hotmail.com; P at draigbrady.com; sross at redhat.com; > > rdo-list at redhat.com > > > So can someoone clarify for me what repo is successful on Fedora 22 that i > > dshould use for a fresh laptop > > > On Fri, May 22, 2015 at 10:41 PM, Alan Pevec < apevec at gmail.com > wrote: > > > > 2015-05-22 10:55 GMT+02:00 Boris Derzhavets < bderzhavets at hotmail.com >: > > > > > > Sorry, issue with openstack-nova-novncproxy.service again, the rest seems > > > > > > to be OK. > > > > > > ( connected to VNC console via virt-manager) > > > > > > > nova-novncproxy[19781]: websockify.ProxyRequestHandler): > > > > > > nova-novncproxy[19781]: AttributeError: 'module' object has no attribute > > > > 'ProxyRequestHandler' > > > > > > Solly, P?draig, python-websockify-0.6.0-2.fc21 was not pushed to > > > > > f21-updates, please create Bodhi update. 
> > > > > We'll carry it in RDO Kilo Fedora repo until it reaches stable updates. > > > > > > Cheers, > > > > > Alan > > > > > > _______________________________________________ > > > > > Rdo-list mailing list > > > > > Rdo-list at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilio.moreno at adam.es Fri May 29 09:47:34 2015 From: emilio.moreno at adam.es (Emilio) Date: Fri, 29 May 2015 11:47:34 +0200 Subject: [Rdo-list] Error libvirtd allocate memory (juno) In-Reply-To: References: , <1221078633.6446436.1432825836653.JavaMail.zimbra@redhat.com> Message-ID: <556835B6.6090603@adam.es> Hi! Our version of openstack is JUNO and we use CentOS7 like system (RDO) We have a problem with RAM Allocating, with default settings (1->1.5) libvirt cannot allocate more memory (with CPU all is ok, 1:16 and no problem) May 29 11:22:53 nova03 systemd: Starting Virtual Machine qemu-instance-000004bc. May 29 11:22:53 nova03 systemd-machined: New machine qemu-instance-000004bc. May 29 11:22:53 nova03 systemd: Started Virtual Machine qemu-instance-000004bc. May 29 11:22:53 nova03 kvm: 33 guests now active May 29 11:22:53 nova03 kernel: qbrb920d77c-7e: port 2(tapb920d77c-7e) entered disabled state May 29 11:22:53 nova03 avahi-daemon[1660]: Withdrawing workstation service for tapb920d77c-7e. May 29 11:22:53 nova03 kernel: device tapb920d77c-7e left promiscuous mode May 29 11:22:53 nova03 kernel: qbrb920d77c-7e: port 2(tapb920d77c-7e) entered disabled state May 29 11:22:53 nova03 journal: No se pudo leer desde el monitor: Conexi?n reinicializada por la m?quina remota May 29 11:22:53 nova03 journal: Error interno: early end of file from monitor: possible problem: Cannot set up guest memory 'pc.ram': Cannot allocate memory Same error in all nova's 01-02-03 like this. My NOVA's have 128Gb of RAM and we used aprox 123-124 but with 1.5 (overcommit ram) we cannot allocate more instances...Do you know this issue? It's a bug? Libvirt version : [root at nova02 log]# libvirtd --version libvirtd (libvirt) 1.2.8 In nova.conf, lines of overcommit are by default type (comment) 1:16 for cpu (this is ok) and 1:1.5 for ram (this is the problem) Thanks!!!! -- ------------------------------------------------------------------------ T?cnico de Sistemas de Adam Departamento de Sistemas Tel. 902 902 685 Carrer Artesans, 7 - Parc Tecnol?gic del Vall?s 08290 Cerdanyola del Vall?s - Barcelona www.adam.es www.adam.es Advertencia legal: Este mensaje y, en su caso, los ficheros anexos son confidenciales, especialmente en lo que respecta a los datos personales, y se dirigen exclusivamente al destinatario referenciado. Si usted no lo es y lo ha recibido por error o tiene conocimiento del mismo por cualquier motivo, le rogamos que nos lo comunique por este medio y proceda a destruirlo o borrarlo, y que en todo caso se abstenga de utilizar, reproducir, alterar, archivar o comunicar a terceros el presente mensaje y ficheros anexos, todo ello bajo pena de incurrir en responsabilidades legales. 
El emisor no garantiza la integridad, rapidez o seguridad del presente correo, ni se responsabiliza de posibles perjuicios derivados de la captura, incorporaciones de virus o cualesquiera otras manipulaciones efectuadas por terceros. ecotechNo imprimas si no es necesario. Protejamos el Medio Ambiente. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: adam.png Type: image/png Size: 10687 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: fulla.jpg Type: image/jpeg Size: 741 bytes Desc: not available URL: From mattdm at redhat.com Thu May 28 22:27:36 2015 From: mattdm at redhat.com (Matthew Miller) Date: Thu, 28 May 2015 18:27:36 -0400 (EDT) Subject: [Rdo-list] Packaging the big tent (or at least part of it) In-Reply-To: <55671287.4090900@redhat.com> References: <483227090.5020376.1432675433367.JavaMail.zimbra@redhat.com> <55650D2E.1040201@redhat.com> <20150527004228.GA14433@mattdm.org> <20150527151854.GA27921@redhat.com> <5566DEDE.1040900@redhat.com> <55671287.4090900@redhat.com> Message-ID: <101851129.8215549.1432852056839.JavaMail.zimbra@redhat.com> Perry Myers" > Ideally if we could somehow continue to use dist-git and other Fedora > infrastructure for some things, but not publish the OpenStack packages > directly in Fedora official release, that would be ideal. I think this is ideal too, and now's a good time to hit us with it -- I'll make sure we talk about it at next week's FAD. > So... Can we use Fedora but somehow just not include the OpenStack core > packages in Fedora proper? > > Matt, what are your thoughts? One possibility would be to latch on to the in-progress* work to develop a dist-git for Copr https://fedorahosted.org/fedora-infrastructure/ticket/4564 and then use Copr for this. I think, though, that we really could benefit from having branches in dist-git other than the fedora and epel versions ? SCLs could use this as well. * but stalled. *sigh* -- Matthew Miller Fedora Project Leader From lars at redhat.com Fri May 29 13:58:51 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Fri, 29 May 2015 09:58:51 -0400 Subject: [Rdo-list] [heat][ceilometer] [autoscaling] WordPress demo In-Reply-To: References: Message-ID: <20150529135851.GG27921@redhat.com> On Fri, May 29, 2015 at 10:34:00AM +0200, ICHIBA Sara wrote: > > but can't make it work as I have some difficulties to set up the > infrastructure needed. What sort of problems are you having? What is working, and what isn't working? -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From ichi.sara at gmail.com Fri May 29 14:21:04 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Fri, 29 May 2015 16:21:04 +0200 Subject: [Rdo-list] [heat][ceilometer] [autoscaling] WordPress demo In-Reply-To: <20150529135851.GG27921@redhat.com> References: <20150529135851.GG27921@redhat.com> Message-ID: First thank you for your response, I will tell you what I did and you tell me what I need to add. I used those two files (plz find them attached) with the command heat stack-create my_stack -f autoscaling.yaml. The stack was spawn correctlty and I can see in the dashboard two instances, the database server and the web server. 
But I can't access the web server and when I take a look at the members of my pool , it says that my web server is inactive. and After a while it just disappear. 2015-05-29 15:58 GMT+02:00 Lars Kellogg-Stedman : > On Fri, May 29, 2015 at 10:34:00AM +0200, ICHIBA Sara wrote: > > < > http://myopensourcelife.com/2014/09/13/autoscaling-in-openstack-using-heat-and-ceilometer-part-1/ > > > > but can't make it work as I have some difficulties to set up the > > infrastructure needed. > > What sort of problems are you having? What is working, and what isn't > working? > > -- > Lars Kellogg-Stedman | larsks @ > {freenode,twitter,github} > Cloud Engineering / OpenStack | http://blog.oddbit.com/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: autoscaling.yaml Type: application/octet-stream Size: 7926 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: lb_server.yaml Type: application/octet-stream Size: 1020 bytes Desc: not available URL: From jeff.dexter at servicemesh.com Fri May 29 17:24:21 2015 From: jeff.dexter at servicemesh.com (Jeff Dexter) Date: Fri, 29 May 2015 13:24:21 -0400 Subject: [Rdo-list] [heat][ceilometer] [autoscaling] WordPress demo In-Reply-To: References: <20150529135851.GG27921@redhat.com> Message-ID: <114924FC-2F6D-4790-8974-F6638926415E@servicemesh.com> Does the heat stack complete? If not does it show errors. Use the command 'heat event-list my_stack' this will show you the events and what completed and what didn't. If you have errors you can check heat-engine.log for detailed errors Sent from my iPhone > On May 29, 2015, at 10:21 AM, ICHIBA Sara wrote: > > First thank you for your response, I will tell you what I did and you tell me what I need to add. > > I used those two files (plz find them attached) with the command heat stack-create my_stack -f autoscaling.yaml. > > The stack was spawn correctlty and I can see in the dashboard two instances, the database server and the web server. But I can't access the web server and when I take a look at the members of my pool , it says that my web server is inactive. and After a while it just disappear. > > 2015-05-29 15:58 GMT+02:00 Lars Kellogg-Stedman : >> On Fri, May 29, 2015 at 10:34:00AM +0200, ICHIBA Sara wrote: >> > >> > but can't make it work as I have some difficulties to set up the >> > infrastructure needed. >> >> What sort of problems are you having? What is working, and what isn't >> working? >> >> -- >> Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} >> Cloud Engineering / OpenStack | http://blog.oddbit.com/ > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Fri May 29 17:25:00 2015 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 29 May 2015 13:25:00 -0400 Subject: [Rdo-list] OSCON - Open Cloud Day CFP closes tonight Message-ID: <5568A0EC.1030408@redhat.com> CFP ends at midnight tonight for the OSCon Open Cloud Day CFP. === Hi Everyone, Just a quick email to let you know that there is a *new call out for OSCON Open Cloud Day & Ignite proposal submissions*. 
Here are the details: *OSCON - Open Cloud Day* *Tuesday, July 21, 2015 at the Oregon Convention Center* *Deadline: midnight on May 29, 2015* *Link to submit your proposal:* http://www.oscon.com/open-source-2015/public/cfp/395 * Open Cloud Day is an all-day event focused on the /State of Cloud in 2015, /co-sponsored by Red Hat & IBM * They are specifically looking for proposals on these topics: Open Source, IssS, PaaS, Linux Containers, Sofware Defined Networking, and lessons learned in deploying new and rapidly changing technologies * This is a 30 minute presentation for a developer audience focused on the next-generation of scale-out applications in a software defined infrastructure *Ignite OSCON* *Monday**, July 20, 2015 located in the Portland Ballroom at the Oregon Convention Center* *Deadline: midnight on June 5, 2015* *Link to submit your proposal: * http://www.oscon.com/open-source-2015/public/cfp/397 * Any topic is fair game as long as it?s interesting ? - from technology to culture to business to science fiction * This is a 5-minute presentation to deliver something inspirational, lessons learned, or an interesting story * Speakers are limited to 20 slides, which automatically advance after 15 seconds?that?s the fun of Ignite! Act now to send in your OpenStack submissions! -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From ichi.sara at gmail.com Fri May 29 17:41:08 2015 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Fri, 29 May 2015 19:41:08 +0200 Subject: [Rdo-list] [heat][ceilometer] [autoscaling] WordPress demo In-Reply-To: <114924FC-2F6D-4790-8974-F6638926415E@servicemesh.com> References: <20150529135851.GG27921@redhat.com> <114924FC-2F6D-4790-8974-F6638926415E@servicemesh.com> Message-ID: it completes actually without any error 2015-05-29 19:24 GMT+02:00 Jeff Dexter : > Does the heat stack complete? If not does it show errors. Use the command > 'heat event-list my_stack' this will show you the events and what completed > and what didn't. If you have errors you can check heat-engine.log for > detailed errors > > Sent from my iPhone > > On May 29, 2015, at 10:21 AM, ICHIBA Sara wrote: > > First thank you for your response, I will tell you what I did and you tell > me what I need to add. > > I used those two files (plz find them attached) with the command heat > stack-create my_stack -f autoscaling.yaml. > > The stack was spawn correctlty and I can see in the dashboard two > instances, the database server and the web server. But I can't access the > web server and when I take a look at the members of my pool , it says that > my web server is inactive. and After a while it just disappear. > > 2015-05-29 15:58 GMT+02:00 Lars Kellogg-Stedman : > >> On Fri, May 29, 2015 at 10:34:00AM +0200, ICHIBA Sara wrote: >> > < >> http://myopensourcelife.com/2014/09/13/autoscaling-in-openstack-using-heat-and-ceilometer-part-1/ >> > >> > but can't make it work as I have some difficulties to set up the >> > infrastructure needed. >> >> What sort of problems are you having? What is working, and what isn't >> working? >> >> -- >> Lars Kellogg-Stedman | larsks @ >> {freenode,twitter,github} >> Cloud Engineering / OpenStack | http://blog.oddbit.com/ >> >> > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marius at remote-lab.net Sun May 31 14:55:50 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sun, 31 May 2015 16:55:50 +0200 Subject: [Rdo-list] Error showing up in the httpd log file Message-ID: Hi all, I've just got a Kilo installation with packstack and I can see in the httpd logs errors like this: ERROR:scss.expression:Function not found: twbs-font-path:1 ERROR:scss.expression:Function not found: twbs-font-path:1 Any hints what may be causing it? Horizon appears to be working ok. Thanks, Marius