From mohammed.arafa at gmail.com Fri Jan 1 16:53:09 2016 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Fri, 1 Jan 2016 11:53:09 -0500 Subject: [Rdo-list] [rdo-manager] so close yet .. In-Reply-To: References: Message-ID: OK .. I had a typo in undercloud.conf: DHCP_START= = 10.200.3.10 # double equal sign Is there a step where we can verify the undercloud.conf file _before_ running the installer? If yes, can we add it to the documentation? On Wed, Dec 30, 2015 at 9:34 PM, Mohammed Arafa wrote: > I got this far before the installer expired. > I just want to install the undercloud and overcloud and move on. > > br-ctlplane is 10.200.3.1 > > So what am I doing wrong now? > > 2015-12-31 04:27:30 - requests.packages.urllib3.connectionpool - INFO - > Starting new HTTP connection (1): 10.200.3.1 > + rm /tmp/tmp.aJDV8VFmfv > + openstack role show heat_stack_user > +-------+----------------------------------+ > | Field | Value | > +-------+----------------------------------+ > | id | ecccb08da04e4460a3057e8097907e39 | > | name | heat_stack_user | > +-------+----------------------------------+ > ++ os-apply-config --key neutron.dhcp_start --type netaddress > [2015/12/31 04:27:32 AM] [WARNING] DEPRECATED: falling back to > /var/run/os-collect-config/os_config_files.json > [2015/12/31 04:27:32 AM] [ERROR] cannot interpret value '= 10.200.3.10' as > type netaddress > + DHCP_START= > [2015-12-31 04:27:32,361] (os-refresh-config) [ERROR] during > post-configure phase. [Command '['dib-run-parts', > '/usr/libexec/os-refresh-config/post-configure.d']' returned non-zero exit > status 1] > > [2015-12-31 04:27:32,362] (os-refresh-config) [ERROR] Aborting... 
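[Editor's note: the pre-flight check asked for above could be scripted outside the installer. Here is a minimal sketch - hypothetical, not part of instack-undercloud - that flags the double-equals typo before any deploy is attempted:]

```python
# Hypothetical pre-flight linter for undercloud.conf; NOT part of
# instack-undercloud. It flags values that os-apply-config would later
# reject, such as the "DHCP_START= = 10.200.3.10" double-equals typo,
# which configparser silently parses as the value "= 10.200.3.10".
from configparser import ConfigParser

SAMPLE = """\
[DEFAULT]
DHCP_START= = 10.200.3.10
DHCP_END = 10.200.3.90
"""

def lint_undercloud_conf(text):
    cp = ConfigParser()
    cp.read_string(text)
    problems = []
    for key, value in cp.items('DEFAULT'):
        # A value beginning with '=' almost always means "KEY= = value"
        if value.startswith('='):
            problems.append("%s: %r looks like a double '=' typo" % (key, value))
    return problems

for problem in lint_undercloud_conf(SAMPLE):
    print(problem)
```

Running it against a clean file returns an empty list, so it could gate the installer in a wrapper script.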
> Traceback (most recent call last): > File "", line 1, in > File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line > 562, in install > _run_orc(instack_env) > File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line > 494, in _run_orc > _run_live_command(args, instack_env, 'os-refresh-config') > File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line > 325, in _run_live_command > raise RuntimeError('%s failed. See log for details.' % name) > RuntimeError: os-refresh-config failed. See log for details. > Command 'instack-install-undercloud' returned non-zero exit status 1 > > > > -- > > > > > *805010942448935* > > > *GR750055912MA* > > > *Link to me on LinkedIn * > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Mon Jan 4 15:00:03 2016 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 4 Jan 2016 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20160104150003.90CD160A4009@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2016-01-06 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Packaging ](https://etherpad.openstack.org/p/RDO-Packaging) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From rbowen at redhat.com Mon Jan 4 16:12:32 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 4 Jan 2016 11:12:32 -0500 Subject: [Rdo-list] RDO blog posts you missed while celebrating the new year Message-ID: <568A99F0.4090906@redhat.com> Happy 2016! A lot of us have been out over the last few weeks, so the blog traffic has been very slow. We're looking forward to the coming months, as we work towards Mitaka. 
Here's some of the blog posts you may have missed while you were enjoying your New Year's celebration: Integrating classic IT with cloud-native by Gordon Haff This is the fifth and final in a series of posts that delves deeper into the questions that IDC's Mary Johnston Turner and Gary Chen considered in a recent IDC Analyst Connection. The fifth question asked: What types of technologies are available to facilitate the integration of multiple generations of infrastructure and applications as hybrid cloud-native and conventional architectures evolve? ... read more at http://tm3.org/4e RDO Community Day @ FOSDEM - Schedule announced by Rich Bowen The schedule for RDO Community Day at FOSDEM is now available at https://www.rdoproject.org/events/rdo-day-fosdem-2016/. Exact times are not confirmed, but we should have those in the next few days. ... read more at http://tm3.org/4f Ceph single node deployment on Fedora 23 by Daniel P. Berrangé A little while back Cole documented a minimal ceph deployment on Fedora. Unfortunately, since then the "mkcephfs" command has been dropped in favour of the "ceph-deploy" tool. There's various other blog posts talking about ceph-deploy, but none of them had quite the right set of commands to get a working single node deployment - the status would always end up in "HEALTH_WARN", which is pretty much an error state for ceph. After much trial & error I finally figured out the steps that work on Fedora 23. ... 
read more at http://tm3.org/4g -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From andrius at cumulusnetworks.com Mon Jan 4 16:15:23 2016 From: andrius at cumulusnetworks.com (Andrius Benokraitis) Date: Mon, 4 Jan 2016 11:15:23 -0500 Subject: [Rdo-list] Cumulus Linux + RDO "Rack on a Laptop" Hands On Demo Message-ID: <1C4BC5C4-B0CD-4B80-A233-97B0107BC20C@cumulusnetworks.com> Greetings all, Wanted to provide a pretty cool hands-on networking demo with RDO we've recently created: https://support.cumulusnetworks.com/hc/en-us/articles/215832697 Would definitely appreciate any feedback! Thanks very much! Andrius. From dms at redhat.com Mon Jan 4 18:39:46 2016 From: dms at redhat.com (David Moreau Simard) Date: Mon, 4 Jan 2016 13:39:46 -0500 Subject: [Rdo-list] openstack-puppet-modules branches In-Reply-To: References: Message-ID: Were we aware that we essentially had the same issue with the Packstack repository? Packstack uses branch names like "liberty" instead of "stable/liberty"; I just filed a bug for this [1] [1]: https://bugzilla.redhat.com/show_bug.cgi?id=1295503 David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Mon, Dec 14, 2015 at 8:25 PM, Alan Pevec wrote: > Hi all (but specifically Puppet folks in CC :) > > we need to rebase openstack-puppet-modules[1] (short OPM) to Mitaka > while leaving Liberty on the stable branch asap. > OPM is a special case in Delorean, it is built from master-patches, > not master like other projects, because there are always some patches > required by packstack or rdom which are not or cannot be merged in > upstream openstack-puppet. > For Delorean Liberty rdoinfo was not forked, instead Delorean was > modified to try distro and source branch specified in projects.ini[2] > then fallback to default rpm-master and master respectively. 
This > works nicely for most of the projects but fallback is not active when > branch is specified explicitly in rdoinfo, like it is for OPM > special-case[3]. > Alternative solutions are: > 1. fork rdoinfo for Liberty > 2. modify Delorean to support this special case > 3. modify OPM repo to match working schema for other projects > > AD 1. I'm -2, fork for just one special case is unjustified and > keeping everything else in sync would be wasteful. > AD 2. After quick poking at it, clear -2 from me. > AD 3. Rename branches in OPM repo like this: > current master -> upstream-master (verbatim copies of upstream > modules' master branches) > master-patches -> master (non-upstream patches rebased on top of > upstream-master) > current stable/liberty -> upstream-liberty (verbatim copies of > upstream modules' stable/liberty branches) > liberty-patches -> stable/liberty (non-upstream patches rebased on > top of upstream-liberty) > > This would work immediately with current Delorean tooling, only > required change is to remove source-branch in rdoinfo for OPM, > and I hope OPM tooling could be modified easily to handle this change? 
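[Editor's note: the ordering of the AD 3 renames matters - master-patches cannot become master while the old master still exists. A small sketch that just prints the git commands in a collision-free order (it does not touch any repository):]

```python
# Sketch only: prints the branch renames Alan proposes in AD 3 for the
# openstack-puppet-modules repo, ordered so that no rename targets a
# branch name that still exists at that point.
renames = [
    ("master", "upstream-master"),          # verbatim upstream copies
    ("master-patches", "master"),           # patches rebased on upstream-master
    ("stable/liberty", "upstream-liberty"),
    ("liberty-patches", "stable/liberty"),
]
for old, new in renames:
    print("git branch -m %s %s" % (old, new))
```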
> > > Cheers, > Alan > > [1] https://github.com/redhat-openstack/openstack-puppet-modules > [2] https://github.com/redhat-openstack/delorean-instance/blob/2c182dd57c590cb17117d9e114bce72e13d6c394/delorean-user-data.txt#L201-L202 > [3] https://github.com/redhat-openstack/rdoinfo/blob/60e523481def987d6592f0dc6dbdd86016351724/rdo.yml#L491 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From rbowen at redhat.com Mon Jan 4 19:47:26 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 4 Jan 2016 14:47:26 -0500 Subject: [Rdo-list] Upcoming RDO/OpenStack meetups (week of Jan 4) Message-ID: <568ACC4E.5010606@redhat.com> The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/Events If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered. --Rich * Tuesday January 05 in Tehran, IR: Mastering OpenStack - Step 15 - Network Design / Docker on OpenStack - Step 7 - http://www.meetup.com/Iran-OpenStack/events/227796998/ * Wednesday January 06 in Amsterdam, NL: De 2de ITGilde Tech-Talk ?Openstack and Ceph? by Alessandro Vozza - http://www.meetup.com/ITGilde-Cooperatie-Amsterdam-Unix-Linux-Meetups/events/227122004/ * Thursday January 07 in Raleigh, NC, US: Meet for "Building Agile Clouds with OpenStack" - Technical Event! - http://www.meetup.com/Raleigh-Triange-Building-Agile-Clouds-with-OpenStack/events/226396633/ * Thursday January 07 in San Francisco, CA, US: Unified Underlay and Overlay SDNs for OpenStack Clouds - http://www.meetup.com/openstack/events/222218302/ * Thursday January 07 in Istanbul, TR: Ankara 10. 
Meetup, Konu: SDN Basics (VXLAN, OpenFlow, Controllers) - http://www.meetup.com/Turkey-OpenStack-Meetup/events/227291737/ * Saturday January 09 in Bangalore, IN: OpenStack India Meetup and Hackathon - Bangalore - http://www.meetup.com/Indian-OpenStack-User-Group/events/227411441/ * Saturday January 09 in Indore, IN: Cloud Computing & OpenStack - http://www.meetup.com/Linux-Users-Ethical-Hackers-in-Indore/events/227754782/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From ayoung at redhat.com Mon Jan 4 19:52:46 2016 From: ayoung at redhat.com (Adam Young) Date: Mon, 4 Jan 2016 14:52:46 -0500 Subject: [Rdo-list] Upcoming RDO/OpenStack meetups (week of Jan 4) In-Reply-To: <568ACC4E.5010606@redhat.com> References: <568ACC4E.5010606@redhat.com> Message-ID: <568ACD8E.2030305@redhat.com> On 01/04/2016 02:47 PM, Rich Bowen wrote: > The following are the meetups I'm aware of in the coming week where > OpenStack and/or RDO enthusiasts are likely to be present. If you know > of others, please let me know, and/or add them to > http://rdoproject.org/Events > > If there's a meetup in your area, please consider attending. If you > attend, please consider taking a few photos, and possibly even writing > up a brief summary of what was covered. > > --Rich > > * Tuesday January 05 in Tehran, IR: Mastering OpenStack - Step 15 - > Network Design / Docker on OpenStack - Step 7 - > http://www.meetup.com/Iran-OpenStack/events/227796998/ > > * Wednesday January 06 in Amsterdam, NL: De 2de ITGilde Tech-Talk > ?Openstack and Ceph? by Alessandro Vozza - > http://www.meetup.com/ITGilde-Cooperatie-Amsterdam-Unix-Linux-Meetups/events/227122004/ > > * Thursday January 07 in Raleigh, NC, US: Meet for "Building Agile > Clouds with OpenStack" - Technical Event! 
- > http://www.meetup.com/Raleigh-Triange-Building-Agile-Clouds-with-OpenStack/events/226396633/ > > * Thursday January 07 in San Francisco, CA, US: Unified Underlay and > Overlay SDNs for OpenStack Clouds - > http://www.meetup.com/openstack/events/222218302/ > > * Thursday January 07 in Istanbul, TR: Ankara 10. Meetup, Konu: SDN > Basics (VXLAN, OpenFlow, Controllers) - > http://www.meetup.com/Turkey-OpenStack-Meetup/events/227291737/ > > * Saturday January 09 in Bangalore, IN: OpenStack India Meetup and > Hackathon - Bangalore - > http://www.meetup.com/Indian-OpenStack-User-Group/events/227411441/ > > * Saturday January 09 in Indore, IN: Cloud Computing & OpenStack - > http://www.meetup.com/Linux-Users-Ethical-Hackers-in-Indore/events/227754782/ > > > One in Boston Tue Jan 12 http://www.meetup.com/Openstack-Boston/ Is there an automated way to add that to the calendar? From rbowen at redhat.com Mon Jan 4 20:04:21 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 4 Jan 2016 15:04:21 -0500 Subject: [Rdo-list] Upcoming RDO/OpenStack meetups (week of Jan 4) In-Reply-To: <568ACD8E.2030305@redhat.com> References: <568ACC4E.5010606@redhat.com> <568ACD8E.2030305@redhat.com> Message-ID: <568AD045.4010301@redhat.com> On 01/04/2016 02:52 PM, Adam Young wrote: >> >> > One in Boston Tue Jan 12 > > http://www.meetup.com/Openstack-Boston/ > > Is there an automated way to add that to the calendar? Not really automated. The process for adding events to the calendar (at http://rdoproject.org/events ) is a little convoluted. That calendar is built off of the calendar at https://github.com/OSAS/rh-events and pulls in events tagged with RDO. I've been meaning to document this a little better, and will try to get that done this week. However, the event above is already in there, and will be live on the RDO site as soon as the CI gets done and the site rebuilt. 
When I send this mailing, it's just for the upcoming week, but when I put the calendar on the website, it's the upcoming two weeks, and this one is already in there. Perhaps I should go out 2 weeks and 3 weeks, respectively, to give a little more advance notice. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From rbowen at redhat.com Mon Jan 4 20:45:26 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 4 Jan 2016 15:45:26 -0500 Subject: [Rdo-list] Call for Speakers NOW OPEN - OpenStack Summit Austin April 2016 In-Reply-To: <0E7A967D-FE9B-450C-8521-B616110ACEBE@openstack.org> References: <0E7A967D-FE9B-450C-8521-B616110ACEBE@openstack.org> Message-ID: <568AD9E6.9050901@redhat.com> FYI - in case you haven't seen this yet. The time to get your talks in for OpenStack Summit Austin is now. -------- Forwarded Message -------- Subject: [openstack-community] Call for Speakers NOW OPEN - OpenStack Summit Austin April 2016 Date: Tue, 22 Dec 2015 11:03:05 -0600 From: Kendall Waters To: marketing at lists.openstack.org, summitsponsors at lists.openstack.org, Community at lists.openstack.org, women-of-openstack at lists.openstack.org, foundation-board at lists.openstack.org, openstack at lists.openstack.org Hi everyone, The Call for Speakers is now OPEN for the April OpenStack Summit in Austin! Hurry - the deadline to submit a talk is February 1 at 11:59pm PST. NEW: Speakers are limited to a maximum of THREE submissions. Other Summit items now available: * Attendee Registration - Price increase in early March * Call for Sponsors - March 8 deadline * Hotel Discount Room Blocks The Design Summit will be held at the Hilton Austin. Workshops and the Certified OpenStack Administrator exams will be held at the JW Marriott. * Visa Invitation Request * Travel Support Program Application - February 9 deadline If you have any Summit related questions please email summit at openstack.org. 
Cheers, Kendall From rbowen at redhat.com Mon Jan 4 20:52:10 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 4 Jan 2016 15:52:10 -0500 Subject: [Rdo-list] OpenStack Speakers Bureau In-Reply-To: <566F04F9.4060507@openstack.org> References: <566F04F9.4060507@openstack.org> Message-ID: <568ADB7A.6050209@redhat.com> I wanted to point this out to anybody that's interested in speaking at OpenStack-related events. It appears that a lot of progress has been made on the OpenStack Speakers Bureau over the last few weeks. If you want to make yourself available to speak at events: Speaker Profile (https://www.openstack.org/profile/speaker) - Willing to Travel - Countries willing to travel to - Languages Spoken - Areas of Expertise - Links to previous presentations If you are looking for a speaker for your event: Speakers Bureau (https://www.openstack.org/community/speakers) - Search by name, expertise, company - Filter by Language Spoken, Country of Origin, - Countries willing to travel to From rbowen at redhat.com Tue Jan 5 15:26:09 2016 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 5 Jan 2016 10:26:09 -0500 Subject: [Rdo-list] Mitaka 2 test day and doc day Message-ID: <568BE091.2050901@redhat.com> Based on the recent poll, most people didn't have a preference, but there was a slight preference overall to keep the Mitaka 2 test day on Jan 27-28, so we'll stick with that date. Ahead of that, we'd like to do another Docs Day, both to prepare the test matrix for the test day, but also to address the open issues about the existing documentation. I propose the 20th, 21st, for this event - a week ahead of the test day. Please let me know if there's any strong objection to this date. Please take a moment between now and then to look at the open issues - https://github.com/redhat-openstack/website/issues - and open additional issues as you find problems on the website. Thanks! 
-- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From rbowen at redhat.com Tue Jan 5 21:32:23 2016 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 5 Jan 2016 16:32:23 -0500 Subject: [Rdo-list] [Rdo-newsletter] RDO community newsletter, January 2016 Message-ID: <568C3667.4030705@redhat.com> The newsletter is also available at https://www.rdoproject.org/newsletter/2016-january Quick links: * Quick Start - http://rdoproject.org/quickstart * Mailing Lists - http://rdoproject.org/Mailing_lists * RDO packages - http://rdoproject.org/repos/ with the trunk packages in http://rdoproject.org/repos/openstack/openstack-trunk/ * RDO blog - http://rdoproject.org/blog * Q&A - http://ask.openstack.org/ * Open Tickets - http://tm3.org/rdobugs * Twitter - http://twitter.com/rdocommunity * Mitaka release schedule - http://docs.openstack.org/releases/schedules/mitaka.html Thanks for being part of the RDO community! As usual, things slowed down a lot over the New Years break, but they're showing signs of picking up again as we get into the year, and the push towards Mitaka. Community Updates ================== In case you missed it, here's a few highlights from rdo-list that you may have missed. The CFP is now open for the OpenStack Summit in Austin. The CFP itself is at https://goo.gl/q1ru8x and is open until February 1st. This year, for the first time, speakers are limited to three talk submissions. Don't wait for the last minute - get your talks in now. And if you want to run your abstract by someone, there's a number of us on this list who would be glad to look it over for you. There has been considerable progress on the OpenStack Speakers Bureau. If you are willing to speak on any OpenStack related topic, indicate this by updating your profile at https://www.openstack.org/profile/speaker to show your area(s) of expertise, and where you're willing to travel to speak. 
If you're looking for speakers for your meetup, search at https://www.openstack.org/community/speakers for speakers in your area, and contact them from their profile page. You can always catch up on the conversation at http://rdo.fosslists.org/list.html?rdo-list at redhat.com Upcoming Events =============== There's several events on the horizon that you should be aware of. On January 20th and 21st, we invite you to spend an hour or two helping to improve the RDO website and documentation. You can stop by the #RDO channel on Freenode to discuss proposed changes, and you can find the open issues list at https://github.com/redhat-openstack/website/issues The following week, we'll be holding the Mitaka 2 test day, where we'll be putting both RDO/Packstack and RDO Manager through their paces to ensure that RDO users have a successful experience with Mitaka. Test day details will appear at https://www.rdoproject.org/testday/ a little closer to the event. Join us again on #RDO for help and discussion. On the last weekend of January, FOSDEM will once again be held at ULB in Brussels. Details are at https://fosdem.org/2016/ We hope to have talks in the IaaS/Virtualization Devroom at the main event. Also, on the day before FOSDEM - January 29th - we'll be holding an RDO Community Day as part of the FOSDEM Fringe - https://fosdem.org/2016/fringe/ - where we'll be having a variety of talks and open discussions about the progress and future of the RDO project. A preliminary schedule for the event has been posted at https://www.rdoproject.org/events/rdo-day-fosdem-2016/ If you expect to attend, please register at https://goo.gl/Z5zByI so that we know how many to expect for lunch. On February 5th-7th, DevConf.cz will be held in Brno, where many of the RDO engineers are based. We expect there to be OpenStack and RDO content there, and for there to be informal gatherings of RDO enthusiasts at the event. 
Find out more at http://devconf.cz/ On February 12th, the MAD For OpenStack miniconference will be held in Madrid. Register at http://mad4openstack.eventbrite.com Other RDO events, including the many OpenStack meetups around the world, are always listed at http://rdoproject.org/events If you have an RDO-related event, please feel free to add it by submitting a pull request to https://github.com/rbowen/rh-events/blob/master/2016/RDO-Meetups.yml Blog Posts ========== Traffic on RDO-related blogs slowed down over December, as it usually does. But there have been a few great articles that you don't want to miss. Tim Bell, at CERN, blogged about the CERN cloud running RDO Kilo, at http://openstack-in-production.blogspot.com/2015/11/our-cloud-in-kilo.html Adam Young blogged about getting started with TripleO at http://adam.younglogic.com/2015/12/getting-started-with-tripleo/ David Simard introduced WeIRDO at https://dmsimard.com/2015/12/07/thinking-outside-the-box-and-outside-the-gate-to-improve-openstack-and-rdo/ Andrius Benokraitis blogged about setting up a "Rack In A Laptop" with RDO, at https://support.cumulusnetworks.com/hc/en-us/articles/215832697 You can catch up on other RDO-related blogging in these roundups: https://www.rdoproject.org/blog/2015/12/rdo-blog-roundup-week-of-december-8/ https://www.rdoproject.org/blog/2015/12/rdo-blog-roundup-week-of-december-14/ https://www.rdoproject.org/blog/2016/01/rdo-blog-roundup-jan-4-2016/ Packaging meetings ================== Every Wednesday at 15:00 UTC, we have the weekly RDO community meeting on the #RDO channel on Freenode IRC. And at 15:00 UTC Thursdays, we have the CentOS Cloud SIG Meeting on #centos-devel. While there is some overlap in these meetings, the former is more focused on the RDO packaging process, and the latter is focused on the CentOS community, and the CI that happens within the CentOS infrastructure, as well as the other projects that use this infra. 
We encourage you to come listen in on these meetings to get a feel for what's going on in the RDO community. Bug Statistics ============== Chandan Kumar has started posting weekly bug statistics summaries. These are great for tracking the progress of the project, and are also an excellent place to look if you're getting started and looking for something to start working on. You can see the latest of these messages at http://rdo.fosslists.org/thread.html/Zer5przzb89rhai Keep in touch ============= There's lots of ways to stay in touch with what's going on in the RDO community. The best ways are ... WWW * RDO - http://rdoproject.org/ * OpenStack Q&A - http://ask.openstack.org/ Mailing Lists * rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list * This newsletter - http://www.redhat.com/mailman/listinfo/rdo-newsletter IRC * IRC - #rdo on irc.freenode.net * Puppet module development - #rdo-puppet Social Media * Follow us on Twitter - http://twitter.com/rdocommunity * Google+ - http://tm3.org/rdogplus * Facebook - http://facebook.com/rdocommunity Thanks again for being part of the RDO community! -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From chkumar246 at gmail.com Wed Jan 6 04:23:17 2016 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 6 Jan 2016 09:53:17 +0530 Subject: [Rdo-list] RDO Bug Statistics [2016-01-06] Message-ID: # RDO Bugs on 2016-01-06 This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at . 
To report a new bug against RDO, go to: ## Summary - Open (NEW, ASSIGNED, ON_DEV): 373 - Fixed (MODIFIED, POST, ON_QA): 211 ## Number of open bugs by component dib-utils [ 2] diskimage-builder [ 4] + distribution [ 13] ++++++ dnsmasq [ 1] Documentation [ 4] + instack [ 4] + instack-undercloud [ 28] +++++++++++++ iproute [ 1] openstack-ceilometer [ 2] openstack-cinder [ 13] ++++++ openstack-foreman-inst... [ 2] openstack-glance [ 2] openstack-heat [ 5] ++ openstack-horizon [ 2] openstack-ironic [ 2] openstack-ironic-disco... [ 1] openstack-keystone [ 10] ++++ openstack-manila [ 10] ++++ openstack-neutron [ 12] +++++ openstack-nova [ 20] +++++++++ openstack-packstack [ 82] ++++++++++++++++++++++++++++++++++++++++ openstack-puppet-modules [ 16] +++++++ openstack-selinux [ 11] +++++ openstack-swift [ 3] + openstack-tripleo [ 27] +++++++++++++ openstack-tripleo-heat... [ 5] ++ openstack-tripleo-imag... [ 2] openstack-trove [ 1] openstack-tuskar [ 2] openstack-utils [ 1] Package Review [ 10] ++++ python-glanceclient [ 2] python-keystonemiddleware [ 1] python-neutronclient [ 3] + python-novaclient [ 1] python-openstackclient [ 5] ++ python-oslo-config [ 2] rdo-manager [ 51] ++++++++++++++++++++++++ rdo-manager-cli [ 6] ++ rdopkg [ 1] RFEs [ 2] tempest [ 1] ## Open bugs This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed. 
(373 bugs) ### dib-utils (2 bugs) [1263779 ] http://bugzilla.redhat.com/1263779 (NEW) Component: dib-utils Last change: 2015-12-07 Summary: Packstack Ironic admin_url misconfigured in nova.conf [1283812 ] http://bugzilla.redhat.com/1283812 (NEW) Component: dib-utils Last change: 2015-12-10 Summary: local_interface=bond0.120 in undercloud.conf create broken network configuration ### diskimage-builder (4 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version ### distribution (13 bugs) [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1271169 ] http://bugzilla.redhat.com/1271169 (NEW) Component: distribution Last change: 2015-10-13 Summary: [doc] virtual environment setup [1290163 ] http://bugzilla.redhat.com/1290163 (NEW) Component: distribution Last change: 2015-12-10 Summary: Tracker: Review requests for new RDO Mitaka packages [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2016-01-04 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: 
distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1275608 ] http://bugzilla.redhat.com/1275608 (NEW) Component: distribution Last change: 2015-10-27 Summary: EOL'ed rpm file URL not up to date [1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution Last change: 2015-09-16 Summary: Memcached not built with SASL support [1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-12-10 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### Documentation (4 bugs) [1272108 ] http://bugzilla.redhat.com/1272108 (NEW) Component: Documentation Last change: 2015-10-15 Summary: [DOC] External network should be documents in RDO manager installation [1271793 ] http://bugzilla.redhat.com/1271793 (NEW) Component: Documentation Last change: 2015-10-14 Summary: rdo-manager doc has incomplete /etc/hosts configuration [1271888 ] http://bugzilla.redhat.com/1271888 
(NEW) Component: Documentation Last change: 2015-10-15 Summary: step required to build images for overcloud [1272111 ] http://bugzilla.redhat.com/1272111 (NEW) Component: Documentation Last change: 2015-10-15 Summary: RFE : document how to access horizon in RDO manager VIRT setup ### instack (4 bugs) [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (28 bugs) [1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout [1271200 ] http://bugzilla.redhat.com/1271200 (ASSIGNED) Component: instack-undercloud Last change: 2015-10-20 Summary: Overcloud images contain Kilo repos [1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2015-09-23 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 
Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six." [1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1270585 ] http://bugzilla.redhat.com/1270585 (NEW) Component: instack-undercloud Last change: 2015-10-19 Summary: instack isntallation fails with parse error: Invalid string liberty on CentOS [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. (4048492 < 4G)" [1266101 ] http://bugzilla.redhat.com/1266101 (NEW) Component: instack-undercloud Last change: 2015-09-29 Summary: instack-virt-setup fails on CentOS7 [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . 
[1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. See log for details.', 'os-refresh-config')" [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. [1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1214545 ] http://bugzilla.redhat.com/1214545 
(NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output [1266451 ] http://bugzilla.redhat.com/1266451 (NEW) Component: instack-undercloud Last change: 2015-09-30 Summary: instack-undercloud fails to setup seed vm, parse error while creating ssh key [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build- images script ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-12-07 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (2 bugs) [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2016-01-04 Summary: python-redis is not installed with packstack allinone [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Wrong alarms order on 'severity' field ### openstack-cinder (13 bugs) [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2015-11-25 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1268182 ] http://bugzilla.redhat.com/1268182 (NEW) Component: openstack-cinder Last change: 2015-10-02 Summary: 
cinder spontaneously sets instance root device to 'available' [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1229551 ] http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2016-01-04 Summary: should use new names in cinder-dist.conf [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage ### openstack-foreman-installer (2 bugs) [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 
2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance- common: python-glance ### openstack-heat (5 bugs) [1291047 ] http://bugzilla.redhat.com/1291047 (NEW) Component: openstack-heat Last change: 2015-12-22 Summary: (RDO Mitaka) Overcloud deployment failed: Exceeded max scheduling attempts [1293961 ] http://bugzilla.redhat.com/1293961 (ASSIGNED) Component: openstack-heat Last change: 2016-01-06 Summary: [SFCI] Heat template failed to start because Property error: ... net_cidr (constraint not found) [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message ### openstack-horizon (2 bugs) [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable [1275656 ] http://bugzilla.redhat.com/1275656 (NEW) Component: openstack-horizon Last change: 2015-10-28 Summary: FontAwesome lib bad path ### openstack-ironic (2 bugs) [1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: openstack-ironic Last change: 2016-01-04 Summary: IPMI driver for Ironic should support RAID for operating system/root parition [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: 
openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. (HTTP 409) ### openstack-ironic-discoverd (1 bug) [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (10 bugs) [1289267 ] http://bugzilla.redhat.com/1289267 (NEW) Component: openstack-keystone Last change: 2015-12-09 Summary: Mitaka: keystone.py is deprecated for WSGI implementation [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-10-26 Summary: keystone-all process reaches 100% CPU consumption [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-12-07 Summary: keystone: add token flush cronjob script to keystone package [1280530 ] http://bugzilla.redhat.com/1280530 (NEW) Component: openstack-keystone Last change: 2015-11-12 Summary: Fernet tokens cannot read key files with SELInuxz enabeld [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1284871 ] http://bugzilla.redhat.com/1284871 (NEW) Component: openstack-keystone Last change: 2015-11-24 Summary: /usr/share/keystone/wsgi-keystone.conf is missing group=keystone [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 
2015-06-04 Summary: Overridden default for Token Provider points to non- existent class [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-11-24 Summary: wrong log directories in /usr/share/keystone/wsgi- keystone.conf ### openstack-manila (10 bugs) [1278918 ] http://bugzilla.redhat.com/1278918 (NEW) Component: openstack-manila Last change: 2015-12-06 Summary: manila-api fails to start without updates from upstream stable/liberty [1272957 ] http://bugzilla.redhat.com/1272957 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver: same volumes are re-used with vol mapped layout after restarting manila services [1277787 ] http://bugzilla.redhat.com/1277787 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: Glusterfs_driver: Export location for Glusterfs NFS- Ganesha is incorrect [1272960 ] http://bugzilla.redhat.com/1272960 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Glusterfs NFS-Ganesha share's export location should be uniform for both nfsv3 & nfsv4 protocols [1277792 ] http://bugzilla.redhat.com/1277792 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: glusterfs_driver: Access-deny for glusterfs driver should be dynamic [1278919 ] http://bugzilla.redhat.com/1278919 (NEW) Component: openstack-manila Last change: 2015-12-06 Summary: AvailabilityZoneFilter is not working in manila- scheduler [1272962 ] http://bugzilla.redhat.com/1272962 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Attempt to create share fails ungracefully when backend gluster volumes aren't exported [1272970 ] http://bugzilla.redhat.com/1272970 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_native: cannot connect via SSH using password authentication to multiple gluster clusters with different passwords [1272968 ] http://bugzilla.redhat.com/1272968 (NEW) Component: openstack-manila 
Last change: 2015-10-19 Summary: glusterfs vol based layout: Deleting a share created from snapshot should also delete its backend gluster volume [1272958 ] http://bugzilla.redhat.com/1272958 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver - vol based layout: share size may be misleading ### openstack-neutron (12 bugs) [1282403 ] http://bugzilla.redhat.com/1282403 (NEW) Component: openstack-neutron Last change: 2015-11-23 Summary: Errors when running tempest.api.network.test_ports with IPAM reference driver enabled [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1269610 ] http://bugzilla.redhat.com/1269610 (ASSIGNED) Component: openstack-neutron Last change: 2015-11-19 Summary: Overcloud deployment fails - openvswitch agent is not running and nova instances end up in error state [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". 
[1266381 ] http://bugzilla.redhat.com/1266381 (NEW) Component: openstack-neutron Last change: 2015-12-22 Summary: OpenStack Liberty QoS feature is not working on EL7 as is need MySQL-python-1.2.5 [1281308 ] http://bugzilla.redhat.com/1281308 (NEW) Component: openstack-neutron Last change: 2015-12-30 Summary: QoS policy is not enforced when using a previously used port [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks [1280258 ] http://bugzilla.redhat.com/1280258 (NEW) Component: openstack-neutron Last change: 2015-11-11 Summary: tenants seem like they are able to detach admin enforced QoS policies from ports or networks [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https [1065826 ] http://bugzilla.redhat.com/1065826 (ASSIGNED) Component: openstack-neutron Last change: 2015-12-15 Summary: [RFE] [neutron] neutron services needs more RPM granularity ### openstack-nova (20 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Kilo assigning ipv6 address, even though its disabled. 
[1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: horizon console uses http when horizon is set to use ssl [1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: novnc init script doesnt write to log [1271033 ] http://bugzilla.redhat.com/1271033 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: nova.conf.sample is out of date [1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1278808 ] http://bugzilla.redhat.com/1278808 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Guest fails to use more than 1 vCPU with smpboot: do_boot_cpu failed(-1) to wakeup [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Nova - db connection string present on compute nodes [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: nova object store allow get object after date exires [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: nova: fail to edit project quota with DataError from nova [1294747 ] http://bugzilla.redhat.com/1294747 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Migration fails when the SRIOV PF is not online [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-10-17 Summary: Ensure translations are 
installed correctly and picked up at runtime [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Nova AVC messages [1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2016-01-04 Summary: logrotate should copytruncate to avoid openstack logging to deleted files [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Installation of openstack-nova-compute fails on PowerKVM [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: v4-fixed-ip= not working with juno nova networking ### openstack-packstack (82 bugs) [1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any tenant [1284182 ] http://bugzilla.redhat.com/1284182 (NEW) Component: openstack-packstack Last change: 2015-11-21 Summary: Unable start Keystone, core dump [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-11-06 Summary: Error: service-update is not currently supported by the keystone sql driver [1188491 ] http://bugzilla.redhat.com/1188491 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-03 Summary: Packstack wording is unclear for demo and testing provisioning. 
[1201612 ] http://bugzilla.redhat.com/1201612 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-03 Summary: Interactive - Packstack asks for Tempest details even when Tempest install is declined [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network- scripts/ifcfg-br-ex [1292271 ] http://bugzilla.redhat.com/1292271 (NEW) Component: openstack-packstack Last change: 2015-12-18 Summary: Receive Msg 'Error: Could not find user glance' [1275803 ] http://bugzilla.redhat.com/1275803 (NEW) Component: openstack-packstack Last change: 2015-12-03 Summary: packstack --allinone fails on Fedora 22-3 during _keystone.pp [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack [1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails [1255369 ] http://bugzilla.redhat.com/1255369 (NEW) Component: openstack-packstack Last change: 2015-12-03 Summary: Improve session settings for horizon [1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 
Summary: swift: Admin user does not have permissions to see containers created by glance service [1254389 ] http://bugzilla.redhat.com/1254389 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-13 Summary: Can no longer run packstack to maintain cluster [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1232455 ] http://bugzilla.redhat.com/1232455 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Errors install kilo on fedora21 [1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config [1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands' [1063393 ] http://bugzilla.redhat.com/1063393 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-02 Summary: RFE: Provide option to set bind_host/bind_port for API services [1290415 ] http://bugzilla.redhat.com/1290415 (NEW) Component: openstack-packstack Last change: 2015-12-10 Summary: Error: Unable to retrieve volume limit information when accessing System Defaults in Horizon [1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail [1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack 
Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can [1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start [1269535 ] http://bugzilla.redhat.com/1269535 (NEW) Component: openstack-packstack Last change: 2015-10-07 Summary: packstack script does not test to see if the rc files *were* created. [1282746 ] http://bugzilla.redhat.com/1282746 (NEW) Component: openstack-packstack Last change: 2015-12-04 Summary: Swift's proxy-server is not configured to use ceilometer [1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance [1242647 ] http://bugzilla.redhat.com/1242647 (NEW) Component: openstack-packstack Last change: 2015-12-07 Summary: Nova keypair doesn't work with Nova Networking [1239027 ] http://bugzilla.redhat.com/1239027 (NEW) Component: openstack-packstack Last change: 2015-12-07 Summary: please move httpd log files to corresponding dirs [1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1116019 ] http://bugzilla.redhat.com/1116019 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-02 Summary: AMQP1.0 server configurations needed [1266196 ] http://bugzilla.redhat.com/1266196 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Packstack Fails on prescript.pp with "undefined method 'unsafe_load_file' for Psych:Module" [1184806 ] http://bugzilla.redhat.com/1184806 (NEW) Component: openstack-packstack Last change: 2015-12-02 Summary: [RFE] Packstack should support deploying Nova and Glance with RBD images and Ceph as a backend [1270770 ] http://bugzilla.redhat.com/1270770 (NEW) Component: openstack-packstack Last change: 2015-10-12 Summary: Packstack generated 
CONFIG_MANILA_SERVICE_IMAGE_LOCATION points to a dropbox link [1279642 ] http://bugzilla.redhat.com/1279642 (NEW) Component: openstack-packstack Last change: 2015-11-09 Summary: Packstack run fails when running with DEMO [1200129 ] http://bugzilla.redhat.com/1200129 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-03 Summary: [RFE] add support for ceilometer workload partitioning via tooz/redis [1194678 ] http://bugzilla.redhat.com/1194678 (NEW) Component: openstack-packstack Last change: 2015-12-03 Summary: On aarch64, nova.conf should default to vnc_enabled=False [1293693 ] http://bugzilla.redhat.com/1293693 (NEW) Component: openstack-packstack Last change: 2015-12-23 Summary: Keystone setup fails on missing required parameter [1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest [1286995 ] http://bugzilla.redhat.com/1286995 (NEW) Component: openstack-packstack Last change: 2015-12-07 Summary: PackStack should configure LVM filtering with LVM/iSCSI [1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred at during setup Ironic via packstack. 
Invalid parameter rabbit_user

[1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd]
[1279641 ] http://bugzilla.redhat.com/1279641 (NEW) Component: openstack-packstack Last change: 2015-11-09 Summary: Packstack run does not install keystoneauth1
[1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-11-21 Summary: Packstack --allinone fails while starting HTTPD service
[1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp
[1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tmp/packstack/b77f37620d9f4794b6f38730442962b6/manifests/xxx.xxx.xxx.xxx_swift.pp:90
[1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found
[1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image
[1276277 ] http://bugzilla.redhat.com/1276277 (NEW) Component: openstack-packstack Last change: 2015-10-31 Summary: packstack --allinone fails on CentOS 7 x86_64 1503-01
[1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting
[903645 ] http://bugzilla.redhat.com/903645 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-02 Summary: RFE: Include the ability in PackStack to support SSL for all REST services and message bus communication
[1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend.
[1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced
[1265816 ] http://bugzilla.redhat.com/1265816 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Manila Puppet Module Expects Glance Endpoint to Be Available for Upload of Service Image
[1289761 ] http://bugzilla.redhat.com/1289761 (NEW) Component: openstack-packstack Last change: 2015-12-10 Summary: PackStack installs Nova crontab that nova user can't run
[1286828 ] http://bugzilla.redhat.com/1286828 (NEW) Component: openstack-packstack Last change: 2015-12-04 Summary: Packstack should have the option to install QoS (neutron)
[1283261 ] http://bugzilla.redhat.com/1283261 (NEW) Component: openstack-packstack Last change: 2015-12-07 Summary: ceilometer-nova is not configured
[1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service
[1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service"
[1264843 ] http://bugzilla.redhat.com/1264843 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list iptables-ipv6' returned 1: Error: No matching Packages to list
[1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack, when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eno50:eno50, encounters an error: "ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp"
[1285494 ] http://bugzilla.redhat.com/1285494 (NEW) Component: openstack-packstack Last change: 2015-11-25 Summary: openstack-packstack-7.0.0-0.5.dev1661.gaf13b7e.el7.noarch cripples(?) httpd.conf
[1227298 ] http://bugzilla.redhat.com/1227298 (NEW) Component: openstack-packstack Last change: 2015-12-03 Summary: Packstack should support MTU settings
[1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on
[1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters
[1005073 ] http://bugzilla.redhat.com/1005073 (NEW) Component: openstack-packstack Last change: 2015-12-02 Summary: [RFE] Please add glance and nova lib folder config
[1168113 ] http://bugzilla.redhat.com/1168113 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-03 Summary: The warning message "NetworkManager is active" appears even when the NetworkManager is inactive
[1172310 ] http://bugzilla.redhat.com/1172310 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-03 Summary: support Keystone LDAP
[1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/172.16.32.71_mariadb.pp:28 on node
[1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in "failed" status when CONFIG_KEYSTONE_SERVICE_NAME=httpd
[1202922 ] http://bugzilla.redhat.com/1202922 (NEW) Component: openstack-packstack Last change: 2015-12-03 Summary: packstack key injection fails with legacy networking (Nova networking)
[1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server]
[1282928 ] http://bugzilla.redhat.com/1282928 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-09 Summary: Trove-api fails to start when deployed using packstack on RHEL 7.2 RC1.1
[1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail
[1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel
[1271246 ] http://bugzilla.redhat.com/1271246 (NEW) Component: openstack-packstack Last change: 2015-10-13 Summary: packstack failed to start nova.api
[1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest
[1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server

### openstack-puppet-modules (16 bugs)

[1288533 ] http://bugzilla.redhat.com/1288533 (NEW) Component: openstack-puppet-modules Last change: 2015-12-04 Summary: packstack fails on installing mongodb
[1289309 ] http://bugzilla.redhat.com/1289309 (NEW) Component: openstack-puppet-modules Last change: 2015-12-07 Summary: Neutron module needs updating in OPM
[1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering
[1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm
[1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API.
[1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled
[1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication
[1271138 ] http://bugzilla.redhat.com/1271138 (NEW) Component: openstack-puppet-modules Last change: 2015-12-16 Summary: puppet module for manila should include service type - shareV2
[1285900 ] http://bugzilla.redhat.com/1285900 (NEW) Component: openstack-puppet-modules Last change: 2015-11-26 Summary: Typo in log file name for trove-guestagent
[1285897 ] http://bugzilla.redhat.com/1285897 (NEW) Component: openstack-puppet-modules Last change: 2015-11-26 Summary: trove-guestagent.conf should define the configuration for backups
[1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size
[1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200
[1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm
[1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log
[1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing
[1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start

### openstack-selinux (11 bugs)

[1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail
[1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2016-01-04 Summary: keystone fails to start: raise exception.ConfigFileNotFound(config_file=paste_config_value)
[1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux
[1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages
[1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000
[1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal;
[1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC
[1284879 ] http://bugzilla.redhat.com/1284879 (NEW) Component: openstack-selinux Last change: 2015-11-24 Summary: Keystone via mod_wsgi is missing permission to read /etc/keystone/fernet-keys
[1268124 ] http://bugzilla.redhat.com/1268124 (NEW) Component: openstack-selinux Last change: 2016-01-04 Summary: Nova rootwrap-daemon requires a selinux exception
[1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux
[1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied

### openstack-swift (3 bugs)

[1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files
[1274308 ] http://bugzilla.redhat.com/1274308 (NEW) Component: openstack-swift Last change: 2015-12-22 Summary: Consistently occurring swift related failures in RDO with a HA deployment
[1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations

### openstack-tripleo (27 bugs)

[1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA
[1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos
[1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever
[1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic in to TripleO
[1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever
[1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix
[1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted
[1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-10-09 Summary: Node registration fails silently if instackenv.json is badly formatted
[1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful
[1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead
[1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse
[1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect
[1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes
[1277980 ] http://bugzilla.redhat.com/1277980 (NEW) Component: openstack-tripleo Last change: 2015-12-11 Summary: missing python-proliantutils
[1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created
[1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API
[1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar
[1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials
[1284664 ] http://bugzilla.redhat.com/1284664 (NEW) Component: openstack-tripleo Last change: 2015-11-23 Summary: NtpServer is passed as string by "openstack overcloud deploy"
[1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI
[1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack-build-images
[1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter
[1277990 ] http://bugzilla.redhat.com/1277990 (NEW) Component: openstack-tripleo Last change: 2015-11-04 Summary: openstack-ironic-inspector-dnsmasq.service: failed to start during undercloud installation
[1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6
[1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints
[1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack"
[1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf

### openstack-tripleo-heat-templates (5 bugs)

[1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template
[1266027 ] http://bugzilla.redhat.com/1266027 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-08 Summary: TripleO should use pymysql database driver since Liberty
[1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1
[1271411 ] http://bugzilla.redhat.com/1271411 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-13 Summary: Unable to deploy internal api endpoint for keystone on a different network to admin api
[1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template

### openstack-tripleo-image-elements (2 bugs)

[1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux
[1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist

### openstack-trove (1 bug)

[1290156 ] http://bugzilla.redhat.com/1290156 (NEW) Component: openstack-trove Last change: 2015-12-09 Summary: Move guestagent settings to default section

### openstack-tuskar (2 bugs)

[1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails
[1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state

### openstack-utils (1 bug)

[1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2016-01-04 Summary: Can't enable OpenStack service after openstack-service disable

### Package Review (10 bugs)

[1283295 ] http://bugzilla.redhat.com/1283295 (NEW) Component: Package Review Last change: 2015-11-18 Summary: Review Request: CloudKitty - Rating as a Service
[1272524 ] http://bugzilla.redhat.com/1272524 (ASSIGNED) Component: Package Review Last change: 2015-12-03 Summary: Review Request: openstack-mistral - workflow Service for OpenStack cloud
[1290090 ] http://bugzilla.redhat.com/1290090 (ASSIGNED) Component: Package Review Last change: 2015-12-10 Summary: Review Request: python-networking-midonet
[1290308 ] http://bugzilla.redhat.com/1290308 (NEW) Component: Package Review Last change: 2015-12-10 Summary: Review Request: python-midonetclient
[1288149 ] http://bugzilla.redhat.com/1288149 (NEW) Component: Package Review Last change: 2015-12-07 Summary: Review Request: python-os-win - Windows / Hyper-V library for OpenStack projects
[1268372 ] http://bugzilla.redhat.com/1268372 (ASSIGNED) Component: Package Review Last change: 2015-12-02 Summary: Review Request: openstack-app-catalog-ui - openstack horizon plugin for the openstack app-catalog
[1272513 ] http://bugzilla.redhat.com/1272513 (ASSIGNED) Component: Package Review Last change: 2015-11-05 Summary: Review Request: Murano - is an application catalog for OpenStack
[1293948 ] http://bugzilla.redhat.com/1293948 (NEW) Component: Package Review Last change: 2015-12-23 Summary: Review Request: python-kuryr
[1292794 ] http://bugzilla.redhat.com/1292794 (NEW) Component: Package Review Last change: 2016-01-05 Summary: Review Request: openstack-magnum - Container Management project for OpenStack
[1279513 ] http://bugzilla.redhat.com/1279513 (ASSIGNED) Component: Package Review Last change: 2015-11-13 Summary: New Package: python-dracclient

### python-glanceclient (2 bugs)

[1244291 ] http://bugzilla.redhat.com/1244291 (ASSIGNED) Component: python-glanceclient Last change: 2015-10-21 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py
[1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0

### python-keystonemiddleware (1 bug)

[1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-10-26 Summary: Rebase python-keystonemiddleware to version 1.3

### python-neutronclient (3 bugs)

[1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility
[1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long
[1281352 ] http://bugzilla.redhat.com/1281352 (NEW) Component: python-neutronclient Last change: 2015-11-12 Summary: Internal server error when running qos-bandwidth-limit-rule-update as a tenant

### python-novaclient (1 bug)

[1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six

### python-openstackclient (5 bugs)

[1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: Usage is not described accurately for 99% of openstack baremetal
[1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: `openstack ip floating delete` fails if we specify IP address as input
[1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp
[1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: Add --user to project list command to filter projects by user
[1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: appdirs requirement

### python-oslo-config (2 bugs)

[1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2016-01-04 Summary: oslo_config != oslo.config
[1282093 ] http://bugzilla.redhat.com/1282093 (NEW) Component: python-oslo-config Last change: 2016-01-04 Summary: please rebase oslo.log to 1.12.0

### rdo-manager (51 bugs)

[1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment
[1269657 ] http://bugzilla.redhat.com/1269657 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support configuration of default subnet pools
[1264526 ] http://bugzilla.redhat.com/1264526 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Deployment of Undercloud
[1273574 ] http://bugzilla.redhat.com/1273574 (ASSIGNED) Component: rdo-manager Last change: 2015-10-22 Summary: rdo-manager liberty, delete node is failing
[1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built
[1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS.
[1274060 ] http://bugzilla.redhat.com/1274060 (NEW) Component: rdo-manager Last change: 2015-10-23 Summary: [SELinux][RHEL7] openstack-ironic-inspector-dnsmasq.service fails to start with SELinux enabled
[1294599 ] http://bugzilla.redhat.com/1294599 (NEW) Component: rdo-manager Last change: 2015-12-29 Summary: Virtual environment overcloud deploy fails with default memory allocation
[1269655 ] http://bugzilla.redhat.com/1269655 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support deploying VPNaaS
[1271336 ] http://bugzilla.redhat.com/1271336 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Enable configuration of OVS ARP Responder
[1269890 ] http://bugzilla.redhat.com/1269890 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support IPv6
[1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles
[1270818 ] http://bugzilla.redhat.com/1270818 (NEW) Component: rdo-manager Last change: 2015-11-25 Summary: Two ironic-inspector processes are running on the undercloud, breaking the introspection
[1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP)
[1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment
[1270370 ] http://bugzilla.redhat.com/1270370 (NEW) Component: rdo-manager Last change: 2015-11-25 Summary: [RDO-Manager] bulk introspection moving the nodes from available to manageable too quickly [getting: NodeLocked:]
[1269002 ] http://bugzilla.redhat.com/1269002 (ASSIGNED) Component: rdo-manager Last change: 2015-10-14 Summary: instack-undercloud: overcloud HA deployment fails - the rabbitmq doesn't run on the controllers.
[1271232 ] http://bugzilla.redhat.com/1271232 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: tempest_lib.exceptions.Conflict: An object with that identifier already exists
[1270805 ] http://bugzilla.redhat.com/1270805 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: Glance client returning 'Expected endpoint'
[1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start
[1271317 ] http://bugzilla.redhat.com/1271317 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: instack-virt-setup fails: error Running install-packages install
[1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone
[1272376 ] http://bugzilla.redhat.com/1272376 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: Duplicate nova hypervisors after rebooting compute nodes
[1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection
[1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. There are not enough hosts available., Code: 500"
[1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-11-25 Summary: rfe: support Keystone HTTPD
[1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Read bit set for others for Openstack services directories in /etc
[1273121 ] http://bugzilla.redhat.com/1273121 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: openstack help returns errors
[1270910 ] http://bugzilla.redhat.com/1270910 (ASSIGNED) Component: rdo-manager Last change: 2015-10-15 Summary: IP address from external subnet gets assigned to br-ex when using default single-nic-vlans templates
[1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree"
[1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon
[1294085 ] http://bugzilla.redhat.com/1294085 (NEW) Component: rdo-manager Last change: 2016-01-04 Summary: Creating an instance on RDO overcloud, errors out
[1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack
[1272167 ] http://bugzilla.redhat.com/1272167 (NEW) Component: rdo-manager Last change: 2015-11-16 Summary: [RFE] Support enabling the port security extension
[1294683 ] http://bugzilla.redhat.com/1294683 (NEW) Component: rdo-manager Last change: 2016-01-01 Summary: instack-undercloud: "openstack undercloud install" throws errors and then gets stuck due to selinux.
[1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment.
[1269622 ] http://bugzilla.redhat.com/1269622 (NEW) Component: rdo-manager Last change: 2015-11-16 Summary: [RFE] support override of API and RPC worker counts
[1271289 ] http://bugzilla.redhat.com/1271289 (NEW) Component: rdo-manager Last change: 2015-11-18 Summary: overcloud-novacompute stuck in spawning state
[1269894 ] http://bugzilla.redhat.com/1269894 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Add creation of demo tenant, network and installation of demo images
[1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure
[1269661 ] http://bugzilla.redhat.com/1269661 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Supporting SR-IOV enabled deployments
[1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url"
[1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images
[1273541 ] http://bugzilla.redhat.com/1273541 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: RDO-Manager needs epel.repo enabled (otherwise undercloud deployment fails.)
[1292253 ] http://bugzilla.redhat.com/1292253 (NEW) Component: rdo-manager Last change: 2016-01-01 Summary: Production + EPEL + yum-plugin-priorities results in wrong version of hiera
[1271726 ] http://bugzilla.redhat.com/1271726 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: 1 of the overcloud VMs (nova) is stuck in spawning state
[1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device-mapper*
[1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci
[1273680 ] http://bugzilla.redhat.com/1273680 (ASSIGNED) Component: rdo-manager Last change: 2015-10-21 Summary: HA overcloud with network isolation deployment fails
[1276097 ] http://bugzilla.redhat.com/1276097 (NEW) Component: rdo-manager Last change: 2015-10-31 Summary: dnsmasq-dhcp: DHCPDISCOVER no address available
[1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable

### rdo-manager-cli (6 bugs)

[1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step
[1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the output of openstack management plan show --long command is not readable
[1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value
[1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL)
[1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command
[1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling

### rdopkg (1 bug)

[1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download

### RFEs (2 bugs)

[1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot
[1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool

### tempest (1 bug)

[1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo-manager

## Fixed bugs

This is a list of "fixed" bugs by component. A "fixed" bug is in state MODIFIED, POST, or ON_QA, meaning a fix exists but has not yet been verified. You can help out by testing the fix to make sure it works as intended.
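The open/fixed split in this report comes purely from each bug's Bugzilla status field. As an illustration only (this helper is hypothetical, not part of any RDO tooling), the classification rule boils down to:

```python
# A bug counts as "fixed" in this report once its Bugzilla status reaches
# MODIFIED, POST, or ON_QA; NEW and ASSIGNED bugs are still open.
FIXED_STATES = {"MODIFIED", "POST", "ON_QA"}
OPEN_STATES = {"NEW", "ASSIGNED"}

def split_bugs(bugs):
    """Partition (bug_id, status) pairs into open and fixed id lists."""
    open_bugs, fixed_bugs = [], []
    for bug_id, status in bugs:
        if status in FIXED_STATES:
            fixed_bugs.append(bug_id)
        elif status in OPEN_STATES:
            open_bugs.append(bug_id)
    return open_bugs, fixed_bugs

# Example with bug ids taken from the lists above:
open_bugs, fixed_bugs = split_bugs([
    (1271726, "NEW"),       # rdo-manager: overcloud VM stuck in spawning state
    (1228761, "MODIFIED"),  # diskimage-builder: DIB_YUM_REPO_CONF breakage
    (1194230, "POST"),      # ceilometer: sudoers permissions
])
```

The statuses shown for each bug are the ones reported in this listing; a bug may have moved on since the report was generated.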
(211 bugs)

### diskimage-builder (1 bug)

[1228761 ] http://bugzilla.redhat.com/1228761 (MODIFIED) Component: diskimage-builder Last change: 2015-09-23 Summary: DIB_YUM_REPO_CONF points to two files and that breaks imagebuilding

### distribution (6 bugs)

[1265690 ] http://bugzilla.redhat.com/1265690 (ON_QA) Component: distribution Last change: 2015-09-28 Summary: Update python-networkx to 1.10
[1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2016-01-04 Summary: update el6 icehouse kombu packages for improved performance
[1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api
[1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr
[1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO
[1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack-neutron-*aas

### instack-undercloud (2 bugs)

[1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six"
[1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined

### openstack-ceilometer (10 bugs)

[1265708 ] http://bugzilla.redhat.com/1265708 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Ceilometer requires pymongo>=3.0.2
[1265721 ] http://bugzilla.redhat.com/1265721 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: File /etc/ceilometer/meters.yaml missing
[1263839 ] http://bugzilla.redhat.com/1263839 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: openstack-ceilometer should requires python-oslo-policy in kilo
[1265746 ] http://bugzilla.redhat.com/1265746 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Options 'disable_non_metric_meters' and 'meter_definitions_cfg_file' are missing from ceilometer.conf
[1194230 ] http://bugzilla.redhat.com/1194230 (POST) Component: openstack-ceilometer Last change: 2016-01-04 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions
[1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: openstack-ceilometer-common missing python-babel dependency
[1287252 ] http://bugzilla.redhat.com/1287252 (POST) Component: openstack-ceilometer Last change: 2016-01-04 Summary: openstack-ceilometer-alarm-notifier does not start: unit file is missing
[1271002 ] http://bugzilla.redhat.com/1271002 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Ceilometer dbsync failing during HA deployment
[1265818 ] http://bugzilla.redhat.com/1265818 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: ceilometer polling agent does not start
[1214928 ] http://bugzilla.redhat.com/1214928 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: package ceilometermiddleware missing

### openstack-cinder (5 bugs)

[1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0]
[1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder
[1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO)
[994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo]
[1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume)

### openstack-glance (5 bugs)

[1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests
[1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue
[1278962 ] http://bugzilla.redhat.com/1278962 (ON_QA) Component: openstack-glance Last change: 2015-11-13 Summary: python-cryptography requires pyasn1>=0.1.8 but only 0.1.6 is available in Centos
[1268146 ] http://bugzilla.redhat.com/1268146 (ON_QA) Component: openstack-glance Last change: 2015-10-02 Summary: openstack-glance-registry will not start: missing systemd dependency
[1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files

### openstack-heat (3 bugs)

[1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack-heat
[1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE
[1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: missing dependency in Heat delorean build

### openstack-horizon (1 bug)

[1219221 ]
http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing ### openstack-ironic-discoverd (1 bug) [1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-neutron (14 bugs) [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2016-01-04 Summary: neutron should not specify signing_dir in neutron- dist.conf [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. 
neutron-db-manage trying to import systemd [1281920 ] http://bugzilla.redhat.com/1281920 (POST) Component: openstack-neutron Last change: 2015-11-16 Summary: neutron-server will not start: fails with pbr version issue [1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev [1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack- neutron-openvswitch installed [1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service [1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id' [1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini [1270325 ] http://bugzilla.redhat.com/1270325 (MODIFIED) Component: openstack-neutron Last change: 2015-10-19 Summary: neutron-ovs-cleanup fails to start with bad path to ovs plugin configuration ### openstack-nova (5 bugs) [1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2016-01-04 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail [1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2016-01-04 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options [1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2016-01-04 Summary: openstack-nova-compute fails to start because python- psutil is missing after installing with packstack [958411 ] http://bugzilla.redhat.com/958411 
(ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence. [1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2016-01-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all ### openstack-packstack (70 bugs) [1252483 ] http://bugzilla.redhat.com/1252483 (POST) Component: openstack-packstack Last change: 2015-12-07 Summary: Demo network provisioning: public and private are shared, private has no tenant [1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db. [1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error [1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/ manifests/192.168.122.82_api_nova.pp:41:3 [976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-10-07 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local) [1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed [1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled [1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: 
openstack-packstack Last change: 2016-01-04 Summary: Cannot start nova-network on juno - Centos7 [1288179 ] http://bugzilla.redhat.com/1288179 (POST) Component: openstack-packstack Last change: 2015-12-08 Summary: Mitaka: Packstack image provisioning fails with "Store filesystem could not be configured correctly" [1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email [1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils [1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes [1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error [958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails [1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. 
[1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack [1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in [1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning [979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules [1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack icehouse doesn't install anything because of repo [1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non- localhost connections to 'guest' user [1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5 [957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios [995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2016-01-04 Summary: RFE: support setting up apache to serve keystone requests [1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1 [1259354 ] http://bugzilla.redhat.com/1259354 (MODIFIED) Component: openstack-packstack Last change: 2015-11-10 Summary: 
When pre-creating a vg of cinder-volumes packstack fails with an error [990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2016-01-04 Summary: rdo release RPM not installed on all fedora hosts [1266028 ] http://bugzilla.redhat.com/1266028 (POST) Component: openstack-packstack Last change: 2015-12-15 Summary: Packstack should use pymysql database driver since Liberty [1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required [1290429 ] http://bugzilla.redhat.com/1290429 (POST) Component: openstack-packstack Last change: 2015-12-10 Summary: Packstack does not correctly configure Nova notifications for Neutron in Mitaka-1 [1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"? 
[1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre [1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed [1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5 [1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl [1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2016-01-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp [1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value [1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available [1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2016-01-04 Summary: packstack requires 2 runs to install ceilometer [1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface" [1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive [1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) 
Component: openstack-packstack Last change: 2016-01-04 Summary: packstack configures br-ex to use gateway ip [1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/N ova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[ net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required [1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added [1150652 ] http://bugzilla.redhat.com/1150652 (POST) Component: openstack-packstack Last change: 2015-12-07 Summary: PackStack does not provide an option to register hosts to Red Hat Satellite 6 [1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1295503 ] http://bugzilla.redhat.com/1295503 (MODIFIED) Component: openstack-packstack Last change: 2016-01-05 Summary: Packstack master branch is in the liberty repositories (was: Packstack installation fails with unsupported db backend) [956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond [1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance [1265661 ] http://bugzilla.redhat.com/1265661 (POST) Component: openstack-packstack Last change: 2015-09-23 Summary: Packstack does not install Sahara services (RDO Liberty) [1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-10-23 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7 [1124982 ] http://bugzilla.redhat.com/1124982 (POST) Component: 
openstack-packstack Last change: 2015-12-09 Summary: Help text for SSL is incorrect regarding passphrase on the cert [974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) Component: openstack-packstack Last change: 2016-01-04 Summary: please give greater control over use of EPEL [1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl [1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default [1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent [1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface [1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL [1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32 [1285314 ] http://bugzilla.redhat.com/1285314 (POST) Component: openstack-packstack Last change: 2015-12-09 Summary: Packstack needs to support aodh services since Mitaka [991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted [1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7) [1187412 ] http://bugzilla.redhat.com/1187412 (POST) Component: openstack-packstack Last change: 
2015-12-09 Summary: Script wording for service installation should be consistent [1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels [1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution [964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user [1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Change packstack to use openstack-puppet-modules [1269158 ] http://bugzilla.redhat.com/1269158 (POST) Component: openstack-packstack Last change: 2015-10-19 Summary: Sahara configuration should be affected by heat availability (broken by default right now) [1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher ### openstack-puppet-modules (19 bugs) [1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2016-01-04 Summary: cinder modules require glance installed [1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables- services [1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure" [1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support 
[1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2016-01-04 Summary: explicit check for pymongo is incorrect [1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors [1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation [1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD configuration breaks GSSAPI [1270957 ] http://bugzilla.redhat.com/1270957 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-10-13 Summary: Undercloud install fails on Error: Could not find class ::ironic::inspector for instack on node instack [1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by packstack for demo tenant is wrongly marked as external [1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: Could not find command 'restorecon' [1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator [1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno [1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6 [1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2016-01-04 Summary: netns.py syntax error [1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: 
openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance [1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance [1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig /network-scripts/ifcfg-br-{int,tun} [1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2016-01-04 Summary: prescript.pp does not ensure iptables-services package installation ### openstack-sahara (2 bugs) [1290387 ] http://bugzilla.redhat.com/1290387 (POST) Component: openstack-sahara Last change: 2015-12-10 Summary: openstack-sahara-api fails to start in Mitaka-1, cannot find api-paste.ini [1268235 ] http://bugzilla.redhat.com/1268235 (MODIFIED) Component: openstack-sahara Last change: 2015-10-02 Summary: rootwrap filter not included in Sahara RPM ### openstack-selinux (13 bugs) [1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7) [1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2016-01-04 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicO ps fails to launch instance w/ selinux enforcing [1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications [1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors [1219406 ] http://bugzilla.redhat.com/1219406 (MODIFIED) Component: openstack-selinux Last change: 2015-11-06 Summary: Glance over nfs fails due to selinux [1099042 ] 
http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp [1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances, [1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service [1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?" [1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux policies set for neutron-dhcp-agent [1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later [1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access [1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors ### openstack-swift (1 bug) [997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages ### openstack-tripleo-heat-templates (2 bugs) [1235508 ] http://bugzilla.redhat.com/1235508 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-09-29 Summary: Package update does not take puppet managed packages into account [1272572 ] http://bugzilla.redhat.com/1272572 (POST) 
Component: openstack-tripleo-heat-templates Last change: 2015-12-10 Summary: Error: Unable to retrieve volume limit information when accessing System Defaults in Horizon ### openstack-trove (2 bugs) [1278608 ] http://bugzilla.redhat.com/1278608 (MODIFIED) Component: openstack-trove Last change: 2015-11-06 Summary: trove-api fails to start [1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies ### openstack-tuskar (1 bug) [1229493 ] http://bugzilla.redhat.com/1229493 (POST) Component: openstack-tuskar Last change: 2015-12-04 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates ### openstack-tuskar-ui (3 bugs) [1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails [1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py [1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path ### openstack-utils (3 bugs) [1211989 ] http://bugzilla.redhat.com/1211989 (POST) Component: openstack-utils Last change: 2016-01-05 Summary: openstack-status shows 'disabled on boot' for the mysqld service [1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2016-01-04 Summary: openstack-status as admin falsely shows zero instances [1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2016-01-04 Summary: update openstack-status for rdo-manager ### python-cinderclient (1 bug) [1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) 
Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run ### python-django-horizon (3 bugs) [1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack- dashboard/static/dashboard/ [1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA) Component: python-django-horizon Last change: 2015-06-24 Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one [1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in openstack-dashboard.conf to show CSS content ### python-glanceclient (2 bugs) [1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch [1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock ### python-heatclient (3 bugs) [1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr [1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO [1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed ### python-keystoneclient (3 bugs) [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs [1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion [971746 ] 
http://bugzilla.redhat.com/971746 (MODIFIED)
Component: python-keystoneclient
Last change: 2016-01-04
Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO]

### python-neutronclient (3 bugs)

[1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA)
Component: python-neutronclient
Last change: 2014-03-26
Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info()

[1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED)
Component: python-neutronclient
Last change: 2014-06-24
Summary: Neutronclient should not obsolete quantumclient

[1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED)
Component: python-neutronclient
Last change: 2014-02-12
Summary: [RFE] python-neutronclient new version request

### python-openstackclient (1 bug)

[1171191 ] http://bugzilla.redhat.com/1171191 (POST)
Component: python-openstackclient
Last change: 2016-01-04
Summary: Rebase python-openstackclient to version 1.0.0

### python-oslo-config (1 bug)

[1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA)
Component: python-oslo-config
Last change: 2016-01-04
Summary: oslo.config >=1.2.1 is required for trove-manage

### python-pecan (1 bug)

[1265365 ] http://bugzilla.redhat.com/1265365 (MODIFIED)
Component: python-pecan
Last change: 2016-01-04
Summary: Neutron missing pecan dependency

### python-swiftclient (1 bug)

[1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED)
Component: python-swiftclient
Last change: 2014-09-16
Summary: Swift pseudo-folder cannot be interacted with after creation

### python-tuskarclient (2 bugs)

[1209395 ] http://bugzilla.redhat.com/1209395 (POST)
Component: python-tuskarclient
Last change: 2015-06-04
Summary: `tuskar help` is missing a description next to plan-templates

[1209431 ] http://bugzilla.redhat.com/1209431 (POST)
Component: python-tuskarclient
Last change: 2015-06-18
Summary: creating a tuskar plan with the exact name gives the user a traceback

### rdo-manager (10 bugs)

[1212351 ]
http://bugzilla.redhat.com/1212351 (POST)
Component: rdo-manager
Last change: 2015-06-18
Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command

[1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED)
Component: rdo-manager
Last change: 2015-04-15
Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails

[1270033 ] http://bugzilla.redhat.com/1270033 (POST)
Component: rdo-manager
Last change: 2015-10-14
Summary: [RDO-Manager] Node inspection fails when changing the default 'inspection_iprange' value in undercloud.conf.

[1271335 ] http://bugzilla.redhat.com/1271335 (POST)
Component: rdo-manager
Last change: 2015-12-30
Summary: [RFE] Support explicit configuration of L2 population

[1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED)
Component: rdo-manager
Last change: 2015-05-25
Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable

[1271433 ] http://bugzilla.redhat.com/1271433 (MODIFIED)
Component: rdo-manager
Last change: 2015-10-20
Summary: Horizon fails to load

[1272180 ] http://bugzilla.redhat.com/1272180 (POST)
Component: rdo-manager
Last change: 2015-12-04
Summary: Horizon doesn't load when deploying without pacemaker

[1251267 ] http://bugzilla.redhat.com/1251267 (POST)
Component: rdo-manager
Last change: 2015-08-12
Summary: Overcloud deployment fails for unspecified reason

[1268990 ] http://bugzilla.redhat.com/1268990 (POST)
Component: rdo-manager
Last change: 2015-10-07
Summary: missing from docs: Build images fails without: export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"

[1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED)
Component: rdo-manager
Last change: 2015-11-04
Summary: rdo-manager: fail to discover nodes with "instack-ironic-deployment --discover-nodes": ERROR: Data pre-processing failed

### rdo-manager-cli (10 bugs)

[1273197 ] http://bugzilla.redhat.com/1273197 (POST)
Component: rdo-manager-cli
Last change:
2015-10-20
Summary: VXLAN should be default neutron network type

[1233429 ] http://bugzilla.redhat.com/1233429 (POST)
Component: rdo-manager-cli
Last change: 2015-06-20
Summary: Lack of consistency in specifying plan argument for openstack overcloud commands

[1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED)
Component: rdo-manager-cli
Last change: 2015-08-03
Summary: Node show of unified CLI has bad formatting

[1229912 ] http://bugzilla.redhat.com/1229912 (POST)
Component: rdo-manager-cli
Last change: 2015-06-10
Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over AttributeError (when glance images were uploaded more than once)

[1219053 ] http://bugzilla.redhat.com/1219053 (POST)
Component: rdo-manager-cli
Last change: 2015-06-18
Summary: "list" command doesn't display nodes in some cases

[1211190 ] http://bugzilla.redhat.com/1211190 (POST)
Component: rdo-manager-cli
Last change: 2015-06-04
Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI

[1230265 ] http://bugzilla.redhat.com/1230265 (POST)
Component: rdo-manager-cli
Last change: 2015-06-26
Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated.
[1278972 ] http://bugzilla.redhat.com/1278972 (POST)
Component: rdo-manager-cli
Last change: 2015-11-08
Summary: rdo-manager liberty delorean dib failing w/ "No module named passlib.utils"

[1232838 ] http://bugzilla.redhat.com/1232838 (POST)
Component: rdo-manager-cli
Last change: 2015-09-04
Summary: OSC plugin isn't saving plan configuration values

[1212367 ] http://bugzilla.redhat.com/1212367 (POST)
Component: rdo-manager-cli
Last change: 2015-06-16
Summary: Ensure proper nodes states after enroll and before deployment

### rdopkg (1 bug)

[1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA)
Component: rdopkg
Last change: 2015-08-06
Summary: python-manilaclient is missing from kilo RDO repository

Thanks,
Chandan Kumar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jduncan at redhat.com  Wed Jan 6 12:27:52 2016
From: jduncan at redhat.com (Jamie Duncan)
Date: Wed, 6 Jan 2016 07:27:52 -0500
Subject: [Rdo-list] RDO Liberty Multi-Node Networking Config Issue
Message-ID: 

Hi all.

I'm working to set up a multi-node packstack install similar to a blog post I put together that has worked cleanly in the past[0]. It uses the networking setup on the RDO page[1] to make the instances available on the external network. Quick and easy setup for a demo.

This worked fine in Kilo. However, in Liberty I'm hitting this error:

[Errno 2] No such file or directory:
'/etc/neutron/plugins/openvswitch/.ovs_neutron_plugin.ini.crudini.lck'

http://pastebin.com/fr7nc7WW

Something is missing, but I don't know enough about crudini to suss out where to start. Am I doing something wrong, or has the Liberty config process significantly changed?

All help is appreciated.

0 - https://lostinopensource.wordpress.com/2015/08/28/multi-node-openstack-on-your-laptop-in-about-an-hour/
1 - https://www.rdoproject.org/networking/neutron-with-existing-external-network/

--
Jamie Duncan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bderzhavets at hotmail.com  Wed Jan 6 13:54:17 2016
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Wed, 6 Jan 2016 13:54:17 +0000
Subject: [Rdo-list] RDO Liberty Multi-Node Networking Config Issue
In-Reply-To: 
References: 
Message-ID: 

Correct command on Liberty:

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini \
ovs bridge_mappings extnet:br-ex

________________________________
From: rdo-list-bounces at redhat.com on behalf of Jamie Duncan
Sent: Wednesday, January 6, 2016 7:27 AM
To: rdo-list at redhat.com
Subject: [Rdo-list] RDO Liberty Multi-Node Networking Config Issue

Hi all.

I'm working to set up a multi-node packstack install similar to a blog post I put together that has worked cleanly in the past[0]. It uses the networking setup on the RDO page[1] to make the instances available on the external network. Quick and easy setup for a demo.

This worked fine in Kilo. However, in Liberty I'm hitting this error:

[Errno 2] No such file or directory:
'/etc/neutron/plugins/openvswitch/.ovs_neutron_plugin.ini.crudini.lck'

http://pastebin.com/fr7nc7WW

Something is missing, but I don't know enough about crudini to suss out where to start. Am I doing something wrong, or has the Liberty config process significantly changed?

All help is appreciated.

0 - https://lostinopensource.wordpress.com/2015/08/28/multi-node-openstack-on-your-laptop-in-about-an-hour/
1 - https://www.rdoproject.org/networking/neutron-with-existing-external-network/

--
Jamie Duncan
-------------- next part --------------
An HTML attachment was scrubbed...
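For readers unfamiliar with what `openstack-config` (a thin wrapper around crudini) is doing in the fix above: it sets one key in one section of an ini file. A minimal Python sketch of that same edit, using the standard-library configparser on a throwaway file instead of crudini on the real /etc/neutron path (the file contents below are illustrative, not taken from a real deployment):

```python
import configparser
import tempfile

def set_ini_option(path, section, option, value):
    """Roughly what `openstack-config --set` does: read the ini file,
    set one option in one section, write the file back."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    if not cfg.has_section(section):
        cfg.add_section(section)
    cfg.set(section, option, value)
    with open(path, "w") as f:
        cfg.write(f)

# demo on a throwaway stand-in for openvswitch_agent.ini
with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write("[ovs]\nintegration_bridge = br-int\n")
    path = f.name

set_ini_option(path, "ovs", "bridge_mappings", "extnet:br-ex")

cfg = configparser.ConfigParser()
cfg.read(path)
print(cfg["ovs"]["bridge_mappings"])  # -> extnet:br-ex
```

The lock-file error in the original post simply means crudini was pointed at the pre-Liberty plugin path, which no longer exists; the section/key edit itself is this simple.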
URL: 

From chkumar246 at gmail.com  Wed Jan 6 15:57:58 2016
From: chkumar246 at gmail.com (Chandan kumar)
Date: Wed, 6 Jan 2016 21:27:58 +0530
Subject: [Rdo-list] [Meeting Minutes] RDO meeting (2016-01-06)
Message-ID: 

==============================
#rdo: RDO meeting (2016-01-06)
==============================

Meeting started by chandankumar at 15:00:41 UTC. The full logs are available at
http://meetbot.fedoraproject.org/rdo/2016-01-06/rdo_meeting_(2016-01-06).2016-01-06-15.00.log.html

Meeting summary
---------------
* LINK: RDO meeting etherpad link
  https://etherpad.openstack.org/p/RDO-Packaging (chandankumar, 15:02:23)
* updates on RDO packaging (chandankumar, 15:03:03)
* LINK: https://github.com/redhat-openstack/rdoinfo/pull/135 (chandankumar, 15:03:51)
* Python 3 porting effort (chandankumar, 15:08:51)
* ACTION: reboot py3 reviews (number80, 15:09:19)
* ACTION: number80 reboot py3 reviews (number80, 15:09:27)
* LINK: https://trello.com/c/ReowuP4z/105-python3 (chandankumar, 15:09:37)
* Oslo libraries updates (chandankumar, 15:12:35)
* LINK: https://trello.com/c/xeD4dBAs/117-update-oslo-to-match-upper-constraints (chandankumar, 15:12:53)
* Facter3 in RDO (chandankumar, 15:16:18)
* LINK: https://trello.com/c/EN1kbuox/116-add-facter3-in-rdo (chandankumar, 15:20:37)
* ACTION: hguemar import facter3 in Mitaka (number80, 15:20:41)
* Liberty current-passed-ci repo status (chandankumar, 15:21:46)
* LINK: https://review.openstack.org/264160 (chandankumar, 15:24:21)
* liberty delorean is borked and in process to be unborked (jruzicka, 15:26:59)
* RDO upcoming events (chandankumar, 15:28:05)
* Doc day - Jan 20/21 (rbowen, 15:28:09)
* LINK: https://review.openstack.org/264160 (chandankumar, 15:28:16)
* LINK: https://github.com/redhat-openstack/website/issues (rbowen, 15:28:28)
* Mitaka test day, Jan 28/29 (rbowen, 15:28:45)
* RDO Day @ FOSDEM schedule - https://www.rdoproject.org/events/rdo-day-fosdem-2016/ (rbowen, 15:29:14)
* chair for next meeting (chandankumar, 15:31:37)
* ACTION: jruzicka to chair for next meeting (chandankumar, 15:31:59)
* open discussion (chandankumar, 15:32:48)
* LINK: http://aral.github.io/fork-me-on-github-retina-ribbons/right-turquoise at 2x.png
  is kinda cheesy indeed (dmsimard, 15:49:58)

Meeting ended at 15:52:24 UTC.

Action Items
------------
* reboot py3 reviews
* number80 reboot py3 reviews
* hguemar import facter3 in Mitaka
* jruzicka to chair for next meeting

Action Items, by person
-----------------------
* jruzicka
  * jruzicka to chair for next meeting
* number80
  * number80 reboot py3 reviews
* **UNASSIGNED**
  * reboot py3 reviews
  * hguemar import facter3 in Mitaka

People Present (lines said)
---------------------------
* chandankumar (62)
* jruzicka (54)
* rbowen (47)
* number80 (39)
* dmsimard (29)
* snecklifter (19)
* trown (9)
* zodbot (9)
* imcsk8 (8)
* Humbedooh (2)
* social (1)
* elmiko (1)

Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot

Thanks,
Chandan Kumar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rbowen at redhat.com  Thu Jan 7 20:35:44 2016
From: rbowen at redhat.com (Rich Bowen)
Date: Thu, 7 Jan 2016 15:35:44 -0500
Subject: [Rdo-list] Annual "State Of The Cloud" survey
Message-ID: <568ECC20.8050904@redhat.com>

The annual State Of The Cloud survey is up at
https://www.surveymonkey.com/r/73PMT62

Last year's report may be downloaded at
http://www.rightscale.com/lp/2015-state-of-the-cloud-report

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From ibravo at ltgfederal.com  Fri Jan 8 15:27:52 2016
From: ibravo at ltgfederal.com (Ignacio Bravo)
Date: Fri, 8 Jan 2016 10:27:52 -0500
Subject: [Rdo-list] Working with RDO Manager
Message-ID: 

I have a couple of scenarios that I'm working on to configure/deploy RDO Manager, and would like to get your input as to what is the preferred way to achieve these issues:

* Issue #1: Inject baremetal driver

I need a RAID driver to be installed before the OS on certain pieces of hardware. My assumption here is that I need to build the overcloud images as described here:
https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html#get-images
and once the images are created, decompress them, include the new driver, and compress them again. I believe the idea was to use http://libguestfs.org to do the file injection into the image.

* Issue #2: Configure Ceph disks to use additional partitions

Is there an easy way to tell the image to install the OS in one partition and to install the Ceph nodes onto each additional HDD? I'm kinda lost here. Any help is welcomed.

* Issue #3: Encrypt network communications

The idea is to encrypt all traffic on the endpoints of the nodes. This is a configuration that needs to be performed on all nodes, so I understand that the best way to accomplish this is to follow the documentation here, right?
https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/extra_config.html#making-configuration-changes

Regards,
IB

__
Ignacio Bravo
LTG Federal, Inc
www.ltgfederal.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From whayutin at redhat.com  Fri Jan 8 15:31:33 2016
From: whayutin at redhat.com (Wesley Hayutin)
Date: Fri, 8 Jan 2016 10:31:33 -0500
Subject: [Rdo-list] F23: GnomeKeyring warning using python-openstackclient
Message-ID: 

FYI,
https://bugzilla.redhat.com/show_bug.cgi?id=1259747
There is a fix in the bug.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mcornea at redhat.com  Fri Jan 8 16:34:56 2016
From: mcornea at redhat.com (Marius Cornea)
Date: Fri, 8 Jan 2016 11:34:56 -0500 (EST)
Subject: [Rdo-list] Working with RDO Manager
In-Reply-To: 
References: 
Message-ID: <1347384708.9224937.1452270896820.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Ignacio Bravo" 
> To: "rdo-list" 
> Sent: Friday, January 8, 2016 4:27:52 PM
> Subject: [Rdo-list] Working with RDO Manager
>
> I have a couple of scenarios that I'm working on to configure/deploy RDO
> Manager, and would like to get your input as to what is the preferred way to
> achieve these issues:
>
> * Issue #1: Inject baremetal driver
> I need a RAID driver to be installed before the OS on certain pieces of
> hardware. My assumption here is that I need to build the overcloud images as
> described here:
> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html#get-images
> and once the images are created, decompress, include the new driver and
> compress it again.
> I believe the idea was to use http://libguestfs.org to do the file injection
> into the image.
Yes, you can use virt-customize provided by libguestfs-tools:

virt-customize -a image.qcow2 --run-command 'yum install -y something'

> * Issue #2: Configure Ceph Disks to use additional partitions
> Is there an easy way to tell the image to install the OS in one partition and
> to install the ceph nodes into each additional HDD? I'm kinda lost here. Any
> help is welcomed.

There are some instructions for this in the downstream Director documentation. See section 6.3.5 in
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/7/html/Director_Installation_and_Usage/

> * Issue #3: Encrypt Network communications
> The idea is to encrypt all traffic on the endpoints of the nodes. This is a
> configuration that needs to be performed in all nodes, so I understand that
> the best way to accomplish this is to follow the documentation here, right?
> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/extra_config.html#making-configuration-changes

It's not clear to me what kind of encryption you would like to achieve here, something like SSL connectivity for the OpenStack endpoints?

> Regards,
> IB
>
> __
> Ignacio Bravo
> LTG Federal, Inc
> www.ltgfederal.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From rbowen at redhat.com  Fri Jan 8 17:12:36 2016
From: rbowen at redhat.com (Rich Bowen)
Date: Fri, 8 Jan 2016 12:12:36 -0500
Subject: [Rdo-list] Do you blog about OpenStack?
Message-ID: <568FEE04.9030508@redhat.com>

Do you blog about RDO, or about OpenStack in general? Be sure that your blog feed is listed on Planet OpenStack:
https://wiki.openstack.org/wiki/AddingYourBlog

We've also got http://planet.rdoproject.org/ which, of course, has a lot of overlap with Planet OpenStack.
Make sure your blog is listed there by editing
https://github.com/redhat-openstack/website/edit/master/planet-rdo.ini

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From bderzhavets at hotmail.com  Fri Jan 8 20:03:51 2016
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Fri, 8 Jan 2016 20:03:51 +0000
Subject: [Rdo-list] Do you blog about OpenStack?
In-Reply-To: <568FEE04.9030508@redhat.com>
References: <568FEE04.9030508@redhat.com>
Message-ID: 

Rich,

I saw my old blog in the RSS at http://planet.rdoproject.org/. If you are willing to keep tracking my blogging, please switch to the new one, http://dbaxps.blogspot.com/ (Openstack RDO && KVM Hypervisor); all new posts appear in the new blog. I am not doing it myself, because I am not sure whether you (the RDO Community) would like it or not.

Thank you.
Boris

________________________________________
From: rdo-list-bounces at redhat.com on behalf of Rich Bowen
Sent: Friday, January 8, 2016 12:12 PM
To: rdo-list at redhat.com
Subject: [Rdo-list] Do you blog about OpenStack?

Do you blog about RDO, or about OpenStack in general? Be sure that your blog feed is listed on Planet OpenStack:
https://wiki.openstack.org/wiki/AddingYourBlog

We've also got http://planet.rdoproject.org/ which, of course, has a lot of overlap with Planet OpenStack. Make sure your blog is listed there by editing
https://github.com/redhat-openstack/website/edit/master/planet-rdo.ini

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

From jduncan at redhat.com  Sat Jan 9 02:05:48 2016
From: jduncan at redhat.com (Jamie Duncan)
Date: Fri, 8 Jan 2016 21:05:48 -0500
Subject: [Rdo-list] Confusion with Floating IPs in RDO Liberty Packstack 3-node setup
Message-ID: 

I have a 3-node setup using packstack.
It went smoothly with no issues.

I created a public subnet with the following command.

neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=10.10.179.230,end=10.10.179.250 --gateway=10.10.183.254 external_network 10.10.176.0/21

So I should have about 18 usable IP addresses. Perfect.

But I only have 2.

[root at dell-r430-13 ~(keystone_atomic)]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| 1c94a153-20d4-43cd-b8c2-4d259975e043 |                  | 10.10.179.235       |         |
| ad4e0089-2d4d-46dd-a0f2-21b6c854398d |                  | 10.10.179.232       |         |
+--------------------------------------+------------------+---------------------+---------+

I had to create the floating IPs manually with a for loop.

It wasn't hard. I'm just wondering what I did wrong, or if that's something I've just somehow missed every other time.
-------------- next part --------------
An HTML attachment was scrubbed...
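The subnet arithmetic being debated in this thread can be sanity-checked with Python's standard ipaddress module (a quick check added for illustration, not part of the original exchange; the addresses are the ones from the command above):

```python
import ipaddress

# the subnet from the neutron subnet-create command
net = ipaddress.ip_network("10.10.176.0/21")
print(net.num_addresses - 2)   # usable hosts in the /21 (minus network/broadcast) -> 2046
print(net.broadcast_address)   # -> 10.10.183.255

# the requested allocation pool
start = ipaddress.ip_address("10.10.179.230")
end = ipaddress.ip_address("10.10.179.250")
pool = int(end) - int(start) + 1
print(pool)                    # addresses in the allocation pool -> 21

# both pool boundaries must fall inside the subnet
assert start in net and end in net
```

Running this shows the pool itself holds 21 addresses, so a near-empty floating IP list points at the command syntax rather than the subnet math.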
URL: 

From bderzhavets at hotmail.com  Sat Jan 9 07:43:05 2016
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Sat, 9 Jan 2016 07:43:05 +0000
Subject: [Rdo-list] Confusion with Floating IPs in RDO Liberty Packstack 3-node setup
In-Reply-To: 
References: 
Message-ID: 

neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=10.10.179.230,end=10.10.179.250 --gateway=10.10.183.254 external_network 10.10.176.0/21

Syntax error: "--allocation-pool=start=..." should be "--allocation-pool start=..." (a space, not an equals sign, after --allocation-pool):

neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool start=10.10.179.230,end=10.10.179.250 --gateway=10.10.183.254 external_network 10.10.176.0/21

Address:            10.10.176.0     00001010.00001010.10110000.00000000
Netmask:            255.255.248.0   11111111.11111111.11111000.00000000
Wildcard:           0.0.7.255       00000000.00000000.00000111.11111111
Network Address:    10.10.176.0/21  00001010.00001010.10110000.00000000
Broadcast Address:  10.10.183.255   00001010.00001010.10110111.11111111
First host:         10.10.176.1     00001010.00001010.10110000.00000001
Last host:          10.10.183.254   00001010.00001010.10110111.11111110
Total host count:   2046

________________________________
From: rdo-list-bounces at redhat.com on behalf of Jamie Duncan
Sent: Friday, January 8, 2016 9:05 PM
To: rdo-list at redhat.com
Subject: [Rdo-list] Confusion with Floating IPs in RDO Liberty Packstack 3-node setup

I have a 3-node setup using packstack. It went smoothly with no issues.

I created a public subnet with the following command.

neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=10.10.179.230,end=10.10.179.250 --gateway=10.10.183.254 external_network 10.10.176.0/21

So I should have about 18 usable IP addresses. Perfect.

But I only have 2.
[root at dell-r430-13 ~(keystone_atomic)]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| 1c94a153-20d4-43cd-b8c2-4d259975e043 |                  | 10.10.179.235       |         |
| ad4e0089-2d4d-46dd-a0f2-21b6c854398d |                  | 10.10.179.232       |         |
+--------------------------------------+------------------+---------------------+---------+

I had to create the floating IPs manually with a for loop.

It wasn't hard. I'm just wondering what I did wrong, or if that's something I've just somehow missed every other time.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jduncan at redhat.com  Sat Jan 9 14:40:41 2016
From: jduncan at redhat.com (Jamie Duncan)
Date: Sat, 9 Jan 2016 09:40:41 -0500
Subject: [Rdo-list] Confusion with Floating IPs in RDO Liberty Packstack 3-node setup
In-Reply-To: 
References: 
Message-ID: 

AH. /me hates bonehead typos. Thanks again Boris!
-jduncan

On Sat, Jan 9, 2016 at 2:43 AM, Boris Derzhavets wrote:

> neutron subnet-create --name public_subnet --enable_dhcp=False
> --allocation-pool=start=10.10.179.230,end=10.10.179.250
> --gateway=10.10.183.254 external_network 10.10.176.0/21
>
> Syntax error highlighted red
>
> neutron subnet-create --name public_subnet --enable_dhcp=False
> --allocation-pool start=10.10.179.230,end=10.10.179.250
> --gateway=10.10.183.254 external_network 10.10.176.0/21
>
> *Address:* 10.10.176.0 00001010.00001010.10110000.00000000
> *Netmask:* 255.255.248.0 11111111.11111111.11111000.00000000
> *Wildcard:* 0.0.7.255 00000000.00000000.00000111.11111111
> *Network Address:* 10.10.176.0/21 00001010.00001010.10110000.00000000
> *Broadcast Address:* 10.10.183.255 00001010.00001010.10110111.11111111
> *First host:* 10.10.176.1 00001010.00001010.10110000.00000001
> *Last host:* 10.10.183.254 00001010.00001010.10110111.11111110
> *Total host count:* 2046
>
> ------------------------------
> *From:* rdo-list-bounces at redhat.com on behalf of Jamie Duncan
> *Sent:* Friday, January 8, 2016 9:05 PM
> *To:* rdo-list at redhat.com
> *Subject:* [Rdo-list] Confusion with Floating IPs in RDO Liberty
> Packstack 3-node setup
>
> I have a 3-node setup using packstack. It went smoothly with no issues.
>
> I created a public subnet with the following command.
>
> neutron subnet-create --name public_subnet --enable_dhcp=False
> --allocation-pool=start=10.10.179.230,end=10.10.179.250
> --gateway=10.10.183.254 external_network 10.10.176.0/21
>
> So I should have about 18 usable IP addresses. Perfect.
>
> But I only have 2.
>
> [root at dell-r430-13 ~(keystone_atomic)]# neutron floatingip-list
> +--------------------------------------+------------------+---------------------+---------+
> | id                                   | fixed_ip_address | floating_ip_address | port_id |
> +--------------------------------------+------------------+---------------------+---------+
> | 1c94a153-20d4-43cd-b8c2-4d259975e043 |                  | 10.10.179.235       |         |
> | ad4e0089-2d4d-46dd-a0f2-21b6c854398d |                  | 10.10.179.232       |         |
> +--------------------------------------+------------------+---------------------+---------+
>
> I had to create the floating IPs manually with a for loop.
>
> It wasn't hard. I'm just wondering what I did wrong, or if that's
> something I've just somehow missed every other time.

--
Jamie Duncan
Sr. Cloud Something or Other
804.343.6086 - w
804.307.7079 - c
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From abregman at redhat.com  Sat Jan 9 17:06:19 2016
From: abregman at redhat.com (Arie Bregman)
Date: Sat, 9 Jan 2016 19:06:19 +0200
Subject: [Rdo-list] Do you blog about OpenStack?
In-Reply-To: <568FEE04.9030508@redhat.com>
References: <568FEE04.9030508@redhat.com>
Message-ID: 

Hi,

Added mine: https://review.openstack.org/#/c/265572

Thanks

On Fri, Jan 8, 2016 at 7:12 PM, Rich Bowen wrote:
> Do you blog about RDO, or about OpenStack in general? Be sure that your
> blog feed is listed on Planet OpenStack:
> https://wiki.openstack.org/wiki/AddingYourBlog
>
> We've also got http://planet.rdoproject.org/ which, of course, has a lot
> of overlap with Planet OpenStack.
> Make sure your blog is listed there by editing
> https://github.com/redhat-openstack/website/edit/master/planet-rdo.ini
>
> --
> Rich Bowen - rbowen at redhat.com
> OpenStack Community Liaison
> http://rdoproject.org/
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thaleslv at yahoo.com  Sat Jan 9 18:46:18 2016
From: thaleslv at yahoo.com (Thales)
Date: Sat, 9 Jan 2016 18:46:18 +0000 (UTC)
Subject: [Rdo-list] RDO, packstack, Keypair creation is failing
References: <388608657.2077375.1452365178464.JavaMail.yahoo.ref@mail.yahoo.com>
Message-ID: <388608657.2077375.1452365178464.JavaMail.yahoo@mail.yahoo.com>

Hello,

I'm trying to learn OpenStack via RDO, so I downloaded packstack. I went to this link to start on RDO:
https://www.rdoproject.org/install/quickstart/

I set up a fixed IP in CentOS 7. CentOS 7 is a guest OS. I'm using VirtualBox as my virtual machine, and Windows 10 is my host OS. I'm using a bridged adapter for VirtualBox.

I downloaded and installed RDO via packstack.

I then went here:
https://www.rdoproject.org/install/running-an-instance/

I went there to learn how to run an instance, and on step three on that page, "Create or Import a Pair", I received an error. The error occurred when I tried to "create" the key. The error was "Keypair data is invalid: failed to generate fingerprint (HTTP 400)". I also tried to import a key, by generating one on the command line. However, in both cases it failed. I got some guidance from this video, https://www.youtube.com/watch?v=GYTctLuPbOs, but that didn't do the trick for me, either.

I have been looking around for similar problems on the web, and I found a couple, but none of the solutions worked for me.
I recently disabled SELinux with the "setenforce 0" command, and that didn't fix the problem either.

Does anyone have any idea what this could be? I'm really just learning now, so I'm sure it's got to be something rudimentary.

Thanks for any help!

...John
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marius at remote-lab.net  Sun Jan 10 16:03:05 2016
From: marius at remote-lab.net (Marius Cornea)
Date: Sun, 10 Jan 2016 17:03:05 +0100
Subject: [Rdo-list] RDO, packstack, Keypair creation is failing
In-Reply-To: <388608657.2077375.1452365178464.JavaMail.yahoo@mail.yahoo.com>
References: <388608657.2077375.1452365178464.JavaMail.yahoo.ref@mail.yahoo.com>
 <388608657.2077375.1452365178464.JavaMail.yahoo@mail.yahoo.com>
Message-ID: 

Hi,

You could try adding it via the CLI:

# generate the keypair; cloud.key is the private key; cloud.key.pub is the public key
ssh-keygen -t rsa -f cloud.key -N ''
# load the keystone credentials
source keystonerc_admin
# import the public key
nova keypair-add --pub_key cloud.key.pub cloudkey

Marius

On Sat, Jan 9, 2016 at 7:46 PM, Thales wrote:
> Hello,
>
> I'm trying to learn OpenStack via RDO, so I downloaded packstack. I went to
> this link to start on RDO
> https://www.rdoproject.org/install/quickstart/
>
> I set up a fixed IP in CentOS 7. CentOS 7 is a guest OS. I'm using
> VirtualBox as my virtual machine, and Windows 10 is my host OS. I'm using a
> bridged adapter for VirtualBox.
>
> I downloaded and installed RDO via packstack.
>
> I then went here:
> https://www.rdoproject.org/install/running-an-instance/
>
> I went there to learn how to run an instance, and on step three on that
> page, "Create or Import a Pair", I received an error. The error occurred
> when I tried to "create" the key. The error was "Keypair data is invalid:
> failed to generate fingerprint (HTTP 400)". I also tried to import a
> key, by generating one on the command line. However, in both cases it
> failed.
> I got some guidance from this video,
> https://www.youtube.com/watch?v=GYTctLuPbOs, but that didn't do the trick
> for me, either.
>
> I have been looking around for similar problems on the web, and I found a
> couple, but none of the solutions worked for me. I recently disabled
> SELinux with the "setenforce 0" command, and that didn't fix the problem
> either.
>
> Does anyone have any idea what this could be? I'm really just learning
> now, so I'm sure it's got to be something rudimentary.
>
> Thanks for any help!
>
> ...John
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From thaleslv at yahoo.com  Mon Jan 11 03:50:12 2016
From: thaleslv at yahoo.com (Thales)
Date: Mon, 11 Jan 2016 03:50:12 +0000 (UTC)
Subject: [Rdo-list] RDO, packstack, Keypair creation is failing
In-Reply-To: 
References: 
Message-ID: <1783733732.2466478.1452484212213.JavaMail.yahoo@mail.yahoo.com>

Thanks, Marius!

I get the same error:

"Error (BadRequest): Keypair data is invalid: failed to generate fingerprint (HTTP 400) (Requested ID: ...)"

I'm just starting out with OpenStack here, trying to get through my first tutorial, so I'm fumbling around.

...John

On Sunday, January 10, 2016 10:03 AM, Marius Cornea wrote:

Hi,

You could try adding it via the CLI:

# generate the keypair; cloud.key is the private key; cloud.key.pub is the public key
ssh-keygen -t rsa -f cloud.key -N ''
# load the keystone credentials
source keystonerc_admin
# import the public key
nova keypair-add --pub_key cloud.key.pub cloudkey

Marius

On Sat, Jan 9, 2016 at 7:46 PM, Thales wrote:
> Hello,
>
> I'm trying to learn OpenStack via RDO, so I downloaded packstack. I went to
> this link to start on RDO
> https://www.rdoproject.org/install/quickstart/
>
> I set up a fixed IP in CentOS 7. CentOS 7 is a guest OS.
I'm using > Virtualbox as my virtual machine, and Windows 10 is my host OS. I'm using a > Bridged adpater for Virtual Box. > > I downloaded and installed RDO via packstack. > > I then went here: > https://www.rdoproject.org/install/running-an-instance/ > > > I went there to learn how to run an instance, and on step three on that > page, "Create or Import a Pair", I received an error. The error occurred > when I tried to "create" the key. It error was "Keypair data is invalid: > failed to generate fingerprint (HTTP 400)". I also tried to import a > key, by generating one on the command line. However, in both cases it > failed. I got some guidance from this video > ,https://www.youtube.com/watch?v=GYTctLuPbOs, but that didn't do the trick > for me, either. > > I have been looking around for similar problems on the web, and I found a > couple, but none of the solutions worked for me. I recently disabled > SELinux with the "setenforce 0" command, and that didn't fix the problem > either. > > Does anyone have any idea what his could be? I'm really just learning > now, so I'm sure it's got to be something rudimentary. > > Thanks for any help! > > ...John > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnetravali at sonusnet.com Mon Jan 11 10:37:51 2016 From: gnetravali at sonusnet.com (Netravali, Ganesh) Date: Mon, 11 Jan 2016 10:37:51 +0000 Subject: [Rdo-list] HA setup in Kilo Message-ID: Hi Experts. I am trying to bring up 2 instances on Kilo with an Active-Backup configuration. I need to float the external net IP between the 2 instances; the IP should get assigned to whichever instance is active. Can you please suggest the best method? Thanks Ganesh -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pgsousa at gmail.com Mon Jan 11 10:50:07 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Mon, 11 Jan 2016 10:50:07 +0000 Subject: [Rdo-list] HA setup in Kilo In-Reply-To: References: Message-ID: Hi, I would suggest to use lbaas service: https://www.rdoproject.org/networking/lbaas/ Regards, Pedro Sousa On Mon, Jan 11, 2016 at 10:37 AM, Netravali, Ganesh wrote: > Hi Experts. > > > > I am trying bring up 2 instance on Kilo with Active-Backup configuration. > I need to float the external net IP between 2 instances. IP should get > assigned to instance which acts as active. Can you please suggest the best > method? > > > > Thanks > > Ganesh > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius at remote-lab.net Mon Jan 11 11:17:37 2016 From: marius at remote-lab.net (Marius Cornea) Date: Mon, 11 Jan 2016 12:17:37 +0100 Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <1783733732.2466478.1452484212213.JavaMail.yahoo@mail.yahoo.com> References: <1783733732.2466478.1452484212213.JavaMail.yahoo@mail.yahoo.com> Message-ID: Strange, can you run cat cloud.key.pub and ssh-keygen -l -f cloud.key.pub and provide the output on a paste service such as http://paste.openstack.org/ ? On Mon, Jan 11, 2016 at 4:50 AM, Thales wrote: > Thanks, Marius! > > I get the same error. > > "Error (BadRequest): Keypair data is invalid: failed to generate fingerprint > (HTTP p 400) (Requested ID: ...)" > > I'm just staring out with OpenStack here, trying to get through my first > tutorial, so I'm fumbling around. 
> > ...John > > > On Sunday, January 10, 2016 10:03 AM, Marius Cornea > wrote: > > > Hi, > > You could try adding it via the CLI: > > # generate the keypair; cloud.key is the private key; cloud.key.pub is > the public key > ssh-keygen -t rsa -f cloud.key -N '' > > # load the keystone credentials > source keystonerc_admin > > # import the public key > nova keypair-add --pub_key cloud.key.pub cloudkey > > Marius > > On Sat, Jan 9, 2016 at 7:46 PM, Thales wrote: >> Hello, >> >> I'm trying to learn OpenStack via RDO, so I downloaded packstack. I went >> to >> this link to start on RDO >> https://www.rdoproject.org/install/quickstart/ >> >> I set up a fixed IP in CentOS 7. CentOS 7 is a guest OS. I'm using >> Virtualbox as my virtual machine, and Windows 10 is my host OS. I'm using >> a >> Bridged adpater for Virtual Box. >> >> I downloaded and installed RDO via packstack. >> >> I then went here: >> https://www.rdoproject.org/install/running-an-instance/ >> >> >> I went there to learn how to run an instance, and on step three on that >> page, "Create or Import a Pair", I received an error. The error >> occurred >> when I tried to "create" the key. It error was "Keypair data is invalid: >> failed to generate fingerprint (HTTP 400)". I also tried to import a >> key, by generating one on the command line. However, in both cases it >> failed. I got some guidance from this video >> ,https://www.youtube.com/watch?v=GYTctLuPbOs, but that didn't do the trick >> for me, either. >> >> I have been looking around for similar problems on the web, and I found a >> couple, but none of the solutions worked for me. I recently disabled >> SELinux with the "setenforce 0" command, and that didn't fix the problem >> either. >> >> Does anyone have any idea what his could be? I'm really just learning >> now, so I'm sure it's got to be something rudimentary. >> >> Thanks for any help! 
>> >> ...John > >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > From hguemar at fedoraproject.org Mon Jan 11 15:00:03 2016 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 11 Jan 2016 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20160111150003.8506560A4009@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2016-01-13 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Packaging ](https://etherpad.openstack.org/p/RDO-Packaging) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From dms at redhat.com Mon Jan 11 15:06:22 2016 From: dms at redhat.com (David Moreau Simard) Date: Mon, 11 Jan 2016 10:06:22 -0500 Subject: [Rdo-list] openstack-puppet-modules branches In-Reply-To: References: Message-ID: On Mon, Dec 14, 2015 at 8:25 PM, Alan Pevec wrote: > AD 3. Rename branches in OPM repo like this: > current master -> upstream-master (verbatim copies of upstream > modules' master branches) > master-patches -> master (non-upstream patches rebased on top of > upstream-master) > current stable/liberty -> upstream-liberty (verbatim copies of > upstream modules' stable/liberty branches) > liberty-patches -> stable/liberty (non-upstream patches rebased on > top of upstream-liberty) So it looks like we agreed on doing this. Is someone taking care of doing the renames ? Packstack and OPM are broken in Mitaka until then. 
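The four renames listed above can be replayed on a throwaway local repository to check that the scheme is self-consistent. This is a sketch only: the branch names come from the thread, but the repository contents are invented, and the real server-side rename of the OPM repo is done by its admins, not by this script.

```shell
# Sketch: replay the proposed OPM branch renames in a scratch repository.
# Branch names are from the thread; everything else is illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=rdo@example.com -c user.name=rdo \
    commit -q --allow-empty -m 'placeholder history'
git checkout -q -B master                  # make sure we start from "master"
git branch master-patches
git branch stable/liberty
git branch liberty-patches
git branch -m master upstream-master       # verbatim upstream copies
git branch -m master-patches master        # downstream patches on top
git branch -m stable/liberty upstream-liberty
git branch -m liberty-patches stable/liberty
git branch --list                          # shows the post-rename layout
```

Running `git branch --list` at the end should show upstream-master, master, upstream-liberty, and stable/liberty, matching the layout described in the proposal.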
David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] From bderzhavets at hotmail.com Mon Jan 11 15:11:15 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 11 Jan 2016 15:11:15 +0000 Subject: [Rdo-list] HA setup in Kilo In-Reply-To: References: Message-ID: See http://blog.aaronorosen.com/implementing-high-availability-instances-with-neutron-using-vrrp/ ------------------------------------------------------------------------------------------------------------------------------------------- From: rdo-list-bounces at redhat.com on behalf of Netravali, Ganesh Sent: Monday, January 11, 2016 5:37 AM To: rdo-list at redhat.com Subject: [Rdo-list] HA setup in Kilo Hi Experts. I am trying bring up 2 instance on Kilo with Active-Backup configuration. I need to float the external net IP between 2 instances. IP should get assigned to instance which acts as active. Can you please suggest the best method? Thanks Ganesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Mon Jan 11 16:52:35 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 11 Jan 2016 11:52:35 -0500 Subject: [Rdo-list] RDO/OpenStack meetups this week. Message-ID: <5693DDD3.8090504@redhat.com> The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/events, which also lists events further in the future than just this week. If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered.
--Rich * Tuesday January 12 in Los Angeles, CA, US: Webinar - Red Hat and Cisco: Making OpenStack work for the Enterprise - http://www.meetup.com/Southern-California-Red-Hat-User-Group-RHUG/events/227418207/ * Tuesday January 12 in Boston, MA, US: OpenStack and data bases! - http://www.meetup.com/Openstack-Boston/events/227826780/ * Wednesday January 13 in Amersfoort, NL: OpenStack Jobs - http://www.meetup.com/Openstack-Netherlands/events/227409852/ * Thursday January 14 in Fort Lauderdale, FL, US: Monthly SFOUG Meeting - http://www.meetup.com/South-Florida-OpenStack-Users-Group/events/227415610/ * Saturday January 16 in Shenzhen, CN: 2016??????????The Dragons of OpenStack - http://www.meetup.com/Shenzhen-openstack-usergroup/events/228001930/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From javier.pena at redhat.com Mon Jan 11 17:36:35 2016 From: javier.pena at redhat.com (Javier Pena) Date: Mon, 11 Jan 2016 12:36:35 -0500 (EST) Subject: [Rdo-list] openstack-puppet-modules branches In-Reply-To: References: Message-ID: <1976166437.10861314.1452533795267.JavaMail.zimbra@redhat.com> ----- Original Message ----- > On Mon, Dec 14, 2015 at 8:25 PM, Alan Pevec wrote: > > AD 3. Rename branches in OPM repo like this: > > current master -> upstream-master (verbatim copies of upstream > > modules' master branches) > > master-patches -> master (non-upstream patches rebased on top of > > upstream-master) > > current stable/liberty -> upstream-liberty (verbatim copies of > > upstream modules' stable/liberty branches) > > liberty-patches -> stable/liberty (non-upstream patches rebased on > > top of upstream-liberty) > > So it looks like we agreed on doing this. > > Is someone taking care of doing the renames ? Packstack and OPM are > broken in Mitaka until then. > Hi David, I see the change has already been implemented for both OPM and Packstack. Is there anything missing in the repos? 
Regards, Javier > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From thaleslv at yahoo.com Mon Jan 11 18:21:34 2016 From: thaleslv at yahoo.com (Thales) Date: Mon, 11 Jan 2016 18:21:34 +0000 (UTC) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: References: Message-ID: <2135117753.2707377.1452536494861.JavaMail.yahoo@mail.yahoo.com> Marius, "Strange, can you run cat cloud.key.pub and ssh-keygen -l -f cloud.key.pub and provide the output on a paste service such as http://paste.openstack.org/ ?" Okay! Here's the output. http://paste.openstack.org/show/483458/ ...John On Monday, January 11, 2016 5:17 AM, Marius Cornea wrote: Strange, can you run cat cloud.key.pub and ssh-keygen -l -f cloud.key.pub and provide the output on a paste service such as http://paste.openstack.org/ ? On Mon, Jan 11, 2016 at 4:50 AM, Thales wrote: > Thanks, Marius! > > I get the same error. > > "Error (BadRequest): Keypair data is invalid: failed to generate fingerprint > (HTTP p 400) (Requested ID: ...)" > > I'm just staring out with OpenStack here, trying to get through my first > tutorial, so I'm fumbling around. > > ...John > > > On Sunday, January 10, 2016 10:03 AM, Marius Cornea > wrote: > > > Hi, > > You could try adding it via the CLI: > > # generate the keypair; cloud.key is the private key; cloud.key.pub is > the public key > ssh-keygen -t rsa -f cloud.key -N '' > > # load the keystone credentials > source keystonerc_admin > > # import the public key > nova keypair-add --pub_key cloud.key.pub cloudkey -------------- next part -------------- An HTML attachment was scrubbed...
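Since the import keeps failing on the fingerprint, one thing worth ruling out locally is the key file itself. The public key can be validated with nothing but ssh-keygen before handing it to Nova. A sketch, with file names following Marius's commands earlier in the thread; the commented-out import step assumes the packstack keystonerc_admin credentials and a working cloud:

```shell
# Sketch: generate a keypair and sanity-check the public half locally
# before importing it into Nova. The import itself (commented out) is
# not run here.
set -e
workdir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -f "$workdir/cloud.key" -N ''
ssh-keygen -l -f "$workdir/cloud.key.pub"   # prints the fingerprint; fails on a corrupt key
# a well-formed RSA public key is a single line starting with its key type
head -n 1 "$workdir/cloud.key.pub" | grep -q '^ssh-rsa '
echo "cloud.key.pub looks importable"
# source keystonerc_admin
# nova keypair-add --pub_key "$workdir/cloud.key.pub" cloudkey
rm -rf "$workdir"
```

If `ssh-keygen -l` can compute a fingerprint but Nova still cannot, that points at the service side rather than the key material.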
URL: From rbowen at redhat.com Mon Jan 11 20:12:16 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 11 Jan 2016 15:12:16 -0500 Subject: [Rdo-list] Unanswered ask.openstack.org "RDO" questions (Jan 11, 2016) Message-ID: <56940CA0.7030209@redhat.com> In an attempt to raise awareness of the questions that folks are asking about RDO on ask.openstack.org, as well as to get those questions answered, I'm thinking of doing a weekly automated mailing of the unanswered questions containing (or tagged with) "RDO". These are sorted by age, so some of the ones at the end are very old and probably ready to be closed. Thanks for any time you can devote to answering one or two of these. OpenStack-Docker driver failed https://ask.openstack.org/en/question/87243/openstack-docker-driver-failed/ Tags: docker, openstack, liberty Upgrading from RDO commercial RHEL OSP https://ask.openstack.org/en/question/87200/upgrading-from-rdo-commercial-rhel-osp/ Tags: upgrading, rdo, rhel, osp Abnormal resource consumption in RDO environment. 
https://ask.openstack.org/en/question/87189/abnormal-resource-consumption-in-rdo-environment/ Tags: rdo, liberty, kilo, fedora, centos Clarification on docs for self service connectivity https://ask.openstack.org/en/question/87183/clarification-on-docs-for-self-service-connectivity/ Tags: liberty, neutron, connectivity, router RDO installation : Huge resources consumption (RAM) https://ask.openstack.org/en/question/87158/rdo-installation-huge-resources-consumption-ram/ Tags: ram, rdo, liberty, juno, kilo Can't create volume with cinder https://ask.openstack.org/en/question/86670/cant-create-volume-with-cinder/ Tags: cinder, glusterfs, nfs Packstack install MongoDB Error https://ask.openstack.org/en/question/86293/packstack-install-mongodb-error/ Tags: mongodb, mongodb.pp, packstack, rdo, error Sahara HDP Cluster: error_message=Cluster is missing a service: YARN https://ask.openstack.org/en/question/85801/sahara-hdp-cluster-error_messagecluster-is-missing-a-service-yarn/ Tags: sahara, hdp, hadoop, error, yarn error installing rdo kilo with proxy https://ask.openstack.org/en/question/85703/error-installing-rdo-kilo-with-proxy/ Tags: rdo, packstack, centos, proxy Why is /usr/bin/openstack domain list ... hanging? 
https://ask.openstack.org/en/question/85593/why-is-usrbinopenstack-domain-list-hanging/ Tags: puppet, keystone, kilo Internal Server Error when access horizon kilo https://ask.openstack.org/en/question/85331/internal-server-error-when-access-horizon-kilo/ Tags: rdo, horizon How to configure the RDO Dashboard for SSL https://ask.openstack.org/en/question/85284/how-to-configure-the-rdo-dashboard-for-ssl/ Tags: mod_ssl, rdo [ RDO ] Could not find declared class ::remote::db https://ask.openstack.org/en/question/84820/rdo-could-not-find-declared-class-remotedb/ Tags: rdo Sahara SSHException: Error reading SSH protocol banner https://ask.openstack.org/en/question/84710/sahara-sshexception-error-reading-ssh-protocol-banner/ Tags: sahara, icehouse, ssh, vanila Error Sahara create cluster: 'Error attach volume to instance https://ask.openstack.org/en/question/84651/error-sahara-create-cluster-error-attach-volume-to-instance/ Tags: sahara, attach-volume, vanila, icehouse Creating Sahara cluster: Error attach volume to instance https://ask.openstack.org/en/question/84650/creating-sahara-cluster-error-attach-volume-to-instance/ Tags: sahara, attach-volume, hadoop, icehouse, vanilla Routing between two tenants https://ask.openstack.org/en/question/84645/routing-between-two-tenants/ Tags: kilo, fuel, rdo, routing Freeing IP from FLAT network setup https://ask.openstack.org/en/question/84063/freeing-ip-from-flat-network-setup/ Tags: juno, existing-network, rdo, neutron, flat How to deploy Virtual network function (VNF) in Opnstack integrated Opendaylight https://ask.openstack.org/en/question/84061/how-to-deploy-virtual-network-function-vnf-in-opnstack-integrated-opendaylight/ Tags: vnf, kilo, opendaylight, nfv cann't install python-keystone-auth-token [Close Duplicate] https://ask.openstack.org/en/question/83942/cannt-install-python-keystone-auth-token-close-duplicate/ Tags: python-keystone, openstack-swift RDO kilo installation metadata widget doesn't work 
https://ask.openstack.org/en/question/83870/rdo-kilo-installation-metadata-widget-doesnt-work/ Tags: kilo, flavor, metadata Not able to ssh into RDO Kilo instance https://ask.openstack.org/en/question/83707/not-able-to-ssh-into-rdo-kilo-instance/ Tags: rdo, instance-ssh No able to create an instance in odl integrated RDO Kilo openstack https://ask.openstack.org/en/question/83700/no-able-to-create-an-instance-in-odl-integrated-rdo-kilo-openstack/ Tags: kilo, rdo, opendaylight, kilo-neutron, integration redhat RDO enable access to swift via S3 https://ask.openstack.org/en/question/83607/redhat-rdo-enable-access-to-swift-via-s3/ Tags: swift, s3 Doesn't Swift use storage the same way Cinder does? https://ask.openstack.org/en/question/83532/doesnt-swift-use-storage-the-same-way-cinder-does/ Tags: swift, cinder, storage, network, topology Heat stack create failed https://ask.openstack.org/en/question/82846/heat-stack-create-failed/ Tags: rdo, tripleo, heat, overcloud openstack baremetal introspection internal server error https://ask.openstack.org/en/question/82790/openstack-baremetal-introspection-internal-server-error/ Tags: rdo, ironic-inspector, tripleo glance\nova command line SSL failure https://ask.openstack.org/en/question/82692/glancenova-command-line-ssl-failure/ Tags: glance, kilo-openstack, ssl Cannot create/update flavor metadata from horizon https://ask.openstack.org/en/question/82477/cannot-createupdate-flavor-metadata-from-horizon/ Tags: rdo, kilo, flavor, metadata Installing openstack using packstack (rdo) failed https://ask.openstack.org/en/question/82473/installing-openstack-using-packstack-rdo-failed/ Tags: rdo, packstack, installation-error, keystone -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From sasha at redhat.com Tue Jan 12 00:28:50 2016 From: sasha at redhat.com (Sasha Chuzhoy) Date: Mon, 11 Jan 2016 19:28:50 -0500 (EST) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: 
<1783733732.2466478.1452484212213.JavaMail.yahoo@mail.yahoo.com> References: <1783733732.2466478.1452484212213.JavaMail.yahoo@mail.yahoo.com> Message-ID: <238262411.9673239.1452558530542.JavaMail.zimbra@redhat.com> Hi John, Does this command work for you: nova keypair-add cloudkey >cloudkey.priv; chmod 600 cloudkey.priv If it works (nova keypair-list), you could ssh to the instance launched with the key using: ssh -i cloudkey.priv @ Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Thales" > To: "Marius Cornea" > Cc: rdo-list at redhat.com > Sent: Sunday, January 10, 2016 10:50:12 PM > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > Thanks, Marius! > > I get the same error. > > "Error (BadRequest): Keypair data is invalid: failed to generate fingerprint > (HTTP p 400) (Requested ID: ...)" > > I'm just staring out with OpenStack here, trying to get through my first > tutorial, so I'm fumbling around. > > ...John > > > On Sunday, January 10, 2016 10:03 AM, Marius Cornea > wrote: > > > Hi, > > You could try adding it via the CLI: > > # generate the keypair; cloud.key is the private key; cloud.key.pub is > the public key > ssh-keygen -t rsa -f cloud.key -N '' > > # load the keystone credentials > source keystonerc_admin > > # import the public key > nova keypair-add --pub_key cloud.key.pub cloudkey > > Marius > > On Sat, Jan 9, 2016 at 7:46 PM, Thales < thaleslv at yahoo.com > wrote: > > Hello, > > > > I'm trying to learn OpenStack via RDO, so I downloaded packstack. I went to > > this link to start on RDO > > https://www.rdoproject.org/install/quickstart/ > > > > I set up a fixed IP in CentOS 7. CentOS 7 is a guest OS. I'm using > > Virtualbox as my virtual machine, and Windows 10 is my host OS. I'm using a > > Bridged adpater for Virtual Box. > > > > I downloaded and installed RDO via packstack. 
> > > > I then went here: > > https://www.rdoproject.org/install/running-an-instance/ > > > > > > I went there to learn how to run an instance, and on step three on that > > page, "Create or Import a Pair", I received an error. The error occurred > > when I tried to "create" the key. It error was "Keypair data is invalid: > > failed to generate fingerprint (HTTP 400)". I also tried to import a > > key, by generating one on the command line. However, in both cases it > > failed. I got some guidance from this video > > , https://www.youtube.com/watch?v=GYTctLuPbOs, but that didn't do the trick > > for me, either. > > > > I have been looking around for similar problems on the web, and I found a > > couple, but none of the solutions worked for me. I recently disabled > > SELinux with the "setenforce 0" command, and that didn't fix the problem > > either. > > > > Does anyone have any idea what his could be? I'm really just learning > > now, so I'm sure it's got to be something rudimentary. > > > > Thanks for any help! 
> > > > ...John > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From emilien at redhat.com Tue Jan 12 02:43:36 2016 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 11 Jan 2016 21:43:36 -0500 Subject: [Rdo-list] [puppet] list of blockers to deploy mitaka Message-ID: <56946858.1060706@redhat.com> Hi, I've listed the blockers I've found when trying to deploy our Puppet modules with Mitaka: https://etherpad.openstack.org/p/puppet-openstack-ci-mitaka Until all of those issues are fixed, we can't bump our CI to Mitaka, so we really need to make progress if we want to synchronize our release with other OpenStack projects. Any contribution is welcome, -- Emilien Macchi -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From amuller at redhat.com Tue Jan 12 02:56:44 2016 From: amuller at redhat.com (Assaf Muller) Date: Mon, 11 Jan 2016 21:56:44 -0500 Subject: [Rdo-list] Unanswered ask.openstack.org "RDO" questions (Jan 11, 2016) In-Reply-To: <56940CA0.7030209@redhat.com> References: <56940CA0.7030209@redhat.com> Message-ID: On Mon, Jan 11, 2016 at 3:12 PM, Rich Bowen wrote: > In an attempt to raise awareness of the questions that folks are asking > about RDO on ask.openstack.org, as well as to get those questions > answered, I'm thinking of doing a weekly automated mailing of the > unanswered questions containing (or tagged with) "RDO". > > These are sorted by age, so some of the ones at the end are very old and > probably ready to be closed.
> > Thanks for any time you can devote to answering one or two of these. I answered the Neutron questions, apart from the ODL ones. CC'd some ODL folks. > > > OpenStack-Docker driver failed > https://ask.openstack.org/en/question/87243/openstack-docker-driver-failed/ > Tags: docker, openstack, liberty > > > Upgrading from RDO commercial RHEL OSP > https://ask.openstack.org/en/question/87200/upgrading-from-rdo-commercial-rhel-osp/ > Tags: upgrading, rdo, rhel, osp > > > Abnormal resource consumption in RDO environment. > https://ask.openstack.org/en/question/87189/abnormal-resource-consumption-in-rdo-environment/ > Tags: rdo, liberty, kilo, fedora, centos > > > Clarification on docs for self service connectivity > https://ask.openstack.org/en/question/87183/clarification-on-docs-for-self-service-connectivity/ > Tags: liberty, neutron, connectivity, router > > > RDO installation : Huge resources consumption (RAM) > https://ask.openstack.org/en/question/87158/rdo-installation-huge-resources-consumption-ram/ > Tags: ram, rdo, liberty, juno, kilo > > > Can't create volume with cinder > https://ask.openstack.org/en/question/86670/cant-create-volume-with-cinder/ > Tags: cinder, glusterfs, nfs > > > Packstack install MongoDB Error > https://ask.openstack.org/en/question/86293/packstack-install-mongodb-error/ > Tags: mongodb, mongodb.pp, packstack, rdo, error > > > Sahara HDP Cluster: error_message=Cluster is missing a service: YARN > https://ask.openstack.org/en/question/85801/sahara-hdp-cluster-error_messagecluster-is-missing-a-service-yarn/ > Tags: sahara, hdp, hadoop, error, yarn > > > error installing rdo kilo with proxy > https://ask.openstack.org/en/question/85703/error-installing-rdo-kilo-with-proxy/ > Tags: rdo, packstack, centos, proxy > > > Why is /usr/bin/openstack domain list ... hanging? 
> https://ask.openstack.org/en/question/85593/why-is-usrbinopenstack-domain-list-hanging/ > Tags: puppet, keystone, kilo > > > Internal Server Error when access horizon kilo > https://ask.openstack.org/en/question/85331/internal-server-error-when-access-horizon-kilo/ > Tags: rdo, horizon > > > How to configure the RDO Dashboard for SSL > https://ask.openstack.org/en/question/85284/how-to-configure-the-rdo-dashboard-for-ssl/ > Tags: mod_ssl, rdo > > > [ RDO ] Could not find declared class ::remote::db > https://ask.openstack.org/en/question/84820/rdo-could-not-find-declared-class-remotedb/ > Tags: rdo > > > Sahara SSHException: Error reading SSH protocol banner > https://ask.openstack.org/en/question/84710/sahara-sshexception-error-reading-ssh-protocol-banner/ > Tags: sahara, icehouse, ssh, vanila > > > Error Sahara create cluster: 'Error attach volume to instance > https://ask.openstack.org/en/question/84651/error-sahara-create-cluster-error-attach-volume-to-instance/ > Tags: sahara, attach-volume, vanila, icehouse > > > Creating Sahara cluster: Error attach volume to instance > https://ask.openstack.org/en/question/84650/creating-sahara-cluster-error-attach-volume-to-instance/ > Tags: sahara, attach-volume, hadoop, icehouse, vanilla > > > Routing between two tenants > https://ask.openstack.org/en/question/84645/routing-between-two-tenants/ > Tags: kilo, fuel, rdo, routing > > > Freeing IP from FLAT network setup > https://ask.openstack.org/en/question/84063/freeing-ip-from-flat-network-setup/ > Tags: juno, existing-network, rdo, neutron, flat > > > How to deploy Virtual network function (VNF) in Opnstack integrated > Opendaylight > https://ask.openstack.org/en/question/84061/how-to-deploy-virtual-network-function-vnf-in-opnstack-integrated-opendaylight/ > Tags: vnf, kilo, opendaylight, nfv > > > cann't install python-keystone-auth-token [Close Duplicate] > https://ask.openstack.org/en/question/83942/cannt-install-python-keystone-auth-token-close-duplicate/ > Tags: 
python-keystone, openstack-swift > > > RDO kilo installation metadata widget doesn't work > https://ask.openstack.org/en/question/83870/rdo-kilo-installation-metadata-widget-doesnt-work/ > Tags: kilo, flavor, metadata > > > Not able to ssh into RDO Kilo instance > https://ask.openstack.org/en/question/83707/not-able-to-ssh-into-rdo-kilo-instance/ > Tags: rdo, instance-ssh > > > No able to create an instance in odl integrated RDO Kilo openstack > https://ask.openstack.org/en/question/83700/no-able-to-create-an-instance-in-odl-integrated-rdo-kilo-openstack/ > Tags: kilo, rdo, opendaylight, kilo-neutron, integration > > > redhat RDO enable access to swift via S3 > https://ask.openstack.org/en/question/83607/redhat-rdo-enable-access-to-swift-via-s3/ > Tags: swift, s3 > > > Doesn't Swift use storage the same way Cinder does? > https://ask.openstack.org/en/question/83532/doesnt-swift-use-storage-the-same-way-cinder-does/ > Tags: swift, cinder, storage, network, topology > > > Heat stack create failed > https://ask.openstack.org/en/question/82846/heat-stack-create-failed/ > Tags: rdo, tripleo, heat, overcloud > > > openstack baremetal introspection internal server error > https://ask.openstack.org/en/question/82790/openstack-baremetal-introspection-internal-server-error/ > Tags: rdo, ironic-inspector, tripleo > > > glance\nova command line SSL failure > https://ask.openstack.org/en/question/82692/glancenova-command-line-ssl-failure/ > Tags: glance, kilo-openstack, ssl > > > Cannot create/update flavor metadata from horizon > https://ask.openstack.org/en/question/82477/cannot-createupdate-flavor-metadata-from-horizon/ > Tags: rdo, kilo, flavor, metadata > > > Installing openstack using packstack (rdo) failed > https://ask.openstack.org/en/question/82473/installing-openstack-using-packstack-rdo-failed/ > Tags: rdo, packstack, installation-error, keystone > > > -- > Rich Bowen - rbowen at redhat.com > OpenStack Community Liaison > http://rdoproject.org/ > > 
_______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From hguemar at fedoraproject.org Tue Jan 12 10:09:15 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Tue, 12 Jan 2016 11:09:15 +0100 Subject: [Rdo-list] What should be RDO Definition of Done? Message-ID: Hello, In an effort to improve the RDO release process, we came across the idea of having a defined definition of done. What are the criteria to decide if a release of RDO is DONE? * RDO installs w/ packstack * RDO installs w/ RDO Manager * Documentation is up to date etc .... I added the topic to the RDO meeting agenda, but I'd like to enlarge the discussion outside the pool of people coming to the meetings and even technical contributors. Regards, H. From me at coolsvap.net Tue Jan 12 10:15:30 2016 From: me at coolsvap.net (Swapnil Kulkarni) Date: Tue, 12 Jan 2016 15:45:30 +0530 Subject: [Rdo-list] What should be RDO Definition of Done? In-Reply-To: References: Message-ID: On Tue, Jan 12, 2016 at 3:39 PM, Haïkel wrote: > Hello, > > In an effort to improve RDO release process, we came accross the idea > of having a defined definition of done. > What are the criteria to decide if a release of RDO is DONE? > > * RDO installs w/ packstack > An install alone will not suffice, I think; it should also be operational. * RDO installs w/ RDO Manager > * Documentation is up to date > etc .... > > Also, all the components it supports should have a basic driver/plugin that is operational and documented, e.g. LVM for Cinder should be operational I added the topic to the RDO meeting agenda, but I'd like to enlarge > the discussion outside the pool of people coming > to the meetings and even technical contributors. > > Regards, > H.
> > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Tue Jan 12 13:01:44 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Tue, 12 Jan 2016 14:01:44 +0100 Subject: [Rdo-list] What should be RDO Definition of Done? In-Reply-To: References: Message-ID: These are (obvious) examples; the point is to come up with a detailed list of release criteria. From mohammed.arafa at gmail.com Tue Jan 12 14:34:07 2016 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Tue, 12 Jan 2016 09:34:07 -0500 Subject: [Rdo-list] What should be RDO Definition of Done? In-Reply-To: References: Message-ID: You mean like having an instance pingable from the controller node via its floating IP? On Jan 12, 2016 8:04 AM, "Haïkel" wrote: > These are (obvious) examples; the point is to come up with a detailed > list of release criteria. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Tue Jan 12 14:52:59 2016 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 12 Jan 2016 09:52:59 -0500 Subject: [Rdo-list] What should be RDO Definition of Done? In-Reply-To: References: Message-ID: <5695134B.6080309@redhat.com> We also have this checklist: https://www.rdoproject.org/rdo/release-checklist/ This is mostly marketing-type stuff. Since what we produce is software, we need to be sure that when we produce it, we do a great job of telling the world about it.
These are things that we can do to amplify what we already naturally do, and make sure that we don't drop things that are boring or routine. --Rich On 01/12/2016 08:01 AM, Haïkel wrote: > These are (obvious) examples; the point is to come up with a detailed > list of release criteria. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From hguemar at fedoraproject.org Tue Jan 12 15:15:47 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Tue, 12 Jan 2016 16:15:47 +0100 Subject: [Rdo-list] What should be RDO Definition of Done? In-Reply-To: <5695134B.6080309@redhat.com> References: <5695134B.6080309@redhat.com> Message-ID: 2016-01-12 15:52 GMT+01:00 Rich Bowen : > We also have this checklist: > https://www.rdoproject.org/rdo/release-checklist/ > > This is mostly marketing type stuff. Since what we produce is software, > we need to be sure that when we produce it, we do a great job of telling > the world about it. These are things that we can do to amplify what we > already naturally do, and make sure that we don't drop things that are > boring or routine. > > --Rich > Yes, this also *must* be part of our Definition of Done (DoD). Originally, this was discussed with John in order to have better coordination w/ RDO Manager releases, but this should not be limited to technical artefacts. Marketing and documentation (including upstream's) should be part of it. > On 01/12/2016 08:01 AM, Haïkel wrote: >> These are (obvious) examples; the point is to come up with a detailed >> list of release criteria.
>> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > -- > Rich Bowen - rbowen at redhat.com > OpenStack Community Liaison > http://rdoproject.org/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From hguemar at fedoraproject.org Tue Jan 12 15:17:23 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Tue, 12 Jan 2016 16:17:23 +0100 Subject: [Rdo-list] What should be RDO Definition of Done? In-Reply-To: References: Message-ID: 2016-01-12 15:34 GMT+01:00 Mohammed Arafa : > You mean like having an instance pingable from the controller node via its > floating IP? > Yes, through improving our test day scenarios. > On Jan 12, 2016 8:04 AM, "Haïkel" wrote: >> >> These are (obvious) examples; the point is to come up with a detailed >> list of release criteria. >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com From emilien at redhat.com Tue Jan 12 17:37:14 2016 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 12 Jan 2016 12:37:14 -0500 Subject: [Rdo-list] OPM downstream patches Message-ID: <569539CA.9060806@redhat.com> So I started an etherpad to discuss why we have so many downstream patches in Puppet modules. https://etherpad.openstack.org/p/opm-patches In my opinion, we should follow some best practices: * upstream first. If you find a bug, submit the patch upstream, wait for at least a positive review from a core and also successful CI jobs. Then you can backport it downstream if urgent. * backport it to stable branches when needed.
The patch we want is in master and not stable? It's too easy to backport it in OPM. Do the backport in upstream/stable first; it will help us stay up to date with upstream. * don't change default parameters, and don't override them. Our installers are able to override any parameter, so do not hardcode this kind of change. * keep up with upstream: if you have an upstream patch under review that is already in OPM, keep it alive and make sure it lands as soon as possible. UPSTREAM FIRST please please please (I'll send you cookies if you want). If you have any questions about an upstream patch, please join #puppet-openstack (freenode) and talk to the group. We're doing reviews every day and it's not difficult to land a patch. In the meantime, I would like to justify each of our backports in the etherpad and clean up as many of them as possible. Thank you for reading so far, -- Emilien Macchi -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From dms at redhat.com Tue Jan 12 17:43:56 2016 From: dms at redhat.com (David Moreau Simard) Date: Tue, 12 Jan 2016 12:43:56 -0500 Subject: [Rdo-list] OPM downstream patches In-Reply-To: <569539CA.9060806@redhat.com> References: <569539CA.9060806@redhat.com> Message-ID: +1 to upstream first. Also, if any downstream modifications are deemed necessary, I'm convinced we should be maintaining actual patches, not entire forked repositories, but I think that's another topic. RPM packaging spec files have built-in mechanisms to apply patches, and we should leverage them. David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Tue, Jan 12, 2016 at 12:37 PM, Emilien Macchi wrote: > So I started an etherpad to discuss why we have so many downstream > patches in Puppet modules.
> > https://etherpad.openstack.org/p/opm-patches > > In my opinion, we should follow some best practices: > > * upstream first. If you find a bug, submit the patch upstream, wait for > at least a positive review from a core and also successful CI jobs. Then > you can backport it downstream if urgent. > * backport it to stable branches when needed. The patch we want is in > master and not stable? It's too easy to backport it in OPM. Do the > backport in upstream/stable first, it will help to stay updated with > upstream. > * don't change default parameters, don't override them. Our installers > are able to override any parameter so do not hardcode this kind of change. > * keep up with upstream: if you have an upstream patch under review that > is already in OPM: keep it alive and make sure it lands as soon as possible. > > UPSTREAM FIRST please please please (I'll send you cookies if you want). > > If you have any question about an upstream patch, please join > #puppet-openstack (freenode) and talk to the group. We're doing reviews > every day and it's not difficult to land a patch. > > In the meantime, I would like to justify each of our backports in the > etherpad and clean-up a maximum of them. > > Thank you for reading so far, > -- > Emilien Macchi > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From emilien at redhat.com Tue Jan 12 18:16:51 2016 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 12 Jan 2016 13:16:51 -0500 Subject: [Rdo-list] OPM downstream patches In-Reply-To: <569539CA.9060806@redhat.com> References: <569539CA.9060806@redhat.com> Message-ID: <56954313.1010001@redhat.com> Also, the way we're packaging OPM is really bad. * we have no SHA1 for each module we have in OPM * we are not able to validate each module * package tarball is not pure. 
All other OpenStack RPMS take upstream tarball so we can easily compare but in OPM... no way to do it. Those issues are really critical, I would like to hear from OPM folks, and find solutions that we will work on during the following weeks. Thanks On 01/12/2016 12:37 PM, Emilien Macchi wrote: > So I started an etherpad to discuss why we have so much downstream > patches in Puppet modules. > > https://etherpad.openstack.org/p/opm-patches > > In my opinion, we should follow some best practices: > > * upstream first. If you find a bug, submit the patch upstream, wait for > at least a positive review from a core and also successful CI jobs. Then > you can backport it downstream if urgent. > * backport it to stable branches when needed. The patch we want is in > master and not stable? It's too easy to backport it in OPM. Do the > backport in upstream/stable first, it will help to stay updated with > upstream. > * don't change default parameters, don't override them. Our installers > are able to override any parameter so do not hardcode this kind of change. > * keep up with upstream: if you have an upstream patch under review that > is already in OPM: keep it alive and make sure it lands as soon as possible. > > UPSTREAM FIRST please please please (I'll send you cookies if you want). > > If you have any question about an upstream patch, please join > #puppet-openstack (freenode) and talk to the group. We're doing reviews > every day and it's not difficult to land a patch. > > In the meantime, I would like to justify each of our backports in the > etherpad and clean-up a maximum of them. > > Thank you for reading so far, > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Emilien Macchi -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From rbowen at redhat.com Tue Jan 12 18:53:35 2016 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 12 Jan 2016 13:53:35 -0500 Subject: [Rdo-list] RDO meetup at DevConf.cz? Message-ID: <56954BAF.70805@redhat.com> We have the opportunity to have a RDO meetup at DevConf.cz for those of us who will be there. The room is available 9H00 through 18H00 on Saturday, and we can pick any time that we like. (Although it'll fill up quickly, I presume.) I see OpenStack content at: 14:00 16:30 17:20 https://devconfcz2016.sched.org/?s=openstack We could either do this in the morning, say, 11:00, or we could do it at 15:00 right after Jakub's rdopkg talk. Thoughts? -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From thaleslv at yahoo.com Tue Jan 12 19:49:48 2016 From: thaleslv at yahoo.com (Thales) Date: Tue, 12 Jan 2016 19:49:48 +0000 (UTC) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <238262411.9673239.1452558530542.JavaMail.zimbra@redhat.com> References: <238262411.9673239.1452558530542.JavaMail.zimbra@redhat.com> Message-ID: <1872312913.3066988.1452628188775.JavaMail.yahoo@mail.yahoo.com> Hello Sasha, I tried your command, and got the following error: [john at localhost ~]$ nova keypair-add cloudkey > cloudkey.priv; chmod 600 cloudkey.priv ERROR (CommandError): You must provide a username or user id via --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID] ...John On Monday, January 11, 2016 6:28 PM, Sasha Chuzhoy wrote: Hi John, Does this command work for you: nova keypair-add cloudkey >cloudkey.priv; chmod 600 cloudkey.priv If it works (nova keypair-list), you could ssh to the instance launched with the key using: ssh -i cloudkey.priv @ Best regards, Sasha Chuzhoy. 
----- Original Message ----- > From: "Thales" > To: "Marius Cornea" > Cc: rdo-list at redhat.com > Sent: Sunday, January 10, 2016 10:50:12 PM > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > Thanks, Marius! > > I get the same error. > > "Error (BadRequest): Keypair data is invalid: failed to generate fingerprint > (HTTP p 400) (Requested ID: ...)" > > I'm just staring out with OpenStack here, trying to get through my first > tutorial, so I'm fumbling around. > > ...John > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sasha at redhat.com Tue Jan 12 20:21:41 2016 From: sasha at redhat.com (Sasha Chuzhoy) Date: Tue, 12 Jan 2016 15:21:41 -0500 (EST) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <1872312913.3066988.1452628188775.JavaMail.yahoo@mail.yahoo.com> References: <238262411.9673239.1452558530542.JavaMail.zimbra@redhat.com> <1872312913.3066988.1452628188775.JavaMail.yahoo@mail.yahoo.com> Message-ID: <1112286009.10283967.1452630101512.JavaMail.zimbra@redhat.com> You need to source the file with the variables. Run "source /root/keystonerc_admin" before attempting the "nova keypair-add" command. Thanks. Best regards, Sasha Chuzhoy. 
----- Original Message ----- > From: "Thales" > To: "Sasha Chuzhoy" > Cc: "Marius Cornea" , rdo-list at redhat.com > Sent: Tuesday, January 12, 2016 2:49:48 PM > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > Hello Sasha, > I tried your command, and got the following error: > [john at localhost ~]$ nova keypair-add cloudkey > cloudkey.priv; chmod 600 > cloudkey.priv > ERROR (CommandError): You must provide a username or user id via > --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID] > > > ...John > > > On Monday, January 11, 2016 6:28 PM, Sasha Chuzhoy > wrote: > > > Hi John, > Does this command work for you: > nova keypair-add cloudkey >cloudkey.priv; chmod 600 cloudkey.priv > > If it works (nova keypair-list), you could ssh to the instance launched with > the key using: > ssh -i cloudkey.priv @ > > Best regards, > Sasha Chuzhoy. > > ----- Original Message ----- > > From: "Thales" > > To: "Marius Cornea" > > Cc: rdo-list at redhat.com > > Sent: Sunday, January 10, 2016 10:50:12 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Thanks, Marius! > > > > I get the same error. > > > > "Error (BadRequest): Keypair data is invalid: failed to generate > > fingerprint > > (HTTP p 400) (Requested ID: ...)" > > > > I'm just staring out with OpenStack here, trying to get through my first > > tutorial, so I'm fumbling around. > > > > ...John > > > > From thaleslv at yahoo.com Tue Jan 12 20:52:34 2016 From: thaleslv at yahoo.com (Thales) Date: Tue, 12 Jan 2016 20:52:34 +0000 (UTC) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <1112286009.10283967.1452630101512.JavaMail.zimbra@redhat.com> References: <1112286009.10283967.1452630101512.JavaMail.zimbra@redhat.com> Message-ID: <1608283806.1663961.1452631954189.JavaMail.yahoo@mail.yahoo.com> Thanks! ? 
I'm just going through the beginning tutorials now, so I'm not aware of those commands, nor do I know where to find them; however, I ran the first and, unfortunately, I get the same error: [root at localhost keystone(keystone_admin)]# nova keypair-add cloudkey > cloudkey.priv; chmod 600 cloudkey.priv ERROR (BadRequest): Keypair data is invalid: failed to generate fingerprint (HTTP 400) (Request-ID: req-e722e315-e701-40d8-9fd6-0b5e71df3de1) Regards,...John On Tuesday, January 12, 2016 2:21 PM, Sasha Chuzhoy wrote: You need to source the file with the variables. Run "source /root/keystonerc_admin" before attempting the "nova keypair-add" command. Thanks. Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Thales" > To: "Sasha Chuzhoy" > Cc: "Marius Cornea" , rdo-list at redhat.com > Sent: Tuesday, January 12, 2016 2:49:48 PM > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > Hello Sasha, > I tried your command, and got the following error: > [john at localhost ~]$ nova keypair-add cloudkey > cloudkey.priv; chmod 600 > cloudkey.priv > ERROR (CommandError): You must provide a username or user id via > --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID] > > > ...John > > > On Monday, January 11, 2016 6:28 PM, Sasha Chuzhoy > wrote: > > > Hi John, > Does this command work for you: > nova keypair-add cloudkey >cloudkey.priv; chmod 600 cloudkey.priv > > If it works (nova keypair-list), you could ssh to the instance launched with > the key using: > ssh -i cloudkey.priv @ > > Best regards, > Sasha Chuzhoy. > > ----- Original Message ----- > > From: "Thales" > > To: "Marius Cornea" > > Cc: rdo-list at redhat.com > > Sent: Sunday, January 10, 2016 10:50:12 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Thanks, Marius! > > > > I get the same error.
> > > > "Error (BadRequest): Keypair data is invalid: failed to generate > > fingerprint > > (HTTP p 400) (Requested ID: ...)" > > > > I'm just staring out with OpenStack here, trying to get through my first > > tutorial, so I'm fumbling around. > > > > ...John > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Tue Jan 12 21:53:30 2016 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 12 Jan 2016 16:53:30 -0500 Subject: [Rdo-list] OPM downstream patches In-Reply-To: <56954313.1010001@redhat.com> References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> Message-ID: <569575DA.9050101@redhat.com> On 01/12/2016 01:16 PM, Emilien Macchi wrote: > Also, the way we're packaging OPM is really bad. > > * we have no SHA1 for each module we have in OPM > * we are not able to validate each module > * package tarball is not pure. All other OpenStack RPMS take upstream > tarball so we can easily compare but in OPM... no way to do it. > > Those issues are really critical, I would like to hear from OPM folks, > and find solutions that we will work on during the following weeks. I have 2 proposals, maybe wrong but I wanted to share. Solution #1 - Forks + Puppetfile * Forking all our Puppet modules in https://github.com/redhat-openstack/ * Apply our custom patches in a specific branch for each module * Create a Puppetfile per RDO/OSP version that track SHA1 from repos * Create a script (with r10k for example) that checkout all modules and build RPM. Solution #2 - Forks + one RPM per module * Forking all our Puppet modules in https://github.com/redhat-openstack/ * Apply our custom patches in a specific branch for each module * Create a RPM for each module Of course, I don't take in consideration CI work so I'm willing to suggestions. Feedback is welcome here! 
> Thanks > > On 01/12/2016 12:37 PM, Emilien Macchi wrote: >> So I started an etherpad to discuss why we have so much downstream >> patches in Puppet modules. >> >> https://etherpad.openstack.org/p/opm-patches >> >> In my opinion, we should follow some best practices: >> >> * upstream first. If you find a bug, submit the patch upstream, wait for >> at least a positive review from a core and also successful CI jobs. Then >> you can backport it downstream if urgent. >> * backport it to stable branches when needed. The patch we want is in >> master and not stable? It's too easy to backport it in OPM. Do the >> backport in upstream/stable first, it will help to stay updated with >> upstream. >> * don't change default parameters, don't override them. Our installers >> are able to override any parameter so do not hardcode this kind of change. >> * keep up with upstream: if you have an upstream patch under review that >> is already in OPM: keep it alive and make sure it lands as soon as possible. >> >> UPSTREAM FIRST please please please (I'll send you cookies if you want). >> >> If you have any question about an upstream patch, please join >> #puppet-openstack (freenode) and talk to the group. We're doing reviews >> every day and it's not difficult to land a patch. >> >> In the meantime, I would like to justify each of our backports in the >> etherpad and clean-up a maximum of them. >> >> Thank you for reading so far, >> >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Emilien Macchi -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From thaleslv at yahoo.com Wed Jan 13 00:21:32 2016 From: thaleslv at yahoo.com (Thales) Date: Wed, 13 Jan 2016 00:21:32 +0000 (UTC) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <1608283806.1663961.1452631954189.JavaMail.yahoo@mail.yahoo.com> References: <1112286009.10283967.1452630101512.JavaMail.zimbra@redhat.com> <1608283806.1663961.1452631954189.JavaMail.yahoo@mail.yahoo.com> Message-ID: <1364666626.3407341.1452644492752.JavaMail.yahoo@mail.yahoo.com> I'm still struggling with this one, so if anyone has any other ideas, I'm all ears (or eyes!). ?? Regards,...John On Tuesday, January 12, 2016 2:52 PM, Thales wrote: Thanks! ? ?I'm just going through the beginning tutorials now, so I'm not aware of those commands, nor do I know where to find them, however I ran the first, and, unfortunately, I get the same error:[root at localhost keystone(keystone_admin)]# nova keypair-add cloudkey > cloudkey.priv; chmod 600 cloudkey.priv ERROR (BadRequest): Keypair data is invalid: failed to generate fingerprint (HTTP 400) (Request-ID: req-e722e315-e701-40d8-9fd6-0b5e71df3de1) Regards,...John On Tuesday, January 12, 2016 2:21 PM, Sasha Chuzhoy wrote: You need to source the file with the variables. Run "source /root/keystonerc_admin" before attempting the "nova keypair-add" command. Thanks. Best regards, Sasha Chuzhoy. 
----- Original Message ----- > From: "Thales" > To: "Sasha Chuzhoy" > Cc: "Marius Cornea" , rdo-list at redhat.com > Sent: Tuesday, January 12, 2016 2:49:48 PM > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > Hello Sasha, > I tried your command, and got the following error: > [john at localhost ~]$ nova keypair-add cloudkey > cloudkey.priv; chmod 600 > cloudkey.priv > ERROR (CommandError): You must provide a username or user id via > --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID] > > > ...John >? > >? ? On Monday, January 11, 2016 6:28 PM, Sasha Chuzhoy >? ? wrote: >? > >? Hi John, > Does this command work for you: >? nova keypair-add cloudkey >cloudkey.priv; chmod 600 cloudkey.priv > > If it works (nova keypair-list), you could ssh to the instance launched with > the key using: > ssh -i cloudkey.priv @ > > Best regards, > Sasha Chuzhoy. > > ----- Original Message ----- > > From: "Thales" > > To: "Marius Cornea" > > Cc: rdo-list at redhat.com > > Sent: Sunday, January 10, 2016 10:50:12 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Thanks, Marius! > > > > I get the same error. > > > > "Error (BadRequest): Keypair data is invalid: failed to generate > > fingerprint > > (HTTP p 400) (Requested ID: ...)" > > > > I'm just staring out with OpenStack here, trying to get through my first > > tutorial, so I'm fumbling around. > > > > ...John > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sasha at redhat.com Wed Jan 13 03:02:50 2016 From: sasha at redhat.com (Sasha Chuzhoy) Date: Tue, 12 Jan 2016 22:02:50 -0500 (EST) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <1364666626.3407341.1452644492752.JavaMail.yahoo@mail.yahoo.com> References: <1112286009.10283967.1452630101512.JavaMail.zimbra@redhat.com> <1608283806.1663961.1452631954189.JavaMail.yahoo@mail.yahoo.com> <1364666626.3407341.1452644492752.JavaMail.yahoo@mail.yahoo.com> Message-ID: <895813556.10402525.1452654170910.JavaMail.zimbra@redhat.com> John, Would it be possible for you to share all the commands you've run (history) during the installation + the OS version (maybe using http://paste.openstack.org/) Thanks. Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Thales" > To: "Sasha Chuzhoy" > Cc: "Marius Cornea" , rdo-list at redhat.com > Sent: Tuesday, January 12, 2016 7:21:32 PM > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > I'm still struggling with this one, so if anyone has any other ideas, I'm all > ears (or eyes!). > Regards,...John > > > On Tuesday, January 12, 2016 2:52 PM, Thales wrote: > > > Thanks! I'm just going through the beginning tutorials now, so I'm not > aware of those commands, nor do I know where to find them; however, I ran > the first and, unfortunately, I get the same error: [root at localhost > keystone(keystone_admin)]# nova keypair-add cloudkey > cloudkey.priv; chmod > 600 cloudkey.priv > ERROR (BadRequest): Keypair data is invalid: failed to generate fingerprint > (HTTP 400) (Request-ID: req-e722e315-e701-40d8-9fd6-0b5e71df3de1) > Regards,...John > > On Tuesday, January 12, 2016 2:21 PM, Sasha Chuzhoy > wrote: > > > You need to source the file with the variables. > Run "source /root/keystonerc_admin" before attempting the "nova keypair-add" > command. > > Thanks. > > Best regards, > Sasha Chuzhoy.
> > ----- Original Message ----- > > From: "Thales" > > To: "Sasha Chuzhoy" > > Cc: "Marius Cornea" , rdo-list at redhat.com > > Sent: Tuesday, January 12, 2016 2:49:48 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Hello Sasha, > > I tried your command, and got the following error: > > [john at localhost ~]$ nova keypair-add cloudkey > cloudkey.priv; chmod 600 > > cloudkey.priv > > ERROR (CommandError): You must provide a username or user id via > > --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID] > > > > > > ...John > >? > > > >? ? On Monday, January 11, 2016 6:28 PM, Sasha Chuzhoy > >? ? wrote: > >? > > > >? Hi John, > > Does this command work for you: > >? nova keypair-add cloudkey >cloudkey.priv; chmod 600 cloudkey.priv > > > > If it works (nova keypair-list), you could ssh to the instance launched > > with > > the key using: > > ssh -i cloudkey.priv @ > > > > Best regards, > > Sasha Chuzhoy. > > > > ----- Original Message ----- > > > From: "Thales" > > > To: "Marius Cornea" > > > Cc: rdo-list at redhat.com > > > Sent: Sunday, January 10, 2016 10:50:12 PM > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > > > Thanks, Marius! > > > > > > I get the same error. > > > > > > "Error (BadRequest): Keypair data is invalid: failed to generate > > > fingerprint > > > (HTTP p 400) (Requested ID: ...)" > > > > > > I'm just staring out with OpenStack here, trying to get through my first > > > tutorial, so I'm fumbling around. 
> > > > > > ...John > > > > > > > > > > > > From thaleslv at yahoo.com Wed Jan 13 03:36:42 2016 From: thaleslv at yahoo.com (Thales) Date: Wed, 13 Jan 2016 03:36:42 +0000 (UTC) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <895813556.10402525.1452654170910.JavaMail.zimbra@redhat.com> References: <895813556.10402525.1452654170910.JavaMail.zimbra@redhat.com> Message-ID: <61375750.3463806.1452656202171.JavaMail.yahoo@mail.yahoo.com> I'll do my best to recall.? My OS is CentOS 7, in VirtualBox 5.0. ? ? The host OS is WIndows 10. ? ?I have an 8 core machine, with 16 gigabytes of ram, 10 of which are being used by the guest OS. I set up a fixed IP, and used a "bridged adapter" in VirtualBox. For installing RDO, I followed these instructions exactly: https://www.rdoproject.org/install/quickstart/ I also disabled firewalld, and for SELinux executed "setenforce 0" Here is the pastebin, basically a repeat of the "quickstart" website http://paste.openstack.org/show/483685/ Looking at my Bash "history" command confirms those were the steps taken. Thanks, Sasha!....John On Tuesday, January 12, 2016 9:02 PM, Sasha Chuzhoy wrote: John, Would it be possible for you to share all the commands you've ran (history) during the installation + the OS version? (maybe using http://paste.openstack.org/) Thanks. Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Thales" > To: "Sasha Chuzhoy" > Cc: "Marius Cornea" , rdo-list at redhat.com > Sent: Tuesday, January 12, 2016 7:21:32 PM > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > I'm still struggling with this one, so if anyone has any other ideas, I'm all > ears (or eyes!). > Regards,...John >? > >? ? On Tuesday, January 12, 2016 2:52 PM, Thales wrote: >? > >? Thanks! ? ?I'm just going through the beginning tutorials now, so I'm not >? aware of those commands, nor do I know where to find them, however I ran >? 
the first, and, unfortunately, I get the same error:[root at localhost >? keystone(keystone_admin)]# nova keypair-add cloudkey > cloudkey.priv; chmod >? 600 cloudkey.priv > ERROR (BadRequest): Keypair data is invalid: failed to generate fingerprint > (HTTP 400) (Request-ID: req-e722e315-e701-40d8-9fd6-0b5e71df3de1) > Regards,...John > >? ? On Tuesday, January 12, 2016 2:21 PM, Sasha Chuzhoy >? ? wrote: >? > >? You need to source the file with the variables. > Run "source /root/keystonerc_admin" before attempting the "nova keypair-add" > command. > > Thanks. > > Best regards, > Sasha Chuzhoy. > > ----- Original Message ----- > > From: "Thales" > > To: "Sasha Chuzhoy" > > Cc: "Marius Cornea" , rdo-list at redhat.com > > Sent: Tuesday, January 12, 2016 2:49:48 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Hello Sasha, > > I tried your command, and got the following error: > > [john at localhost ~]$ nova keypair-add cloudkey > cloudkey.priv; chmod 600 > > cloudkey.priv > > ERROR (CommandError): You must provide a username or user id via > > --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID] > > > > > > ...John > >? > > > >? ? On Monday, January 11, 2016 6:28 PM, Sasha Chuzhoy > >? ? wrote: > >? > > > >? Hi John, > > Does this command work for you: > >? nova keypair-add cloudkey >cloudkey.priv; chmod 600 cloudkey.priv > > > > If it works (nova keypair-list), you could ssh to the instance launched > > with > > the key using: > > ssh -i cloudkey.priv @ > > > > Best regards, > > Sasha Chuzhoy. > > > > ----- Original Message ----- > > > From: "Thales" > > > To: "Marius Cornea" > > > Cc: rdo-list at redhat.com > > > Sent: Sunday, January 10, 2016 10:50:12 PM > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > > > Thanks, Marius! > > > > > > I get the same error. 
> > > > > > "Error (BadRequest): Keypair data is invalid: failed to generate > > > fingerprint > > > (HTTP 400) (Requested ID: ...)" > > > > > > I'm just starting out with OpenStack here, trying to get through my first > > > tutorial, so I'm fumbling around. > > > > > > ...John > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier.pena at redhat.com Wed Jan 13 07:54:52 2016 From: javier.pena at redhat.com (Javier Pena) Date: Wed, 13 Jan 2016 02:54:52 -0500 (EST) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <1364666626.3407341.1452644492752.JavaMail.yahoo@mail.yahoo.com> References: <1112286009.10283967.1452630101512.JavaMail.zimbra@redhat.com> <1608283806.1663961.1452631954189.JavaMail.yahoo@mail.yahoo.com> <1364666626.3407341.1452644492752.JavaMail.yahoo@mail.yahoo.com> Message-ID: <241991190.11892422.1452671692434.JavaMail.zimbra@redhat.com> ----- Original Message ----- > I'm still struggling with this one, so if anyone has any other ideas, I'm all > ears (or eyes!). > Regards, > ...John > On Tuesday, January 12, 2016 2:52 PM, Thales wrote: > Thanks! I'm just going through the beginning tutorials now, so I'm not aware > of those commands, nor do I know where to find them, however I ran the > first, and, unfortunately, I get the same error: > [root at localhost keystone(keystone_admin)]# nova keypair-add cloudkey > > cloudkey.priv; chmod 600 cloudkey.priv > ERROR (BadRequest): Keypair data is invalid: failed to generate fingerprint > (HTTP 400) (Request-ID: req-e722e315-e701-40d8-9fd6-0b5e71df3de1) > Regards, > ...John > On Tuesday, January 12, 2016 2:21 PM, Sasha Chuzhoy wrote: > You need to source the file with the variables. > Run "source /root/keystonerc_admin" before attempting the "nova keypair-add" > command. > Thanks. > Best regards, > Sasha Chuzhoy.
> ----- Original Message ----- > > From: "Thales" < thaleslv at yahoo.com > > > To: "Sasha Chuzhoy" < sasha at redhat.com > > > Cc: "Marius Cornea" < marius at remote-lab.net >, rdo-list at redhat.com > > Sent: Tuesday, January 12, 2016 2:49:48 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Hello Sasha, > > I tried your command, and got the following error: > > [ john at localhost ~]$ nova keypair-add cloudkey > cloudkey.priv; chmod 600 > > cloudkey.priv > > ERROR (CommandError): You must provide a username or user id via > > --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID] > > > > > > ...John > > > > > > On Monday, January 11, 2016 6:28 PM, Sasha Chuzhoy < sasha at redhat.com > > > wrote: > > > > > > Hi John, > > Does this command work for you: > > nova keypair-add cloudkey >cloudkey.priv; chmod 600 cloudkey.priv > > > > If it works (nova keypair-list), you could ssh to the instance launched > > with > > the key using: > > ssh -i cloudkey.priv @ > > > > Best regards, > > Sasha Chuzhoy. > > > > ----- Original Message ----- > > > From: "Thales" < thaleslv at yahoo.com > > > > To: "Marius Cornea" < marius at remote-lab.net > > > > Cc: rdo-list at redhat.com > > > Sent: Sunday, January 10, 2016 10:50:12 PM > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > > > Thanks, Marius! > > > > > > I get the same error. > > > > > > "Error (BadRequest): Keypair data is invalid: failed to generate > > > fingerprint > > > (HTTP p 400) (Requested ID: ...)" > > > > > > I'm just staring out with OpenStack here, trying to get through my first > > > tutorial, so I'm fumbling around. 
> > > > > > ...John > > > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com From javier.pena at redhat.com Wed Jan 13 07:56:55 2016 From: javier.pena at redhat.com (Javier Pena) Date: Wed, 13 Jan 2016 02:56:55 -0500 (EST) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <1364666626.3407341.1452644492752.JavaMail.yahoo@mail.yahoo.com> References: <1112286009.10283967.1452630101512.JavaMail.zimbra@redhat.com> <1608283806.1663961.1452631954189.JavaMail.yahoo@mail.yahoo.com> <1364666626.3407341.1452644492752.JavaMail.yahoo@mail.yahoo.com> Message-ID: <1931522934.11892588.1452671815938.JavaMail.zimbra@redhat.com> ----- Original Message ----- > I'm still struggling with this one, so if anyone has any other ideas, I'm all > ears (or eyes!). Hi John, Can you paste the contents of /var/log/nova/*.log somewhere? It might help understanding why it is not working. Regards, Javier > Regards, > ...John > On Tuesday, January 12, 2016 2:52 PM, Thales wrote: > Thanks! I'm just going through the beginning tutorials now, so I'm not aware > of those commands, nor do I know where to find them, however I ran the > first, and, unfortunately, I get the same error: > [root at localhost keystone(keystone_admin)]# nova keypair-add cloudkey > > cloudkey.priv; chmod 600 cloudkey.priv > ERROR (BadRequest): Keypair data is invalid: failed to generate fingerprint > (HTTP 400) (Request-ID: req-e722e315-e701-40d8-9fd6-0b5e71df3de1) > Regards, > ...John > On Tuesday, January 12, 2016 2:21 PM, Sasha Chuzhoy wrote: > You need to source the file with the variables. > Run "source /root/keystonerc_admin" before attempting the "nova keypair-add" > command. > Thanks. > Best regards, > Sasha Chuzhoy.
> ----- Original Message ----- > > From: "Thales" < thaleslv at yahoo.com > > > To: "Sasha Chuzhoy" < sasha at redhat.com > > > Cc: "Marius Cornea" < marius at remote-lab.net >, rdo-list at redhat.com > > Sent: Tuesday, January 12, 2016 2:49:48 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Hello Sasha, > > I tried your command, and got the following error: > > [ john at localhost ~]$ nova keypair-add cloudkey > cloudkey.priv; chmod 600 > > cloudkey.priv > > ERROR (CommandError): You must provide a username or user id via > > --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID] > > > > > > ...John > > > > > > On Monday, January 11, 2016 6:28 PM, Sasha Chuzhoy < sasha at redhat.com > > > wrote: > > > > > > Hi John, > > Does this command work for you: > > nova keypair-add cloudkey >cloudkey.priv; chmod 600 cloudkey.priv > > > > If it works (nova keypair-list), you could ssh to the instance launched > > with > > the key using: > > ssh -i cloudkey.priv @ > > > > Best regards, > > Sasha Chuzhoy. > > > > ----- Original Message ----- > > > From: "Thales" < thaleslv at yahoo.com > > > > To: "Marius Cornea" < marius at remote-lab.net > > > > Cc: rdo-list at redhat.com > > > Sent: Sunday, January 10, 2016 10:50:12 PM > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > > > Thanks, Marius! > > > > > > I get the same error. > > > > > > "Error (BadRequest): Keypair data is invalid: failed to generate > > > fingerprint > > > (HTTP p 400) (Requested ID: ...)" > > > > > > I'm just staring out with OpenStack here, trying to get through my first > > > tutorial, so I'm fumbling around. 
> > > > > > ...John > > > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com From chkumar246 at gmail.com Wed Jan 13 09:46:57 2016 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 13 Jan 2016 15:16:57 +0530 Subject: [Rdo-list] RDO Bug statistics [2016-01-13] Message-ID:

# RDO Bugs on 2016-01-13

This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at . To report a new bug against RDO, go to:

## Summary

- Open (NEW, ASSIGNED, ON_DEV): 374
- Fixed (MODIFIED, POST, ON_QA): 210

## Number of open bugs by component

dib-utils                 [  2]
diskimage-builder         [  4] +
distribution              [ 13] ++++++
dnsmasq                   [  1]
Documentation             [  4] +
instack                   [  4] +
instack-undercloud        [ 28] +++++++++++++
iproute                   [  1]
openstack-ceilometer      [  2]
openstack-cinder          [ 12] +++++
openstack-foreman-inst... [  2]
openstack-glance          [  2]
openstack-heat            [  5] ++
openstack-horizon         [  2]
openstack-ironic          [  2]
openstack-ironic-disco... [  1]
openstack-keystone        [ 10] ++++
openstack-manila          [ 10] ++++
openstack-neutron         [ 12] +++++
openstack-nova            [ 20] +++++++++
openstack-packstack       [ 83] ++++++++++++++++++++++++++++++++++++++++
openstack-puppet-modules  [ 17] ++++++++
openstack-selinux         [ 11] +++++
openstack-swift           [  3] +
openstack-tripleo         [ 27] +++++++++++++
openstack-tripleo-heat... [  5] ++
openstack-tripleo-imag... [  2]
openstack-trove           [  1]
openstack-tuskar          [  2]
openstack-utils           [  1]
Package Review            [  9] ++++
python-glanceclient       [  2]
python-keystonemiddleware [  1]
python-neutronclient      [  3] +
python-novaclient         [  1]
python-openstackclient    [  5] ++
python-oslo-config        [  2]
rdo-manager               [ 52] +++++++++++++++++++++++++
rdo-manager-cli           [  6] ++
rdopkg                    [  1]
RFEs                      [  2]
tempest                   [  1]

## Open bugs

This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed.
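[Editorial aside: the '+' bars in the summary above scale each component's open-bug count to a 40-column bar relative to the largest count (83 for openstack-packstack), truncating toward zero — so 28 open bugs render as 13 pluses. A minimal sketch of that rendering (not the script the report actually uses; the `bar` function name is illustrative):]

```shell
# Render one histogram row: component name, bracketed count, and a '+' bar
# whose width is count * 40 / max, using shell integer (truncating) division.
bar() {
  name=$1; count=$2; max=$3
  width=$(( count * 40 / max ))
  printf '%-26s [%3d] ' "$name" "$count"
  i=0
  while [ "$i" -lt "$width" ]; do printf '+'; i=$(( i + 1 )); done
  printf '\n'
}

bar openstack-packstack 83 83   # full-width bar: 40 pluses
bar instack-undercloud  28 83   # 28 * 40 / 83 = 13 pluses
bar dib-utils            2 83   # 2 * 40 / 83 = 0 pluses, bar is empty
```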
(374 bugs) ### dib-utils (2 bugs) [1263779 ] http://bugzilla.redhat.com/1263779 (NEW) Component: dib-utils Last change: 2015-12-07 Summary: Packstack Ironic admin_url misconfigured in nova.conf [1283812 ] http://bugzilla.redhat.com/1283812 (NEW) Component: dib-utils Last change: 2015-12-10 Summary: local_interface=bond0.120 in undercloud.conf create broken network configuration ### diskimage-builder (4 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version ### distribution (13 bugs) [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1271169 ] http://bugzilla.redhat.com/1271169 (NEW) Component: distribution Last change: 2015-10-13 Summary: [doc] virtual environment setup [1290163 ] http://bugzilla.redhat.com/1290163 (NEW) Component: distribution Last change: 2015-12-10 Summary: Tracker: Review requests for new RDO Mitaka packages [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2016-01-04 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: 
distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1275608 ] http://bugzilla.redhat.com/1275608 (NEW) Component: distribution Last change: 2015-10-27 Summary: EOL'ed rpm file URL not up to date [1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution Last change: 2015-09-16 Summary: Memcached not built with SASL support [1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-12-10 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### Documentation (4 bugs) [1272108 ] http://bugzilla.redhat.com/1272108 (NEW) Component: Documentation Last change: 2015-10-15 Summary: [DOC] External network should be documents in RDO manager installation [1271793 ] http://bugzilla.redhat.com/1271793 (NEW) Component: Documentation Last change: 2015-10-14 Summary: rdo-manager doc has incomplete /etc/hosts configuration [1271888 ] http://bugzilla.redhat.com/1271888 
(NEW) Component: Documentation Last change: 2015-10-15 Summary: step required to build images for overcloud [1272111 ] http://bugzilla.redhat.com/1272111 (NEW) Component: Documentation Last change: 2015-10-15 Summary: RFE : document how to access horizon in RDO manager VIRT setup ### instack (4 bugs) [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (28 bugs) [1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout [1271200 ] http://bugzilla.redhat.com/1271200 (ASSIGNED) Component: instack-undercloud Last change: 2015-10-20 Summary: Overcloud images contain Kilo repos [1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2015-09-23 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 
Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six." [1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1270585 ] http://bugzilla.redhat.com/1270585 (NEW) Component: instack-undercloud Last change: 2015-10-19 Summary: instack isntallation fails with parse error: Invalid string liberty on CentOS [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. (4048492 < 4G)" [1266101 ] http://bugzilla.redhat.com/1266101 (NEW) Component: instack-undercloud Last change: 2015-09-29 Summary: instack-virt-setup fails on CentOS7 [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . 
[1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. See log for details.', 'os-refresh-config')" [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. [1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1214545 ] http://bugzilla.redhat.com/1214545 
(NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output [1266451 ] http://bugzilla.redhat.com/1266451 (NEW) Component: instack-undercloud Last change: 2015-09-30 Summary: instack-undercloud fails to setup seed vm, parse error while creating ssh key [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build- images script ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-12-07 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (2 bugs) [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2016-01-04 Summary: python-redis is not installed with packstack allinone [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Wrong alarms order on 'severity' field ### openstack-cinder (12 bugs) [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2015-11-25 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1268182 ] http://bugzilla.redhat.com/1268182 (NEW) Component: openstack-cinder Last change: 2015-10-02 Summary: 
cinder spontaneously sets instance root device to 'available' [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1229551 ] http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2016-01-04 Summary: should use new names in cinder-dist.conf [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage ### openstack-foreman-installer (2 bugs) [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 
2015-04-20 Summary: Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance- common: python-glance ### openstack-heat (5 bugs) [1291047 ] http://bugzilla.redhat.com/1291047 (NEW) Component: openstack-heat Last change: 2016-01-07 Summary: (RDO Mitaka) Overcloud deployment failed: Exceeded max scheduling attempts [1293961 ] http://bugzilla.redhat.com/1293961 (ASSIGNED) Component: openstack-heat Last change: 2016-01-07 Summary: [SFCI] Heat template failed to start because Property error: ... net_cidr (constraint not found) [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message ### openstack-horizon (2 bugs) [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable [1275656 ] http://bugzilla.redhat.com/1275656 (NEW) Component: openstack-horizon Last change: 2015-10-28 Summary: FontAwesome lib bad path ### openstack-ironic (2 bugs) [1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: openstack-ironic Last change: 2016-01-04 Summary: IPMI driver for Ironic should support RAID for operating system/root parition [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. 
(HTTP 409) ### openstack-ironic-discoverd (1 bug) [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (10 bugs) [1289267 ] http://bugzilla.redhat.com/1289267 (NEW) Component: openstack-keystone Last change: 2015-12-09 Summary: Mitaka: keystone.py is deprecated for WSGI implementation [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-10-26 Summary: keystone-all process reaches 100% CPU consumption [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-12-07 Summary: keystone: add token flush cronjob script to keystone package [1280530 ] http://bugzilla.redhat.com/1280530 (NEW) Component: openstack-keystone Last change: 2015-11-12 Summary: Fernet tokens cannot read key files with SELInuxz enabeld [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1284871 ] http://bugzilla.redhat.com/1284871 (NEW) Component: openstack-keystone Last change: 2015-11-24 Summary: /usr/share/keystone/wsgi-keystone.conf is missing group=keystone [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non- existent class [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: 
openstack-keystone Last change: 2015-11-24 Summary: wrong log directories in /usr/share/keystone/wsgi- keystone.conf ### openstack-manila (10 bugs) [1278918 ] http://bugzilla.redhat.com/1278918 (NEW) Component: openstack-manila Last change: 2015-12-06 Summary: manila-api fails to start without updates from upstream stable/liberty [1272957 ] http://bugzilla.redhat.com/1272957 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver: same volumes are re-used with vol mapped layout after restarting manila services [1277787 ] http://bugzilla.redhat.com/1277787 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: Glusterfs_driver: Export location for Glusterfs NFS- Ganesha is incorrect [1272960 ] http://bugzilla.redhat.com/1272960 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Glusterfs NFS-Ganesha share's export location should be uniform for both nfsv3 & nfsv4 protocols [1277792 ] http://bugzilla.redhat.com/1277792 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: glusterfs_driver: Access-deny for glusterfs driver should be dynamic [1278919 ] http://bugzilla.redhat.com/1278919 (NEW) Component: openstack-manila Last change: 2015-12-06 Summary: AvailabilityZoneFilter is not working in manila- scheduler [1272962 ] http://bugzilla.redhat.com/1272962 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Attempt to create share fails ungracefully when backend gluster volumes aren't exported [1272970 ] http://bugzilla.redhat.com/1272970 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_native: cannot connect via SSH using password authentication to multiple gluster clusters with different passwords [1272968 ] http://bugzilla.redhat.com/1272968 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs vol based layout: Deleting a share created from snapshot should also delete its backend gluster volume 
[1272958 ] http://bugzilla.redhat.com/1272958 (NEW)
Component: openstack-manila  Last change: 2015-10-19
Summary: gluster driver - vol based layout: share size may be misleading

### openstack-neutron (12 bugs)

[1282403 ] http://bugzilla.redhat.com/1282403 (NEW)
Component: openstack-neutron  Last change: 2016-01-11
Summary: Errors when running tempest.api.network.test_ports with IPAM reference driver enabled
[1180201 ] http://bugzilla.redhat.com/1180201 (NEW)
Component: openstack-neutron  Last change: 2015-01-08
Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false
[1254275 ] http://bugzilla.redhat.com/1254275 (NEW)
Component: openstack-neutron  Last change: 2015-08-17
Summary: neutron-dhcp-agent.service is not enabled after packstack deploy
[1164230 ] http://bugzilla.redhat.com/1164230 (NEW)
Component: openstack-neutron  Last change: 2014-12-16
Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files
[1269610 ] http://bugzilla.redhat.com/1269610 (ASSIGNED)
Component: openstack-neutron  Last change: 2015-11-19
Summary: Overcloud deployment fails - openvswitch agent is not running and nova instances end up in error state
[1226006 ] http://bugzilla.redhat.com/1226006 (NEW)
Component: openstack-neutron  Last change: 2015-05-28
Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken".
[1266381 ] http://bugzilla.redhat.com/1266381 (NEW)
Component: openstack-neutron  Last change: 2015-12-22
Summary: OpenStack Liberty QoS feature is not working on EL7 as it needs MySQL-python-1.2.5
[1281308 ] http://bugzilla.redhat.com/1281308 (NEW)
Component: openstack-neutron  Last change: 2015-12-30
Summary: QoS policy is not enforced when using a previously used port
[1147152 ] http://bugzilla.redhat.com/1147152 (NEW)
Component: openstack-neutron  Last change: 2014-09-27
Summary: Use neutron-sanity-check in CI checks
[1280258 ] http://bugzilla.redhat.com/1280258 (NEW)
Component: openstack-neutron  Last change: 2015-11-11
Summary: tenants seem like they are able to detach admin enforced QoS policies from ports or networks
[1259351 ] http://bugzilla.redhat.com/1259351 (NEW)
Component: openstack-neutron  Last change: 2015-09-02
Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https
[1065826 ] http://bugzilla.redhat.com/1065826 (ASSIGNED)
Component: openstack-neutron  Last change: 2015-12-15
Summary: [RFE] [neutron] neutron services needs more RPM granularity

### openstack-nova (20 bugs)

[1228836 ] http://bugzilla.redhat.com/1228836 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: Is there a way to configure IO throttling for RBD devices via configuration file
[1200701 ] http://bugzilla.redhat.com/1200701 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version
[1229301 ] http://bugzilla.redhat.com/1229301 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe"
[1234837 ] http://bugzilla.redhat.com/1234837 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: Kilo assigning ipv6 address, even though it's disabled.
[1161915 ] http://bugzilla.redhat.com/1161915 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: horizon console uses http when horizon is set to use ssl
[1213547 ] http://bugzilla.redhat.com/1213547 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly
[1154152 ] http://bugzilla.redhat.com/1154152 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: [nova] hw:numa_nodes=0 causes divide by zero
[1161920 ] http://bugzilla.redhat.com/1161920 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: novnc init script doesn't write to log
[1271033 ] http://bugzilla.redhat.com/1271033 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: nova.conf.sample is out of date
[1154201 ] http://bugzilla.redhat.com/1154201 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given)
[1278808 ] http://bugzilla.redhat.com/1278808 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: Guest fails to use more than 1 vCPU with smpboot: do_boot_cpu failed(-1) to wakeup
[1190815 ] http://bugzilla.redhat.com/1190815 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: Nova - db connection string present on compute nodes
[1149682 ] http://bugzilla.redhat.com/1149682 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: nova object store allows get object after date expires
[1148526 ] http://bugzilla.redhat.com/1148526 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: nova: fail to edit project quota with DataError from nova
[1294747 ] http://bugzilla.redhat.com/1294747 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: Migration fails when the SRIOV PF is not online
[1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED)
Component: openstack-nova  Last change: 2015-10-17
Summary: Ensure translations are installed correctly and picked up at runtime
[1189931 ] http://bugzilla.redhat.com/1189931 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: Nova AVC messages
[1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED)
Component: openstack-nova  Last change: 2016-01-08
Summary: logrotate should copytruncate to avoid openstack logging to deleted files
[1180129 ] http://bugzilla.redhat.com/1180129 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: Installation of openstack-nova-compute fails on PowerKVM
[1157690 ] http://bugzilla.redhat.com/1157690 (NEW)
Component: openstack-nova  Last change: 2016-01-04
Summary: v4-fixed-ip= not working with juno nova networking

### openstack-packstack (83 bugs)

[1203444 ] http://bugzilla.redhat.com/1203444 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: "private" network created by packstack is not owned by any tenant
[1284182 ] http://bugzilla.redhat.com/1284182 (NEW)
Component: openstack-packstack  Last change: 2015-11-21
Summary: Unable to start Keystone, core dump
[1296844 ] http://bugzilla.redhat.com/1296844 (NEW)
Component: openstack-packstack  Last change: 2016-01-08
Summary: RDO Kilo packstack AIO install fails on CentOS 7.2. Error: Unable to connect to mongodb server! (192.169.142.54:27017)
[1169742 ] http://bugzilla.redhat.com/1169742 (NEW)
Component: openstack-packstack  Last change: 2015-11-06
Summary: Error: service-update is not currently supported by the keystone sql driver
[1188491 ] http://bugzilla.redhat.com/1188491 (ASSIGNED)
Component: openstack-packstack  Last change: 2015-12-03
Summary: Packstack wording is unclear for demo and testing provisioning.
[1201612 ] http://bugzilla.redhat.com/1201612 (ASSIGNED)
Component: openstack-packstack  Last change: 2015-12-03
Summary: Interactive - Packstack asks for Tempest details even when Tempest install is declined
[1176433 ] http://bugzilla.redhat.com/1176433 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: packstack fails to configure horizon - juno/rhel7 (vm)
[982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED)
Component: openstack-packstack  Last change: 2015-06-24
Summary: [RFE] Include Fedora cloud images in some nice way
[1061753 ] http://bugzilla.redhat.com/1061753 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt
[1160885 ] http://bugzilla.redhat.com/1160885 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: rabbitmq won't start if ssl is required
[1202958 ] http://bugzilla.redhat.com/1202958 (NEW)
Component: openstack-packstack  Last change: 2015-07-14
Summary: Packstack generates invalid /etc/sysconfig/network-scripts/ifcfg-br-ex
[1292271 ] http://bugzilla.redhat.com/1292271 (NEW)
Component: openstack-packstack  Last change: 2015-12-18
Summary: Receive Msg 'Error: Could not find user glance'
[1275803 ] http://bugzilla.redhat.com/1275803 (NEW)
Component: openstack-packstack  Last change: 2015-12-03
Summary: packstack --allinone fails on Fedora 22-3 during _keystone.pp
[1097291 ] http://bugzilla.redhat.com/1097291 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: [RFE] SPICE support in packstack
[1244407 ] http://bugzilla.redhat.com/1244407 (NEW)
Component: openstack-packstack  Last change: 2015-07-18
Summary: Deploying ironic kilo with packstack fails
[1255369 ] http://bugzilla.redhat.com/1255369 (NEW)
Component: openstack-packstack  Last change: 2015-12-03
Summary: Improve session settings for horizon
[1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV)
Component: openstack-packstack  Last change: 2015-09-09
Summary: swift: Admin user does not have permissions to see containers created by glance service
[1254389 ] http://bugzilla.redhat.com/1254389 (ASSIGNED)
Component: openstack-packstack  Last change: 2015-12-13
Summary: Can no longer run packstack to maintain cluster
[1100142 ] http://bugzilla.redhat.com/1100142 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: Packstack missing ML2 Mellanox Mechanism Driver
[953586 ] http://bugzilla.redhat.com/953586 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon
[1206742 ] http://bugzilla.redhat.com/1206742 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: Installed epel-release prior to running packstack, packstack disables it on invocation
[1232455 ] http://bugzilla.redhat.com/1232455 (NEW)
Component: openstack-packstack  Last change: 2015-09-24
Summary: Errors installing kilo on fedora21
[1187572 ] http://bugzilla.redhat.com/1187572 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config
[1239286 ] http://bugzilla.redhat.com/1239286 (NEW)
Component: openstack-packstack  Last change: 2015-07-05
Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands'
[1063393 ] http://bugzilla.redhat.com/1063393 (ASSIGNED)
Component: openstack-packstack  Last change: 2015-12-02
Summary: RFE: Provide option to set bind_host/bind_port for API services
[1290415 ] http://bugzilla.redhat.com/1290415 (NEW)
Component: openstack-packstack  Last change: 2016-01-09
Summary: Error: Unable to retrieve volume limit information when accessing System Defaults in Horizon
[1226393 ] http://bugzilla.redhat.com/1226393 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail
[1232496 ] http://bugzilla.redhat.com/1232496 (NEW)
Component: openstack-packstack  Last change: 2015-06-16
Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can
[1247816 ] http://bugzilla.redhat.com/1247816 (NEW)
Component: openstack-packstack  Last change: 2015-07-29
Summary: rdo liberty trunk; nova compute fails to start
[1269535 ] http://bugzilla.redhat.com/1269535 (NEW)
Component: openstack-packstack  Last change: 2015-10-07
Summary: packstack script does not test to see if the rc files *were* created.
[1282746 ] http://bugzilla.redhat.com/1282746 (NEW)
Component: openstack-packstack  Last change: 2016-01-08
Summary: Swift's proxy-server is not configured to use ceilometer
[1167121 ] http://bugzilla.redhat.com/1167121 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: centos7 fails to install glance
[1242647 ] http://bugzilla.redhat.com/1242647 (NEW)
Component: openstack-packstack  Last change: 2015-12-07
Summary: Nova keypair doesn't work with Nova Networking
[1239027 ] http://bugzilla.redhat.com/1239027 (NEW)
Component: openstack-packstack  Last change: 2015-12-07
Summary: please move httpd log files to corresponding dirs
[1107908 ] http://bugzilla.redhat.com/1107908 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: Offset Swift ports to 6200
[1116019 ] http://bugzilla.redhat.com/1116019 (ASSIGNED)
Component: openstack-packstack  Last change: 2015-12-02
Summary: AMQP1.0 server configurations needed
[1266196 ] http://bugzilla.redhat.com/1266196 (NEW)
Component: openstack-packstack  Last change: 2015-09-25
Summary: Packstack Fails on prescript.pp with "undefined method 'unsafe_load_file' for Psych:Module"
[1184806 ] http://bugzilla.redhat.com/1184806 (NEW)
Component: openstack-packstack  Last change: 2015-12-02
Summary: [RFE] Packstack should support deploying Nova and Glance with RBD images and Ceph as a backend
[1270770 ] http://bugzilla.redhat.com/1270770 (NEW)
Component: openstack-packstack  Last change: 2015-10-12
Summary: Packstack generated CONFIG_MANILA_SERVICE_IMAGE_LOCATION points to a dropbox link
[1279642 ] http://bugzilla.redhat.com/1279642 (NEW)
Component: openstack-packstack  Last change: 2015-11-09
Summary: Packstack run fails when running with DEMO
[1200129 ] http://bugzilla.redhat.com/1200129 (ASSIGNED)
Component: openstack-packstack  Last change: 2015-12-03
Summary: [RFE] add support for ceilometer workload partitioning via tooz/redis
[1194678 ] http://bugzilla.redhat.com/1194678 (NEW)
Component: openstack-packstack  Last change: 2015-12-03
Summary: On aarch64, nova.conf should default to vnc_enabled=False
[1293693 ] http://bugzilla.redhat.com/1293693 (NEW)
Component: openstack-packstack  Last change: 2015-12-23
Summary: Keystone setup fails on missing required parameter
[1176797 ] http://bugzilla.redhat.com/1176797 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest
[1286995 ] http://bugzilla.redhat.com/1286995 (NEW)
Component: openstack-packstack  Last change: 2015-12-07
Summary: PackStack should configure LVM filtering with LVM/iSCSI
[1235948 ] http://bugzilla.redhat.com/1235948 (NEW)
Component: openstack-packstack  Last change: 2015-07-18
Summary: Error occurred during setup of Ironic via packstack. Invalid parameter rabbit_user
[1209206 ] http://bugzilla.redhat.com/1209206 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: packstack --allinone fails - CentOS7; fresh install: Error: /Stage[main]/Apache::Service/Service[httpd]
[1279641 ] http://bugzilla.redhat.com/1279641 (NEW)
Component: openstack-packstack  Last change: 2015-11-09
Summary: Packstack run does not install keystoneauth1
[1254447 ] http://bugzilla.redhat.com/1254447 (NEW)
Component: openstack-packstack  Last change: 2015-11-21
Summary: Packstack --allinone fails while starting HTTPD service
[1207371 ] http://bugzilla.redhat.com/1207371 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: packstack --allinone fails during _keystone.pp
[1235139 ] http://bugzilla.redhat.com/1235139 (NEW)
Component: openstack-packstack  Last change: 2015-07-01
Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tmp/packstack/b77f37620d9f4794b6f38730442962b6/manifests/xxx.xxx.xxx.xxx_swift.pp:90
[1158015 ] http://bugzilla.redhat.com/1158015 (NEW)
Component: openstack-packstack  Last change: 2015-04-14
Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found
[1206358 ] http://bugzilla.redhat.com/1206358 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: provision_glance does not honour proxy setting when getting image
[1276277 ] http://bugzilla.redhat.com/1276277 (NEW)
Component: openstack-packstack  Last change: 2015-10-31
Summary: packstack --allinone fails on CentOS 7 x86_64 1503-01
[1185627 ] http://bugzilla.redhat.com/1185627 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: glance provision disregards keystone region setting
[903645 ] http://bugzilla.redhat.com/903645 (ASSIGNED)
Component: openstack-packstack  Last change: 2015-12-02
Summary: RFE: Include the ability in PackStack to support SSL for all REST services and message bus communication
[1214922 ] http://bugzilla.redhat.com/1214922 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: Cannot use ipv6 address for cinder nfs backend.
[1249169 ] http://bugzilla.redhat.com/1249169 (NEW)
Component: openstack-packstack  Last change: 2015-08-05
Summary: FWaaS does not work because DB was not synced
[1265816 ] http://bugzilla.redhat.com/1265816 (NEW)
Component: openstack-packstack  Last change: 2015-09-24
Summary: Manila Puppet Module Expects Glance Endpoint to Be Available for Upload of Service Image
[1289761 ] http://bugzilla.redhat.com/1289761 (NEW)
Component: openstack-packstack  Last change: 2015-12-10
Summary: PackStack installs Nova crontab that nova user can't run
[1286828 ] http://bugzilla.redhat.com/1286828 (NEW)
Component: openstack-packstack  Last change: 2015-12-04
Summary: Packstack should have the option to install QoS (neutron)
[1283261 ] http://bugzilla.redhat.com/1283261 (NEW)
Component: openstack-packstack  Last change: 2015-12-07
Summary: ceilometer-nova is not configured
[1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED)
Component: openstack-packstack  Last change: 2015-06-04
Summary: API services has all admin permission instead of service
[1207098 ] http://bugzilla.redhat.com/1207098 (NEW)
Component: openstack-packstack  Last change: 2015-08-04
Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service"
[1264843 ] http://bugzilla.redhat.com/1264843 (NEW)
Component: openstack-packstack  Last change: 2016-01-09
Summary: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list iptables-ipv6' returned 1: Error: No matching Packages to list
[1203131 ] http://bugzilla.redhat.com/1203131 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: Using packstack to deploy openstack, when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eno50:eno50, encounters an error: ERROR: Error appeared during Puppet run: 10.43.241.186_neutron.pp
[1285494 ] http://bugzilla.redhat.com/1285494 (NEW)
Component: openstack-packstack  Last change: 2015-11-25
Summary: openstack-packstack-7.0.0-0.5.dev1661.gaf13b7e.el7.noarch cripples(?) httpd.conf
[1227298 ] http://bugzilla.redhat.com/1227298 (NEW)
Component: openstack-packstack  Last change: 2015-12-03
Summary: Packstack should support MTU settings
[1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED)
Component: openstack-packstack  Last change: 2015-06-04
Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on
[1208812 ] http://bugzilla.redhat.com/1208812 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: add DiskFilter to scheduler_default_filters
[1005073 ] http://bugzilla.redhat.com/1005073 (NEW)
Component: openstack-packstack  Last change: 2015-12-02
Summary: [RFE] Please add glance and nova lib folder config
[1168113 ] http://bugzilla.redhat.com/1168113 (ASSIGNED)
Component: openstack-packstack  Last change: 2015-12-03
Summary: The warning message "NetworkManager is active" appears even when the NetworkManager is inactive
[1172310 ] http://bugzilla.redhat.com/1172310 (ASSIGNED)
Component: openstack-packstack  Last change: 2015-12-03
Summary: support Keystone LDAP
[1155722 ] http://bugzilla.redhat.com/1155722 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/172.16.32.71_mariadb.pp:28 on node
[1213149 ] http://bugzilla.redhat.com/1213149 (NEW)
Component: openstack-packstack  Last change: 2015-07-08
Summary: openstack-keystone service is in "failed" status when CONFIG_KEYSTONE_SERVICE_NAME=httpd
[1202922 ] http://bugzilla.redhat.com/1202922 (NEW)
Component: openstack-packstack  Last change: 2015-12-03
Summary: packstack key injection fails with legacy networking (Nova networking)
[1225312 ] http://bugzilla.redhat.com/1225312 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server]
[1282928 ] http://bugzilla.redhat.com/1282928 (ASSIGNED)
Component: openstack-packstack  Last change: 2015-12-09
Summary: Trove-api fails to start when deployed using packstack on RHEL 7.2 RC1.1
[1171811 ] http://bugzilla.redhat.com/1171811 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: misleading exit message on fail
[1207248 ] http://bugzilla.redhat.com/1207248 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: auto enablement of the extras channel
[1271246 ] http://bugzilla.redhat.com/1271246 (NEW)
Component: openstack-packstack  Last change: 2015-10-13
Summary: packstack failed to start nova.api
[1148468 ] http://bugzilla.redhat.com/1148468 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest
[1176833 ] http://bugzilla.redhat.com/1176833 (NEW)
Component: openstack-packstack  Last change: 2015-06-04
Summary: packstack --allinone fails when starting neutron server

### openstack-puppet-modules (17 bugs)

[1288533 ] http://bugzilla.redhat.com/1288533 (NEW)
Component: openstack-puppet-modules  Last change: 2015-12-04
Summary: packstack fails on installing mongodb
[1289309 ] http://bugzilla.redhat.com/1289309 (NEW)
Component: openstack-puppet-modules  Last change: 2015-12-07
Summary: Neutron module needs updating in OPM
[1150678 ] http://bugzilla.redhat.com/1150678 (NEW)
Component: openstack-puppet-modules  Last change: 2015-06-04
Summary: Permissions issue prevents CSS from rendering
[1192539 ] http://bugzilla.redhat.com/1192539 (NEW)
Component: openstack-puppet-modules  Last change: 2015-06-04
Summary: Add puppet-tripleo and puppet-gnocchi to opm
[1157500 ] http://bugzilla.redhat.com/1157500 (NEW)
Component: openstack-puppet-modules  Last change: 2015-06-04
Summary: ERROR: Network commands are not supported when using the Neutron API.
[1222326 ] http://bugzilla.redhat.com/1222326 (NEW)
Component: openstack-puppet-modules  Last change: 2015-06-04
Summary: trove conf files require update when neutron disabled
[1259411 ] http://bugzilla.redhat.com/1259411 (NEW)
Component: openstack-puppet-modules  Last change: 2015-09-03
Summary: Backport: nova-network needs authentication
[1271138 ] http://bugzilla.redhat.com/1271138 (NEW)
Component: openstack-puppet-modules  Last change: 2015-12-16
Summary: puppet module for manila should include service type - shareV2
[1285900 ] http://bugzilla.redhat.com/1285900 (NEW)
Component: openstack-puppet-modules  Last change: 2015-11-26
Summary: Typo in log file name for trove-guestagent
[1297535 ] http://bugzilla.redhat.com/1297535 (ASSIGNED)
Component: openstack-puppet-modules  Last change: 2016-01-12
Summary: Undercloud installation fails ::aodh::keystone::auth not found for instack
[1285897 ] http://bugzilla.redhat.com/1285897 (NEW)
Component: openstack-puppet-modules  Last change: 2015-11-26
Summary: trove-guestagent.conf should define the configuration for backups
[1155663 ] http://bugzilla.redhat.com/1155663 (NEW)
Component: openstack-puppet-modules  Last change: 2015-06-04
Summary: Increase the rpc_thread_pool_size
[1107907 ] http://bugzilla.redhat.com/1107907 (NEW)
Component: openstack-puppet-modules  Last change: 2015-06-04
Summary: Offset Swift ports to 6200
[1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED)
Component: openstack-puppet-modules  Last change: 2015-06-04
Summary: Add puppet-openstack_extras to opm
[1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED)
Component: openstack-puppet-modules  Last change: 2015-06-04
Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log
[1240736 ] http://bugzilla.redhat.com/1240736 (NEW)
Component: openstack-puppet-modules  Last change: 2015-07-07
Summary: trove guestagent config mods for integration testing
[1236775 ] http://bugzilla.redhat.com/1236775 (NEW)
Component: openstack-puppet-modules  Last change: 2015-06-30
Summary: rdo kilo mongo fails to start

### openstack-selinux (11 bugs)

[1202944 ] http://bugzilla.redhat.com/1202944 (NEW)
Component: openstack-selinux  Last change: 2015-08-12
Summary: "glance image-list" fails on F21, causing packstack install to fail
[1174795 ] http://bugzilla.redhat.com/1174795 (NEW)
Component: openstack-selinux  Last change: 2016-01-04
Summary: keystone fails to start: raise exception.ConfigFileNotFound(config_file=paste_config_value)
[1252675 ] http://bugzilla.redhat.com/1252675 (NEW)
Component: openstack-selinux  Last change: 2015-08-12
Summary: neutron-server cannot connect to port 5000 due to SELinux
[1189929 ] http://bugzilla.redhat.com/1189929 (NEW)
Component: openstack-selinux  Last change: 2015-02-06
Summary: Glance AVC messages
[1206740 ] http://bugzilla.redhat.com/1206740 (NEW)
Component: openstack-selinux  Last change: 2015-04-09
Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000
[1203910 ] http://bugzilla.redhat.com/1203910 (NEW)
Component: openstack-selinux  Last change: 2015-03-19
Summary: Keystone requires keystone_t self:process signal;
[1202941 ] http://bugzilla.redhat.com/1202941 (NEW)
Component: openstack-selinux  Last change: 2015-03-18
Summary: Glance fails to start on CentOS 7 because of selinux AVC
[1284879 ] http://bugzilla.redhat.com/1284879 (NEW)
Component: openstack-selinux  Last change: 2015-11-24
Summary: Keystone via mod_wsgi is missing permission to read /etc/keystone/fernet-keys
[1268124 ] http://bugzilla.redhat.com/1268124 (NEW)
Component: openstack-selinux  Last change: 2016-01-04
Summary: Nova rootwrap-daemon requires a selinux exception
[1255559 ] http://bugzilla.redhat.com/1255559 (NEW)
Component: openstack-selinux  Last change: 2015-08-21
Summary: nova api can't be started in WSGI under httpd, blocked by selinux
[1158394 ] http://bugzilla.redhat.com/1158394 (NEW)
Component: openstack-selinux  Last change: 2014-11-23
Summary: keystone-all process raised avc denied

### openstack-swift (3 bugs)

[1169215 ] http://bugzilla.redhat.com/1169215 (NEW)
Component: openstack-swift  Last change: 2014-12-12
Summary: swift-init does not interoperate with systemd swift service files
[1274308 ] http://bugzilla.redhat.com/1274308 (NEW)
Component: openstack-swift  Last change: 2015-12-22
Summary: Consistently occurring swift related failures in RDO with a HA deployment
[1179931 ] http://bugzilla.redhat.com/1179931 (NEW)
Component: openstack-swift  Last change: 2015-01-07
Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations

### openstack-tripleo (27 bugs)

[1056109 ] http://bugzilla.redhat.com/1056109 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: [RFE][tripleo]: Making the overcloud deployment fully HA
[1205645 ] http://bugzilla.redhat.com/1205645 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos
[1225022 ] http://bugzilla.redhat.com/1225022 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: When adding nodes to the cloud the update hangs and takes forever
[1056106 ] http://bugzilla.redhat.com/1056106 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: [RFE][ironic]: Integration of Ironic in to TripleO
[1223667 ] http://bugzilla.redhat.com/1223667 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever
[1229174 ] http://bugzilla.redhat.com/1229174 (NEW)
Component: openstack-tripleo  Last change: 2015-06-08
Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix
[1223443 ] http://bugzilla.redhat.com/1223443 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: You can still check introspection status for ironic nodes that have been deleted
[1223672 ] http://bugzilla.redhat.com/1223672 (NEW)
Component: openstack-tripleo  Last change: 2015-10-09
Summary: Node registration fails silently if instackenv.json is badly formatted
[1223471 ] http://bugzilla.redhat.com/1223471 (NEW)
Component: openstack-tripleo  Last change: 2015-06-22
Summary: Discovery errors out even when it is successful
[1223424 ] http://bugzilla.redhat.com/1223424 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead
[1056110 ] http://bugzilla.redhat.com/1056110 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: [RFE][tripleo]: Scaling work to do during icehouse
[1226653 ] http://bugzilla.redhat.com/1226653 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: The usage message for "heat resource-show" is confusing and incorrect
[1218168 ] http://bugzilla.redhat.com/1218168 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes
[1277980 ] http://bugzilla.redhat.com/1277980 (NEW)
Component: openstack-tripleo  Last change: 2015-12-11
Summary: missing python-proliantutils
[1211560 ] http://bugzilla.redhat.com/1211560 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created
[1226867 ] http://bugzilla.redhat.com/1226867 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: Timeout in API
[1056112 ] http://bugzilla.redhat.com/1056112 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar
[1174776 ] http://bugzilla.redhat.com/1174776 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: User can not login into the overcloud horizon using the proper credentials
[1284664 ] http://bugzilla.redhat.com/1284664 (NEW)
Component: openstack-tripleo  Last change: 2015-11-23
Summary: NtpServer is passed as string by "openstack overcloud deploy"
[1056114 ] http://bugzilla.redhat.com/1056114 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI
[1224604 ] http://bugzilla.redhat.com/1224604 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: Lots of dracut-related error messages during instack-build-images
[1187352 ] http://bugzilla.redhat.com/1187352 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter
[1277990 ] http://bugzilla.redhat.com/1277990 (NEW)
Component: openstack-tripleo  Last change: 2015-11-04
Summary: openstack-ironic-inspector-dnsmasq.service: failed to start during undercloud installation
[1221610 ] http://bugzilla.redhat.com/1221610 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6
[1221731 ] http://bugzilla.redhat.com/1221731 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: Overcloud missing ceilometer keystone user and endpoints
[1225390 ] http://bugzilla.redhat.com/1225390 (NEW)
Component: openstack-tripleo  Last change: 2015-06-29
Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack"
[1218340 ] http://bugzilla.redhat.com/1218340 (NEW)
Component: openstack-tripleo  Last change: 2015-06-04
Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf

### openstack-tripleo-heat-templates (5 bugs)

[1236760 ] http://bugzilla.redhat.com/1236760 (NEW)
Component: openstack-tripleo-heat-templates  Last change: 2015-06-29
Summary: Drop 'without-mergepy' from main overcloud template
[1266027 ] http://bugzilla.redhat.com/1266027 (NEW)
Component: openstack-tripleo-heat-templates  Last change: 2015-10-08
Summary: TripleO should use pymysql database driver since Liberty
[1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED)
Component: openstack-tripleo-heat-templates  Last change: 2015-06-16
Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1
[1271411 ] http://bugzilla.redhat.com/1271411 (NEW)
Component: openstack-tripleo-heat-templates  Last change: 2015-10-13
Summary: Unable to deploy internal api endpoint for keystone on a different network to admin api
[1204479 ] http://bugzilla.redhat.com/1204479 (NEW)
Component: openstack-tripleo-heat-templates  Last change: 2015-06-04
Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template

### openstack-tripleo-image-elements (2 bugs)

[1187354 ] http://bugzilla.redhat.com/1187354 (NEW)
Component: openstack-tripleo-image-elements  Last change: 2015-06-04
Summary: possible incorrect selinux check in 97-mysql-selinux
[1187965 ] http://bugzilla.redhat.com/1187965 (NEW)
Component: openstack-tripleo-image-elements  Last change: 2015-06-04
Summary: mariadb my.cnf socket path does not exist

### openstack-trove (1 bug)

[1290156 ] http://bugzilla.redhat.com/1290156 (NEW)
Component: openstack-trove  Last change: 2015-12-09
Summary: Move guestagent settings to default section

### openstack-tuskar (2 bugs)

[1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED)
Component: openstack-tuskar  Last change: 2015-06-23
Summary: Updating the controller count to 3 fails
[1229401 ] http://bugzilla.redhat.com/1229401 (NEW)
Component: openstack-tuskar  Last change: 2015-06-26
Summary: stack is stuck in DELETE_FAILED state

### openstack-utils (1 bug)

[1161501 ] http://bugzilla.redhat.com/1161501 (NEW)
Component: openstack-utils  Last change: 2016-01-04
Summary: Can't enable OpenStack service after openstack-service disable

### Package Review (9 bugs)

[1283295 ] http://bugzilla.redhat.com/1283295 (NEW)
Component: Package Review  Last change: 2015-11-18
Summary: Review Request: CloudKitty - Rating as a Service
[1272524 ] http://bugzilla.redhat.com/1272524 (ASSIGNED)
Component: Package Review  Last change: 2015-12-03
Summary: Review Request: openstack-mistral - workflow Service for OpenStack cloud
[1290090 ] http://bugzilla.redhat.com/1290090 (ASSIGNED)
Component: Package Review  Last change: 2015-12-10
Summary: Review Request: python-networking-midonet
[1290308 ] http://bugzilla.redhat.com/1290308 (NEW)
Component: Package Review  Last change: 2015-12-10
Summary: Review Request: python-midonetclient
[1288149 ] http://bugzilla.redhat.com/1288149 (NEW)
Component: Package Review  Last change: 2015-12-07
Summary: Review Request: python-os-win - Windows / Hyper-V library for OpenStack projects
[1272513 ] http://bugzilla.redhat.com/1272513 (ASSIGNED)
Component: Package Review  Last change: 2015-11-05
Summary: Review Request: Murano - is an application catalog for OpenStack
[1293948 ] http://bugzilla.redhat.com/1293948 (NEW)
Component: Package Review  Last change: 2015-12-23
Summary: Review Request: python-kuryr
[1292794 ] http://bugzilla.redhat.com/1292794 (NEW)
Component: Package Review  Last change: 2016-01-12
Summary: Review Request: openstack-magnum - Container Management project for OpenStack
[1279513 ] http://bugzilla.redhat.com/1279513 (ASSIGNED)
Component: Package Review  Last change: 2015-11-13
Summary: New Package: python-dracclient

### python-glanceclient (2 bugs)

[1244291 ] http://bugzilla.redhat.com/1244291 (ASSIGNED)
Component: python-glanceclient  Last change: 2015-10-21
Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py
[1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED)
Component: python-glanceclient  Last change:
2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0 ### python-keystonemiddleware (1 bug) [1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-10-26 Summary: Rebase python-keystonemiddleware to version 1.3 ### python-neutronclient (3 bugs) [1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility [1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long [1281352 ] http://bugzilla.redhat.com/1281352 (NEW) Component: python-neutronclient Last change: 2015-11-12 Summary: Internal server error when running qos-bandwidth-limit- rule-update as a tenant Edit ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six ### python-openstackclient (5 bugs) [1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: Usage is not described accurately for 99% of openstack baremetal [1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: `openstack ip floating delete` fails if we specify IP address as input [1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp [1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: Add --user to project list command to filter projects by user [1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2016-01-04 
Summary: appdirs requirement ### python-oslo-config (2 bugs) [1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2016-01-04 Summary: oslo_config != oslo.config [1282093 ] http://bugzilla.redhat.com/1282093 (NEW) Component: python-oslo-config Last change: 2016-01-04 Summary: please rebase oslo.log to 1.12.0 ### rdo-manager (52 bugs) [1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment [1269657 ] http://bugzilla.redhat.com/1269657 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support configuration of default subnet pools [1264526 ] http://bugzilla.redhat.com/1264526 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Deployment of Undercloud [1273574 ] http://bugzilla.redhat.com/1273574 (ASSIGNED) Component: rdo-manager Last change: 2015-10-22 Summary: rdo-manager liberty, delete node is failing [1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built [1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS. 
[1274060 ] http://bugzilla.redhat.com/1274060 (NEW) Component: rdo-manager Last change: 2015-10-23 Summary: [SELinux][RHEL7] openstack-ironic-inspector- dnsmasq.service fails to start with SELinux enabled [1294599 ] http://bugzilla.redhat.com/1294599 (NEW) Component: rdo-manager Last change: 2015-12-29 Summary: Virtual environment overcloud deploy fails with default memory allocation [1269655 ] http://bugzilla.redhat.com/1269655 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support deploying VPNaaS [1271336 ] http://bugzilla.redhat.com/1271336 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Enable configuration of OVS ARP Responder [1269890 ] http://bugzilla.redhat.com/1269890 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support IPv6 [1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles [1270818 ] http://bugzilla.redhat.com/1270818 (NEW) Component: rdo-manager Last change: 2015-11-25 Summary: Two ironic-inspector processes are running on the undercloud, breaking the introspection [1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP) [1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment [1270370 ] http://bugzilla.redhat.com/1270370 (NEW) Component: rdo-manager Last change: 2015-11-25 Summary: [RDO-Manager] bulk introspection moving the nodes from available to manageable too quickly [getting: NodeLocked:] [1269002 ] http://bugzilla.redhat.com/1269002 (ASSIGNED) Component: rdo-manager Last change: 2015-10-14 Summary: instack-undercloud: overcloud HA deployment fails - the rabbitmq doesn't run on the controllers. 
[1271232 ] http://bugzilla.redhat.com/1271232 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: tempest_lib.exceptions.Conflict: An object with that identifier already exists [1270805 ] http://bugzilla.redhat.com/1270805 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: Glance client returning 'Expected endpoint' [1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start [1271317 ] http://bugzilla.redhat.com/1271317 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: instack-virt-setup fails: error Running install- packages install [1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone [1272376 ] http://bugzilla.redhat.com/1272376 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: Duplicate nova hypervisors after rebooting compute nodes [1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection [1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. 
There are not enough hosts available., Code: 500" [1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-11-25 Summary: rfe: support Keystone HTTPD [1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Read bit set for others for Openstack services directories in /etc [1273121 ] http://bugzilla.redhat.com/1273121 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: openstack help returns errors [1270910 ] http://bugzilla.redhat.com/1270910 (ASSIGNED) Component: rdo-manager Last change: 2015-10-15 Summary: IP address from external subnet gets assigned to br-ex when using default single-nic-vlans templates [1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree" [1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon [1294085 ] http://bugzilla.redhat.com/1294085 (NEW) Component: rdo-manager Last change: 2016-01-04 Summary: Creating an instance on RDO overcloud, errors out [1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack [1296475 ] http://bugzilla.redhat.com/1296475 (NEW) Component: rdo-manager Last change: 2016-01-07 Summary: Deploying Manila is not possible due to missing template [1272167 ] http://bugzilla.redhat.com/1272167 (NEW) Component: rdo-manager Last change: 2015-11-16 Summary: [RFE] Support enabling the port security extension [1294683 ] http://bugzilla.redhat.com/1294683 (NEW) Component: rdo-manager Last change: 2016-01-01 Summary: instack-undercloud: "openstack undercloud install" throws errors and then gets stuck due to selinux. 
[1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment. [1269622 ] http://bugzilla.redhat.com/1269622 (NEW) Component: rdo-manager Last change: 2015-11-16 Summary: [RFE] support override of API and RPC worker counts [1271289 ] http://bugzilla.redhat.com/1271289 (NEW) Component: rdo-manager Last change: 2015-11-18 Summary: overcloud-novacompute stuck in spawning state [1269894 ] http://bugzilla.redhat.com/1269894 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Add creation of demo tenant, network and installation of demo images [1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure [1269661 ] http://bugzilla.redhat.com/1269661 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Supporting SR-IOV enabled deployments [1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url" [1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images [1273541 ] http://bugzilla.redhat.com/1273541 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: RDO-Manager needs epel.repo enabled (otherwise undercloud deployment fails.) 
[1292253 ] http://bugzilla.redhat.com/1292253 (NEW) Component: rdo-manager Last change: 2016-01-01 Summary: Production + EPEL + yum-plugin-priorities results in wrong version of hiera [1271726 ] http://bugzilla.redhat.com/1271726 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: 1 of the overcloud VMs (nova) is stack in spawning state [1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device- mapper* [1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci [1273680 ] http://bugzilla.redhat.com/1273680 (ASSIGNED) Component: rdo-manager Last change: 2015-10-21 Summary: HA overcloud with network isolation deployment fails [1276097 ] http://bugzilla.redhat.com/1276097 (NEW) Component: rdo-manager Last change: 2015-10-31 Summary: dnsmasq-dhcp: DHCPDISCOVER no address available [1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable ### rdo-manager-cli (6 bugs) [1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step [1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the ouptut of openstack management plan show --long command is not readable [1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value [1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on 
sysbench and fio (RHEL) [1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command [1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling ### rdopkg (1 bug) [1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download ### RFEs (2 bugs) [1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot [1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool ### tempest (1 bug) [1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo- manager ## Fixed bugs This is a list of "fixed" bugs by component. A "fixed" bug is fixed state MODIFIED, POST, ON_QA and has been fixed. You can help out by testing the fix to make sure it works as intended. 
(210 bugs) ### diskimage-builder (1 bug) [1228761 ] http://bugzilla.redhat.com/1228761 (MODIFIED) Component: diskimage-builder Last change: 2015-09-23 Summary: DIB_YUM_REPO_CONF points to two files and that breaks imagebuilding ### distribution (6 bugs) [1265690 ] http://bugzilla.redhat.com/1265690 (ON_QA) Component: distribution Last change: 2015-09-28 Summary: Update python-networkx to 1.10 [1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2016-01-04 Summary: update el6 icehouse kombu packages for improved performance [1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api [1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr [1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO [1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack- neutron-*aas ### instack-undercloud (2 bugs) [1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six" [1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined ### openstack-ceilometer (10 bugs) [1265708 ] http://bugzilla.redhat.com/1265708 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Ceilometer requires pymongo>=3.0.2 [1265721 ] http://bugzilla.redhat.com/1265721 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: 
FIle /etc/ceilometer/meters.yaml missing [1263839 ] http://bugzilla.redhat.com/1263839 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: openstack-ceilometer should requires python-oslo-policy in kilo [1265746 ] http://bugzilla.redhat.com/1265746 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Options 'disable_non_metric_meters' and 'meter_definitions_cfg_file' are missing from ceilometer.conf [1194230 ] http://bugzilla.redhat.com/1194230 (POST) Component: openstack-ceilometer Last change: 2016-01-04 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions [1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: openstack-ceilometer-common missing python-babel dependency [1287252 ] http://bugzilla.redhat.com/1287252 (POST) Component: openstack-ceilometer Last change: 2016-01-04 Summary: openstack-ceilometer-alarm-notifier does not start: unit file is missing [1271002 ] http://bugzilla.redhat.com/1271002 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Ceilometer dbsync failing during HA deployment [1265818 ] http://bugzilla.redhat.com/1265818 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: ceilometer polling agent does not start [1214928 ] http://bugzilla.redhat.com/1214928 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: package ceilometermiddleware missing ### openstack-cinder (5 bugs) [1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0] [1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder [1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 
Summary: Non-admin user can not attach cinder volume to their instance (LIO) [994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo] [1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume) ### openstack-glance (4 bugs) [1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests [1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue [1278962 ] http://bugzilla.redhat.com/1278962 (ON_QA) Component: openstack-glance Last change: 2015-11-13 Summary: python-cryptography requires pyasn1>=0.1.8 but only 0.1.6 is available in Centos [1268146 ] http://bugzilla.redhat.com/1268146 (ON_QA) Component: openstack-glance Last change: 2015-10-02 Summary: openstack-glance-registry will not start: missing systemd dependency ### openstack-heat (3 bugs) [1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack- heat [1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE [1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: missing dependency in Heat delorean build ### openstack-horizon (1 bug) [1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing ### openstack-ironic-discoverd (1 bug) 
[1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-neutron (14 bugs) [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2016-01-04 Summary: neutron should not specify signing_dir in neutron- dist.conf [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. 
neutron-db-manage trying to import systemd [1281920 ] http://bugzilla.redhat.com/1281920 (POST) Component: openstack-neutron Last change: 2015-11-16 Summary: neutron-server will not start: fails with pbr version issue [1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev [1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack- neutron-openvswitch installed [1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service [1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id' [1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini [1270325 ] http://bugzilla.redhat.com/1270325 (MODIFIED) Component: openstack-neutron Last change: 2015-10-19 Summary: neutron-ovs-cleanup fails to start with bad path to ovs plugin configuration ### openstack-nova (5 bugs) [1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2016-01-04 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail [1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2016-01-04 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options [1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2016-01-04 Summary: openstack-nova-compute fails to start because python- psutil is missing after installing with packstack [958411 ] http://bugzilla.redhat.com/958411 
(ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence. [1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2016-01-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all ### openstack-packstack (69 bugs) [1252483 ] http://bugzilla.redhat.com/1252483 (POST) Component: openstack-packstack Last change: 2015-12-07 Summary: Demo network provisioning: public and private are shared, private has no tenant [1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db. [1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error [1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/ manifests/192.168.122.82_api_nova.pp:41:3 [976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-10-07 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local) [1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed [1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled [1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: 
openstack-packstack Last change: 2016-01-04 Summary: Cannot start nova-network on juno - Centos7 [1288179 ] http://bugzilla.redhat.com/1288179 (POST) Component: openstack-packstack Last change: 2015-12-08 Summary: Mitaka: Packstack image provisioning fails with "Store filesystem could not be configured correctly" [1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email [1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils [1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes [1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error [958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails [1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. 
[1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack
[1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in
[1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning
[979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules
[1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non-localhost connections to 'guest' user
[1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5
[957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios
[995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2016-01-04 Summary: RFE: support setting up apache to serve keystone requests
[1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1
[1259354 ] http://bugzilla.redhat.com/1259354 (MODIFIED) Component: openstack-packstack Last change: 2015-11-10 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error
[990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2016-01-04 Summary: rdo release RPM not installed on all fedora hosts
[1266028 ] http://bugzilla.redhat.com/1266028 (POST) Component: openstack-packstack Last change: 2015-12-15 Summary: Packstack should use pymysql database driver since Liberty
[1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required
[1290429 ] http://bugzilla.redhat.com/1290429 (POST) Component: openstack-packstack Last change: 2015-12-10 Summary: Packstack does not correctly configure Nova notifications for Neutron in Mitaka-1
[1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"?
[1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre
[1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed
[1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5
[1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl
[1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2016-01-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp
[1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value
[1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available
[1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2016-01-04 Summary: packstack requires 2 runs to install ceilometer
[1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface"
[1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive
[1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2016-01-04 Summary: packstack configures br-ex to use gateway ip
[1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/Nova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required
[1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added
[1150652 ] http://bugzilla.redhat.com/1150652 (POST) Component: openstack-packstack Last change: 2015-12-07 Summary: PackStack does not provide an option to register hosts to Red Hat Satellite 6
[1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time
[1295503 ] http://bugzilla.redhat.com/1295503 (MODIFIED) Component: openstack-packstack Last change: 2016-01-08 Summary: Packstack master branch is in the liberty repositories (was: Packstack installation fails with unsupported db backend)
[956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond
[1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance
[1265661 ] http://bugzilla.redhat.com/1265661 (POST) Component: openstack-packstack Last change: 2015-09-23 Summary: Packstack does not install Sahara services (RDO Liberty)
[1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-10-23 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7
[1124982 ] http://bugzilla.redhat.com/1124982 (POST) Component: openstack-packstack Last change: 2015-12-09 Summary: Help text for SSL is incorrect regarding passphrase on the cert
[974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) Component: openstack-packstack Last change: 2016-01-04 Summary: please give greater control over use of EPEL
[1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl
[1297518 ] http://bugzilla.redhat.com/1297518 (POST) Component: openstack-packstack Last change: 2016-01-12 Summary: Sahara installation fails with ArgumentError: Could not find declared class ::sahara::notify::rabbitmq
[1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default
[1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent
[1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface
[1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL
[1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32
[1285314 ] http://bugzilla.redhat.com/1285314 (POST) Component: openstack-packstack Last change: 2015-12-09 Summary: Packstack needs to support aodh services since Mitaka
[991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted
[1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7)
[1187412 ] http://bugzilla.redhat.com/1187412 (POST) Component: openstack-packstack Last change: 2015-12-09 Summary: Script wording for service installation should be consistent
[1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels
[1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution
[964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user
[1269158 ] http://bugzilla.redhat.com/1269158 (POST) Component: openstack-packstack Last change: 2015-10-19 Summary: Sahara configuration should be affected by heat availability (broken by default right now)
[1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher

### openstack-puppet-modules (20 bugs)

[1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2016-01-04 Summary: cinder modules require glance installed
[1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables-services
[1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure"
[1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support
[1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2016-01-04 Summary: explicit check for pymongo is incorrect
[1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors
[1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation
[1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD configuration breaks GSSAPI
[1270957 ] http://bugzilla.redhat.com/1270957 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-10-13 Summary: Undercloud install fails on Error: Could not find class ::ironic::inspector for instack on node instack
[1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by packstack for demo tenant is wrongly marked as external
[1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: Could not find command 'restorecon'
[1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator
[1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno
[1297052 ] http://bugzilla.redhat.com/1297052 (POST) Component: openstack-puppet-modules Last change: 2016-01-12 Summary: Packstack installation fails with unsupported db backend
[1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6
[1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2016-01-04 Summary: netns.py syntax error
[1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance
[1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance
[1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig/network-scripts/ifcfg-br-{int,tun}
[1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2016-01-04 Summary: prescript.pp does not ensure iptables-services package installation

### openstack-sahara (2 bugs)

[1290387 ] http://bugzilla.redhat.com/1290387 (POST) Component: openstack-sahara Last change: 2015-12-10 Summary: openstack-sahara-api fails to start in Mitaka-1, cannot find api-paste.ini
[1268235 ] http://bugzilla.redhat.com/1268235 (MODIFIED) Component: openstack-sahara Last change: 2015-10-02 Summary: rootwrap filter not included in Sahara RPM

### openstack-selinux (13 bugs)

[1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7)
[1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2016-01-04 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicOps fails to launch instance w/ selinux enforcing
[1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications
[1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors
[1219406 ] http://bugzilla.redhat.com/1219406 (MODIFIED) Component: openstack-selinux Last change: 2015-11-06 Summary: Glance over nfs fails due to selinux
[1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp
[1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances,
[1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service
[1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?"
[1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux policies set for neutron-dhcp-agent
[1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later
[1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access
[1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors

### openstack-swift (1 bug)

[997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages

### openstack-tripleo-heat-templates (2 bugs)

[1235508 ] http://bugzilla.redhat.com/1235508 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-09-29 Summary: Package update does not take puppet managed packages into account
[1272572 ] http://bugzilla.redhat.com/1272572 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-12-10 Summary: Error: Unable to retrieve volume limit information when accessing System Defaults in Horizon

### openstack-trove (2 bugs)

[1278608 ] http://bugzilla.redhat.com/1278608 (MODIFIED) Component: openstack-trove Last change: 2015-11-06 Summary: trove-api fails to start
[1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies

### openstack-tuskar (1 bug)

[1229493 ] http://bugzilla.redhat.com/1229493 (POST) Component: openstack-tuskar Last change: 2015-12-04 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates

### openstack-tuskar-ui (3 bugs)

[1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails
[1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py
[1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path

### openstack-utils (3 bugs)

[1211989 ] http://bugzilla.redhat.com/1211989 (POST) Component: openstack-utils Last change: 2016-01-05 Summary: openstack-status shows 'disabled on boot' for the mysqld service
[1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2016-01-04 Summary: openstack-status as admin falsely shows zero instances
[1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2016-01-04 Summary: update openstack-status for rdo-manager

### python-cinderclient (1 bug)

[1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run

### python-django-horizon (3 bugs)

[1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack-dashboard/static/dashboard/
[1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA) Component: python-django-horizon Last change: 2015-06-24 Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one
[1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in openstack-dashboard.conf to show CSS content

### python-glanceclient (2 bugs)

[1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch
[1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock

### python-heatclient (3 bugs)

[1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr
[1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO
[1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed

### python-keystoneclient (3 bugs)

[973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs
[1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion
[971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2016-01-04 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO]

### python-neutronclient (3 bugs)

[1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info()
[1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient
[1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request

### python-openstackclient (1 bug)

[1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2016-01-04 Summary: Rebase python-openstackclient to version 1.0.0

### python-oslo-config (1 bug)

[1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2016-01-04 Summary: oslo.config >=1.2.1 is required for trove-manage

### python-pecan (1 bug)

[1265365 ] http://bugzilla.redhat.com/1265365 (MODIFIED) Component: python-pecan Last change: 2016-01-04 Summary: Neutron missing pecan dependency

### python-swiftclient (1 bug)

[1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation

### python-tuskarclient (2 bugs)

[1209395 ] http://bugzilla.redhat.com/1209395 (POST) Component: python-tuskarclient Last change: 2015-06-04 Summary: `tuskar help` is missing a description next to plan-templates
[1209431 ] http://bugzilla.redhat.com/1209431 (POST) Component: python-tuskarclient Last change: 2015-06-18 Summary: creating a tuskar plan with the exact name gives the user a traceback

### rdo-manager (10 bugs)

[1212351 ] http://bugzilla.redhat.com/1212351 (POST) Component: rdo-manager Last change: 2015-06-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command
[1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-15 Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails
[1270033 ] http://bugzilla.redhat.com/1270033 (POST) Component: rdo-manager Last change: 2015-10-14 Summary: [RDO-Manager] Node inspection fails when changing the default 'inspection_iprange' value in undecloud.conf.
[1271335 ] http://bugzilla.redhat.com/1271335 (POST) Component: rdo-manager Last change: 2015-12-30 Summary: [RFE] Support explicit configuration of L2 population
[1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED) Component: rdo-manager Last change: 2015-05-25 Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable
[1271433 ] http://bugzilla.redhat.com/1271433 (MODIFIED) Component: rdo-manager Last change: 2015-10-20 Summary: Horizon fails to load
[1272180 ] http://bugzilla.redhat.com/1272180 (POST) Component: rdo-manager Last change: 2015-12-04 Summary: Horizon doesn't load when deploying without pacemaker
[1251267 ] http://bugzilla.redhat.com/1251267 (POST) Component: rdo-manager Last change: 2015-08-12 Summary: Overcloud deployment fails for unspecified reason
[1268990 ] http://bugzilla.redhat.com/1268990 (POST) Component: rdo-manager Last change: 2015-10-07 Summary: missing from docs Build images fails without : export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"
[1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED) Component: rdo-manager Last change: 2015-11-04 Summary: rdo-manager: fail to discover nodes with "instack-ironic-deployment --discover-nodes": ERROR: Data pre-processing failed

### rdo-manager-cli (10 bugs)

[1273197 ] http://bugzilla.redhat.com/1273197 (POST) Component: rdo-manager-cli Last change: 2015-10-20 Summary: VXLAN should be default neutron network type
[1233429 ] http://bugzilla.redhat.com/1233429 (POST) Component: rdo-manager-cli Last change: 2015-06-20 Summary: Lack of consistency in specifying plan argument for openstack overcloud commands
[1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED) Component: rdo-manager-cli Last change: 2015-08-03 Summary: Node show of unified CLI has bad formatting
[1229912 ] http://bugzilla.redhat.com/1229912 (POST) Component: rdo-manager-cli Last change: 2015-06-10 Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once).
[1219053 ] http://bugzilla.redhat.com/1219053 (POST) Component: rdo-manager-cli Last change: 2015-06-18 Summary: "list" command doesn't display nodes in some cases
[1211190 ] http://bugzilla.redhat.com/1211190 (POST) Component: rdo-manager-cli Last change: 2015-06-04 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI
[1230265 ] http://bugzilla.redhat.com/1230265 (POST) Component: rdo-manager-cli Last change: 2015-06-26 Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated.
[1278972 ] http://bugzilla.redhat.com/1278972 (POST) Component: rdo-manager-cli Last change: 2015-11-08 Summary: rdo-manager liberty delorean dib failing w/ "No module named passlib.utils"
[1232838 ] http://bugzilla.redhat.com/1232838 (POST) Component: rdo-manager-cli Last change: 2015-09-04 Summary: OSC plugin isn't saving plan configuration values
[1212367 ] http://bugzilla.redhat.com/1212367 (POST) Component: rdo-manager-cli Last change: 2015-06-16 Summary: Ensure proper nodes states after enroll and before deployment

### rdopkg (1 bug)

[1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA) Component: rdopkg Last change: 2015-08-06 Summary: python-manilaclient is missing from kilo RDO repository

Thanks,

Chandan Kumar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ihrachys at redhat.com  Wed Jan 13 13:07:18 2016
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Wed, 13 Jan 2016 14:07:18 +0100
Subject: [Rdo-list] neutron database migration failed
In-Reply-To: 
References: 
Message-ID: 

FYI was fixed by https://review.openstack.org/#/c/253150/

Hui Kang wrote:

> Hi, I installed openstack neutron package from rdo. My OS is centos
> 7.1. The neutron database migration fails these days.
> The error output is
>
> INFO [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3, Metaplugin removal
> INFO [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d, Add missing foreign keys
> INFO [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d, add geneve ml2 type driver
> INFO [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297, Drop cisco monolithic tables
> INFO [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c, Drop embrane plugin table
> INFO [alembic.runtime.migration] Running upgrade 1b294093239c, 32e5974ada25 -> 8a6d8bdae39, standardattributes migration
> Traceback (most recent call last):
>   File "/var/lib/kolla/venv/bin/neutron-db-manage", line 10, in <module>
>     sys.exit(main())
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 692, in main
>     CONF.command.func(config, CONF.command.name)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 217, in do_upgrade
>     desc=branch, sql=CONF.command.sql)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 124, in do_alembic_command
>     getattr(alembic_command, cmd)(config, *args, **kwargs)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/command.py", line 174, in upgrade
>     script.run_env()
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/script/base.py", line 397, in run_env
>     util.load_python_file(self.dir, 'env.py')
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, in load_python_file
>     module = load_module_py(module_id, path)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/util/compat.py", line 79, in load_module_py
>     mod = imp.load_source(module_id, path, fp)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py", line 135, in <module>
>     run_migrations_online()
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py", line 125, in run_migrations_online
>     context.run_migrations()
>   File "<string>", line 8, in run_migrations
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/runtime/environment.py", line 797, in run_migrations
>     self.get_context().run_migrations(**kw)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/runtime/migration.py", line 312, in run_migrations
>     step.migration_fn(**kw)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/versions/mitaka/contract/8a6d8bdae39_migrate_neutron_resources_table.py", line 60, in upgrade
>     existing_server_default=False)
>   File "<string>", line 8, in alter_column
>   File "<string>", line 3, in alter_column
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/operations/ops.py", line 1414, in alter_column
>     return operations.invoke(alt)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/operations/base.py", line 318, in invoke
>     return fn(self, operation)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/operations/toimpl.py", line 53, in alter_column
>     **operation.kw
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/ddl/mysql.py", line 66, in alter_column
>     else existing_autoincrement
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/ddl/impl.py", line 118, in _exec
>     return conn.execute(construct, *multiparams, **params)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
>     return meth(self, multiparams, params)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 68, in _execute_on_connection
>     return connection._execute_ddl(self, multiparams, params)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 968, in _execute_ddl
>     compiled
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
>     context)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1337, in _handle_dbapi_exception
>     util.raise_from_cause(newraise, exc_info)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
>     reraise(type(exception), exception, tb=exc_tb)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
>     context)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
>     cursor.execute(statement, parameters)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/MySQLdb/cursors.py", line 205, in execute
>     self.errorhandler(self, exc, value)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
>     raise errorclass, errorvalue
> sqlalchemy.exc.OperationalError: (_mysql_exceptions.OperationalError)
> (1832, "Cannot change column 'standard_attr_id': used in a foreign key
> constraint 'ports_ibfk_2'") [SQL: u'ALTER TABLE ports MODIFY
> standard_attr_id BIGINT NOT NULL']
>
> Is there anything wrong with the neutron rpm in RDO? Thanks.
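[Editorial aside: MySQL error 1832 means the column cannot be re-typed while the `ports_ibfk_2` foreign key still references it. The conventional MySQL workaround is to drop the constraint, alter the column, then restore the constraint. The helper below only prints that three-statement sequence as an illustration; the referenced table and column are assumptions inferred from the migration name, and this is a sketch of the general pattern, not the actual fix that landed in review 253150.]

```python
def fk_safe_alter(table, column, fk_name, ref_table, ref_column, new_type):
    """Return the MySQL statements needed to ALTER a column that is
    still referenced by a foreign key: drop the FK, change the column,
    then restore the constraint."""
    return [
        "ALTER TABLE %s DROP FOREIGN KEY %s" % (table, fk_name),
        "ALTER TABLE %s MODIFY %s %s" % (table, column, new_type),
        "ALTER TABLE %s ADD CONSTRAINT %s FOREIGN KEY (%s) REFERENCES %s (%s)"
        % (table, fk_name, column, ref_table, ref_column),
    ]

# The failing ALTER from the traceback, restated as the safe sequence
# (ref_table/ref_column are assumed, not taken from the actual schema):
for stmt in fk_safe_alter("ports", "standard_attr_id", "ports_ibfk_2",
                          "standardattributes", "id", "BIGINT NOT NULL"):
    print(stmt)
```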
> - Hui
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From ichavero at redhat.com  Wed Jan 13 15:37:37 2016
From: ichavero at redhat.com (Ivan Chavero)
Date: Wed, 13 Jan 2016 10:37:37 -0500 (EST)
Subject: [Rdo-list] RDO, packstack, Keypair creation is failing
In-Reply-To: <61375750.3463806.1452656202171.JavaMail.yahoo@mail.yahoo.com>
References: <895813556.10402525.1452654170910.JavaMail.zimbra@redhat.com>
	<61375750.3463806.1452656202171.JavaMail.yahoo@mail.yahoo.com>
Message-ID: <192905161.7117075.1452699457337.JavaMail.zimbra@redhat.com>

> > Here is the pastebin, basically a repeat of the "quickstart" website
> >
> > http://paste.openstack.org/show/483685/

Looking at your history I noticed that you ran packstack without the
sudo command. Are you sure it finished correctly? It should be run as
root.

Cheers,
Ivan

From lbezdick at redhat.com  Wed Jan 13 16:36:30 2016
From: lbezdick at redhat.com (Lukas Bezdicka)
Date: Wed, 13 Jan 2016 17:36:30 +0100
Subject: [Rdo-list] OPM downstream patches
In-Reply-To: <56954313.1010001@redhat.com>
References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com>
Message-ID: <1452702990.1970.6.camel@redhat.com>

On Tue, 2016-01-12 at 13:16 -0500, Emilien Macchi wrote:
> Also, the way we're packaging OPM is really bad.
>
> * we have no SHA1 for each module we have in OPM

/usr/share/openstack-puppet/Puppetfile

> * we are not able to validate each module

you have Puppetfile and our own patches as patches to tar

> * package tarball is not pure. All other OpenStack RPMS take upstream
> tarball so we can easily compare but in OPM... no way to do it.

Tarballs are always taken from github releases:
https://github.com/redhat-openstack/openstack-puppet-modules/releases

And yes, dropping single package and creating metapackage is the way to go.
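[Editorial aside: the validation gap described here — no per-module SHA1 — could be checked mechanically by scanning the Puppetfile for modules whose `:ref` is not an exact 40-character commit hash. The fragment below uses an invented Puppetfile snippet (module names and refs are made up for illustration), and the checker is only a sketch of the idea, not part of any RDO tooling.]

```python
import re

# Hypothetical Puppetfile fragment; a real one would be read from
# /usr/share/openstack-puppet/Puppetfile.
PUPPETFILE = """
mod 'keystone',
  :git => 'https://github.com/openstack/puppet-keystone.git',
  :ref => '7d5d5d2a0f3b2c1e9a8b7c6d5e4f3a2b1c0d9e8f'
mod 'nova',
  :git => 'https://github.com/openstack/puppet-nova.git',
  :ref => 'stable/liberty'
mod 'sahara',
  :git => 'https://github.com/openstack/puppet-sahara.git'
"""

def unpinned_modules(puppetfile):
    """Return module names whose :ref is missing or not a 40-char SHA1."""
    mods = list(re.finditer(r"^mod '([^']+)'", puppetfile, flags=re.M))
    bad = []
    for i, m in enumerate(mods):
        # A stanza runs from this "mod '...'" line to the next one.
        end = mods[i + 1].start() if i + 1 < len(mods) else len(puppetfile)
        stanza = puppetfile[m.start():end]
        ref = re.search(r":ref\s*=>\s*'([^']+)'", stanza)
        if not ref or not re.fullmatch(r"[0-9a-f]{40}", ref.group(1)):
            bad.append(m.group(1))
    return bad

print(unpinned_modules(PUPPETFILE))  # -> ['nova', 'sahara']
```

Run against the packaged Puppetfile, a check like this could gate the RPM build on every module being pinned to an exact commit.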
> > Those issues are really critical, I would like to hear from OPM > folks, > and find solutions that we will work on during the following weeks. > > Thanks > > On 01/12/2016 12:37 PM, Emilien Macchi wrote: > > So I started an etherpad to discuss why we have so much downstream > > patches in Puppet modules. > > > > https://etherpad.openstack.org/p/opm-patches > > > > In my opinion, we should follow some best practices: > > > > * upstream first. If you find a bug, submit the patch upstream, > > wait for > > at least a positive review from a core and also successful CI jobs. > > Then > > you can backport it downstream if urgent. > > * backport it to stable branches when needed. The patch we want is > > in > > master and not stable? It's too easy to backport it in OPM. Do the > > backport in upstream/stable first, it will help to stay updated > > with > > upstream. > > * don't change default parameters, don't override them. Our > > installers > > are able to override any parameter so do not hardcode this kind of > > change. > > * keep up with upstream: if you have an upstream patch under review > > that > > is already in OPM: keep it alive and make sure it lands as soon as > > possible. > > > > UPSTREAM FIRST please please please (I'll send you cookies if you > > want). > > > > If you have any question about an upstream patch, please join > > #puppet-openstack (freenode) and talk to the group. We're doing > > reviews > > every day and it's not difficult to land a patch. > > > > In the meantime, I would like to justify each of our backports in > > the > > etherpad and clean-up a maximum of them. 
> > > > Thank you for reading so far, > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From hguemar at fedoraproject.org Wed Jan 13 16:54:44 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 13 Jan 2016 17:54:44 +0100 Subject: [Rdo-list] [Meeting Minutes] RDO meeting (2016-01-13) Message-ID: ============================== #rdo: RDO meeting (2016-01-13) ============================== Meeting started by number80 at 15:02:04 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2016-01-13/rdo_meeting_(2016-01-13).2016-01-13-15.02.log.html . Meeting summary --------------- * LINK: https://etherpad.openstack.org/p/RDO-Packaging (number80, 15:02:50) * murano/mistral unresponsive maintainer (number80, 15:03:52) * ACTION: degorenko contact Daniil about murano/mistral (number80, 15:09:38) * dynamic UID/GID for openstack services (number80, 15:10:22) * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1297580 (dmsimard, 15:13:24) * LINK: https://git.fedorahosted.org/cgit/setup.git/tree/uidgid (number80, 15:15:51) * reserved UIDs in Fedora https://git.fedorahosted.org/cgit/setup.git/tree/uidgid (apevec, 15:17:12) * ACTION: number80 ping RHEL setup maintainer (number80, 15:26:10) * AGREED: keep using reserved UID/GID for existing services and use dynamic UID/GID for new services by default (number80, 15:27:50) * Fedora uid guidelines https://fedoraproject.org/wiki/Packaging:UsersAndGroups (apevec, 15:28:29) * trim changelog entries that are older than 2 years (number80, 15:31:13) * LINK: 
http://docs.openstack.org/icehouse/install-guide/install/yum/content/reserved_uids.html (number80, 15:32:36) * trim changelog entries that are older than 2 years (number80, 15:33:42) * LINK: https://review.gerrithub.io/#/c/258590/ (number80, 15:33:50) * AGREED: add in rdo-rpm-macros trim changelog entries that are older than 2 years (number80, 15:37:18) * missing deps from EPEL required by RDO Manager (number80, 15:37:30) * LINK: https://trello.com/c/6uXVuQcC/115-rebuild-epel-packages-required-for-rdo-manager (number80, 15:37:39) * good news we don't need dkms in RDO repositories :) (number80, 15:37:59) * LINK: http://docs.ceph.com/docs/master/install/get-packages/ (dmsimard, 15:40:12) * AGREED: temporarily use EPEL ceph package until we find a better alternative w/ ceph folks or storage SIG (number80, 15:46:56) * CI promote job - To Khaleesi or not to Khaleesi (number80, 15:47:43) * AGREED: put tripleo-quickstart on rdo-openstack (number80, 15:54:04) * an update on WeiRDO (number80, 15:54:29) * LINK: https://github.com/redhat-openstack/weirdo (number80, 15:54:40) * LINK: https://www.redhat.com/archives/rdo-list/2015-December/msg00043.html (dmsimard, 15:54:45) * weirdo is already set up to use gerrithub and weirdo already gates against itself (though non-voting) (number80, 15:56:45) * Upcoming events (number80, 15:57:06) * Doc day, Jan 20-21 (number80, 15:58:12) * test day Mitaka-2 - Jan 27-28 (number80, 15:58:33) * rdoinfo - any tweaks needed to prevent forks and have all releases in one? (number80, 16:01:57) * AGREED: use apevec proposal as foundation for the next-gen rdoinfo database format (number80, 16:06:17) * ACTION: jruzicka to lead rdoinfo reformat (number80, 16:06:38) * open floor (number80, 16:08:39) * AGREED: jruzicka chairing next week meeting (number80, 16:08:50) * ACTION: dmsimard will migrate delorean instance to a less crowded compute node (number80, 16:10:16) Meeting ended at 16:14:00 UTC.
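The changelog-trimming decision recorded in the minutes above could be implemented along these lines. This is my sketch only, not the actual rdo-rpm-macros code; a sample changelog is generated inline so the snippet is self-contained, and the names in it are invented:

```shell
# Keep only %changelog entries whose "* <weekday> <month> <day> <year> ..."
# header is newer than a two-year cutoff; body lines follow their header.
cat > changelog.txt <<EOF
* $(date -d '1 month ago' '+%a %b %d %Y') Jane Doe <jdoe@example.com> - 1.0-2
- recent fix, kept
* Tue Jan 12 2010 Jane Doe <jdoe@example.com> - 1.0-1
- ancient entry, trimmed
EOF
cutoff=$(date -d '2 years ago' +%s)         # GNU date relative syntax
awk -v cutoff="$cutoff" '
  /^\* / {                                  # an entry header line
    cmd = "date -d \"" $2 " " $3 " " $4 " " $5 "\" +%s"
    cmd | getline ts; close(cmd)
    keep = (ts >= cutoff)                   # decide for header + its body
  }
  keep' changelog.txt | tee trimmed.txt
```

The entry header's date fields are handed back to GNU `date` for parsing, so each entry (header plus body lines) is kept or dropped as a unit.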
Action Items ------------ * degorenko contact Daniil about murano/mistral * number80 ping RHEL setup maintainer * jruzicka to lead rdoinfo reformat * dmsimard will migrate delorean instance to a less crowded compute node Action Items, by person ----------------------- * degorenko * degorenko contact Daniil about murano/mistral * dmsimard * dmsimard will migrate delorean instance to a less crowded compute node * jruzicka * jruzicka to lead rdoinfo reformat * number80 * number80 ping RHEL setup maintainer * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * number80 (139) * apevec (98) * dmsimard (51) * trown (26) * rbowen (18) * jruzicka (18) * zodbot (13) * EmilienM (9) * chandankumar (7) * degorenko (7) * jpena (6) * imcsk8 (3) * jschlueter (3) * sidx64_Cern_ (2) * mflobo (2) * dtrishkin (1) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot From thaleslv at yahoo.com Wed Jan 13 17:26:21 2016 From: thaleslv at yahoo.com (Thales) Date: Wed, 13 Jan 2016 17:26:21 +0000 (UTC) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <192905161.7117075.1452699457337.JavaMail.zimbra@redhat.com> References: <192905161.7117075.1452699457337.JavaMail.zimbra@redhat.com> Message-ID: <1551045829.3665763.1452705981245.JavaMail.yahoo@mail.yahoo.com> Ivan, You're right, I ran it without the sudo command. I was following the directions here, where they don't use sudo: https://www.rdoproject.org/install/quickstart/ Is that wrong? Regards, ...John On Wednesday, January 13, 2016 9:37 AM, Ivan Chavero wrote: > > Here is the pastebin, basically a repeat of the "quickstart" website > > http://paste.openstack.org/show/483685/ Looking at your history i noticed that you run packstack withuout the sudo command. Are you sure it finished correctly? It should be run as root. Cheers, Ivan -------------- next part -------------- An HTML attachment was scrubbed...
URL: From thaleslv at yahoo.com Wed Jan 13 17:29:13 2016 From: thaleslv at yahoo.com (Thales) Date: Wed, 13 Jan 2016 17:29:13 +0000 (UTC) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <1931522934.11892588.1452671815938.JavaMail.zimbra@redhat.com> References: <1931522934.11892588.1452671815938.JavaMail.zimbra@redhat.com> Message-ID: <1496787795.3648750.1452706153888.JavaMail.yahoo@mail.yahoo.com> Hello Javier, "Can you paste the contents of /var/log/nova/*.log somewhere?" There are eight log files, and two really large ones. One is over 4 megs. Is there one in particular, or should I put them in a zip file and make an attachment? Regards, ...John On Wednesday, January 13, 2016 1:56 AM, Javier Pena wrote: ----- Original Message ----- > I'm still struggling with this one, so if anyone has any other ideas, I'm all > ears (or eyes!). Hi John, Can you paste the contents of /var/log/nova/*.log somewhere? It might help understanding why it is not working. Regards, Javier > Regards, > ...John > On Tuesday, January 12, 2016 2:52 PM, Thales wrote: > Thanks! I'm just going through the beginning tutorials now, so I'm not aware > of those commands, nor do I know where to find them, however I ran the > first, and, unfortunately, I get the same error: > [root at localhost keystone(keystone_admin)]# nova keypair-add cloudkey > > cloudkey.priv; chmod 600 cloudkey.priv > ERROR (BadRequest): Keypair data is invalid: failed to generate fingerprint > (HTTP 400) (Request-ID: req-e722e315-e701-40d8-9fd6-0b5e71df3de1) > Regards, > ...John > On Tuesday, January 12, 2016 2:21 PM, Sasha Chuzhoy wrote: > You need to source the file with the variables. > Run "source /root/keystonerc_admin" before attempting the "nova keypair-add" > command. > Thanks. > Best regards, > Sasha Chuzhoy.
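The sourcing step Sasha describes above can be sketched as a small guard. The function name here is mine, not from the thread; the keystonerc path is the packstack default that the thread mentions, and the nova calls only run once credentials are present:

```shell
# Sketch only: the nova CLI reads OS_USERNAME/OS_PASSWORD/OS_AUTH_URL from
# the environment, which is why keypair commands fail with credential errors
# until the keystonerc file has been sourced.
create_keypair() {
    if [ -z "${OS_USERNAME:-}" ]; then
        echo "credentials not loaded; run: source /root/keystonerc_admin"
        return 1
    fi
    # credentials are loaded: create the key and protect the private half
    nova keypair-add "$1" > "$1.priv" && chmod 600 "$1.priv"
}
create_keypair cloudkey || true
```

Checking for the OS_* variables first distinguishes the "you must provide a username" error (credentials not sourced) from genuine keypair failures.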
> ----- Original Message ----- > > From: "Thales" < thaleslv at yahoo.com > > > To: "Sasha Chuzhoy" < sasha at redhat.com > > > Cc: "Marius Cornea" < marius at remote-lab.net >, rdo-list at redhat.com > > Sent: Tuesday, January 12, 2016 2:49:48 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Hello Sasha, > > I tried your command, and got the following error: > > [ john at localhost ~]$ nova keypair-add cloudkey > cloudkey.priv; chmod 600 > > cloudkey.priv > > ERROR (CommandError): You must provide a username or user id via > > --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID] > > > > > > ...John > > > > > > On Monday, January 11, 2016 6:28 PM, Sasha Chuzhoy < sasha at redhat.com > > > wrote: > > > > > > Hi John, > > Does this command work for you: > > nova keypair-add cloudkey >cloudkey.priv; chmod 600 cloudkey.priv > > > > If it works (nova keypair-list), you could ssh to the instance launched > > with > > the key using: > > ssh -i cloudkey.priv @ > > > > Best regards, > > Sasha Chuzhoy. > > > > ----- Original Message ----- > > > From: "Thales" < thaleslv at yahoo.com > > > > To: "Marius Cornea" < marius at remote-lab.net > > > > Cc: rdo-list at redhat.com > > > Sent: Sunday, January 10, 2016 10:50:12 PM > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > > > Thanks, Marius! > > > > > > I get the same error. > > > > > > "Error (BadRequest): Keypair data is invalid: failed to generate > > > fingerprint > > > (HTTP p 400) (Requested ID: ...)" > > > > > > I'm just staring out with OpenStack here, trying to get through my first > > > tutorial, so I'm fumbling around. > > > > > > ...John > > > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dms at redhat.com Wed Jan 13 17:55:51 2016 From: dms at redhat.com (David Moreau Simard) Date: Wed, 13 Jan 2016 12:55:51 -0500 Subject: [Rdo-list] Planned maintenance on Delorean repositories @ 21:30 UTC, Jan 14th Message-ID: Greetings, A maintenance is required on the server where the Delorean instances and repositories are hosted in order to resolve ongoing performance issues. Delorean will stop processing new commits around 21:00 UTC and the repositories will become unavailable around 21:30 UTC. The maintenance shouldn't last more than 30 minutes before the repositories are back up and Delorean resumes processing commits. We'll send an e-mail once the maintenance is complete. Thanks ! David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] From bderzhavets at hotmail.com Wed Jan 13 19:16:40 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 13 Jan 2016 19:16:40 +0000 Subject: [Rdo-list] Please, explain me what actually happens ? In-Reply-To: <569699A9.9050804@redhat.com> References: <56969326.20902@redhat.com> <56969901.5010500@redhat.com>,<569699A9.9050804@redhat.com> Message-ID: ________________________________________ From: Rich Bowen Sent: Wednesday, January 13, 2016 1:38 PM To: Perry Myers; Boris Derzhavets; rdo-list-owner at redhat.com; Alan Pevec Subject: Re: Please, explain me what actually happens ? Hmm. Digging a little deeper, it appears that *all* of the bounce notifications were to hotmail addresses - some hotmail.com and some hotmail.it. So maybe 1) a server bounced at hotmail, or 2) we actually have a blacklist problem with hotmail. Can you re-enable your subscription and see if it happens again? Already done. If hotmail.com is a problem I can replace it with alive account @gmail.com B. --Rich On 01/13/2016 01:35 PM, Rich Bowen wrote: > I see those notifications every few days, and then today I saw 14 of > them all within a minute of one another. 
I'll bet that the problem was > on our end - some system somewhere went offline or was rebooted or > something. > > On 01/13/2016 01:10 PM, Perry Myers wrote: >> Alan/Rich do you have any idea what is going on with this? I saw a TON >> of these bounces just go out recently and no idea why. >> >> Boris, typically these would go out if attempts for the mailing list to >> send email to your address consistently fail and get bounces back of the >> messages. If your mail server is bouncing messages back to the list >> server, then this would be the result. I'm not sure if that's actually >> happening here or if there is some other cause. >> >> This isn't about you posting to the list, it's about the ability of the >> list to deliver list traffic/messages to you. >> >> Perry >> >> On 01/13/2016 01:08 PM, Boris Derzhavets wrote: >>> I just received following message :- >>> >>> >>> Your membership in the mailing list Rdo-list has been disabled due to >>> excessive bounces The last bounce received from you was dated >>> 13-Jan-2016. You will not get any more messages from this list until >>> you re-enable your membership. You will receive 3 more reminders like >>> this before your membership in the list is deleted. >>> >>> To re-enable your membership, you can simply respond to this message >>> (leaving the Subject: line intact), or visit the confirmation page at >>> >>> . . . . . . . >>> >>> >>> Please, check >>> >>> https://www.redhat.com/archives/rdo-list/2016-January/author.html >>> >>> >>> *Boris Derzhavets* >>> >>> * Re: [Rdo-list] RDO Liberty Multi-Node Networking Config Issue >>> Wed >>> Jan 06 13:54:43 GMT 2016 >>> * Re: [Rdo-list] Do you blog about OpenStack? 
>>> Fri >>> Jan 08 20:03:55 GMT 2016 >>> * Re: [Rdo-list] Confusion with Floating IPs in RDO Liberty Packstack >>> 3-node setup >>> Sat >>> Jan 09 07:43:09 GMT 2016 >>> * Re: [Rdo-list] HA setup in Kilo >>> Mon >>> Jan 11 15:11:19 GMT 2016 >>> >>> >>> I didn't send anything to RDO mailing list since 01/11/2016 >>> >>> >>> Please, explain me what I am doing wrong posting to rdo mailing list, >>> >>> what results excessive bounces in time frame when my activity on list >>> is zero ? >>> >>> >>> Thanks. >>> >>> Boris Derzhavets. >>> >> > > -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From sasha at redhat.com Wed Jan 13 20:46:16 2016 From: sasha at redhat.com (Sasha Chuzhoy) Date: Wed, 13 Jan 2016 15:46:16 -0500 (EST) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <1551045829.3665763.1452705981245.JavaMail.yahoo@mail.yahoo.com> References: <192905161.7117075.1452699457337.JavaMail.zimbra@redhat.com> <1551045829.3665763.1452705981245.JavaMail.yahoo@mail.yahoo.com> Message-ID: <998152316.10970433.1452717976531.JavaMail.zimbra@redhat.com> John, please try to run the " packstack --allinone" command as root (or with sudo). Then see if the error reproduces. Thanks. Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Thales" > To: "Ivan Chavero" > Cc: rdo-list at redhat.com > Sent: Wednesday, January 13, 2016 12:26:21 PM > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > Ivan, > > You're right, I ran it without the sudo command. I was following the > directions here, where they don't use sudo: > https://www.rdoproject.org/install/quickstart/ > > > Is that wrong? 
> > Regards, > ...John > > > > On Wednesday, January 13, 2016 9:37 AM, Ivan Chavero > > wrote: > > > > > > > > > > > > > > Here is the pastebin, basically a repeat of the "quickstart" website > > > > > > http://paste.openstack.org/show/483685/ > > > > > > Looking at your history i noticed that you run packstack withuout the sudo > > command. > > Are you sure it finished correctly? It should be run as root. > > > > Cheers, > > Ivan > > > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com From emilien at redhat.com Wed Jan 13 21:04:30 2016 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 13 Jan 2016 16:04:30 -0500 Subject: [Rdo-list] OPM downstream patches In-Reply-To: <1452702990.1970.6.camel@redhat.com> References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <1452702990.1970.6.camel@redhat.com> Message-ID: <5696BBDE.3080701@redhat.com> On 01/13/2016 11:36 AM, Lukas Bezdicka wrote: > On Tue, 2016-01-12 at 13:16 -0500, Emilien Macchi wrote: >> Also, the way we're packaging OPM is really bad. >> >> * we have no SHA1 for each module we have in OPM > /usr/share/openstack-puppet/Puppetfile Nope, this file does not contain the SHA1s of the actual modules we have in OPM. This file was used (and is 100% useless now) by SpinalStack. >> * we are not able to validate each module > you have Puppetfile and our own patches as patches to tar No, see my previous comment. >> * package tarball is not pure. All other OpenStack RPMS take upstream >> tarball so we can easily compare but in OPM... no way to do it. > Tarballs are always taken from github releases: > https://github.com/redhat-openstack/openstack-puppet-modules/releases You can't pull a tarball per module. Modules have been introduced by a RAW copy-paste.
I would like to see a more constructive reply from the people who are actually authors of OPM. We want to build a plan for the future release that will radically change the way we build & ship OPM. >> >> Those issues are really critical, I would like to hear from OPM >> folks, >> and find solutions that we will work on during the following weeks. >> >> Thanks >> >> On 01/12/2016 12:37 PM, Emilien Macchi wrote: >>> So I started an etherpad to discuss why we have so much downstream >>> patches in Puppet modules. >>> >>> https://etherpad.openstack.org/p/opm-patches >>> >>> In my opinion, we should follow some best practices: >>> >>> * upstream first. If you find a bug, submit the patch upstream, >>> wait for >>> at least a positive review from a core and also successful CI jobs. >>> Then >>> you can backport it downstream if urgent. >>> * backport it to stable branches when needed. The patch we want is >>> in >>> master and not stable? It's too easy to backport it in OPM. Do the >>> backport in upstream/stable first, it will help to stay updated >>> with >>> upstream. >>> * don't change default parameters, don't override them. Our >>> installers >>> are able to override any parameter so do not hardcode this kind of >>> change. >>> * keep up with upstream: if you have an upstream patch under review >>> that >>> is already in OPM: keep it alive and make sure it lands as soon as >>> possible. >>> >>> UPSTREAM FIRST please please please (I'll send you cookies if you >>> want). >>> >>> If you have any question about an upstream patch, please join >>> #puppet-openstack (freenode) and talk to the group. We're doing >>> reviews >>> every day and it's not difficult to land a patch. >>> >>> In the meantime, I would like to justify each of our backports in >>> the >>> etherpad and clean-up a maximum of them.
>>> >>> Thank you for reading so far, >>> >>> >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com -- Emilien Macchi -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From thaleslv at yahoo.com Wed Jan 13 21:38:29 2016 From: thaleslv at yahoo.com (Thales) Date: Wed, 13 Jan 2016 21:38:29 +0000 (UTC) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <998152316.10970433.1452717976531.JavaMail.zimbra@redhat.com> References: <998152316.10970433.1452717976531.JavaMail.zimbra@redhat.com> Message-ID: <602332309.3777027.1452721109832.JavaMail.yahoo@mail.yahoo.com> Sasha, Okay, ?I ran it. ? ?I hope there is not a conflict with the previous install. ?I ran the web interface, Dashboard, and the same error pops up, the HTTP 400 error. ? ? I then ran the keypair command line command at root, and get a different error, a n HTTP 401 authentication error: Here are the commands and the output:http://paste.openstack.org/show/483817/ ...John On Wednesday, January 13, 2016 2:46 PM, Sasha Chuzhoy wrote: John, please try to run the " packstack --allinone" command as root (or with sudo). Then see if the error reproduces. Thanks. Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Thales" > To: "Ivan Chavero" > Cc: rdo-list at redhat.com > Sent: Wednesday, January 13, 2016 12:26:21 PM > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > Ivan, > > You're right, I ran it without the sudo command. 
I was following the > directions here, where they don't use sudo: > https://www.rdoproject.org/install/quickstart/ > > > Is that wrong? > > Regards, > ...John > > > On Wednesday, January 13, 2016 9:37 AM, Ivan Chavero > wrote: > > > > > > > > Here is the pastebin, basically a repeat of the "quickstart" website > > > > http://paste.openstack.org/show/483685/ > > > Looking at your history i noticed that you run packstack withuout the sudo > command. > Are you sure it finished correctly? It should be run as root. > > Cheers, > Ivan > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sasha at redhat.com Wed Jan 13 21:52:54 2016 From: sasha at redhat.com (Sasha Chuzhoy) Date: Wed, 13 Jan 2016 16:52:54 -0500 (EST) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <602332309.3777027.1452721109832.JavaMail.yahoo@mail.yahoo.com> References: <998152316.10970433.1452717976531.JavaMail.zimbra@redhat.com> <602332309.3777027.1452721109832.JavaMail.yahoo@mail.yahoo.com> Message-ID: <68552699.10995154.1452721974034.JavaMail.zimbra@redhat.com> Did the run of "packstack --allinone" completed successfully or exited with error? The should be no conflict with the previous install (not sure if the previous install completed successfully). Thanks. Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Thales" > To: "Sasha Chuzhoy" > Cc: "Ivan Chavero" , rdo-list at redhat.com > Sent: Wednesday, January 13, 2016 4:38:29 PM > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > Sasha, > Okay, ?I ran it. ? ?I hope there is not a conflict with the previous install. > ?I ran the web interface, Dashboard, and the same error pops up, the HTTP > 400 error. ? ? 
I then ran the keypair command line command at root, and get > a different error, a n HTTP 401 authentication error: > Here are the commands and the output:http://paste.openstack.org/show/483817/ > > > > ...John > > > On Wednesday, January 13, 2016 2:46 PM, Sasha Chuzhoy > wrote: > > > John, > please try to run the " packstack --allinone" command as root (or with sudo). > > Then see if the error reproduces. > Thanks. > > Best regards, > Sasha Chuzhoy. > > ----- Original Message ----- > > From: "Thales" > > To: "Ivan Chavero" > > Cc: rdo-list at redhat.com > > Sent: Wednesday, January 13, 2016 12:26:21 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Ivan, > > > > You're right, I ran it without the sudo command. I was following the > > directions here, where they don't use sudo: > > https://www.rdoproject.org/install/quickstart/ > > > > > > Is that wrong? > > > > Regards, > > ...John > > > > > > On Wednesday, January 13, 2016 9:37 AM, Ivan Chavero > > wrote: > > > > > > > > > > > > > > Here is the pastebin, basically a repeat of the "quickstart" website > > > > > > http://paste.openstack.org/show/483685/ > > > > > > Looking at your history i noticed that you run packstack withuout the sudo > > command. > > Are you sure it finished correctly? It should be run as root. 
> > > > Cheers, > > Ivan > > > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > From thaleslv at yahoo.com Wed Jan 13 22:05:57 2016 From: thaleslv at yahoo.com (Thales) Date: Wed, 13 Jan 2016 22:05:57 +0000 (UTC) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <68552699.10995154.1452721974034.JavaMail.zimbra@redhat.com> References: <68552699.10995154.1452721974034.JavaMail.zimbra@redhat.com> Message-ID: <1008102011.3805883.1452722757277.JavaMail.yahoo@mail.yahoo.com> Sasha, Yes, it confirmed with the following: "**** Installation completed successfully ******"?It's on the screen in front of me now.? The previous install also gave the same message. ?? ...John On Wednesday, January 13, 2016 3:52 PM, Sasha Chuzhoy wrote: Did the run of "packstack --allinone" completed successfully or exited with error? The should be no conflict with the previous install (not sure if the previous install completed successfully). Thanks. Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Thales" > To: "Sasha Chuzhoy" > Cc: "Ivan Chavero" , rdo-list at redhat.com > Sent: Wednesday, January 13, 2016 4:38:29 PM > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > Sasha, > Okay, ?I ran it. ? ?I hope there is not a conflict with the previous install. > ?I ran the web interface, Dashboard, and the same error pops up, the HTTP > 400 error. ? ? I then ran the keypair command line command at root, and get > a different error, a n HTTP 401 authentication error: > Here are the commands and the output:http://paste.openstack.org/show/483817/ > > > > ...John >? > >? ? On Wednesday, January 13, 2016 2:46 PM, Sasha Chuzhoy >? ? wrote: >? > >? John, > please try to run the " packstack --allinone" command as root (or with sudo). 
> > Then see if the error reproduces. > Thanks. > > Best regards, > Sasha Chuzhoy. > > ----- Original Message ----- > > From: "Thales" > > To: "Ivan Chavero" > > Cc: rdo-list at redhat.com > > Sent: Wednesday, January 13, 2016 12:26:21 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Ivan, > > > > You're right, I ran it without the sudo command. I was following the > > directions here, where they don't use sudo: > > https://www.rdoproject.org/install/quickstart/ > > > > > > Is that wrong? > > > > Regards, > > ...John > > > > > > On Wednesday, January 13, 2016 9:37 AM, Ivan Chavero > > wrote: > > > > > > > > > > > > > > Here is the pastebin, basically a repeat of the "quickstart" website > > > > > > http://paste.openstack.org/show/483685/ > > > > > > Looking at your history i noticed that you run packstack withuout the sudo > > command. > > Are you sure it finished correctly? It should be run as root. > > > > Cheers, > > Ivan > > > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at redhat.com Wed Jan 13 22:36:19 2016 From: mrunge at redhat.com (Matthias Runge) Date: Wed, 13 Jan 2016 23:36:19 +0100 Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <1008102011.3805883.1452722757277.JavaMail.yahoo@mail.yahoo.com> References: <68552699.10995154.1452721974034.JavaMail.zimbra@redhat.com> <1008102011.3805883.1452722757277.JavaMail.yahoo@mail.yahoo.com> Message-ID: <5696D163.1050508@redhat.com> On 13/01/16 23:05, Thales wrote: > Sasha, > > Yes, it confirmed with the following: > > "**** Installation completed successfully ******" > It's on the screen in front of me now. > > The previous install also gave the same message. 
> > ...John > > And something like nova list succeeds for you? Matthias From thaleslv at yahoo.com Wed Jan 13 22:49:31 2016 From: thaleslv at yahoo.com (Thales) Date: Wed, 13 Jan 2016 22:49:31 +0000 (UTC) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <68552699.10995154.1452721974034.JavaMail.zimbra@redhat.com> References: <68552699.10995154.1452721974034.JavaMail.zimbra@redhat.com> Message-ID: <430393862.3850094.1452725371815.JavaMail.yahoo@mail.yahoo.com> Okay, running it on the command line now gives the original HTTP 400 error. ? ?So, it's doing the same thing it did at the start. On Wednesday, January 13, 2016 3:52 PM, Sasha Chuzhoy wrote: Did the run of "packstack --allinone" completed successfully or exited with error? The should be no conflict with the previous install (not sure if the previous install completed successfully). Thanks. Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Thales" > To: "Sasha Chuzhoy" > Cc: "Ivan Chavero" , rdo-list at redhat.com > Sent: Wednesday, January 13, 2016 4:38:29 PM > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > Sasha, > Okay, ?I ran it. ? ?I hope there is not a conflict with the previous install. > ?I ran the web interface, Dashboard, and the same error pops up, the HTTP > 400 error. ? ? I then ran the keypair command line command at root, and get > a different error, a n HTTP 401 authentication error: > Here are the commands and the output:http://paste.openstack.org/show/483817/ > > > > ...John >? > >? ? On Wednesday, January 13, 2016 2:46 PM, Sasha Chuzhoy >? ? wrote: >? > >? John, > please try to run the " packstack --allinone" command as root (or with sudo). > > Then see if the error reproduces. > Thanks. > > Best regards, > Sasha Chuzhoy. 
> > ----- Original Message ----- > > From: "Thales" > > To: "Ivan Chavero" > > Cc: rdo-list at redhat.com > > Sent: Wednesday, January 13, 2016 12:26:21 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Ivan, > > > > You're right, I ran it without the sudo command. I was following the > > directions here, where they don't use sudo: > > https://www.rdoproject.org/install/quickstart/ > > > > > > Is that wrong? > > > > Regards, > > ...John > > > > > > On Wednesday, January 13, 2016 9:37 AM, Ivan Chavero > > wrote: > > > > > > > > > > > > > > Here is the pastebin, basically a repeat of the "quickstart" website > > > > > > http://paste.openstack.org/show/483685/ > > > > > > Looking at your history i noticed that you run packstack withuout the sudo > > command. > > Are you sure it finished correctly? It should be run as root. > > > > Cheers, > > Ivan > > > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thaleslv at yahoo.com Thu Jan 14 00:47:34 2016 From: thaleslv at yahoo.com (Thales) Date: Thu, 14 Jan 2016 00:47:34 +0000 (UTC) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <68552699.10995154.1452721974034.JavaMail.zimbra@redhat.com> References: <68552699.10995154.1452721974034.JavaMail.zimbra@redhat.com> Message-ID: <1764746640.3874156.1452732454867.JavaMail.yahoo@mail.yahoo.com> Okay,?I decided to go back to the very start, to a clean install of CentOS 7. ? ?I ran all of the commands to install rdo as root.That is, the commands from the quick start website here:?https://www.rdoproject.org/install/quickstart/ Alas, the same HTTP 400 error crops up! ? I ran the keypair commands from the CLI as well. ?Wow. ? ? ? 
It has to be something simple and obvious. ...John On Wednesday, January 13, 2016 3:52 PM, Sasha Chuzhoy wrote: Did the run of "packstack --allinone" completed successfully or exited with error? The should be no conflict with the previous install (not sure if the previous install completed successfully). Thanks. Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Thales" > To: "Sasha Chuzhoy" > Cc: "Ivan Chavero" , rdo-list at redhat.com > Sent: Wednesday, January 13, 2016 4:38:29 PM > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > Sasha, > Okay, ?I ran it. ? ?I hope there is not a conflict with the previous install. > ?I ran the web interface, Dashboard, and the same error pops up, the HTTP > 400 error. ? ? I then ran the keypair command line command at root, and get > a different error, a n HTTP 401 authentication error: > Here are the commands and the output:http://paste.openstack.org/show/483817/ > > > > ...John >? > >? ? On Wednesday, January 13, 2016 2:46 PM, Sasha Chuzhoy >? ? wrote: >? > >? John, > please try to run the " packstack --allinone" command as root (or with sudo). > > Then see if the error reproduces. > Thanks. > > Best regards, > Sasha Chuzhoy. > > ----- Original Message ----- > > From: "Thales" > > To: "Ivan Chavero" > > Cc: rdo-list at redhat.com > > Sent: Wednesday, January 13, 2016 12:26:21 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Ivan, > > > > You're right, I ran it without the sudo command. I was following the > > directions here, where they don't use sudo: > > https://www.rdoproject.org/install/quickstart/ > > > > > > Is that wrong? 
> > > > Regards, > > ...John > > > > > > On Wednesday, January 13, 2016 9:37 AM, Ivan Chavero > > wrote: > > > > > > > > > > > > > > Here is the pastebin, basically a repeat of the "quickstart" website > > > > > > http://paste.openstack.org/show/483685/ > > > > > > Looking at your history i noticed that you run packstack withuout the sudo > > command. > > Are you sure it finished correctly? It should be run as root. > > > > Cheers, > > Ivan > > > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gcerami at redhat.com Thu Jan 14 00:54:44 2016 From: gcerami at redhat.com (Gabriele Cerami) Date: Thu, 14 Jan 2016 01:54:44 +0100 Subject: [Rdo-list] OPM downstream patches In-Reply-To: <569575DA.9050101@redhat.com> References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> Message-ID: <1452732884.4030.120.camel@redhat.com> On Tue, 2016-01-12 at 13:16 -0500, Emilien Macchi wrote: > I have 2 proposals, maybe wrong but I wanted to share. > > Solution #1 - Forks + Puppetfile > > * Forking all our Puppet modules in > https://github.com/redhat-openstack/ > * Apply our custom patches in a specific branch for each module > * Create a Puppetfile per RDO/OSP version that track SHA1 from repos > * Create a script (with r10k for example) that checkout all modules > and > build RPM. > > Solution #2 - Forks + one RPM per module > * Forking all our Puppet modules in > https://github.com/redhat-openstack/ > * Apply our custom patches in a specific branch for each module > * Create a RPM for each module All upstream modules are already forked in https://github.com/rdo-puppet-modules, not under redhat-openstack because of the way gerrithub deals with github replication. 
Modules are forked, because of the way opm has been handled until now. Here is some information on these modules, and how CI handles them.

- Each fork branch {branch} is an exact and automatically updated copy of upstream module branch {branch}
- Each fork may contain a {branch}-patches branch to host all the patches for the branch {branch}
- Each fork is updated after every upstream change is submitted; each time, the result of merging {branch} and {branch}-patches is tested on a CentOS+RDO environment. From this successful, tested merge a {branch}-tag branch is updated, which can be taken to be packaged.

So, in a way we already have the first two points of every proposal. {branch}-patches for the modules should still be populated with all the needed patches, but to do this it should be enough to propose the patches as reviews in gerrithub for the corresponding project, on the branch {branch}-patches. CI should do the rest (test merge with {branch}, test results, update {branch}-tag, then submit). Raw material is there and available for whatever packaging solution is preferred, but I think at this point one RPM per module, with maybe a metapackage to rule them all, is the best solution, now that module updates are automated.

From sasha at redhat.com Thu Jan 14 03:48:47 2016 From: sasha at redhat.com (Sasha Chuzhoy) Date: Wed, 13 Jan 2016 22:48:47 -0500 (EST) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <1764746640.3874156.1452732454867.JavaMail.yahoo@mail.yahoo.com> References: <68552699.10995154.1452721974034.JavaMail.zimbra@redhat.com> <1764746640.3874156.1452732454867.JavaMail.yahoo@mail.yahoo.com> Message-ID: <1614684427.11099610.1452743327606.JavaMail.zimbra@redhat.com>

Hi John, I wasn't able to reproduce your issue. Could you please check the logs for errors and also double-check that the system's resources aren't exhausted. Thanks.

Best regards, Sasha Chuzhoy.
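Sasha's two checks (log errors, exhausted resources) can be scripted. A minimal sketch; the stand-in log file is only created so the commands run anywhere, and on the real host you would point `NOVA_LOG_DIR` at /var/log/nova instead:

```shell
# Default to a scratch directory; override with NOVA_LOG_DIR=/var/log/nova.
log_dir=${NOVA_LOG_DIR:-$(mktemp -d)}
# Stand-in log so the sketch is self-contained off an OpenStack host.
[ -e "$log_dir"/api.log ] || printf '%s\n' \
  '2016-01-14 12:00:00.000 1234 INFO nova.api [-] started' \
  '2016-01-14 12:00:05.000 1234 ERROR nova.api [-] Unexpected API Error' \
  > "$log_dir"/api.log

# 1) Sweep the logs for error-level lines and tracebacks.
errors=$(grep -hiE 'error|critical|trace' "$log_dir"/*.log | wc -l)
echo "error-ish log lines: $errors"

# 2) Quick resource check: memory and root-filesystem headroom.
if command -v free >/dev/null 2>&1; then free -m | head -2; fi
df -h / | tail -1
```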
----- Original Message ----- > From: "Thales" > To: "Sasha Chuzhoy" > Cc: "Ivan Chavero" , rdo-list at redhat.com > Sent: Wednesday, January 13, 2016 7:47:34 PM > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > Okay,?I decided to go back to the very start, to a clean install of CentOS 7. > ? ?I ran all of the commands to install rdo as root.That is, the commands > from the quick start website here: > ?https://www.rdoproject.org/install/quickstart/ > > Alas, the same HTTP 400 error crops up! ? I ran the keypair commands from the > CLI as well. ?Wow. > > ? It has to be something simple and obvious. > ...John > > On Wednesday, January 13, 2016 3:52 PM, Sasha Chuzhoy > wrote: > > > Did the run of "packstack --allinone" completed successfully or exited with > error? > > The should be no conflict with the previous install (not sure if the previous > install completed successfully). > Thanks. > > Best regards, > Sasha Chuzhoy. > > ----- Original Message ----- > > From: "Thales" > > To: "Sasha Chuzhoy" > > Cc: "Ivan Chavero" , rdo-list at redhat.com > > Sent: Wednesday, January 13, 2016 4:38:29 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Sasha, > > Okay, ?I ran it. ? ?I hope there is not a conflict with the previous > > install. > > ?I ran the web interface, Dashboard, and the same error pops up, the HTTP > > 400 error. ? ? I then ran the keypair command line command at root, and get > > a different error, a n HTTP 401 authentication error: > > Here are the commands and the > > output:http://paste.openstack.org/show/483817/ > > > > > > > > ...John > >? > > > >? ? On Wednesday, January 13, 2016 2:46 PM, Sasha Chuzhoy > >? ? wrote: > >? > > > >? John, > > please try to run the " packstack --allinone" command as root (or with > > sudo). > > > > Then see if the error reproduces. > > Thanks. > > > > Best regards, > > Sasha Chuzhoy. 
> > > > ----- Original Message ----- > > > From: "Thales" > > > To: "Ivan Chavero" > > > Cc: rdo-list at redhat.com > > > Sent: Wednesday, January 13, 2016 12:26:21 PM > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > > > Ivan, > > > > > > You're right, I ran it without the sudo command. I was following the > > > directions here, where they don't use sudo: > > > https://www.rdoproject.org/install/quickstart/ > > > > > > > > > Is that wrong? > > > > > > Regards, > > > ...John > > > > > > > > > On Wednesday, January 13, 2016 9:37 AM, Ivan Chavero > > > > > > wrote: > > > > > > > > > > > > > > > > > > > > Here is the pastebin, basically a repeat of the "quickstart" website > > > > > > > > http://paste.openstack.org/show/483685/ > > > > > > > > > Looking at your history i noticed that you run packstack withuout the > > > sudo > > > command. > > > Are you sure it finished correctly? It should be run as root. > > > > > > Cheers, > > > Ivan > > > > > > > > > > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > From thaleslv at yahoo.com Thu Jan 14 05:34:11 2016 From: thaleslv at yahoo.com (Thales) Date: Thu, 14 Jan 2016 05:34:11 +0000 (UTC) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <1614684427.11099610.1452743327606.JavaMail.zimbra@redhat.com> References: <1614684427.11099610.1452743327606.JavaMail.zimbra@redhat.com> Message-ID: <2139766959.3989879.1452749651522.JavaMail.yahoo@mail.yahoo.com> "I wasn't able to reproduce your issue." Thanks for that, Sasha! I have looked through the eight log files in /var/log/nova and I found three things, which you can see here:http://paste.openstack.org/show/483844/ It looks like the SQL server is not working right? System resources. ? 
I'm looking at the Gnome System Monitor. Total Memory is 9.4 gigabytes, with 3.2 gigabytes used. But, this is strange, it says that CPU usage is at 100%. Not sure how that can be. vmtoolsd is using around 98% of the cpu! I have an 8 core machine, so I'm not sure how this is measuring. ...John

On Wednesday, January 13, 2016 9:48 PM, Sasha Chuzhoy wrote:

Hi John, I wasn't able to reproduce your issue. Could you please check the logs for errors and also double check that the system's resources aren't exhausted. Thanks. Best regards, Sasha Chuzhoy.

----- Original Message ----- > From: "Thales" > To: "Sasha Chuzhoy" > Cc: "Ivan Chavero" , rdo-list at redhat.com > Sent: Wednesday, January 13, 2016 7:47:34 PM > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > Okay, I decided to go back to the very start, to a clean install of CentOS 7. > I ran all of the commands to install rdo as root. That is, the commands > from the quick start website here: > https://www.rdoproject.org/install/quickstart/ > > Alas, the same HTTP 400 error crops up! I ran the keypair commands from the > CLI as well. Wow. > > It has to be something simple and obvious. > ...John > > On Wednesday, January 13, 2016 3:52 PM, Sasha Chuzhoy > wrote: > > Did the run of "packstack --allinone" complete successfully or exit with > error? > > There should be no conflict with the previous install (not sure if the previous > install completed successfully). > Thanks. > > Best regards, > Sasha Chuzhoy. > > ----- Original Message ----- > > From: "Thales" > > To: "Sasha Chuzhoy" > > Cc: "Ivan Chavero" , rdo-list at redhat.com > > Sent: Wednesday, January 13, 2016 4:38:29 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Sasha, > > Okay, I ran it. I hope there is not a conflict with the previous > > install. > > I ran the web interface, Dashboard, and the same error pops up, the HTTP > > 400 error. 
I then ran the keypair command line command at root, and get > > a different error, a n HTTP 401 authentication error: > > Here are the commands and the > > output:http://paste.openstack.org/show/483817/ > > > > > > > > ...John > >? > > > >? ? On Wednesday, January 13, 2016 2:46 PM, Sasha Chuzhoy > >? ? wrote: > >? > > > >? John, > > please try to run the " packstack --allinone" command as root (or with > > sudo). > > > > Then see if the error reproduces. > > Thanks. > > > > Best regards, > > Sasha Chuzhoy. > > > > ----- Original Message ----- > > > From: "Thales" > > > To: "Ivan Chavero" > > > Cc: rdo-list at redhat.com > > > Sent: Wednesday, January 13, 2016 12:26:21 PM > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > > > Ivan, > > > > > > You're right, I ran it without the sudo command. I was following the > > > directions here, where they don't use sudo: > > > https://www.rdoproject.org/install/quickstart/ > > > > > > > > > Is that wrong? > > > > > > Regards, > > > ...John > > > > > > > > > On Wednesday, January 13, 2016 9:37 AM, Ivan Chavero > > > > > > wrote: > > > > > > > > > > > > > > > > > > > > Here is the pastebin, basically a repeat of the "quickstart" website > > > > > > > > http://paste.openstack.org/show/483685/ > > > > > > > > > Looking at your history i noticed that you run packstack withuout the > > > sudo > > > command. > > > Are you sure it finished correctly? It should be run as root. > > > > > > Cheers, > > > Ivan > > > > > > > > > > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From javier.pena at redhat.com Thu Jan 14 07:56:05 2016 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 14 Jan 2016 02:56:05 -0500 (EST) Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <2139766959.3989879.1452749651522.JavaMail.yahoo@mail.yahoo.com> References: <1614684427.11099610.1452743327606.JavaMail.zimbra@redhat.com> <2139766959.3989879.1452749651522.JavaMail.yahoo@mail.yahoo.com> Message-ID: <404941246.12374602.1452758165188.JavaMail.zimbra@redhat.com>

----- Original Message ----- > "I wasn't able to reproduce your issue." > Thanks for that, Sasha! > I have looked through the eight log files in /var/log/nova and I found three > things, which you can see here: > http://paste.openstack.org/show/483844/ > It looks like the SQL server is not working right?

Hi John,

This just looks like a startup thing, where the Nova services were up before the database was available. It is working as expected, as services are retrying and eventually connect.

To simplify the log issue, let's try this:
- openstack-service stop nova
- go to /var/log/nova and move all log files to a backup location, so they are empty
- openstack-service start nova
- Try to reproduce the issue
- openstack-service stop nova

The log file size should be manageable now and there should be little noise. If they are not too big, could you try pasting them? There should be something in api.log to move forward.

Regards, Javier

> System resources. I'm looking at the Gnome System Monitor. > Total Memory is 9.4 gigabytes, with 3.2 gigabytes used. > But, this is strange, it says that CPU usage is at 100%. Not sure how that > can be. > vmtoolsd is using around 98% of the cpu! I have an 8 core machine, so I'm not > sure how this is measuring. > ...John > On Wednesday, January 13, 2016 9:48 PM, Sasha Chuzhoy > wrote: > Hi John, > I wasn't able to reproduce your issue.
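Javier's five steps can be run as a script; `openstack-service` is the real utility from the openstack-utils package, while `NOVA_LOG_DIR` defaults to a scratch directory here only so the sketch is runnable anywhere (use /var/log/nova on the actual host):

```shell
log_dir=${NOVA_LOG_DIR:-$(mktemp -d)}
touch "$log_dir"/api.log "$log_dir"/conductor.log   # stand-ins for the real logs

# Stop nova so nothing writes while the logs are moved aside.
if command -v openstack-service >/dev/null 2>&1; then openstack-service stop nova; fi

# Move all existing logs to a backup location so they start out empty.
backup_dir=$log_dir/pre-repro-backup
mkdir -p "$backup_dir"
mv "$log_dir"/*.log "$backup_dir"/

if command -v openstack-service >/dev/null 2>&1; then openstack-service start nova; fi
# ...now reproduce the keypair failure; the fresh api.log stays small
# enough to paste, then stop nova again before reading it.
```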
> Could you please check the logs for errors and also double check that the > system's resources aren't exhausted. > Thanks. > Best regards, > Sasha Chuzhoy. > ----- Original Message ----- > > From: "Thales" < thaleslv at yahoo.com > > > To: "Sasha Chuzhoy" < sasha at redhat.com > > > Cc: "Ivan Chavero" < ichavero at redhat.com >, rdo-list at redhat.com > > Sent: Wednesday, January 13, 2016 7:47:34 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Okay, I decided to go back to the very start, to a clean install of CentOS > > 7. > > I ran all of the commands to install rdo as root.That is, the commands > > from the quick start website here: > > https://www.rdoproject.org/install/quickstart/ > > > > Alas, the same HTTP 400 error crops up! I ran the keypair commands from the > > CLI as well. Wow. > > > > It has to be something simple and obvious. > > ...John > > > > On Wednesday, January 13, 2016 3:52 PM, Sasha Chuzhoy < sasha at redhat.com > > > wrote: > > > > > > Did the run of "packstack --allinone" completed successfully or exited with > > error? > > > > The should be no conflict with the previous install (not sure if the > > previous > > install completed successfully). > > Thanks. > > > > Best regards, > > Sasha Chuzhoy. > > > > ----- Original Message ----- > > > From: "Thales" < thaleslv at yahoo.com > > > > To: "Sasha Chuzhoy" < sasha at redhat.com > > > > Cc: "Ivan Chavero" < ichavero at redhat.com >, rdo-list at redhat.com > > > Sent: Wednesday, January 13, 2016 4:38:29 PM > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > > > Sasha, > > > Okay, I ran it. I hope there is not a conflict with the previous > > > install. > > > I ran the web interface, Dashboard, and the same error pops up, the HTTP > > > 400 error. 
I then ran the keypair command line command at root, and get > > > a different error, a n HTTP 401 authentication error: > > > Here are the commands and the > > > output: http://paste.openstack.org/show/483817/ > > > > > > > > > > > > ...John > > > > > > > > > On Wednesday, January 13, 2016 2:46 PM, Sasha Chuzhoy < sasha at redhat.com > > > > > > > wrote: > > > > > > > > > John, > > > please try to run the " packstack --allinone" command as root (or with > > > sudo). > > > > > > Then see if the error reproduces. > > > Thanks. > > > > > > Best regards, > > > Sasha Chuzhoy. > > > > > > ----- Original Message ----- > > > > From: "Thales" < thaleslv at yahoo.com > > > > > To: "Ivan Chavero" < ichavero at redhat.com > > > > > Cc: rdo-list at redhat.com > > > > Sent: Wednesday, January 13, 2016 12:26:21 PM > > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > > > > > Ivan, > > > > > > > > You're right, I ran it without the sudo command. I was following the > > > > directions here, where they don't use sudo: > > > > https://www.rdoproject.org/install/quickstart/ > > > > > > > > > > > > Is that wrong? > > > > > > > > Regards, > > > > ...John > > > > > > > > > > > > On Wednesday, January 13, 2016 9:37 AM, Ivan Chavero > > > > < ichavero at redhat.com > > > > > wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > Here is the pastebin, basically a repeat of the "quickstart" website > > > > > > > > > > http://paste.openstack.org/show/483685/ > > > > > > > > > > > > Looking at your history i noticed that you run packstack withuout the > > > > sudo > > > > command. > > > > Are you sure it finished correctly? It should be run as root. 
> > > > > > > > Cheers, > > > > Ivan > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > Rdo-list mailing list > > > > Rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com From puthi at live.com Thu Jan 14 11:20:50 2016 From: puthi at live.com (Soputhi Sea) Date: Thu, 14 Jan 2016 18:20:50 +0700 Subject: [Rdo-list] Openstack Juno Live Migration --block-migrate failed "ValueError: A NetworkModel is required here" Message-ID: Hi,Openstack Juno's Live Migration, I've been trying to get live-migration to work on this version but i keep getting the same error as below.I wonder if anybody can point me to the right direction to where to debug the problem. 
Or if anybody come across this problem before please share some ideas. I google around for a few days already but so far I haven't got any luck.

Note: the same nova, neutron and libvirt configuration work on Icehouse and Liberty on a different cluster, as i tested.

Thanks
Puthi

Nova Version tested: 2014.2.3 and 2014.2.4

Nova Error Log
============
2016-01-14 17:34:08.818 6173 ERROR oslo.messaging.rpc.dispatcher [req-54581412-a194-40d5-9208-b1bf6d04f8d8 ] Exception during message handling: A NetworkModel is required here
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     payload)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 335, in decorated_function
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 323, in decorated_function
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4978, in live_migration
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     expected_attrs=expected)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 300, in _from_db_object
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     db_inst['info_cache'])
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/instance_info_cache.py", line 45, in _from_db_object
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     info_cache[field] = db_obj[field]
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 474, in __setitem__
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     setattr(self, name, value)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 75, in setter
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     field_value = field.coerce(self, name, value)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/fields.py", line 189, in coerce
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     return self._type.coerce(obj, attr, value)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/fields.py", line 516, in coerce
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     raise ValueError(_('A NetworkModel is required here'))
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher ValueError: A NetworkModel is required here
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher

Nova Config
===========================
[DEFAULT]
rpc_backend = qpid
qpid_hostname = management-host
auth_strategy = keystone
my_ip = 10.201.171.244
vnc_enabled = True
novncproxy_host=0.0.0.0
novncproxy_port=6080
novncproxy_base_url=http://management-host:6080/vnc_auto.html
network_api_class = nova.network.neutronv2.api.API
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.201.171.244
[baremetal]
[cells]
[cinder]
[conductor]
[database]
connection = mysql://nova:novadbpassword at db-host/nova
[ephemeral_storage_encryption]
[glance]
host = glance-host
port = 9292
api_servers=$host:$port
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri = http://management-host:5000/v2.0
identity_uri = http://management-host:35357
admin_user = nova
admin_tenant_name = service
admin_password = nova2014agprod2
[libvirt]
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE #, VIR_MIGRATE_TUNNELLED
block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_NON_SHARED_INC, VIR_MIGRATE_LIVE
[matchmaker_redis]
[matchmaker_ring]
[metrics]
[neutron]
url = http://management-host:9696
admin_username = neutron
admin_password = neutronpassword
admin_tenant_name = service
admin_auth_url = http://management-host:35357/v2.0
auth_strategy = keystone
[osapi_v3]
[rdp]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
compute=icehouse
conductor=icehouse
[vmware]
[xenserver]
[zookeeper]

Neutron Config
============
[DEFAULT]
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = management-host
core_plugin = ml2
service_plugins = router
dhcp_lease_duration = 604800
dhcp_agents_per_network = 3
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://management-host:5000
identity_uri = http://management-host:35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutronpassword
auth_host = management-host
auth_protocol = http
auth_port = 35357
[database]
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

Neutron Plugin
============
[ml2]
type_drivers = local,flat
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = physnet3
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
enable_security_group = False
[ovs]
enable_tunneling = False
local_ip = 10.201.171.244
network_vlan_ranges = physnet3
bridge_mappings = physnet3:br-bond0

Libvirt Config
===========
/etc/sysconfig/libvirtd
Uncomment
LIBVIRTD_ARGS="--listen"

/etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

-------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gabriele.guaglianone at gmail.com Thu Jan 14 13:20:54 2016 From: gabriele.guaglianone at gmail.com (Gabriele Guaglianone) Date: Thu, 14 Jan 2016 13:20:54 +0000 Subject: [Rdo-list] Openstack Juno Live Migration --block-migrate failed "ValueError: A NetworkModel is required here" In-Reply-To: References: Message-ID:

Hi all, I'm trying to populate the compute dbs on the controller node, but I'm getting this error; I can't figure out why, because I'm able to log in:

[root at controller nova]# su -s /bin/sh -c "nova-manage db sync" nova
Command failed, please check log for more info
[root at controller nova]# more /var/log/nova/nova-manage.log
2016-01-14 13:11:15.269 4286 CRITICAL nova [-] OperationalError: (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'nova'@'localhost' (using password: YES)")
2016-01-14 13:11:15.269 4286 ERROR nova Traceback (most recent call last):
2016-01-14 13:11:15.269 4286 ERROR nova File "/usr/bin/nova-manage", line 10, in
2016-01-14 13:11:15.269 4286 ERROR nova sys.exit(main())

but:

[root at controller nova]# mysql -u nova -p
Enter password: r00tme
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 12
Server version: 5.5.44-MariaDB MariaDB Server
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>

The connection string in the nova.conf file is connection=mysql://nova:r00tme at controller/nova

Any suggestions? Cheers Gabriele

2016-01-14 11:20 GMT+00:00 Soputhi Sea : > Hi, > > Openstack Juno's Live Migration, I've been trying to get live-migration to > work on this version but i keep getting the same error as below. > I wonder if anybody can point me to the right direction to where to debug > the problem. Or if anybody come across this problem before please share > some ideas.
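The interactive login above succeeded over the local socket, while nova-manage connects using the host in the nova.conf connection string; a common cause of this 1045 error is that the grants do not cover every host nova connects from. A hedged sketch of the grant statements to check or apply (the 'controller' host and the r00tme password come from the mail above; run them via `mysql -u root -p`):

```shell
# Emit the grant statements; apply with: mysql -u root -p -e "$grant_sql"
grant_sql="GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'r00tme';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY 'r00tme';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'r00tme';
FLUSH PRIVILEGES;"
printf '%s\n' "$grant_sql"
```

If the grants already exist, the next thing to compare is the password itself: the one in the connection string must match the one stored for each 'nova'@host account.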
> I google around for a few days already but so far I haven't got any luck. > > Note: the same nova, neutron and libvirt configuration work on Icehouse > and Liberty on a different cluster, as i tested. > > Thanks > Puthi > > Nova Version tested: 2014.2.3 and 2014.2.4 > Nova Error Log > ============ > 2016-01-14 17:34:08.818 6173 ERROR oslo.messaging.rpc.dispatcher > [req-54581412-a194-40d5-9208-b1bf6d04f8d8 ] Exception during message > handling: A NetworkModel is required here > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher Traceback > (most recent call last): > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line > 134, in _dispatch_and_reply > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > incoming.message)) > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line > 177, in _dispatch > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > return self._do_dispatch(endpoint, method, ctxt, args) > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line > 123, in _do_dispatch > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > result = getattr(endpoint, method)(ctxt, **new_args) > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > payload) > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line > 82, in __exit__ > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > six.reraise(self.type_, self.value, self.tb) > 2016-01-14 17:34:08.818 6173 TRACE 
oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > return f(self, context, *args, **kw) > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 335, in > decorated_function > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > kwargs['instance'], e, sys.exc_info()) > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line > 82, in __exit__ > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > six.reraise(self.type_, self.value, self.tb) > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 323, in > decorated_function > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > return function(self, context, *args, **kwargs) > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4978, in > live_migration > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > expected_attrs=expected) > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 300, in > _from_db_object > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > db_inst['info_cache']) > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/nova/objects/instance_info_cache.py", > line 45, in _from_db_object > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > info_cache[field] = db_obj[field] > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 474, in 
> __setitem__ > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > setattr(self, name, value) > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 75, in setter > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > field_value = field.coerce(self, name, value) > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/nova/objects/fields.py", line 189, in > coerce > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > return self._type.coerce(obj, attr, value) > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/site-packages/nova/objects/fields.py", line 516, in > coerce > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher raise > ValueError(_('A NetworkModel is required here')) > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > ValueError: A NetworkModel is required here > 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher > > > Nova Config > =========================== > [DEFAULT] > rpc_backend = qpid > qpid_hostname = management-host > auth_strategy = keystone > my_ip = 10.201.171.244 > vnc_enabled = True > novncproxy_host=0.0.0.0 > novncproxy_port=6080 > novncproxy_base_url=http://management-host:6080/vnc_auto.html > network_api_class = nova.network.neutronv2.api.API > linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver > firewall_driver = nova.virt.firewall.NoopFirewallDriver > vncserver_listen=0.0.0.0 > vncserver_proxyclient_address=10.201.171.244 > [baremetal] > [cells] > [cinder] > [conductor] > [database] > connection = mysql://nova:novadbpassword at db-host/nova > [ephemeral_storage_encryption] > [glance] > host = glance-host > port = 9292 > api_servers=$host:$port > [hyperv] > [image_file_url] > [ironic] > [keymgr] > [keystone_authtoken] > auth_uri = 
http://management-host:5000/v2.0 > identity_uri = http://management-host:35357 > admin_user = nova > admin_tenant_name = service > admin_password = nova2014agprod2 > [libvirt] > live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, > VIR_MIGRATE_LIVE #, VIR_MIGRATE_TUNNELLED > block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, > VIR_MIGRATE_NON_SHARED_INC, VIR_MIGRATE_LIVE > [matchmaker_redis] > [matchmaker_ring] > [metrics] > [neutron] > url = http://management-host:9696 > admin_username = neutron > admin_password = neutronpassword > admin_tenant_name = service > admin_auth_url = http://management-host:35357/v2.0 > auth_strategy = keystone > [osapi_v3] > [rdp] > [serial_console] > [spice] > [ssl] > [trusted_computing] > [upgrade_levels] > compute=icehouse > conductor=icehouse > [vmware] > [xenserver] > [zookeeper] > > > > > Neutron Config > ============ > [DEFAULT] > auth_strategy = keystone > rpc_backend = neutron.openstack.common.rpc.impl_qpid > qpid_hostname = management-host > core_plugin = ml2 > service_plugins = router > dhcp_lease_duration = 604800 > dhcp_agents_per_network = 3 > [matchmaker_redis] > [matchmaker_ring] > [quotas] > [agent] > [keystone_authtoken] > auth_uri = http://management-host:5000 > identity_uri = http://management-host:35357 > admin_tenant_name = service > admin_user = neutron > admin_password = neutronpassword > auth_host = management-host > auth_protocol = http > auth_port = 35357 > [database] > [service_providers] > > service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default > > service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default > > > Neutron Plugin > ============ > [ml2] > type_drivers = local,flat > mechanism_drivers = openvswitch > [ml2_type_flat] > flat_networks = physnet3 > [ml2_type_vlan] > [ml2_type_gre] > tunnel_id_ranges = 1:1000 > [ml2_type_vxlan] > [securitygroup] > 
firewall_driver = neutron.agent.firewall.NoopFirewallDriver > enable_security_group = False > [ovs] > enable_tunneling = False > local_ip = 10.201.171.244 > network_vlan_ranges = physnet3 > bridge_mappings = physnet3:br-bond0 > > > Libvirt Config > =========== > > /etc/sysconfig/libvirtd > > Uncomment > > LIBVIRTD_ARGS="--listen" > > > /etc/libvirt/libvirtd.conf > > listen_tls = 0 > > listen_tcp = 1 > > auth_tcp = "none" > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ibravo at ltgfederal.com Thu Jan 14 14:02:57 2016 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Thu, 14 Jan 2016 09:02:57 -0500 Subject: [Rdo-list] Openstack Juno Live Migration --block-migrate failed "ValueError: A NetworkModel is required here" In-Reply-To: References: Message-ID: <577C97D4-394A-4BA7-ACEE-F65581FD3BA8@ltgfederal.com> Two things that I would test: Check /etc/hosts file to see that you have one entry for controller with the node ip You can also ping controller to see if it resolves correctly Restart nova service to ensure it is reading the latest configuration file - Ignacio Bravo LTG federal > On Jan 14, 2016, at 8:20 AM, Gabriele Guaglianone wrote: > > Hi all, > I'm trying to populate the compute dbs on controller node, but I'm getting this error, I can't figure out why 'cause I'm able to login : > > [root at controller nova]# su -s /bin/sh -c "nova-manage db sync" nova > Command failed, please check log for more info > [root at controller nova]# more /var/log/nova/nova-manage.log > 2016-01-14 13:11:15.269 4286 CRITICAL nova [-] OperationalError: (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'nova'@'localhost' (using password: YES)") > 2016-01-14 13:11:15.269 4286 ERROR nova Traceback (most recent call last): >
2016-01-14 13:11:15.269 4286 ERROR nova File "/usr/bin/nova-manage", line 10, in > 2016-01-14 13:11:15.269 4286 ERROR nova sys.exit(main()) > > but : > > [root at controller nova]# mysql -u nova -p > Enter password: r00tme > Welcome to the MariaDB monitor. Commands end with ; or \g. > Your MariaDB connection id is 12 > Server version: 5.5.44-MariaDB MariaDB Server > > Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others. > > Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. > > MariaDB [(none)]> > > > connection string in nova.conf file is > > connection=mysql://nova:r00tme at controller/nova > > Any suggestions ? > > Cheers > > Gabriele > > > > 2016-01-14 11:20 GMT+00:00 Soputhi Sea : >> Hi, >> >> Openstack Juno's Live Migration, I've been trying to get live-migration to work on this version but i keep getting the same error as below. >> I wonder if anybody can point me to the right direction to where to debug the problem. Or if anybody come across this problem before please share some ideas. >> I google around for a few days already but so far I haven't got any luck. >> >> Note: the same nova, neutron and libvirt configuration work on Icehouse and Liberty on a different cluster, as i tested. 
>> >> Thanks >> Puthi >> >> Nova Version tested: 2014.2.3 and 2014.2.4 >> Nova Error Log >> ============ >> 2016-01-14 17:34:08.818 6173 ERROR oslo.messaging.rpc.dispatcher [req-54581412-a194-40d5-9208-b1bf6d04f8d8 ] Exception during message handling: A NetworkModel is required here >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last): >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher incoming.message)) >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args) >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args) >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher payload) >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__ >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb) >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher return f(self, context, *args, **kw) >> 2016-01-14 
17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 335, in decorated_function >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher kwargs['instance'], e, sys.exc_info()) >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__ >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb) >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 323, in decorated_function >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs) >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4978, in live_migration >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher expected_attrs=expected) >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 300, in _from_db_object >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher db_inst['info_cache']) >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/objects/instance_info_cache.py", line 45, in _from_db_object >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher info_cache[field] = db_obj[field] >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 474, in __setitem__ >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher setattr(self, name, value) >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 75, in setter 
>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher field_value = field.coerce(self, name, value) >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/objects/fields.py", line 189, in coerce >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher return self._type.coerce(obj, attr, value) >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/objects/fields.py", line 516, in coerce >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher raise ValueError(_('A NetworkModel is required here')) >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher ValueError: A NetworkModel is required here >> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >> >> >> Nova Config >> =========================== >> [DEFAULT] >> rpc_backend = qpid >> qpid_hostname = management-host >> auth_strategy = keystone >> my_ip = 10.201.171.244 >> vnc_enabled = True >> novncproxy_host=0.0.0.0 >> novncproxy_port=6080 >> novncproxy_base_url=http://management-host:6080/vnc_auto.html >> network_api_class = nova.network.neutronv2.api.API >> linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver >> firewall_driver = nova.virt.firewall.NoopFirewallDriver >> vncserver_listen=0.0.0.0 >> vncserver_proxyclient_address=10.201.171.244 >> [baremetal] >> [cells] >> [cinder] >> [conductor] >> [database] >> connection = mysql://nova:novadbpassword at db-host/nova >> [ephemeral_storage_encryption] >> [glance] >> host = glance-host >> port = 9292 >> api_servers=$host:$port >> [hyperv] >> [image_file_url] >> [ironic] >> [keymgr] >> [keystone_authtoken] >> auth_uri = http://management-host:5000/v2.0 >> identity_uri = http://management-host:35357 >> admin_user = nova >> admin_tenant_name = service >> admin_password = nova2014agprod2 >> [libvirt] >> live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, 
VIR_MIGRATE_LIVE #, VIR_MIGRATE_TUNNELLED >> block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_NON_SHARED_INC, VIR_MIGRATE_LIVE >> [matchmaker_redis] >> [matchmaker_ring] >> [metrics] >> [neutron] >> url = http://management-host:9696 >> admin_username = neutron >> admin_password = neutronpassword >> admin_tenant_name = service >> admin_auth_url = http://management-host:35357/v2.0 >> auth_strategy = keystone >> [osapi_v3] >> [rdp] >> [serial_console] >> [spice] >> [ssl] >> [trusted_computing] >> [upgrade_levels] >> compute=icehouse >> conductor=icehouse >> [vmware] >> [xenserver] >> [zookeeper] >> >> >> >> >> Neutron Config >> ============ >> [DEFAULT] >> auth_strategy = keystone >> rpc_backend = neutron.openstack.common.rpc.impl_qpid >> qpid_hostname = management-host >> core_plugin = ml2 >> service_plugins = router >> dhcp_lease_duration = 604800 >> dhcp_agents_per_network = 3 >> [matchmaker_redis] >> [matchmaker_ring] >> [quotas] >> [agent] >> [keystone_authtoken] >> auth_uri = http://management-host:5000 >> identity_uri = http://management-host:35357 >> admin_tenant_name = service >> admin_user = neutron >> admin_password = neutronpassword >> auth_host = management-host >> auth_protocol = http >> auth_port = 35357 >> [database] >> [service_providers] >> service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default >> service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default >> >> >> Neutron Plugin >> ============ >> [ml2] >> type_drivers = local,flat >> mechanism_drivers = openvswitch >> [ml2_type_flat] >> flat_networks = physnet3 >> [ml2_type_vlan] >> [ml2_type_gre] >> tunnel_id_ranges = 1:1000 >> [ml2_type_vxlan] >> [securitygroup] >> firewall_driver = neutron.agent.firewall.NoopFirewallDriver >> enable_security_group = False >> [ovs] >> enable_tunneling = False >> local_ip = 10.201.171.244 >> network_vlan_ranges = 
physnet3 >> bridge_mappings = physnet3:br-bond0 >> >> >> Libvirt Config >> =========== >> /etc/sysconfig/libvirtd >> >> Uncomment >> >> LIBVIRTD_ARGS="--listen" >> >> >> >> /etc/libvirt/libvirtd.conf >> >> listen_tls = 0 >> >> listen_tcp = 1 >> >> auth_tcp = "none" >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From gabriele.guaglianone at gmail.com Thu Jan 14 15:35:49 2016 From: gabriele.guaglianone at gmail.com (Gabriele Guaglianone) Date: Thu, 14 Jan 2016 15:35:49 +0000 Subject: [Rdo-list] Openstack Juno Live Migration --block-migrate failed "ValueError: A NetworkModel is required here" In-Reply-To: References: <577C97D4-394A-4BA7-ACEE-F65581FD3BA8@ltgfederal.com> Message-ID: Rookie error ...I changed the wrong section: [api_database] instead of [database] SOLVED... Many thanks.. Gabriele 2016-01-14 14:41 GMT+00:00 Gabriele Guaglianone < gabriele.guaglianone at gmail.com>: > Hi Ignacio, > thank you so much, here my host file > > *cat /etc/hosts* > *127.0.0.1 localhost localhost.localdomain localhost4 > localhost4.localdomain4 * > *::1 localhost localhost.localdomain localhost6 > localhost6.localdomain6 * > *### RDO ip * > *10.20.15.11 controller* > *10.20.15.12 compute0* > *10.20.15.13 compute1* > *10.20.15.14 compute2* > > and > # ping -c 1 controller > PING controller (10.20.15.11) 56(84) bytes of data.
> 64 bytes from controller (10.20.15.11): icmp_seq=1 ttl=64 time=0.043 ms > > Nova restarted but I'm getting the same error when I run: > > [root at controller nova]# su -s /bin/sh -c "nova-manage db sync" nova > Command failed, please check log for more info > [root at controller nova]# more /var/log/nova/nova-manage.log > 2016-01-14 14:40:31.698 5715 CRITICAL nova [-] OperationalError: > (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'nova'@'localhost' > (using password: YES)") > 2016-01-14 14:40:31.698 5715 ERROR nova Traceback (most recent call last): > 2016-01-14 14:40:31.698 5715 ERROR nova File "/usr/bin/nova-manage", > line 10, in > 2016-01-14 14:40:31.698 5715 ERROR nova sys.exit(main()) > 2016-01-14 14:40:31.698 5715 ERROR nova File > "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1443, in main > 2016-01-14 14:40:31.698 5715 ERROR nova ret = fn(*fn_args, **fn_kwargs) > > > 2016-01-14 14:02 GMT+00:00 Ignacio Bravo : > >> Two things that I would test: >> Check etc hosts file to see that you have one entry for controller with >> the node ip >> You can also ping controller to see if it resolves correctly >> Restart nova service to ensure it is reading the latest configuration file >> >> - >> Ignacio Bravo >> LTG federal >> >> >> On Jan 14, 2016, at 8:20 AM, Gabriele Guaglianone < >> gabriele.guaglianone at gmail.com> wrote: >> >> Hi all, >> I'm trying to poupulate the compute dbs on controller node, but I'm >> getting this error, I can't figure out why 'cause I'm able to login : >> >> *[root at controller nova]# su -s /bin/sh -c "nova-manage db sync" nova* >> *Command failed, please check log for more info* >> *[root at controller nova]# more /var/log/nova/nova-manage.log* >> *2016-01-14 13:11:15.269 4286 CRITICAL nova [-] OperationalError: >> (_mysql_exceptions.OperationalError) (1045, "Access denied for user >> 'nova'@'localhost' (using password: YES)")* >> *2016-01-14 13:11:15.269 4286 ERROR nova Traceback (most recent 
call >> last):* >> *2016-01-14 13:11:15.269 4286 ERROR nova File "/usr/bin/nova-manage", >> line 10, in * >> *2016-01-14 13:11:15.269 4286 ERROR nova sys.exit(main())* >> >> but : >> >> *[root at controller nova]# mysql -u nova -p * >> *Enter password: r00tme* >> *Welcome to the MariaDB monitor. Commands end with ; or \g.* >> *Your MariaDB connection id is 12* >> *Server version: 5.5.44-MariaDB MariaDB Server* >> >> *Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.* >> >> *Type 'help;' or '\h' for help. Type '\c' to clear the current input >> statement.* >> >> *MariaDB [(none)]> * >> >> >> connection string in nova.conf file is >> >> *connection=mysql://nova:r00tme at controller/nova* >> >> Any suggestions ? >> >> Cheers >> >> Gabriele >> >> >> >> 2016-01-14 11:20 GMT+00:00 Soputhi Sea : >> >>> Hi, >>> >>> Openstack Juno's Live Migration, I've been trying to get live-migration >>> to work on this version but i keep getting the same error as below. >>> I wonder if anybody can point me to the right direction to where to >>> debug the problem. Or if anybody come across this problem before please >>> share some ideas. >>> I google around for a few days already but so far I haven't got any luck. >>> >>> Note: the same nova, neutron and libvirt configuration work on Icehouse >>> and Liberty on a different cluster, as i tested. 
>>> >>> Thanks >>> Puthi >>> >>> Nova Version tested: 2014.2.3 and 2014.2.4 >>> Nova Error Log >>> ============ >>> 2016-01-14 17:34:08.818 6173 ERROR oslo.messaging.rpc.dispatcher >>> [req-54581412-a194-40d5-9208-b1bf6d04f8d8 ] Exception during message >>> handling: A NetworkModel is required here >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> Traceback (most recent call last): >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line >>> 134, in _dispatch_and_reply >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> incoming.message)) >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line >>> 177, in _dispatch >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> return self._do_dispatch(endpoint, method, ctxt, args) >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line >>> 123, in _do_dispatch >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> result = getattr(endpoint, method)(ctxt, **new_args) >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> payload) >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line >>> 82, in __exit__ >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> six.reraise(self.type_, self.value, self.tb) >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped >>> 2016-01-14 17:34:08.818 6173 
TRACE oslo.messaging.rpc.dispatcher >>> return f(self, context, *args, **kw) >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 335, in >>> decorated_function >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> kwargs['instance'], e, sys.exc_info()) >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line >>> 82, in __exit__ >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> six.reraise(self.type_, self.value, self.tb) >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 323, in >>> decorated_function >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> return function(self, context, *args, **kwargs) >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4978, in >>> live_migration >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> expected_attrs=expected) >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 300, in >>> _from_db_object >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> db_inst['info_cache']) >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/nova/objects/instance_info_cache.py", >>> line 45, in _from_db_object >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> info_cache[field] = db_obj[field] >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 474, in >>> __setitem__ >>> 2016-01-14 17:34:08.818 6173 TRACE 
oslo.messaging.rpc.dispatcher >>> setattr(self, name, value) >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 75, in setter >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> field_value = field.coerce(self, name, value) >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/nova/objects/fields.py", line 189, in >>> coerce >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> return self._type.coerce(obj, attr, value) >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File >>> "/usr/lib/python2.7/site-packages/nova/objects/fields.py", line 516, in >>> coerce >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> raise ValueError(_('A NetworkModel is required here')) >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> ValueError: A NetworkModel is required here >>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher >>> >>> >>> Nova Config >>> =========================== >>> [DEFAULT] >>> rpc_backend = qpid >>> qpid_hostname = management-host >>> auth_strategy = keystone >>> my_ip = 10.201.171.244 >>> vnc_enabled = True >>> novncproxy_host=0.0.0.0 >>> novncproxy_port=6080 >>> novncproxy_base_url=http://management-host:6080/vnc_auto.html >>> network_api_class = nova.network.neutronv2.api.API >>> linuxnet_interface_driver = >>> nova.network.linux_net.LinuxOVSInterfaceDriver >>> firewall_driver = nova.virt.firewall.NoopFirewallDriver >>> vncserver_listen=0.0.0.0 >>> vncserver_proxyclient_address=10.201.171.244 >>> [baremetal] >>> [cells] >>> [cinder] >>> [conductor] >>> [database] >>> connection = mysql://nova:novadbpassword at db-host/nova >>> [ephemeral_storage_encryption] >>> [glance] >>> host = glance-host >>> port = 9292 >>> api_servers=$host:$port >>> [hyperv] >>> [image_file_url] >>> [ironic] >>> [keymgr] 
>>> [keystone_authtoken] >>> auth_uri = http://management-host:5000/v2.0 >>> identity_uri = http://management-host:35357 >>> admin_user = nova >>> admin_tenant_name = service >>> admin_password = nova2014agprod2 >>> [libvirt] >>> live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, >>> VIR_MIGRATE_LIVE #, VIR_MIGRATE_TUNNELLED >>> block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, >>> VIR_MIGRATE_NON_SHARED_INC, VIR_MIGRATE_LIVE >>> [matchmaker_redis] >>> [matchmaker_ring] >>> [metrics] >>> [neutron] >>> url = http://management-host:9696 >>> admin_username = neutron >>> admin_password = neutronpassword >>> admin_tenant_name = service >>> admin_auth_url = http://management-host:35357/v2.0 >>> auth_strategy = keystone >>> [osapi_v3] >>> [rdp] >>> [serial_console] >>> [spice] >>> [ssl] >>> [trusted_computing] >>> [upgrade_levels] >>> compute=icehouse >>> conductor=icehouse >>> [vmware] >>> [xenserver] >>> [zookeeper] >>> >>> >>> >>> >>> Neutron Config >>> ============ >>> [DEFAULT] >>> auth_strategy = keystone >>> rpc_backend = neutron.openstack.common.rpc.impl_qpid >>> qpid_hostname = management-host >>> core_plugin = ml2 >>> service_plugins = router >>> dhcp_lease_duration = 604800 >>> dhcp_agents_per_network = 3 >>> [matchmaker_redis] >>> [matchmaker_ring] >>> [quotas] >>> [agent] >>> [keystone_authtoken] >>> auth_uri = http://management-host:5000 >>> identity_uri = http://management-host:35357 >>> admin_tenant_name = service >>> admin_user = neutron >>> admin_password = neutronpassword >>> auth_host = management-host >>> auth_protocol = http >>> auth_port = 35357 >>> [database] >>> [service_providers] >>> >>> service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default >>> >>> service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default >>> >>> >>> Neutron Plugin >>> ============ >>> [ml2] >>> type_drivers = local,flat 
>>> mechanism_drivers = openvswitch >>> [ml2_type_flat] >>> flat_networks = physnet3 >>> [ml2_type_vlan] >>> [ml2_type_gre] >>> tunnel_id_ranges = 1:1000 >>> [ml2_type_vxlan] >>> [securitygroup] >>> firewall_driver = neutron.agent.firewall.NoopFirewallDriver >>> enable_security_group = False >>> [ovs] >>> enable_tunneling = False >>> local_ip = 10.201.171.244 >>> network_vlan_ranges = physnet3 >>> bridge_mappings = physnet3:br-bond0 >>> >>> >>> Libvirt Config >>> =========== >>> >>> /etc/sysconfig/libvirtd >>> >>> Uncomment >>> >>> LIBVIRTD_ARGS="--listen" >>> >>> >>> /etc/libvirt/libvirtd.conf >>> >>> listen_tls = 0 >>> >>> listen_tcp = 1 >>> >>> auth_tcp = "none" >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thaleslv at yahoo.com Thu Jan 14 20:43:03 2016 From: thaleslv at yahoo.com (Thales) Date: Thu, 14 Jan 2016 20:43:03 +0000 (UTC) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <404941246.12374602.1452758165188.JavaMail.zimbra@redhat.com> References: <404941246.12374602.1452758165188.JavaMail.zimbra@redhat.com> Message-ID: <2059404935.4223021.1452804183973.JavaMail.yahoo@mail.yahoo.com> Hello Javier, Okay, I did what you said. This took some fumbling around. There are 7 files. The two biggest are 13k and 12k, the others are 4k and below. Hope they aren't too big!
== 1> nova-api.log: =============================
2016-01-14 14:24:54.955 5982 INFO oslo_service.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
2016-01-14 14:24:55.025 5982 WARNING nova.api.ec2.cloud [-] Deprecated: The in tree EC2 API is deprecated as of Kilo release and may be removed in a future release. The stackforge ec2-api project http://git.openstack.org/cgit/stackforge/ec2-api/ is the target replacement for this functionality.
2016-01-14 14:24:55.836 5982 INFO nova.wsgi [-] ec2 listening on 0.0.0.0:8773
2016-01-14 14:24:55.836 5982 INFO oslo_service.service [-] Starting 1 workers
2016-01-14 14:24:55.908 5982 INFO oslo_service.service [-] Started child 6072
2016-01-14 14:24:55.984 6072 INFO nova.ec2.wsgi.server [-] (6072) wsgi starting up on http://0.0.0.0:8773/
2016-01-14 14:24:58.471 5982 INFO nova.api.openstack [-] Loaded extensions: ['extensions', 'flavors', 'image-metadata', 'image-size', 'images', 'ips', 'limits', 'os-access-ips', 'os-admin-actions', 'os-admin-password', 'os-agents', 'os-aggregates', 'os-assisted-volume-snapshots', 'os-attach-interfaces', 'os-availability-zone', 'os-baremetal-nodes', 'os-block-device-mapping', 'os-cells', 'os-certificates', 'os-cloudpipe', 'os-config-drive', 'os-console-auth-tokens', 'os-console-output', 'os-consoles', 'os-create-backup', 'os-deferred-delete', 'os-disk-config', 'os-evacuate', 'os-extended-availability-zone', 'os-extended-server-attributes', 'os-extended-status', 'os-extended-volumes', 'os-fixed-ips', 'os-flavor-access', 'os-flavor-extra-specs', 'os-flavor-manage', 'os-flavor-rxtx', 'os-floating-ip-dns', 'os-floating-ip-pools', 'os-floating-ips', 'os-floating-ips-bulk', 'os-fping', 'os-hide-server-addresses', 'os-hosts', 'os-hypervisors', 'os-instance-actions', 'os-instance-usage-audit-log', 'os-keypairs', 'os-lock-server', 'os-migrate-server', 'os-migrations', 'os-multinic', 'os-multiple-create', 'os-networks', 'os-networks-associate', 'os-pause-server', 'os-personality', 'os-preserve-ephemeral-rebuild', 'os-quota-class-sets', 'os-quota-sets', 'os-remote-consoles', 'os-rescue', 'os-scheduler-hints', 'os-security-group-default-rules', 'os-security-groups', 'os-server-diagnostics', 'os-server-external-events', 'os-server-groups', 'os-server-password', 'os-server-usage', 'os-services', 'os-shelve', 'os-simple-tenant-usage', 'os-suspend-server', 'os-tenant-networks', 'os-used-limits', 'os-user-data', 'os-virtual-interfaces', 'os-volumes', 'server-metadata', 'servers', 'versions']
2016-01-14 14:24:58.474 5982 WARNING keystonemiddleware.auth_token [-] Use of the auth_admin_prefix, auth_host, auth_port, auth_protocol, identity_uri, admin_token, admin_user, admin_password, and admin_tenant_name configuration options is deprecated in favor of auth_plugin and related options and may be removed in a future release.
2016-01-14 14:24:58.784 5982 INFO nova.api.openstack [-] Loaded extensions: ['extensions', 'flavors', 'image-metadata', 'image-size', 'images', 'ips', 'limits', 'os-access-ips', 'os-admin-actions', 'os-admin-password', 'os-agents', 'os-aggregates', 'os-assisted-volume-snapshots', 'os-attach-interfaces', 'os-availability-zone', 'os-baremetal-nodes', 'os-block-device-mapping', 'os-cells', 'os-certificates', 'os-cloudpipe', 'os-config-drive', 'os-console-auth-tokens', 'os-console-output', 'os-consoles', 'os-create-backup', 'os-deferred-delete', 'os-disk-config', 'os-evacuate', 'os-extended-availability-zone', 'os-extended-server-attributes', 'os-extended-status', 'os-extended-volumes', 'os-fixed-ips', 'os-flavor-access', 'os-flavor-extra-specs', 'os-flavor-manage', 'os-flavor-rxtx', 'os-floating-ip-dns', 'os-floating-ip-pools', 'os-floating-ips', 'os-floating-ips-bulk', 'os-fping', 'os-hide-server-addresses', 'os-hosts', 'os-hypervisors', 'os-instance-actions', 'os-instance-usage-audit-log', 'os-keypairs', 'os-lock-server', 'os-migrate-server', 'os-migrations', 'os-multinic', 'os-multiple-create', 'os-networks', 'os-networks-associate', 'os-pause-server', 'os-personality', 'os-preserve-ephemeral-rebuild', 'os-quota-class-sets', 'os-quota-sets', 'os-remote-consoles', 'os-rescue', 'os-scheduler-hints', 'os-security-group-default-rules', 'os-security-groups', 'os-server-diagnostics', 'os-server-external-events', 'os-server-groups', 'os-server-password', 'os-server-usage', 'os-services', 'os-shelve', 'os-simple-tenant-usage', 'os-suspend-server', 'os-tenant-networks', 'os-used-limits', 'os-user-data', 'os-virtual-interfaces', 'os-volumes', 'server-metadata', 'servers', 'versions']
2016-01-14 14:24:58.786 5982 WARNING keystonemiddleware.auth_token [-] Use of the auth_admin_prefix, auth_host, auth_port, auth_protocol, identity_uri, admin_token, admin_user, admin_password, and admin_tenant_name configuration options is deprecated in favor of auth_plugin and related options and may be removed in a future release.
2016-01-14 14:24:59.339 5982 INFO nova.api.openstack [-] Loaded extensions: ['extensions', 'flavors', 'image-metadata', 'image-size', 'images', 'ips', 'limits', 'os-access-ips', 'os-admin-actions', 'os-admin-password', 'os-agents', 'os-aggregates', 'os-assisted-volume-snapshots', 'os-attach-interfaces', 'os-availability-zone', 'os-baremetal-nodes', 'os-block-device-mapping', 'os-cells', 'os-certificates', 'os-cloudpipe', 'os-config-drive', 'os-console-auth-tokens', 'os-console-output', 'os-consoles', 'os-create-backup', 'os-deferred-delete', 'os-disk-config', 'os-evacuate', 'os-extended-availability-zone', 'os-extended-server-attributes', 'os-extended-status', 'os-extended-volumes', 'os-fixed-ips', 'os-flavor-access', 'os-flavor-extra-specs', 'os-flavor-manage', 'os-flavor-rxtx', 'os-floating-ip-dns', 'os-floating-ip-pools', 'os-floating-ips', 'os-floating-ips-bulk', 'os-fping', 'os-hide-server-addresses', 'os-hosts', 'os-hypervisors', 'os-instance-actions', 'os-instance-usage-audit-log', 'os-keypairs', 'os-lock-server', 'os-migrate-server', 'os-migrations', 'os-multinic',
'os-multiple-create', 'os-networks', 'os-networks-associate', 'os-pause-server', 'os-personality', 'os-preserve-ephemeral-rebuild', 'os-quota-class-sets', 'os-quota-sets', 'os-remote-consoles', 'os-rescue', 'os-scheduler-hints', 'os-security-group-default-rules', 'os-security-groups', 'os-server-diagnostics', 'os-server-external-events', 'os-server-groups', 'os-server-password', 'os-server-usage', 'os-services', 'os-shelve', 'os-simple-tenant-usage', 'os-suspend-server', 'os-tenant-networks', 'os-used-limits', 'os-user-data', 'os-virtual-interfaces', 'os-volumes', 'server-metadata', 'servers', 'versions']2016-01-14 14:24:59.341 5982 WARNING keystonemiddleware.auth_token [-] Use of the auth_admin_prefix, auth_host, auth_port, auth_protocol, identity_uri, admin_token, admin_user, admin_password, and admin_tenant_name configuration options is deprecated in favor of auth_plugin and related options and may be removed in a future release.2016-01-14 14:24:59.343 5982 INFO nova.wsgi [-] osapi_compute listening on 0.0.0.0:87742016-01-14 14:24:59.343 5982 INFO oslo_service.service [-] Starting 1 workers2016-01-14 14:24:59.356 5982 INFO oslo_service.service [-] Started child 60992016-01-14 14:24:59.363 5982 INFO nova.network.driver [-] Loading network driver 'nova.network.linux_net'2016-01-14 14:24:59.364 5982 WARNING oslo_config.cfg [-] Option "lock_path" from group "DEFAULT" is deprecated. 
Use option "lock_path" from group "oslo_concurrency".2016-01-14 14:24:59.380 6099 INFO nova.osapi_compute.wsgi.server [-] (6099) wsgi starting up on http://0.0.0.0:8774/2016-01-14 14:24:59.693 5982 INFO nova.wsgi [-] metadata listening on 0.0.0.0:87752016-01-14 14:24:59.694 5982 INFO oslo_service.service [-] Starting 1 workers2016-01-14 14:24:59.696 5982 INFO oslo_service.service [-] Started child 61062016-01-14 14:24:59.726 6106 INFO nova.metadata.wsgi.server [-] (6106) wsgi starting up on http://0.0.0.0:8775/2016-01-14 14:24:59.737 5982 WARNING oslo_config.cfg [-] Option "amqp_durable_queues" from group "DEFAULT" is deprecated. Use option "amqp_durable_queues" from group "oslo_messaging_rabbit".2016-01-14 14:24:59.752 5982 WARNING oslo_config.cfg [-] Option "admin_auth_url" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:59.752 5982 WARNING oslo_config.cfg [-] Option "admin_password" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:59.753 5982 WARNING oslo_config.cfg [-] Option "admin_tenant_name" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:59.753 5982 WARNING oslo_config.cfg [-] Option "admin_username" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:59.753 5982 WARNING oslo_config.cfg [-] Option "auth_strategy" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:59.754 5982 WARNING oslo_config.cfg [-] Option "vnc_enabled" from group "DEFAULT" is deprecated. Use option "enabled" from group "vnc".2016-01-14 14:24:59.755 5982 WARNING oslo_config.cfg [-] Option "vnc_keymap" from group "DEFAULT" is deprecated. 
Use option "keymap" from group "vnc".2016-01-14 14:24:59.755 5982 WARNING oslo_config.cfg [-] Option "novncproxy_base_url" from group "DEFAULT" is deprecated. Use option "novncproxy_base_url" from group "vnc".2016-01-14 14:24:59.755 5982 WARNING oslo_config.cfg [-] Option "vncserver_listen" from group "DEFAULT" is deprecated. Use option "vncserver_listen" from group "vnc".2016-01-14 14:24:59.756 5982 WARNING oslo_config.cfg [-] Option "vncserver_proxyclient_address" from group "DEFAULT" is deprecated. Use option "vncserver_proxyclient_address" from group "vnc".2016-01-14 14:24:59.761 5982 WARNING oslo_config.cfg [-] Option "sql_connection" from group "DEFAULT" is deprecated. Use option "connection" from group "database".2016-01-14 14:26:40.507 6099 INFO nova.osapi_compute.wsgi.server [req-fc48f9a9-87cf-4697-83b1-204c484dee12 c2114578a647492c985508e88c06f24b c73d477ea608499eb117fb79b28bff80 - - -] 192.168.1.12 "GET /v2/ HTTP/1.1" status: 200 len: 574 time: 4.52623992016-01-14 14:26:43.786 6099 WARNING oslo_config.cfg [req-fdc4d3ba-9f6b-4b98-85f9-51dfeb5dca84 c2114578a647492c985508e88c06f24b c73d477ea608499eb117fb79b28bff80 - - -] Option "sql_connection" from group "DEFAULT" is deprecated. 
Use option "connection" from group "database".2016-01-14 14:26:44.407 6099 INFO nova.api.openstack.wsgi [req-fdc4d3ba-9f6b-4b98-85f9-51dfeb5dca84 c2114578a647492c985508e88c06f24b c73d477ea608499eb117fb79b28bff80 - - -] HTTP exception thrown: Keypair data is invalid: failed to generate fingerprint2016-01-14 14:26:44.408 6099 INFO nova.osapi_compute.wsgi.server [req-fdc4d3ba-9f6b-4b98-85f9-51dfeb5dca84 c2114578a647492c985508e88c06f24b c73d477ea608499eb117fb79b28bff80 - - -] 192.168.1.12 "POST /v2/c73d477ea608499eb117fb79b28bff80/os-keypairs HTTP/1.1" status: 400 len: 319 time: 3.44190722016-01-14 14:27:27.897 6072 INFO nova.wsgi [-] Stopping WSGI server.2016-01-14 14:27:27.898 6072 INFO nova.ec2.wsgi.server [-] (6072) wsgi exited, is_accepting=True2016-01-14 14:27:27.899 6072 INFO nova.wsgi [-] WSGI server has stopped.2016-01-14 14:27:27.906 6099 INFO nova.wsgi [-] Stopping WSGI server.2016-01-14 14:27:27.906 6099 INFO nova.osapi_compute.wsgi.server [-] (6099) wsgi exited, is_accepting=True2016-01-14 14:27:27.906 6099 INFO nova.wsgi [-] WSGI server has stopped.2016-01-14 14:27:27.916 6106 INFO nova.wsgi [-] Stopping WSGI server.2016-01-14 14:27:27.917 6106 INFO nova.metadata.wsgi.server [-] (6106) wsgi exited, is_accepting=True2016-01-14 14:27:27.917 6106 INFO nova.wsgi [-] WSGI server has stopped.2016-01-14 14:27:27.924 5982 INFO oslo_service.service [-] Caught SIGTERM, stopping children2016-01-14 14:27:27.925 5982 INFO nova.wsgi [-] Stopping WSGI server.2016-01-14 14:27:27.925 5982 INFO nova.wsgi [-] Stopping WSGI server.2016-01-14 14:27:27.925 5982 INFO nova.wsgi [-] Stopping WSGI server.2016-01-14 14:27:27.926 5982 INFO oslo_service.service [-] Waiting on 3 children to exit2016-01-14 14:27:27.926 5982 INFO oslo_service.service [-] Child 6072 exited with status 02016-01-14 14:27:27.926 5982 INFO oslo_service.service [-] Child 6106 exited with status 02016-01-14 14:27:28.060 5982 INFO oslo_service.service [-] Child 6099 exited with status 0 == 2> 
nova-cert.log:============================= 2016-01-14 14:24:54.625 5983 INFO oslo_service.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative2016-01-14 14:24:54.742 5983 WARNING oslo_config.cfg [req-b41610bf-c7c6-428f-802a-5e9caa359c25 - - - - -] Option "amqp_durable_queues" from group "DEFAULT" is deprecated. Use option "amqp_durable_queues" from group "oslo_messaging_rabbit".2016-01-14 14:24:54.744 5983 WARNING oslo_config.cfg [req-b41610bf-c7c6-428f-802a-5e9caa359c25 - - - - -] Option "sql_connection" from group "DEFAULT" is deprecated. Use option "connection" from group "database".2016-01-14 14:24:54.831 5983 WARNING oslo_config.cfg [req-b41610bf-c7c6-428f-802a-5e9caa359c25 - - - - -] Option "lock_path" from group "DEFAULT" is deprecated. Use option "lock_path" from group "oslo_concurrency".2016-01-14 14:24:54.833 5983 WARNING oslo_config.cfg [req-b41610bf-c7c6-428f-802a-5e9caa359c25 - - - - -] Option "admin_auth_url" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:54.834 5983 WARNING oslo_config.cfg [req-b41610bf-c7c6-428f-802a-5e9caa359c25 - - - - -] Option "admin_password" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:54.834 5983 WARNING oslo_config.cfg [req-b41610bf-c7c6-428f-802a-5e9caa359c25 - - - - -] Option "admin_tenant_name" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:54.835 5983 WARNING oslo_config.cfg [req-b41610bf-c7c6-428f-802a-5e9caa359c25 - - - - -] Option "admin_username" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:54.835 5983 WARNING oslo_config.cfg [req-b41610bf-c7c6-428f-802a-5e9caa359c25 - - - - -] Option "auth_strategy" from group "neutron" is deprecated for removal. 
?Its value may be silently ignored in the future.2016-01-14 14:24:54.836 5983 INFO nova.service [-] Starting cert node (version 12.0.0-3.94d6b69git.el7)2016-01-14 14:24:56.215 5983 INFO oslo.messaging._drivers.impl_rabbit [req-31394480-e4c8-48ed-ad6e-c688f6a48aab - - - - -] Connecting to AMQP server on 192.168.1.12:56722016-01-14 14:24:56.326 5983 INFO oslo.messaging._drivers.impl_rabbit [req-31394480-e4c8-48ed-ad6e-c688f6a48aab - - - - -] Connected to AMQP server on 192.168.1.12:56722016-01-14 14:27:27.941 5983 INFO oslo_service.service [req-b41610bf-c7c6-428f-802a-5e9caa359c25 - - - - -] Caught SIGTERM, exiting2016-01-14 14:27:27.941 5983 WARNING oslo_messaging.server [req-b41610bf-c7c6-428f-802a-5e9caa359c25 - - - - -] start/stop/wait must be called in the same thread2016-01-14 14:27:27.942 5983 WARNING oslo_messaging.server [req-b41610bf-c7c6-428f-802a-5e9caa359c25 - - - - -] start/stop/wait must be called in the same thread == 3> nova-compute.log:============================= 2016-01-14 14:24:54.763 5984 INFO oslo_service.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative2016-01-14 14:24:55.194 5984 INFO nova.virt.driver [-] Loading compute driver 'libvirt.LibvirtDriver'2016-01-14 14:24:55.896 5984 WARNING oslo_config.cfg [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Option "amqp_durable_queues" from group "DEFAULT" is deprecated. 
Use option "amqp_durable_queues" from group "oslo_messaging_rabbit".2016-01-14 14:24:55.904 5984 INFO oslo.messaging._drivers.impl_rabbit [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Connecting to AMQP server on 192.168.1.12:56722016-01-14 14:24:56.051 5984 INFO oslo.messaging._drivers.impl_rabbit [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Connected to AMQP server on 192.168.1.12:56722016-01-14 14:24:56.068 5984 INFO oslo.messaging._drivers.impl_rabbit [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Connecting to AMQP server on 192.168.1.12:56722016-01-14 14:24:56.144 5984 INFO oslo.messaging._drivers.impl_rabbit [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Connected to AMQP server on 192.168.1.12:56722016-01-14 14:24:56.537 5984 WARNING oslo_config.cfg [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Option "admin_auth_url" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:56.538 5984 WARNING oslo_config.cfg [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Option "admin_password" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:56.538 5984 WARNING oslo_config.cfg [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Option "admin_tenant_name" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:56.539 5984 WARNING oslo_config.cfg [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Option "admin_username" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:56.539 5984 WARNING oslo_config.cfg [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Option "auth_strategy" from group "neutron" is deprecated for removal. 
Its value may be silently ignored in the future.2016-01-14 14:24:56.540 5984 WARNING oslo_config.cfg [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Option "vnc_enabled" from group "DEFAULT" is deprecated. Use option "enabled" from group "vnc".2016-01-14 14:24:56.541 5984 WARNING oslo_config.cfg [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Option "vnc_keymap" from group "DEFAULT" is deprecated. Use option "keymap" from group "vnc".2016-01-14 14:24:56.541 5984 WARNING oslo_config.cfg [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Option "novncproxy_base_url" from group "DEFAULT" is deprecated. Use option "novncproxy_base_url" from group "vnc".2016-01-14 14:24:56.541 5984 WARNING oslo_config.cfg [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Option "vncserver_listen" from group "DEFAULT" is deprecated. Use option "vncserver_listen" from group "vnc".2016-01-14 14:24:56.542 5984 WARNING oslo_config.cfg [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Option "vncserver_proxyclient_address" from group "DEFAULT" is deprecated. Use option "vncserver_proxyclient_address" from group "vnc".2016-01-14 14:24:56.546 5984 WARNING oslo_config.cfg [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Option "lock_path" from group "DEFAULT" is deprecated. Use option "lock_path" from group "oslo_concurrency".2016-01-14 14:24:56.547 5984 WARNING oslo_config.cfg [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Option "sql_connection" from group "DEFAULT" is deprecated. Use option "connection" from group "database".2016-01-14 14:24:56.550 5984 INFO nova.service [-] Starting compute node (version 12.0.0-3.94d6b69git.el7)2016-01-14 14:24:57.413 5984 INFO nova.virt.libvirt.driver [-] Connection event '1' reason 'None'2016-01-14 14:24:57.448 5984 INFO nova.virt.libvirt.host [req-89a58e3d-bc1f-4f48-8737-ce3e079d16c7 - - - - -] Libvirt host capabilities [capabilities XML mangled in the archive; recoverable details: host UUID f3817390-5b5b-4413-9eb1-4691d7883a56; arch x86_64, CPU model cpu64-rhel6, vendor AMD; migration transports tcp and rdma; memory 10239544 KiB; security models selinux (svirt_t / svirt_tcg_t) and dac (+107:+107); hvm guest support, 32- and 64-bit, via /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.0.0 (pc) and rhel6.0.0 through rhel6.6.0] 2016-01-14 14:24:57.799 5984 WARNING nova.compute.monitors [req-89a58e3d-bc1f-4f48-8737-ce3e079d16c7 - - - - -] Excluding nova.compute.monitors.cpu monitor virt_driver.
Not in the list of enabled monitors (CONF.compute_monitors).2016-01-14 14:24:57.801 5984 INFO nova.compute.resource_tracker [req-89a58e3d-bc1f-4f48-8737-ce3e079d16c7 - - - - -] Auditing locally available compute resources for node localhost.localdomain2016-01-14 14:24:58.145 5984 INFO nova.compute.resource_tracker [req-89a58e3d-bc1f-4f48-8737-ce3e079d16c7 - - - - -] Total usable vcpus: 1, total allocated vcpus: 02016-01-14 14:24:58.146 5984 INFO nova.compute.resource_tracker [req-89a58e3d-bc1f-4f48-8737-ce3e079d16c7 - - - - -] Final resource view: name=localhost.localdomain phys_ram=9601MB used_ram=512MB phys_disk=44GB used_disk=0GB total_vcpus=1 used_vcpus=0 pci_stats=None2016-01-14 14:24:58.234 5984 INFO nova.compute.resource_tracker [req-89a58e3d-bc1f-4f48-8737-ce3e079d16c7 - - - - -] Compute_service record updated for localhost.localdomain:localhost.localdomain2016-01-14 14:24:58.242 5984 INFO oslo.messaging._drivers.impl_rabbit [req-89a58e3d-bc1f-4f48-8737-ce3e079d16c7 - - - - -] Connecting to AMQP server on 192.168.1.12:56722016-01-14 14:24:58.289 5984 INFO oslo.messaging._drivers.impl_rabbit [req-89a58e3d-bc1f-4f48-8737-ce3e079d16c7 - - - - -] Connected to AMQP server on 192.168.1.12:56722016-01-14 14:25:54.472 5984 INFO nova.compute.resource_tracker [req-3584b44e-f99e-41e3-bc33-da31d79ed887 - - - - -] Auditing locally available compute resources for node localhost.localdomain2016-01-14 14:25:54.791 5984 INFO nova.compute.resource_tracker [req-3584b44e-f99e-41e3-bc33-da31d79ed887 - - - - -] Total usable vcpus: 1, total allocated vcpus: 02016-01-14 14:25:54.792 5984 INFO nova.compute.resource_tracker [req-3584b44e-f99e-41e3-bc33-da31d79ed887 - - - - -] Final resource view: name=localhost.localdomain phys_ram=9601MB used_ram=512MB phys_disk=44GB used_disk=0GB total_vcpus=1 used_vcpus=0 pci_stats=None2016-01-14 14:25:54.863 5984 INFO nova.compute.resource_tracker [req-3584b44e-f99e-41e3-bc33-da31d79ed887 - - - - -] Compute_service record updated for 
localhost.localdomain:localhost.localdomain2016-01-14 14:26:54.473 5984 INFO nova.compute.resource_tracker [req-3584b44e-f99e-41e3-bc33-da31d79ed887 - - - - -] Auditing locally available compute resources for node localhost.localdomain2016-01-14 14:26:54.618 5984 INFO nova.compute.resource_tracker [req-3584b44e-f99e-41e3-bc33-da31d79ed887 - - - - -] Total usable vcpus: 1, total allocated vcpus: 02016-01-14 14:26:54.618 5984 INFO nova.compute.resource_tracker [req-3584b44e-f99e-41e3-bc33-da31d79ed887 - - - - -] Final resource view: name=localhost.localdomain phys_ram=9601MB used_ram=512MB phys_disk=44GB used_disk=0GB total_vcpus=1 used_vcpus=0 pci_stats=None2016-01-14 14:26:54.691 5984 INFO nova.compute.resource_tracker [req-3584b44e-f99e-41e3-bc33-da31d79ed887 - - - - -] Compute_service record updated for localhost.localdomain:localhost.localdomain2016-01-14 14:27:27.945 5984 INFO oslo_service.service [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Caught SIGTERM, exiting2016-01-14 14:27:27.948 5984 WARNING oslo_messaging.server [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] start/stop/wait must be called in the same thread2016-01-14 14:27:27.948 5984 WARNING oslo_messaging.server [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] start/stop/wait must be called in the same thread == 4> nova-conductor.log:============================= 2016-01-14 14:24:54.731 5985 INFO oslo_service.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative2016-01-14 14:24:54.880 5985 WARNING oslo_config.cfg [req-2519cd40-5299-46fd-b3e4-6188cf6a679e - - - - -] Option "amqp_durable_queues" from group "DEFAULT" is deprecated. Use option "amqp_durable_queues" from group "oslo_messaging_rabbit".2016-01-14 14:24:54.882 5985 WARNING oslo_config.cfg [req-2519cd40-5299-46fd-b3e4-6188cf6a679e - - - - -] Option "sql_connection" from group "DEFAULT" is deprecated. 
Use option "connection" from group "database".2016-01-14 14:24:54.885 5985 WARNING oslo_config.cfg [req-2519cd40-5299-46fd-b3e4-6188cf6a679e - - - - -] Option "lock_path" from group "DEFAULT" is deprecated. Use option "lock_path" from group "oslo_concurrency".2016-01-14 14:24:54.886 5985 WARNING oslo_config.cfg [req-2519cd40-5299-46fd-b3e4-6188cf6a679e - - - - -] Option "admin_auth_url" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:54.886 5985 WARNING oslo_config.cfg [req-2519cd40-5299-46fd-b3e4-6188cf6a679e - - - - -] Option "admin_password" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:54.887 5985 WARNING oslo_config.cfg [req-2519cd40-5299-46fd-b3e4-6188cf6a679e - - - - -] Option "admin_tenant_name" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:54.887 5985 WARNING oslo_config.cfg [req-2519cd40-5299-46fd-b3e4-6188cf6a679e - - - - -] Option "admin_username" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:54.887 5985 WARNING oslo_config.cfg [req-2519cd40-5299-46fd-b3e4-6188cf6a679e - - - - -] Option "auth_strategy" from group "neutron" is deprecated for removal. 
?Its value may be silently ignored in the future.2016-01-14 14:24:54.889 5985 INFO nova.service [-] Starting conductor node (version 12.0.0-3.94d6b69git.el7)2016-01-14 14:24:56.295 5985 INFO oslo.messaging._drivers.impl_rabbit [req-8fcaaa51-6659-4581-949c-5bc1e18e8035 - - - - -] Connecting to AMQP server on 192.168.1.12:56722016-01-14 14:24:56.406 5985 INFO oslo.messaging._drivers.impl_rabbit [req-8fcaaa51-6659-4581-949c-5bc1e18e8035 - - - - -] Connected to AMQP server on 192.168.1.12:56722016-01-14 14:24:56.433 5985 INFO oslo.messaging._drivers.impl_rabbit [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Connecting to AMQP server on 192.168.1.12:56722016-01-14 14:24:56.498 5985 INFO oslo.messaging._drivers.impl_rabbit [req-82efeee3-f46f-4d44-b2d2-3358020c83a7 - - - - -] Connected to AMQP server on 192.168.1.12:56722016-01-14 14:27:27.953 5985 INFO oslo_service.service [req-2519cd40-5299-46fd-b3e4-6188cf6a679e - - - - -] Caught SIGTERM, exiting2016-01-14 14:27:27.953 5985 WARNING oslo_messaging.server [req-2519cd40-5299-46fd-b3e4-6188cf6a679e - - - - -] start/stop/wait must be called in the same thread2016-01-14 14:27:27.954 5985 WARNING oslo_messaging.server [req-2519cd40-5299-46fd-b3e4-6188cf6a679e - - - - -] start/stop/wait must be called in the same thread == 5> nova-consoleauth.log:============================= 2016-01-14 14:24:54.897 5986 INFO oslo_service.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative2016-01-14 14:24:54.964 5986 WARNING oslo_config.cfg [req-80a56dbd-140e-4770-b698-9742e0df223c - - - - -] Option "amqp_durable_queues" from group "DEFAULT" is deprecated. Use option "amqp_durable_queues" from group "oslo_messaging_rabbit".2016-01-14 14:24:54.966 5986 WARNING oslo_config.cfg [req-80a56dbd-140e-4770-b698-9742e0df223c - - - - -] Option "sql_connection" from group "DEFAULT" is deprecated. 
Use option "connection" from group "database".2016-01-14 14:24:54.969 5986 WARNING oslo_config.cfg [req-80a56dbd-140e-4770-b698-9742e0df223c - - - - -] Option "lock_path" from group "DEFAULT" is deprecated. Use option "lock_path" from group "oslo_concurrency".2016-01-14 14:24:55.054 5986 WARNING oslo_config.cfg [req-80a56dbd-140e-4770-b698-9742e0df223c - - - - -] Option "admin_auth_url" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:55.055 5986 WARNING oslo_config.cfg [req-80a56dbd-140e-4770-b698-9742e0df223c - - - - -] Option "admin_password" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:55.055 5986 WARNING oslo_config.cfg [req-80a56dbd-140e-4770-b698-9742e0df223c - - - - -] Option "admin_tenant_name" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:55.056 5986 WARNING oslo_config.cfg [req-80a56dbd-140e-4770-b698-9742e0df223c - - - - -] Option "admin_username" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:55.056 5986 WARNING oslo_config.cfg [req-80a56dbd-140e-4770-b698-9742e0df223c - - - - -] Option "auth_strategy" from group "neutron" is deprecated for removal. 
?Its value may be silently ignored in the future.2016-01-14 14:24:55.057 5986 INFO nova.service [-] Starting consoleauth node (version 12.0.0-3.94d6b69git.el7)2016-01-14 14:24:56.477 5986 INFO oslo.messaging._drivers.impl_rabbit [req-0f87275c-dcfd-4ae2-b314-38f6cb2cfa3c - - - - -] Connecting to AMQP server on 192.168.1.12:56722016-01-14 14:24:56.591 5986 INFO oslo.messaging._drivers.impl_rabbit [req-0f87275c-dcfd-4ae2-b314-38f6cb2cfa3c - - - - -] Connected to AMQP server on 192.168.1.12:56722016-01-14 14:27:27.956 5986 INFO oslo_service.service [req-80a56dbd-140e-4770-b698-9742e0df223c - - - - -] Caught SIGTERM, exiting2016-01-14 14:27:27.957 5986 WARNING oslo_messaging.server [req-80a56dbd-140e-4770-b698-9742e0df223c - - - - -] start/stop/wait must be called in the same thread2016-01-14 14:27:27.957 5986 WARNING oslo_messaging.server [req-80a56dbd-140e-4770-b698-9742e0df223c - - - - -] start/stop/wait must be called in the same thread == 6> nova-novncproxy.log:============================= 2016-01-14 14:24:40.633 5987 INFO nova.console.websocketproxy [-] WebSocket server settings:2016-01-14 14:24:40.634 5987 INFO nova.console.websocketproxy [-] ? - Listen on 0.0.0.0:60802016-01-14 14:24:40.634 5987 INFO nova.console.websocketproxy [-] ? - Flash security policy server2016-01-14 14:24:40.634 5987 INFO nova.console.websocketproxy [-] ? - Web server. Web root: /usr/share/novnc2016-01-14 14:24:40.634 5987 INFO nova.console.websocketproxy [-] ? - No SSL/TLS support (no cert file)2016-01-14 14:24:40.636 5987 INFO nova.console.websocketproxy [-] ? - proxying from 0.0.0.0:6080 to None:None2016-01-14 14:27:27.961 5987 INFO nova.console.websocketproxy [-] Got SIGTERM, exiting2016-01-14 14:27:27.961 5987 INFO nova.console.websocketproxy [-] In exit == 7> nova-scheduler.log:============================= 2016-01-14 14:24:56.156 5988 WARNING oslo_config.cfg [req-ab6740c0-b5c9-483f-838c-112adf53ab3b - - - - -] Option "sql_connection" from group "DEFAULT" is deprecated. 
Use option "connection" from group "database".2016-01-14 14:24:57.195 5988 INFO oslo_service.periodic_task [req-ab6740c0-b5c9-483f-838c-112adf53ab3b - - - - -] Skipping periodic task _periodic_update_dns because its interval is negative2016-01-14 14:24:57.259 5988 WARNING oslo_config.cfg [req-ab6740c0-b5c9-483f-838c-112adf53ab3b - - - - -] Option "amqp_durable_queues" from group "DEFAULT" is deprecated. Use option "amqp_durable_queues" from group "oslo_messaging_rabbit".2016-01-14 14:24:57.263 5988 WARNING oslo_config.cfg [req-ab6740c0-b5c9-483f-838c-112adf53ab3b - - - - -] Option "lock_path" from group "DEFAULT" is deprecated. Use option "lock_path" from group "oslo_concurrency".2016-01-14 14:24:57.265 5988 WARNING oslo_config.cfg [req-ab6740c0-b5c9-483f-838c-112adf53ab3b - - - - -] Option "admin_auth_url" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:57.265 5988 WARNING oslo_config.cfg [req-ab6740c0-b5c9-483f-838c-112adf53ab3b - - - - -] Option "admin_password" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:57.300 5988 WARNING oslo_config.cfg [req-ab6740c0-b5c9-483f-838c-112adf53ab3b - - - - -] Option "admin_tenant_name" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:57.300 5988 WARNING oslo_config.cfg [req-ab6740c0-b5c9-483f-838c-112adf53ab3b - - - - -] Option "admin_username" from group "neutron" is deprecated for removal. ?Its value may be silently ignored in the future.2016-01-14 14:24:57.301 5988 WARNING oslo_config.cfg [req-ab6740c0-b5c9-483f-838c-112adf53ab3b - - - - -] Option "auth_strategy" from group "neutron" is deprecated for removal. 
?Its value may be silently ignored in the future.2016-01-14 14:24:57.346 5988 INFO nova.service [-] Starting scheduler node (version 12.0.0-3.94d6b69git.el7)2016-01-14 14:24:57.645 5988 INFO oslo.messaging._drivers.impl_rabbit [req-ed1ac78a-34ff-4403-8201-42fa2eb620f1 - - - - -] Connecting to AMQP server on 192.168.1.12:56722016-01-14 14:24:57.728 5988 INFO oslo.messaging._drivers.impl_rabbit [req-ed1ac78a-34ff-4403-8201-42fa2eb620f1 - - - - -] Connected to AMQP server on 192.168.1.12:56722016-01-14 14:26:54.442 5988 INFO nova.scheduler.host_manager [req-75b7b2b0-29bb-4004-864d-6be13cc6ccf8 - - - - -] Received a sync request from an unknown host 'localhost.localdomain'. Re-created its InstanceList.2016-01-14 14:27:27.975 5988 INFO oslo_service.service [req-ab6740c0-b5c9-483f-838c-112adf53ab3b - - - - -] Caught SIGTERM, exiting2016-01-14 14:27:27.975 5988 WARNING oslo_messaging.server [req-ab6740c0-b5c9-483f-838c-112adf53ab3b - - - - -] start/stop/wait must be called in the same thread2016-01-14 14:27:27.976 5988 WARNING oslo_messaging.server [req-ab6740c0-b5c9-483f-838c-112adf53ab3b - - - - -] start/stop/wait must be called in the same thread On Thursday, January 14, 2016 1:56 AM, Javier Pena wrote: ----- Original Message ----- > "I wasn't able to reproduce your issue." > Thanks for that, Sasha! > I have looked through the eight log files in /var/log/nova and I found three > things, which you can see here: > http://paste.openstack.org/show/483844/ > It looks like the SQL server is not working right? Hi John, This just looks like a startup thing, where the Nova services were up before the database was available. It is working as expected, as services are retrying and eventually connect. 
To simplify the log issue, let's try this:
- openstack-service stop nova
- go to /var/log/nova and move all log files to a backup location, so they are empty
- openstack-service start nova
- Try to reproduce the issue
- openstack-service stop nova

The log file size should be manageable now and there should be little noise. If they are not too big, could you try pasting them? There should be something in api.log to move forward.

Regards,
Javier

> System resources. I'm looking at the Gnome System Monitor.
> Total Memory is 9.4 gigabytes, with 3.2 gigabytes used.
> But, this is strange, it says that CPU usage is at 100%. Not sure how that
> can be.
> vmtoolsd is using around 98% of the cpu! I have an 8 core machine, so I'm not
> sure how this is measuring.
> ...John
> On Wednesday, January 13, 2016 9:48 PM, Sasha Chuzhoy
> wrote:
> Hi John,
> I wasn't able to reproduce your issue.
> Could you please check the logs for errors and also double check that the
> system's resources aren't exhausted.
> Thanks.
> Best regards,
> Sasha Chuzhoy.
> ----- Original Message -----
> > From: "Thales" < thaleslv at yahoo.com >
> > To: "Sasha Chuzhoy" < sasha at redhat.com >
> > Cc: "Ivan Chavero" < ichavero at redhat.com >, rdo-list at redhat.com
> > Sent: Wednesday, January 13, 2016 7:47:34 PM
> > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing
> >
> > Okay, I decided to go back to the very start, to a clean install of CentOS
> > 7.
> > I ran all of the commands to install rdo as root. That is, the commands
> > from the quick start website here:
> > https://www.rdoproject.org/install/quickstart/
> >
> > Alas, the same HTTP 400 error crops up! I ran the keypair commands from the
> > CLI as well. Wow.
> >
> > It has to be something simple and obvious.
> > ...John
> >
> > On Wednesday, January 13, 2016 3:52 PM, Sasha Chuzhoy < sasha at redhat.com >
> > wrote:
> >
> > Did the run of "packstack --allinone" complete successfully or exit with
> > error?
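Javier's log-isolation procedure above can be sketched as a small script. This is only an illustration: `openstack-service` is the RDO service wrapper and is left in comments so that the file-rotation part can run anywhere; the directory layout assumed is the stock /var/log/nova one.

```shell
#!/bin/sh
# Sketch of the log-isolation steps above. On a real host, first run:
#   openstack-service stop nova
rotate_nova_logs() {
    logdir="$1"                                   # e.g. /var/log/nova
    backup="$logdir/backup.$(date +%Y%m%d%H%M%S)"
    mkdir -p "$backup"
    # Move every existing log aside so the next service start writes fresh files.
    for f in "$logdir"/*.log; do
        if [ -e "$f" ]; then mv "$f" "$backup/"; fi
    done
    echo "$backup"
}
# ...then: openstack-service start nova, reproduce the issue,
# openstack-service stop nova, and paste the now-small *.log files.
```

The point of the rotation is that the fresh api.log contains only the lines produced by the failing request, instead of weeks of startup noise.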
> >
> > There should be no conflict with the previous
> > install (not sure if the
> > previous
> > install completed successfully).
> > Thanks.
> >
> > Best regards,
> > Sasha Chuzhoy.
> >
> > ----- Original Message -----
> > > From: "Thales" < thaleslv at yahoo.com >
> > > To: "Sasha Chuzhoy" < sasha at redhat.com >
> > > Cc: "Ivan Chavero" < ichavero at redhat.com >, rdo-list at redhat.com
> > > Sent: Wednesday, January 13, 2016 4:38:29 PM
> > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing
> > >
> > > Sasha,
> > > Okay, I ran it. I hope there is not a conflict with the previous
> > > install.
> > > I ran the web interface, Dashboard, and the same error pops up, the HTTP
> > > 400 error. I then ran the keypair command line command as root, and get
> > > a different error, an HTTP 401 authentication error:
> > > Here are the commands and the
> > > output: http://paste.openstack.org/show/483817/
> > >
> > > ...John
> > >
> > > On Wednesday, January 13, 2016 2:46 PM, Sasha Chuzhoy < sasha at redhat.com >
> > > wrote:
> > >
> > > John,
> > > please try to run the "packstack --allinone" command as root (or with
> > > sudo).
> > >
> > > Then see if the error reproduces.
> > > Thanks.
> > >
> > > Best regards,
> > > Sasha Chuzhoy.
> > >
> > > ----- Original Message -----
> > > > From: "Thales" < thaleslv at yahoo.com >
> > > > To: "Ivan Chavero" < ichavero at redhat.com >
> > > > Cc: rdo-list at redhat.com
> > > > Sent: Wednesday, January 13, 2016 12:26:21 PM
> > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing
> > > >
> > > > Ivan,
> > > >
> > > > You're right, I ran it without the sudo command. I was following the
> > > > directions here, where they don't use sudo:
> > > > https://www.rdoproject.org/install/quickstart/
> > > >
> > > > Is that wrong?
> > > >
> > > > Regards,
> > > > ...John
> > > >
> > > > On Wednesday, January 13, 2016 9:37 AM, Ivan Chavero
> > > > < ichavero at redhat.com >
> > > > wrote:
> > > >
> > > > > Here is the pastebin, basically a repeat of the "quickstart" website
> > > > > http://paste.openstack.org/show/483685/
> > > >
> > > > Looking at your history I noticed that you ran packstack without the
> > > > sudo command.
> > > > Are you sure it finished correctly? It should be run as root.
> > > >
> > > > Cheers,
> > > > Ivan
> > > >
> > > > _______________________________________________
> > > > Rdo-list mailing list
> > > > Rdo-list at redhat.com
> > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > >
> > > > To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jguiditt at redhat.com Thu Jan 14 21:00:44 2016
From: jguiditt at redhat.com (Jason Guiditta)
Date: Thu, 14 Jan 2016 16:00:44 -0500
Subject: [Rdo-list] OPM downstream patches
In-Reply-To: <1452732884.4030.120.camel@redhat.com>
References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> <1452732884.4030.120.camel@redhat.com>
Message-ID: <20160114210044.GB12536@redhat.com>

On 14/01/16 01:54 +0100, Gabriele Cerami wrote:
>On Tue, 2016-01-12 at 13:16 -0500, Emilien Macchi wrote:
>> I have 2 proposals, maybe wrong but I wanted to share.
>> >> Solution #1 - Forks + Puppetfile >> >> * Forking all our Puppet modules in >> https://github.com/redhat-openstack/ >> * Apply our custom patches in a specific branch for each module >> * Create a Puppetfile per RDO/OSP version that track SHA1 from repos >> * Create a script (with r10k for example) that checkout all modules >> and >> build RPM. >> >> Solution #2 - Forks + one RPM per module >> * Forking all our Puppet modules in >> https://github.com/redhat-openstack/ >> * Apply our custom patches in a specific branch for each module >> * Create a RPM for each module > >All upstream modules are already forked in >https://github.com/rdo-puppet-modules, not under redhat-openstack >because of the way gerrithub deals with github replication. > >Modules are forked, because of the way opm has been handled until now. > >Here are some informations on these modules, and how CI handles them. > >- Each fork branch {branch} is an exact and automatically updated copy >of upstream module branch {branch} >- Each fork may contain a {branch}-patches to host all the patches for >the branch {branch} I could be wrong, but I think this is consistent with what opm _used_ to do, which itself was _inconsistent_ with the rest of rdo (not sure how, or if this relates to pure upstream). So, for liberty, for example, we have upstream-liberty, which is _exactly_ what was in stable/liberty for the various openstack puppet modules at the time of the last (yes, manual) sync. Then we (opm) have stable/liberty, which has everything from upstream-liberty, with whatever patches have been added on top of it. If I am correct that is needed for rdo, but inconsistent with what you have described, how hard would it be to alter the rdo-puppet-modules stuff to work with the new reality? >- Each fork is updated after every upstream change is submitted, and >after testing every time the result of the merge between {branch} and >{branch}-patches on a centos+rdo environment. 
>From this successful and tested merge, a {branch}-tag branch is updated,
>and can be taken to be packaged.
>
>So, in a way we already have the first two points of every proposal.
>{branch}-patches for the modules should be populated with all the
>needed patches though, but to do this it should be enough to propose
>the patches as reviews in gerrithub for the corresponding project, on
>the branch {branch}-patches. CI should do the rest (test merge with
>{branch}, test results and update {branch}-tag, then submit)
>
>Raw material is there and available for whatever packaging solution is
>preferred, but I think at this point, one RPM per module with maybe a
>metapackage to rule them all is the best solution, now that modules
>update is automated.
>
+1000 to individual packages, and opm as metapackage only.

-j

From dms at redhat.com Thu Jan 14 21:18:16 2016
From: dms at redhat.com (David Moreau Simard)
Date: Thu, 14 Jan 2016 16:18:16 -0500
Subject: [Rdo-list] OPM downstream patches
In-Reply-To: <20160114210044.GB12536@redhat.com>
References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> <1452732884.4030.120.camel@redhat.com> <20160114210044.GB12536@redhat.com>
Message-ID:

On Thu, Jan 14, 2016 at 4:00 PM, Jason Guiditta wrote:
> If I am correct that is needed for rdo

Can you define "RDO" ? Packstack ? RDO manager ? Both ? Something else entirely ?

I'd really, really like to run both with pure upstream modules to see what - if anything - breaks and draw conclusions from that as to:
1) Is it the consumer's fault (i.e, Packstack or RDO Manager that aren't passing the proper settings to puppet modules and the puppet module is patched instead ?)
2) Is it the puppet module's fault (i.e, upstream wrongfully refused to merge/backport a necessary patch ?)
3) Something else ?

I'm convinced that we must keep OPM without any patches in RDO (OSP is another matter) even if it requires a little bit of work.
Sure, we need to put visibility on the work that needs to be done to accomplish this, but we can't do that before trying it out first.

I can only assume that consumers and users of RDO do not expect midstream patches in their packages.
We don't ship anything custom for Nova, Cinder, etc - OPM shouldn't be any different.

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

From dms at redhat.com Thu Jan 14 23:08:32 2016
From: dms at redhat.com (David Moreau Simard)
Date: Thu, 14 Jan 2016 18:08:32 -0500
Subject: [Rdo-list] Planned maintenance on Delorean repositories @ 21:30 UTC, Jan 14th
In-Reply-To:
References:
Message-ID:

Hi !

The downtime took longer than anticipated and we slightly went over the 30 minutes of downtime window.
Everything is back online and we are monitoring things as commits are processed again.

Please let me know if you encounter any issues !

Thanks.

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

On Wed, Jan 13, 2016 at 12:55 PM, David Moreau Simard wrote:
> Greetings,
>
> A maintenance is required on the server where the Delorean instances
> and repositories are hosted in order to resolve ongoing performance
> issues.
> Delorean will stop processing new commits around 21:00 UTC and the
> repositories will become unavailable around 21:30 UTC.
>
> The maintenance shouldn't last more than 30 minutes before the
> repositories are back up and Delorean resumes processing commits.
>
> We'll send an e-mail once the maintenance is complete.
>
> Thanks !
>
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
>
> dmsimard = [irc, github, twitter]

From gcerami at redhat.com Thu Jan 14 23:26:01 2016
From: gcerami at redhat.com (Gabriele Cerami)
Date: Fri, 15 Jan 2016 00:26:01 +0100
Subject: [Rdo-list] OPM downstream patches
In-Reply-To: <20160114210044.GB12536@redhat.com>
References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> <1452732884.4030.120.camel@redhat.com> <20160114210044.GB12536@redhat.com>
Message-ID: <1452813961.4030.155.camel@redhat.com>

On Thu, 2016-01-14 at 16:00 -0500, Jason Guiditta wrote:
> I could be wrong, but I think this is consistent with what opm _used_
> to do, which itself was _inconsistent_ with the rest of rdo (not sure
> how, or if this relates to pure upstream). So, for liberty, for
> example, we have upstream-liberty, which is _exactly_ what was in
> stable/liberty for the various openstack puppet modules at the time of
> the last (yes, manual) sync. Then we (opm) have stable/liberty, which
> has everything from upstream-liberty, with whatever patches have been
> added on top of it. If I am correct that is needed for rdo, but
> inconsistent with what you have described, how hard would it be to
> alter the rdo-puppet-modules stuff to work with the new reality?

OPM has never used rdopkg for its update, if that is what you mean with "this is needed for rdo": the patches are applied directly to the midstream repo, and not saved in patch files to be applied to the package. I don't know why OPM had this "privilege", or whether we're going to change it in the future, but this manual process of merging upstream-liberty with the patches is what has been automated in OPMCI.
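The {branch} / {branch}-patches / {branch}-tag scheme described in this thread can be reproduced with plain git. This is only a sketch in a throwaway repository — file names and commit messages are invented, and the real OPM CI performs the equivalent merge automatically after testing:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email ci@example.com
git config user.name "opm ci sketch"

# stable/liberty mirrors the upstream module branch.
echo 'class nova {}' > init.pp
git add init.pp
git commit -qm 'upstream commit'
git branch stable/liberty

# stable/liberty-patches carries only the downstream patches.
git checkout -qb stable/liberty-patches
echo '# downstream fix' >> init.pp
git commit -qam 'downstream-only patch'

# stable/liberty-tag is the tested merge of the two; this is what gets packaged.
git checkout -qb stable/liberty-tag stable/liberty
git merge -q --no-edit stable/liberty-patches
```

Because stable/liberty-tag is rebuilt from the other two branches, a patch that later merges upstream simply disappears from stable/liberty-patches without changing the packaged result.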
For example, the module puppet-nova in rdo-puppet-modules contains three branches for every upstream branch:
- stable/liberty (tracks upstream/stable/liberty)
- stable/liberty-patches
- stable/liberty-tag (created with git merge liberty liberty-patches)

What you call stable/liberty in rdo-puppet-modules/puppet-nova is called liberty-tag. It contains the upstream liberty branch at the latest revision, merged with all the patches present in liberty-patches (and if a patch added to liberty-patches is finally merged upstream, it is automatically removed from liberty-patches).

The scripts that handle OPMCI have been modified recently (in the hope that they could be used to handle other repositories as well) to handle repositories much more closely to what rdopkg requires, so it may be possible to adapt to the new reality (if I understood what it means).

(At the moment OPM CI is configured to follow only master branches from upstream, so you will not really find a liberty branch, but the concept is the same for any upstream branch)

From gcerami at redhat.com Thu Jan 14 23:36:54 2016
From: gcerami at redhat.com (Gabriele Cerami)
Date: Fri, 15 Jan 2016 00:36:54 +0100
Subject: [Rdo-list] OPM downstream patches
In-Reply-To:
References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> <1452732884.4030.120.camel@redhat.com> <20160114210044.GB12536@redhat.com>
Message-ID: <1452814614.4030.163.camel@redhat.com>

On Thu, 2016-01-14 at 16:18 -0500, David Moreau Simard wrote:
> I can only assume that consumers and users of RDO do not expect
> midstream patches in their packages.
> We don't ship anything custom for Nova, Cinder, etc - OPM shouldn't
> be any different.

OPM is a package with content taken from 59 repositories. Only 22 of those 59 come from openstack; the remaining 37 upstream repositories do not even have a gerrit instance for contribution, most of them do not have CI, and almost all of them are tested only in ubuntu.
I think it's enough for it to be treated differently. From alan.pevec at redhat.com Thu Jan 14 23:51:14 2016 From: alan.pevec at redhat.com (Alan Pevec) Date: Fri, 15 Jan 2016 00:51:14 +0100 Subject: [Rdo-list] OPM downstream patches In-Reply-To: <1452813961.4030.155.camel@redhat.com> References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> <1452732884.4030.120.camel@redhat.com> <20160114210044.GB12536@redhat.com> <1452813961.4030.155.camel@redhat.com> Message-ID: <56983472.3070906@redhat.com> > OPM has never used rdopkg for its update, if it's what you mean with > "this is needed for rdo": This (last branch renaming in redhat-openstack/openstack-puppet-modules repo) was needed for the current Delorean/rdoinfo setup where "master" and "stable/liberty" were expected as source branches to build from. There's work in progress to expand rdoinfo extension to handle deviations per project and release[1]. Cheers, Alan [1] https://trello.com/c/KxKtVkTz/62-restructure-rdoinfo-for-new-needs From gilles at redhat.com Fri Jan 15 04:30:24 2016 From: gilles at redhat.com (Gilles Dubreuil) Date: Fri, 15 Jan 2016 15:30:24 +1100 Subject: [Rdo-list] OPM downstream patches In-Reply-To: <569575DA.9050101@redhat.com> References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> Message-ID: <569875E0.5050909@redhat.com> On 13/01/16 08:53, Emilien Macchi wrote: > > > On 01/12/2016 01:16 PM, Emilien Macchi wrote: >> Also, the way we're packaging OPM is really bad. >> >> * we have no SHA1 for each module we have in OPM >> * we are not able to validate each module >> * package tarball is not pure. All other OpenStack RPMS take upstream >> tarball so we can easily compare but in OPM... no way to do it. >> >> Those issues are really critical, I would like to hear from OPM folks, >> and find solutions that we will work on during the following weeks. > > I have 2 proposals, maybe wrong but I wanted to share. 
>
> Solution #1 - Forks + Puppetfile
>
> * Forking all our Puppet modules in https://github.com/redhat-openstack/
> * Apply our custom patches in a specific branch for each module
> * Create a Puppetfile per RDO/OSP version that tracks SHA1 from repos
> * Create a script (with r10k for example) that checks out all modules and
> builds RPMs.
>
> Solution #2 - Forks + one RPM per module
> * Forking all our Puppet modules in https://github.com/redhat-openstack/
> * Apply our custom patches in a specific branch for each module
> * Create a RPM for each module
>
> Of course, I don't take in consideration CI work so I'm open to
> suggestions.
>
> Feedback is welcome here!

So far, we seem to agree to:
- Do upstream work as much as possible, and even more, to reduce the backlog of downstream patches.
- Keep Puppet related patches the closest to their upstream source
- Keep non Puppet related patches close to the package (RPM) when needed
- Limit forking
- Have 1 package per module
- Use meta/wrapper packages
- There are two groups of modules, the Openstack and non Openstack ones.

My input:
- Confusion between RDO and OSP
- There is an obvious structural complexity with OPM, also related to the Openstack structure itself. I think this is part of the challenge we need to address in order to reduce the complexity factor.
- Creating a Bugzilla every time a patch needs to be submitted and/or backported to any RDO or OSP (or other distro) is a workflow burden (politely said!).
- For validation purposes each installer needs each module to point to a specific version (commit). This cannot be done when using a common OPM.

I believe it's important to help with the RDO/OSP confusion and get some facts about the RDO and OSP relationship. I believe (please jump in if needed):
- RDO and OSP are both opensource projects on top of Openstack.
- RDO is the community version, more cutting edge but unsupported.
- OSP is the Enterprise version, certified and supported.
- OSP inheriting from RDO (downstream).

The latter is the important part because the life cycle workflow is supposed to go from RDO to OSP. Historically, that has not always happened, mainly because scarce resources were working on either one at a time. That has created some confusion too.

Back to OPM: the Openstack Puppet Modules repo was initially created as a common denominator placeholder between the Packstack and Quickstack related installers - Quickstack is the github repo name containing the puppet core used by Foreman/Red Hat Satellite 6, Staypuft. At the time, there weren't enough resources to handle puppet modules for each installer's team. So all modules ended up in the same bucket; here was sadly born 'OPM'. Another temporary solution made to last. The bad part was that the RDO-Manager/OSP-Director installers started tapping into it too! ;)

The important change that happened in the meantime was that the Openstack puppet projects moved under the Openstack main tent. This allows the Openstack puppet modules to directly benefit from the stable branches structure. I understand making use of this is already happening, but we have the opportunity to use it to reorganize the whole OPM structure.

I think we would gain by having each Openstack Puppet module extend its current branch structure in a 'distro sub-branch' way, such as:

master ----------------+--> RDO9 +--> OSP9
  |
  | backport
  |
  +--> stable/liberty  +--> RDO8 +--> OSP8
  |
  +--> stable/kilo     +--> RDO7 +--> OSP7

The structural complexity could be reduced by benefiting from limiting forks, as described above, using a branch for each related Openstack distro (RDO/OSP/etc). If this is not possible then each Openstack installer has to fork every Openstack module it needs. For the non Openstack puppet modules, because there isn't any link to Openstack branches, a corresponding fork has to be created.
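The 'distro sub-branch' idea above can be sketched in a throwaway repository. Branch names here are hypothetical (note that git refuses a branch like stable/liberty/rdo8 while stable/liberty exists, because the two would collide in the ref namespace, so suffixes are used instead):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email x@example.com
git config user.name "branch sketch"
git commit -q --allow-empty -m 'upstream master'

# Upstream stable branch, then the proposed distro sub-branches hanging off it.
git branch stable/liberty
git branch stable/liberty-rdo8 stable/liberty      # RDO 8 consumes stable/liberty
git branch stable/liberty-osp8 stable/liberty-rdo8 # OSP 8 inherits from the RDO branch
```

The layout mirrors the diagram: backports flow down from master to the stable branches, and each distro branch pins what its release actually ships.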
Hence, a solution #3:
- For each installer
  - Create a meta/wrapper package
  - For each required/desired Openstack Puppet module
    - If no 'distro sub-branches' available
      - Module's source repo is forked
      - Openstack equivalent branches structure is created
    - A RPM package is created
    - Non Puppet patches are applied if needed
    - Add module to list of meta/wrapper package
  - For each required/desired non Openstack Puppet module
    - Module's source repo is forked
    - Openstack equivalent branches structure is created
    - Puppet patches are applied in the corresponding branch if needed
    - A RPM package is created
    - Non Puppet patches are applied if needed
    - Add module to list of meta/wrapper package

-Gilles

>> Thanks
>>
>> On 01/12/2016 12:37 PM, Emilien Macchi wrote:
>>> So I started an etherpad to discuss why we have so many downstream
>>> patches in Puppet modules.
>>>
>>> https://etherpad.openstack.org/p/opm-patches
>>>
>>> In my opinion, we should follow some best practices:
>>>
>>> * upstream first. If you find a bug, submit the patch upstream, wait for
>>> at least a positive review from a core and also successful CI jobs. Then
>>> you can backport it downstream if urgent.
>>> * backport it to stable branches when needed. The patch we want is in
>>> master and not stable? It's too easy to backport it in OPM. Do the
>>> backport in upstream/stable first, it will help to stay updated with
>>> upstream.
>>> * don't change default parameters, don't override them. Our installers
>>> are able to override any parameter so do not hardcode this kind of change.
>>> * keep up with upstream: if you have an upstream patch under review that
>>> is already in OPM: keep it alive and make sure it lands as soon as possible.
>>>
>>> UPSTREAM FIRST please please please (I'll send you cookies if you want).
>>>
>>> If you have any question about an upstream patch, please join
>>> #puppet-openstack (freenode) and talk to the group.
>>> We're doing reviews every day and it's not difficult to land a patch.
>>>
>>> In the meantime, I would like to justify each of our backports in the
>>> etherpad and clean up a maximum of them.
>>>
>>> Thank you for reading so far,
>>>
>>> _______________________________________________
>>> Rdo-list mailing list
>>> Rdo-list at redhat.com
>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>
>>> To unsubscribe: rdo-list-unsubscribe at redhat.com

From dradez at redhat.com Fri Jan 15 04:58:33 2016
From: dradez at redhat.com (Dan Radez)
Date: Thu, 14 Jan 2016 23:58:33 -0500
Subject: [Rdo-list] OPNFV B Release RC0
Message-ID: <56987C79.4090105@redhat.com>

Project Apex is the OPNFV project that is using RDO manager to deploy OPNFV.
We posted Release Brahmaputra RC0 (release 2 RC0) today:

RPM to use on a virtualization host install of CentOS7:
http://artifacts.opnfv.org/apex/brahmaputra/opnfv-apex-2.7-brahmaputra.1.rc0.noarch.rpm

ISO install including CentOS 7:
http://artifacts.opnfv.org/apex/brahmaputra/opnfv-brahmaputra.1.rc0.iso

Installation Documentation:
http://artifacts.opnfv.org/apex/docs/installation-instructions/installation-instructions.html

One correction to the documentation: the settings files referenced in the docs have not yet been moved to /etc and can be found in /var/opt/opnfv/.

Please join #opnfv-apex or email opnfv-users at lists.opnfv.org to provide feedback or ask questions if you have a chance to take this for a spin. We would enjoy hearing from you.
Dan Radez
irc: radez

From thaleslv at yahoo.com Fri Jan 15 12:35:58 2016
From: thaleslv at yahoo.com (Thales)
Date: Fri, 15 Jan 2016 12:35:58 +0000 (UTC)
Subject: [Rdo-list] RDO, packstack, Keypair creation is failing
In-Reply-To: <404941246.12374602.1452758165188.JavaMail.zimbra@redhat.com>
References: <404941246.12374602.1452758165188.JavaMail.zimbra@redhat.com>
Message-ID: <1545350315.4562557.1452861358498.JavaMail.yahoo@mail.yahoo.com>

Hello Javier,

I sent the log files to the list yesterday, but they don't seem to be there? Should I resend?

...John

On Thursday, January 14, 2016 1:56 AM, Javier Pena wrote:

----- Original Message -----
> "I wasn't able to reproduce your issue."
> Thanks for that, Sasha!
> I have looked through the eight log files in /var/log/nova and I found three
> things, which you can see here:
> http://paste.openstack.org/show/483844/
> It looks like the SQL server is not working right?

Hi John,

This just looks like a startup thing, where the Nova services were up before the database was available. It is working as expected, as services are retrying and eventually connect.

To simplify the log issue, let's try this:
- openstack-service stop nova
- go to /var/log/nova and move all log files to a backup location, so they are empty
- openstack-service start nova
- Try to reproduce the issue
- openstack-service stop nova

The log file size should be manageable now and there should be little noise. If they are not too big, could you try pasting them? There should be something in api.log to move forward.

Regards,
Javier

> System resources. I'm looking at the Gnome System Monitor.
> Total Memory is 9.4 gigabytes, with 3.2 gigabytes used.
> ...John > On Wednesday, January 13, 2016 9:48 PM, Sasha Chuzhoy > wrote: > Hi John, > I wasn't able to reproduce your issue. > Could you please check the logs for errors and also double check that the > system's resources aren't exhausted. > Thanks. > Best regards, > Sasha Chuzhoy. > ----- Original Message ----- > > From: "Thales" < thaleslv at yahoo.com > > > To: "Sasha Chuzhoy" < sasha at redhat.com > > > Cc: "Ivan Chavero" < ichavero at redhat.com >, rdo-list at redhat.com > > Sent: Wednesday, January 13, 2016 7:47:34 PM > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > Okay, I decided to go back to the very start, to a clean install of CentOS > > 7. > > I ran all of the commands to install rdo as root.That is, the commands > > from the quick start website here: > > https://www.rdoproject.org/install/quickstart/ > > > > Alas, the same HTTP 400 error crops up! I ran the keypair commands from the > > CLI as well. Wow. > > > > It has to be something simple and obvious. > > ...John > > > > On Wednesday, January 13, 2016 3:52 PM, Sasha Chuzhoy < sasha at redhat.com > > > wrote: > > > > > > Did the run of "packstack --allinone" completed successfully or exited with > > error? > > > > The should be no conflict with the previous install (not sure if the > > previous > > install completed successfully). > > Thanks. > > > > Best regards, > > Sasha Chuzhoy. > > > > ----- Original Message ----- > > > From: "Thales" < thaleslv at yahoo.com > > > > To: "Sasha Chuzhoy" < sasha at redhat.com > > > > Cc: "Ivan Chavero" < ichavero at redhat.com >, rdo-list at redhat.com > > > Sent: Wednesday, January 13, 2016 4:38:29 PM > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > > > Sasha, > > > Okay, I ran it. I hope there is not a conflict with the previous > > > install. > > > I ran the web interface, Dashboard, and the same error pops up, the HTTP > > > 400 error. 
I then ran the keypair command line command at root, and get > > > a different error, a n HTTP 401 authentication error: > > > Here are the commands and the > > > output: http://paste.openstack.org/show/483817/ > > > > > > > > > > > > ...John > > > > > > > > > On Wednesday, January 13, 2016 2:46 PM, Sasha Chuzhoy < sasha at redhat.com > > > > > > > wrote: > > > > > > > > > John, > > > please try to run the " packstack --allinone" command as root (or with > > > sudo). > > > > > > Then see if the error reproduces. > > > Thanks. > > > > > > Best regards, > > > Sasha Chuzhoy. > > > > > > ----- Original Message ----- > > > > From: "Thales" < thaleslv at yahoo.com > > > > > To: "Ivan Chavero" < ichavero at redhat.com > > > > > Cc: rdo-list at redhat.com > > > > Sent: Wednesday, January 13, 2016 12:26:21 PM > > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > > > > > Ivan, > > > > > > > > You're right, I ran it without the sudo command. I was following the > > > > directions here, where they don't use sudo: > > > > https://www.rdoproject.org/install/quickstart/ > > > > > > > > > > > > Is that wrong? > > > > > > > > Regards, > > > > ...John > > > > > > > > > > > > On Wednesday, January 13, 2016 9:37 AM, Ivan Chavero > > > > < ichavero at redhat.com > > > > > wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > Here is the pastebin, basically a repeat of the "quickstart" website > > > > > > > > > > http://paste.openstack.org/show/483685/ > > > > > > > > > > > > Looking at your history i noticed that you run packstack withuout the > > > > sudo > > > > command. > > > > Are you sure it finished correctly? It should be run as root. 
> > > > > > > > Cheers, > > > > Ivan > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > Rdo-list mailing list > > > > Rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier.pena at redhat.com Fri Jan 15 12:48:50 2016 From: javier.pena at redhat.com (Javier Pena) Date: Fri, 15 Jan 2016 07:48:50 -0500 (EST) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <2059404935.4223021.1452804183973.JavaMail.yahoo@mail.yahoo.com> References: <404941246.12374602.1452758165188.JavaMail.zimbra@redhat.com> <2059404935.4223021.1452804183973.JavaMail.yahoo@mail.yahoo.com> Message-ID: <1370104289.13327357.1452862130100.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Hello Javier, > Okay, I did what you said. This took some fumbling around. There are 7 files. > The two biggest are 13k and 12k , the others are 4k and below. Hope they > aren't too big! > == 1> nova-api.log:============================= [...snip...] > c73d477ea608499eb117fb79b28bff80 - - -] Option "sql_connection" from group > "DEFAULT" is deprecated. Use option "connection" from group "database". 
> 2016-01-14 14:26:44.407 6099 INFO nova.api.openstack.wsgi > [req-fdc4d3ba-9f6b-4b98-85f9-51dfeb5dca84 c2114578a647492c985508e88c06f24b > c73d477ea608499eb117fb79b28bff80 - - -] HTTP exception thrown: Keypair data > is invalid: failed to generate fingerprint > 2016-01-14 14:26:44.408 6099 INFO nova.osapi_compute.wsgi.server > [req-fdc4d3ba-9f6b-4b98-85f9-51dfeb5dca84 c2114578a647492c985508e88c06f24b > c73d477ea608499eb117fb79b28bff80 - - -] 192.168.1.12 "POST > /v2/c73d477ea608499eb117fb79b28bff80/os-keypairs HTTP/1.1" status: 400 len: > 319 time: 3.4419072 So this is all the Nova logs show, it is just complaining that the keypair data is not valid. While we try to get some other ideas, does dmesg show any application segfaulting? Javier [... all other logs, snipping ...] > On Thursday, January 14, 2016 1:56 AM, Javier Pena > wrote: > ----- Original Message ----- > > "I wasn't able to reproduce your issue." > > Thanks for that, Sasha! > > I have looked through the eight log files in /var/log/nova and I found > > three > > things, which you can see here: > > http://paste.openstack.org/show/483844/ > > It looks like the SQL server is not working right? > Hi John, > This just looks like a startup thing, where the Nova services were up before > the database was available. It is working as expected, as services are > retrying and eventually connect. > To simplify the log issue, let's try this: > - openstack-service stop nova > - go to /var/log/nova and move all log files to a backup location, so they > are empty > - openstack-service start nova > - Try to reproduce the issue > - openstak-service stop nova > The log file size should be manageable now and there should be little noise. > If they are not too big, could you try pasting them? There should be > something in api.log to move forward. > Regards, > Javier > > System resources. I'm looking at the Gnome System Monitor. > > Total Memory is 9.4 gigabytes, with 3.2 gigbytes used. 
> > But, this is strange, it says that CPU usage is at 100%. Not sure how that > > can be. > > vmtoolsd is using around 98% of the cpu! I have an 8 core machine, so I'm > > not > > sure how this is measuring. > > ...John > > On Wednesday, January 13, 2016 9:48 PM, Sasha Chuzhoy < sasha at redhat.com > > > wrote: > > Hi John, > > I wasn't able to reproduce your issue. > > Could you please check the logs for errors and also double check that the > > system's resources aren't exhausted. > > Thanks. > > Best regards, > > Sasha Chuzhoy. > > ----- Original Message ----- > > > From: "Thales" < thaleslv at yahoo.com > > > > To: "Sasha Chuzhoy" < sasha at redhat.com > > > > Cc: "Ivan Chavero" < ichavero at redhat.com >, rdo-list at redhat.com > > > Sent: Wednesday, January 13, 2016 7:47:34 PM > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > > > Okay, I decided to go back to the very start, to a clean install of > > > CentOS > > > 7. > > > I ran all of the commands to install rdo as root.That is, the commands > > > from the quick start website here: > > > https://www.rdoproject.org/install/quickstart/ > > > > > > Alas, the same HTTP 400 error crops up! I ran the keypair commands from > > > the > > > CLI as well. Wow. > > > > > > It has to be something simple and obvious. > > > ...John > > > > > > On Wednesday, January 13, 2016 3:52 PM, Sasha Chuzhoy < sasha at redhat.com > > > > > > > wrote: > > > > > > > > > Did the run of "packstack --allinone" completed successfully or exited > > > with > > > error? > > > > > > The should be no conflict with the previous install (not sure if the > > > previous > > > install completed successfully). > > > Thanks. > > > > > > Best regards, > > > Sasha Chuzhoy. 
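The "failed to generate fingerprint" line in nova-api.log above narrows things down: before storing a keypair, nova base64-decodes the public key blob and hashes it to produce the MD5 fingerprint, and a key that was mangled in transit (stray line breaks, truncation) fails at the decode step with exactly this HTTP 400. A rough stdlib sketch of that kind of check — illustrative only, not nova's actual code path, which goes through paramiko/cryptography:

```python
import base64
import binascii
import hashlib


def ssh_md5_fingerprint(public_key: str) -> str:
    """Colon-separated MD5 fingerprint of an OpenSSH public key line.

    Mirrors in spirit what nova-api does before storing a keypair: if the
    base64 blob cannot be decoded, fingerprinting fails and the API answers
    HTTP 400 "Keypair data is invalid: failed to generate fingerprint".
    """
    try:
        blob = base64.b64decode(public_key.split()[1])
    except (IndexError, binascii.Error) as exc:
        raise ValueError("Keypair data is invalid: "
                         "failed to generate fingerprint") from exc
    digest = hashlib.md5(blob).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```

Pasting a key through the dashboard can introduce stray newlines, so running a check like this on the exact string sent to the API is a quick way to tell whether the key text itself is at fault.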
> > > > > > ----- Original Message ----- > > > > From: "Thales" < thaleslv at yahoo.com > > > > > To: "Sasha Chuzhoy" < sasha at redhat.com > > > > > Cc: "Ivan Chavero" < ichavero at redhat.com >, rdo-list at redhat.com > > > > Sent: Wednesday, January 13, 2016 4:38:29 PM > > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > > > > > Sasha, > > > > Okay, I ran it. I hope there is not a conflict with the previous > > > > install. > > > > I ran the web interface, Dashboard, and the same error pops up, the > > > > HTTP > > > > 400 error. I then ran the keypair command line command at root, and get > > > > a different error, a n HTTP 401 authentication error: > > > > Here are the commands and the > > > > output: http://paste.openstack.org/show/483817/ > > > > > > > > > > > > > > > > ...John > > > > > > > > > > > > On Wednesday, January 13, 2016 2:46 PM, Sasha Chuzhoy < > > > > sasha at redhat.com > > > > > > > > > wrote: > > > > > > > > > > > > John, > > > > please try to run the " packstack --allinone" command as root (or with > > > > sudo). > > > > > > > > Then see if the error reproduces. > > > > Thanks. > > > > > > > > Best regards, > > > > Sasha Chuzhoy. > > > > > > > > ----- Original Message ----- > > > > > From: "Thales" < thaleslv at yahoo.com > > > > > > To: "Ivan Chavero" < ichavero at redhat.com > > > > > > Cc: rdo-list at redhat.com > > > > > Sent: Wednesday, January 13, 2016 12:26:21 PM > > > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing > > > > > > > > > > Ivan, > > > > > > > > > > You're right, I ran it without the sudo command. I was following the > > > > > directions here, where they don't use sudo: > > > > > https://www.rdoproject.org/install/quickstart/ > > > > > > > > > > > > > > > Is that wrong? 
> > > > > > > > > > Regards, > > > > > ...John > > > > > > > > > > > > > > > On Wednesday, January 13, 2016 9:37 AM, Ivan Chavero > > > > > < ichavero at redhat.com > > > > > > wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Here is the pastebin, basically a repeat of the "quickstart" > > > > > > website > > > > > > > > > > > > http://paste.openstack.org/show/483685/ > > > > > > > > > > > > > > > Looking at your history i noticed that you run packstack withuout the > > > > > sudo > > > > > command. > > > > > Are you sure it finished correctly? It should be run as root. > > > > > > > > > > Cheers, > > > > > Ivan > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > Rdo-list mailing list > > > > > Rdo-list at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com From marius at remote-lab.net Fri Jan 15 12:59:12 2016 From: marius at remote-lab.net (Marius Cornea) Date: Fri, 15 Jan 2016 13:59:12 +0100 Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <1370104289.13327357.1452862130100.JavaMail.zimbra@redhat.com> References: <404941246.12374602.1452758165188.JavaMail.zimbra@redhat.com> <2059404935.4223021.1452804183973.JavaMail.yahoo@mail.yahoo.com> <1370104289.13327357.1452862130100.JavaMail.zimbra@redhat.com> Message-ID: Giving another shot - I saw that the shell prompt shows the localhost name. 
Could you try setting a proper hostname (hostname -f returns a fqdn) and retry? On Fri, Jan 15, 2016 at 1:48 PM, Javier Pena wrote: > ----- Original Message ----- > >> Hello Javier, > >> Okay, I did what you said. This took some fumbling around. There are 7 files. >> The two biggest are 13k and 12k , the others are 4k and below. Hope they >> aren't too big! > >> == 1> nova-api.log:============================= > > [...snip...] > >> c73d477ea608499eb117fb79b28bff80 - - -] Option "sql_connection" from group >> "DEFAULT" is deprecated. Use option "connection" from group "database". >> 2016-01-14 14:26:44.407 6099 INFO nova.api.openstack.wsgi >> [req-fdc4d3ba-9f6b-4b98-85f9-51dfeb5dca84 c2114578a647492c985508e88c06f24b >> c73d477ea608499eb117fb79b28bff80 - - -] HTTP exception thrown: Keypair data >> is invalid: failed to generate fingerprint >> 2016-01-14 14:26:44.408 6099 INFO nova.osapi_compute.wsgi.server >> [req-fdc4d3ba-9f6b-4b98-85f9-51dfeb5dca84 c2114578a647492c985508e88c06f24b >> c73d477ea608499eb117fb79b28bff80 - - -] 192.168.1.12 "POST >> /v2/c73d477ea608499eb117fb79b28bff80/os-keypairs HTTP/1.1" status: 400 len: >> 319 time: 3.4419072 > > So this is all the Nova logs show, it is just complaining that the keypair data is not valid. > > While we try to get some other ideas, does dmesg show any application segfaulting? > > Javier > > [... all other logs, snipping ...] > >> On Thursday, January 14, 2016 1:56 AM, Javier Pena >> wrote: > >> ----- Original Message ----- > >> > "I wasn't able to reproduce your issue." >> > Thanks for that, Sasha! > >> > I have looked through the eight log files in /var/log/nova and I found >> > three >> > things, which you can see here: >> > http://paste.openstack.org/show/483844/ > >> > It looks like the SQL server is not working right? > >> Hi John, > >> This just looks like a startup thing, where the Nova services were up before >> the database was available. 
It is working as expected, as services are >> retrying and eventually connect. > >> To simplify the log issue, let's try this: > >> - openstack-service stop nova >> - go to /var/log/nova and move all log files to a backup location, so they >> are empty >> - openstack-service start nova >> - Try to reproduce the issue >> - openstak-service stop nova > >> The log file size should be manageable now and there should be little noise. >> If they are not too big, could you try pasting them? There should be >> something in api.log to move forward. > >> Regards, >> Javier > >> > System resources. I'm looking at the Gnome System Monitor. >> > Total Memory is 9.4 gigabytes, with 3.2 gigbytes used. > >> > But, this is strange, it says that CPU usage is at 100%. Not sure how that >> > can be. > >> > vmtoolsd is using around 98% of the cpu! I have an 8 core machine, so I'm >> > not >> > sure how this is measuring. > >> > ...John > >> > On Wednesday, January 13, 2016 9:48 PM, Sasha Chuzhoy < sasha at redhat.com > >> > wrote: > >> > Hi John, >> > I wasn't able to reproduce your issue. >> > Could you please check the logs for errors and also double check that the >> > system's resources aren't exhausted. >> > Thanks. > >> > Best regards, >> > Sasha Chuzhoy. > >> > ----- Original Message ----- >> > > From: "Thales" < thaleslv at yahoo.com > >> > > To: "Sasha Chuzhoy" < sasha at redhat.com > >> > > Cc: "Ivan Chavero" < ichavero at redhat.com >, rdo-list at redhat.com >> > > Sent: Wednesday, January 13, 2016 7:47:34 PM >> > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing >> > > >> > > Okay, I decided to go back to the very start, to a clean install of >> > > CentOS >> > > 7. >> > > I ran all of the commands to install rdo as root.That is, the commands >> > > from the quick start website here: >> > > https://www.rdoproject.org/install/quickstart/ >> > > >> > > Alas, the same HTTP 400 error crops up! I ran the keypair commands from >> > > the >> > > CLI as well. 
Wow. >> > > >> > > It has to be something simple and obvious. >> > > ...John >> > > >> > > On Wednesday, January 13, 2016 3:52 PM, Sasha Chuzhoy < sasha at redhat.com >> > > > >> > > wrote: >> > > >> > > >> > > Did the run of "packstack --allinone" completed successfully or exited >> > > with >> > > error? >> > > >> > > The should be no conflict with the previous install (not sure if the >> > > previous >> > > install completed successfully). >> > > Thanks. >> > > >> > > Best regards, >> > > Sasha Chuzhoy. >> > > >> > > ----- Original Message ----- >> > > > From: "Thales" < thaleslv at yahoo.com > >> > > > To: "Sasha Chuzhoy" < sasha at redhat.com > >> > > > Cc: "Ivan Chavero" < ichavero at redhat.com >, rdo-list at redhat.com >> > > > Sent: Wednesday, January 13, 2016 4:38:29 PM >> > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing >> > > > >> > > > Sasha, >> > > > Okay, I ran it. I hope there is not a conflict with the previous >> > > > install. >> > > > I ran the web interface, Dashboard, and the same error pops up, the >> > > > HTTP >> > > > 400 error. I then ran the keypair command line command at root, and get >> > > > a different error, a n HTTP 401 authentication error: >> > > > Here are the commands and the >> > > > output: http://paste.openstack.org/show/483817/ >> > > > >> > > > >> > > > >> > > > ...John >> > > > >> > > > >> > > > On Wednesday, January 13, 2016 2:46 PM, Sasha Chuzhoy < >> > > > sasha at redhat.com >> > > > > >> > > > wrote: >> > > > >> > > > >> > > > John, >> > > > please try to run the " packstack --allinone" command as root (or with >> > > > sudo). >> > > > >> > > > Then see if the error reproduces. >> > > > Thanks. >> > > > >> > > > Best regards, >> > > > Sasha Chuzhoy. 
>> > > > >> > > > ----- Original Message ----- >> > > > > From: "Thales" < thaleslv at yahoo.com > >> > > > > To: "Ivan Chavero" < ichavero at redhat.com > >> > > > > Cc: rdo-list at redhat.com >> > > > > Sent: Wednesday, January 13, 2016 12:26:21 PM >> > > > > Subject: Re: [Rdo-list] RDO, packstack, Keypair creation is failing >> > > > > >> > > > > Ivan, >> > > > > >> > > > > You're right, I ran it without the sudo command. I was following the >> > > > > directions here, where they don't use sudo: >> > > > > https://www.rdoproject.org/install/quickstart/ >> > > > > >> > > > > >> > > > > Is that wrong? >> > > > > >> > > > > Regards, >> > > > > ...John >> > > > > >> > > > > >> > > > > On Wednesday, January 13, 2016 9:37 AM, Ivan Chavero >> > > > > < ichavero at redhat.com > >> > > > > wrote: >> > > > > >> > > > > >> > > > > >> > > > > >> > > > > > >> > > > > > Here is the pastebin, basically a repeat of the "quickstart" >> > > > > > website >> > > > > > >> > > > > > http://paste.openstack.org/show/483685/ >> > > > > >> > > > > >> > > > > Looking at your history i noticed that you run packstack withuout the >> > > > > sudo >> > > > > command. >> > > > > Are you sure it finished correctly? It should be run as root. 
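Several replies above hinge on packstack being run as root. Installers commonly guard this with an effective-UID check; a minimal sketch (the helper name is illustrative, not packstack's actual code):

```python
import os


def running_as_root(euid: int) -> bool:
    # packstack-style precondition: effective UID 0 means root (or sudo)
    return euid == 0


if not running_as_root(os.geteuid()):
    print("Re-run as root or via sudo, e.g.: sudo packstack --allinone")
```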
>> > > > > >> > > > > Cheers, >> > > > > Ivan >> > > > > >> > > > > >> > > > > >> > > > > >> > > > > _______________________________________________ >> > > > > Rdo-list mailing list >> > > > > Rdo-list at redhat.com >> > > > > https://www.redhat.com/mailman/listinfo/rdo-list >> > > > > >> > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > > >> > > > >> > > > >> > > >> > > > >> > _______________________________________________ >> > Rdo-list mailing list >> > Rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From bsydor at gmail.com Fri Jan 15 14:35:45 2016 From: bsydor at gmail.com (Bohdan Sydor) Date: Fri, 15 Jan 2016 14:35:45 +0000 Subject: [Rdo-list] Brocade VCS vs RDO Kilo Message-ID: Hello, I'm trying to integrate Neutron in RDO Kilo with Brocade VCS (VDX 6740). I've used the brocade mechanism driver and configured the switch credentials. 
After the neutron-server has been restarted I can see the connection to the fabric: 2016-01-15 15:26:29.028 2568 DEBUG neutron.service [-] ml2_brocade.address = XXX log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2199 2016-01-15 15:26:29.028 2568 DEBUG neutron.service [-] ml2_brocade.ostype = NOS log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2199 2016-01-15 15:26:29.028 2568 DEBUG neutron.service [-] ml2_brocade.osversion = autodetect log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2199 2016-01-15 15:26:29.028 2568 DEBUG neutron.service [-] ml2_brocade.password = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2199 2016-01-15 15:26:29.029 2568 DEBUG neutron.service [-] ml2_brocade.physical_networks = physnet2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2199 2016-01-15 15:26:29.029 2568 DEBUG neutron.service [-] ml2_brocade.username = XXX log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2199 However then I try to create a new network the command `neutron net-create bnet1' is handing forever and I see the netconf error in the logs: 2016-01-15 15:19:22.994 141913 ERROR networking_brocade.vdx.ml2driver.nos.nosdriver [req-e47db9a2-dfbb-47c0-b63e-a0be613ad056 ] NETCONF error 2016-01-15 15:19:22.994 141913 TRACE networking_brocade.vdx.ml2driver.nos.nosdriver Traceback (most recent call last): 2016-01-15 15:19:22.994 141913 TRACE networking_brocade.vdx.ml2driver.nos.nosdriver File "/usr/lib/python2.7/site-packages/networking_brocade/vdx/ml2driver/nos/nosdriver.py", line 124, in create_network 2016-01-15 15:19:22.994 141913 TRACE networking_brocade.vdx.ml2driver.nos.nosdriver self.create_vlan_interface(mgr, net_id) 2016-01-15 15:19:22.994 141913 TRACE networking_brocade.vdx.ml2driver.nos.nosdriver File "/usr/lib/python2.7/site-packages/networking_brocade/vdx/ml2driver/nos/nosdriver.py", line 192, in create_vlan_interface 2016-01-15 15:19:22.994 141913 
TRACE networking_brocade.vdx.ml2driver.nos.nosdriver mgr.edit_config(target='running', config=confstr) 2016-01-15 15:19:22.994 141913 TRACE networking_brocade.vdx.ml2driver.nos.nosdriver File "/usr/lib/python2.7/site-packages/ncclient/manager.py", line 78, in wrapper 2016-01-15 15:19:22.994 141913 TRACE networking_brocade.vdx.ml2driver.nos.nosdriver return self.execute(op_cls, *args, **kwds) 2016-01-15 15:19:22.994 141913 TRACE networking_brocade.vdx.ml2driver.nos.nosdriver File "/usr/lib/python2.7/site-packages/ncclient/manager.py", line 132, in execute 2016-01-15 15:19:22.994 141913 TRACE networking_brocade.vdx.ml2driver.nos.nosdriver raise_mode=self._raise_mode).request(*args, **kwds) 2016-01-15 15:19:22.994 141913 TRACE networking_brocade.vdx.ml2driver.nos.nosdriver File "/usr/lib/python2.7/site-packages/ncclient/operations/edit.py", line 58, in request 2016-01-15 15:19:22.994 141913 TRACE networking_brocade.vdx.ml2driver.nos.nosdriver return self._request(node) 2016-01-15 15:19:22.994 141913 TRACE networking_brocade.vdx.ml2driver.nos.nosdriver File "/usr/lib/python2.7/site-packages/ncclient/operations/rpc.py", line 294, in _request 2016-01-15 15:19:22.994 141913 TRACE networking_brocade.vdx.ml2driver.nos.nosdriver raise TimeoutExpiredError 2016-01-15 15:19:22.994 141913 TRACE networking_brocade.vdx.ml2driver.nos.nosdriver TimeoutExpiredError 2016-01-15 15:19:22.994 141913 TRACE networking_brocade.vdx.ml2driver.nos.nosdriver 2016-01-15 15:19:22.996 141913 INFO ncclient.operations.rpc [req-e47db9a2-dfbb-47c0-b63e-a0be613ad056 ] Requesting 'CloseSession' I've tried both versions of ncclient: ncclient==0.3.2 from https://code.grnet.gr/git/ncclient as well as the one available in PyPi. Additionally I installed networking-brocade==2015.1.1.dev55 from PyPi, as the neutron server didn't start, complaining about a missing networking_brocade module. I'd appreciate any hints. -- Regards, Bohdan -------------- next part -------------- An HTML attachment was scrubbed...
URL: From morazi at redhat.com Fri Jan 15 17:57:39 2016 From: morazi at redhat.com (Mike Orazi) Date: Fri, 15 Jan 2016 12:57:39 -0500 Subject: [Rdo-list] OPNFV B Release RC0 In-Reply-To: <56987C79.4090105@redhat.com> References: <56987C79.4090105@redhat.com> Message-ID: <56993313.9070402@redhat.com> On 01/14/2016 11:58 PM, Dan Radez wrote: > Project Apex is the OPNFV project that is using RDO manager to deploy > OPNFV. > > We posted Release Brahmaputra RC0 (release 2 RC0) today: > > RPM to use on a virtualization host install of CentOS7: > http://artifacts.opnfv.org/apex/brahmaputra/opnfv-apex-2.7-brahmaputra.1.rc0.noarch.rpm > > > ISO install including CentOS 7: > http://artifacts.opnfv.org/apex/brahmaputra/opnfv-brahmaputra.1.rc0.iso > > Installation Documentation: > http://artifacts.opnfv.org/apex/docs/installation-instructions/installation-instructions.html > > > One correction to the documentation: the settings files referenced in > the docs have not yet been moved to /etc and can be found in > /var/opt/opnfv/ > > Please join #opnfv-apex or email opnfv-users at lists.opnfv.org to provide > feedback or ask questions if you have a chance to take this for a spin. > We would enjoy hearing from you. > > Dan Radez > irc: radez > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com Congratulations! This looks like very cool stuff.
- Mike From gcerami at redhat.com Fri Jan 15 18:26:34 2016 From: gcerami at redhat.com (Gabriele Cerami) Date: Fri, 15 Jan 2016 19:26:34 +0100 Subject: [Rdo-list] OPM downstream patches In-Reply-To: <569875E0.5050909@redhat.com> References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> <569875E0.5050909@redhat.com> Message-ID: <1452882394.4030.190.camel@redhat.com> On Fri, 2016-01-15 at 15:30 +1100, Gilles Dubreuil wrote: Hi, thanks for the historical background, it was enlightening. > The structural complexity could be reduced by benefiting from > limiting forks, as described above, using a branch for each related Openstack > distro (RDO/OSP/etc). If this is not possible then each Openstack installer > has to fork every Openstack module it needs. Why are forks so bad? Do not resist the fork. Embrace it. I'll try to clarify some of the choices in the current OPM CI. We forked every module because we needed a place to put the patches for that module, and the result of the merge with the main branch. The fork is not static: rdo-puppet-modules/puppet-nova/master is kept in sync with openstack/puppet-nova/master constantly and automatically, and the master equivalent on rdo-puppet-modules/puppet-nova is really kept just as a cursor to mark the point where we have tested upstream changes with our patches. We could as well use master-patches from midstream and master from upstream to make the final merge, not touching midstream master at all. So do not consider forks a problem, since their maintenance burden is largely lifted at the moment, at least for OPM. Regarding the naming issues, the OPM CI scripts are pretty flexible: we should be able to change the naming scheme with minimal effort, and the scripts already support branch mappings (master in upstream can be tracked as stable/liberty in midstream, for example). Use the fork!
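Gabriele's description of the OPM CI flow — midstream master kept as a synced cursor over upstream, with downstream-only patches merged on top for testing — can be sketched with throwaway local repos. All repo, branch, and file names below are illustrative, not the real rdo-puppet-modules layout:

```shell
set -e
work=$(mktemp -d)
cd "$work"

# "upstream" stands in for openstack/puppet-nova
git -c init.defaultBranch=master init -q upstream
(cd upstream &&
 git config user.email ci@example.com && git config user.name ci &&
 echo 'class nova {}' > init.pp &&
 git add init.pp && git commit -qm 'upstream: initial module')

# "midstream" stands in for the rdo-puppet-modules fork
git clone -q upstream midstream
cd midstream
git config user.email ci@example.com
git config user.name ci

# downstream-only patches live on their own branch, never on master
git checkout -qb master-patches
echo 'class nova::rdo {}' > rdo.pp
git add rdo.pp && git commit -qm 'downstream-only patch'

# upstream keeps moving...
(cd ../upstream &&
 echo 'class nova::api {}' > api.pp &&
 git add api.pp && git commit -qm 'upstream: new feature')

# ...and the fork's master is just re-synced: a plain fast-forward,
# because it carries no patches of its own
git checkout -q master
git pull -q origin master

# what actually gets tested is upstream master merged with the patches
git checkout -qb tested master
git merge -q --no-edit master-patches
```

The point of the layout is that keeping the fork in sync stays cheap: master never diverges from upstream, and the patch branch is the only thing that ever needs re-merging.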
From alan.pevec at redhat.com Fri Jan 15 18:45:42 2016 From: alan.pevec at redhat.com (Alan Pevec) Date: Fri, 15 Jan 2016 19:45:42 +0100 Subject: [Rdo-list] OPM downstream patches In-Reply-To: <569875E0.5050909@redhat.com> References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> <569875E0.5050909@redhat.com> Message-ID: <56993E56.3000007@redhat.com> > - Confusion between RDO and OSP ... > | +--> RDO9 +--> OSP9 One clarification: RDO is not using version numbers, it's using upstream release names, i.e. RDO Kilo, not RDO7 etc. RDO strives to be vanilla upstream sources packaged in RPM, hence we're keeping upstream release versioning. Cheers, Alan From emilien at redhat.com Fri Jan 15 20:18:03 2016 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 15 Jan 2016 15:18:03 -0500 Subject: [Rdo-list] OPM downstream patches In-Reply-To: <569539CA.9060806@redhat.com> References: <569539CA.9060806@redhat.com> Message-ID: <569953FB.7040906@redhat.com> On 01/12/2016 12:37 PM, Emilien Macchi wrote: > So I started an etherpad to discuss why we have so many downstream > patches in Puppet modules. > > https://etherpad.openstack.org/p/opm-patches > > In my opinion, we should follow some best practices: > > * upstream first. If you find a bug, submit the patch upstream, wait for > at least a positive review from a core and also successful CI jobs. Then > you can backport it downstream if urgent. > * backport it to stable branches when needed. The patch we want is in > master and not stable? It's too easy to backport it in OPM. Do the > backport in upstream/stable first; it will help to stay updated with > upstream. > * don't change default parameters, don't override them. Our installers > are able to override any parameter, so do not hardcode this kind of change. > * keep up with upstream: if you have an upstream patch under review that > is already in OPM, keep it alive and make sure it lands as soon as possible.
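Mechanically, the "backport in upstream/stable first" step above is an ordinary cherry-pick from master onto the stable branch; a toy repro with a throwaway repo (names illustrative):

```shell
set -e
work=$(mktemp -d)
cd "$work"
git -c init.defaultBranch=master init -q puppet-nova
cd puppet-nova
git config user.email dev@example.com
git config user.name dev
echo 'class nova {}' > init.pp
git add init.pp && git commit -qm 'initial'

git branch stable/liberty            # the stable branch forks here

# the fix lands on master first ("upstream first")...
echo '# bugfix' >> init.pp
git commit -qam 'Fix idempotency bug'
fix=$(git rev-parse HEAD)

# ...then is backported; -x records the original commit SHA in the message
git checkout -q stable/liberty
git cherry-pick -x "$fix" >/dev/null
```

The `-x` annotation ("cherry picked from commit ...") is what later lets reviewers trace a stable-branch commit back to the master commit it came from.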
> > UPSTREAM FIRST please please please (I'll send you cookies if you want). > > If you have any question about an upstream patch, please join > #puppet-openstack (freenode) and talk to the group. We're doing reviews > every day and it's not difficult to land a patch. > > In the meantime, I would like to justify each of our backports in the > etherpad and clean-up a maximum of them. > > Thank you for reading so far, Wow. Lots of thoughts, lots of interest, I like it! This thread has plenty of ideas, proposals, facts, etc. I'm opening https://etherpad.openstack.org/p/rdo-opm and gathering the data from this thread. Feel free to open it and contribute; I hope from this etherpad we'll find a plan for the next steps. Thanks, -- Emilien Macchi -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From thaleslv at yahoo.com Sat Jan 16 00:17:37 2016 From: thaleslv at yahoo.com (Thales) Date: Sat, 16 Jan 2016 00:17:37 +0000 (UTC) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: <1230332843.4723131.1452903200187.JavaMail.yahoo@mail.yahoo.com> References: <1370104289.13327357.1452862130100.JavaMail.zimbra@redhat.com> <1230332843.4723131.1452903200187.JavaMail.yahoo@mail.yahoo.com> Message-ID: <934322617.4676215.1452903457331.JavaMail.yahoo@mail.yahoo.com> Javier, Thanks! "While we try to get some other ideas, does dmesg show any application segfaulting?" I ran: dmesg | grep "seg" and I saw no indication of a segfault. ...John On Friday, January 15, 2016 6:48 AM, Javier Pena wrote: ----- Original Message ----- > Hello Javier, > Okay, I did what you said. This took some fumbling around. There are 7 files. > The two biggest are 13k and 12k, the others are 4k and below. Hope they > aren't too big! > == 1> nova-api.log:============================= [...snip...]
> c73d477ea608499eb117fb79b28bff80 - - -] Option "sql_connection" from group > "DEFAULT" is deprecated. Use option "connection" from group "database". > 2016-01-14 14:26:44.407 6099 INFO nova.api.openstack.wsgi > [req-fdc4d3ba-9f6b-4b98-85f9-51dfeb5dca84 c2114578a647492c985508e88c06f24b > c73d477ea608499eb117fb79b28bff80 - - -] HTTP exception thrown: Keypair data > is invalid: failed to generate fingerprint > 2016-01-14 14:26:44.408 6099 INFO nova.osapi_compute.wsgi.server > [req-fdc4d3ba-9f6b-4b98-85f9-51dfeb5dca84 c2114578a647492c985508e88c06f24b > c73d477ea608499eb117fb79b28bff80 - - -] 192.168.1.12 "POST > /v2/c73d477ea608499eb117fb79b28bff80/os-keypairs HTTP/1.1" status: 400 len: > 319 time: 3.4419072 So this is all the Nova logs show, it is just complaining that the keypair data is not valid. While we try to get some other ideas, does dmesg show any application segfaulting? Javier -------------- next part -------------- An HTML attachment was scrubbed... URL: From thaleslv at yahoo.com Sat Jan 16 00:21:45 2016 From: thaleslv at yahoo.com (Thales) Date: Sat, 16 Jan 2016 00:21:45 +0000 (UTC) Subject: [Rdo-list] RDO, packstack, Keypair creation is failing In-Reply-To: References: Message-ID: <1315960323.4751716.1452903705679.JavaMail.yahoo@mail.yahoo.com> Marius, I'm not sure what a "proper" host name is. I looked around for ideas. I came up with "centos7.century.net", since centurytel is my provider. I changed the hostname to that and tried running the nova key-pair command, and the same HTTP 400 error arises. But I'll bet I made a rudimentary mistake like this somewhere, since I'm just learning this. ...John On Friday, January 15, 2016 6:59 AM, Marius Cornea wrote: Giving another shot - I saw that the shell prompt shows the localhost name. Could you try setting a proper hostname (hostname -f returns a fqdn) and retry?
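Marius's check is about whether the machine's name is fully qualified: `hostname -f` should print something like host.example.com rather than a localhost alias. A quick illustrative test of that property (a pragmatic heuristic, not an RFC-grade validator):

```python
import socket


def looks_fully_qualified(name: str) -> bool:
    """Rough FQDN test: a host label plus a domain, and not a localhost alias."""
    name = name.rstrip(".")
    return "." in name and not name.startswith("localhost")


# Roughly what `hostname -f` reports for this machine:
print(socket.getfqdn())
```

On CentOS 7 the usual way to set the name persistently is `hostnamectl set-hostname host.example.com`, after which `hostname -f` should reflect the change.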
On Fri, Jan 15, 2016 at 1:48 PM, Javier Pena wrote: > ----- Original Message ----- > >> Hello Javier, > >> Okay, I did what you said. This took some fumbling around. There are 7 files. >> The two biggest are 13k and 12k , the others are 4k and below. Hope they >> aren't too big! > >> == 1> nova-api.log:============================= > > [...snip...] > >> c73d477ea608499eb117fb79b28bff80 - - -] Option "sql_connection" from group >> "DEFAULT" is deprecated. Use option "connection" from group "database". >> 2016-01-14 14:26:44.407 6099 INFO nova.api.openstack.wsgi >> [req-fdc4d3ba-9f6b-4b98-85f9-51dfeb5dca84 c2114578a647492c985508e88c06f24b >> c73d477ea608499eb117fb79b28bff80 - - -] HTTP exception thrown: Keypair data >> is invalid: failed to generate fingerprint >> 2016-01-14 14:26:44.408 6099 INFO nova.osapi_compute.wsgi.server >> [req-fdc4d3ba-9f6b-4b98-85f9-51dfeb5dca84 c2114578a647492c985508e88c06f24b >> c73d477ea608499eb117fb79b28bff80 - - -] 192.168.1.12 "POST >> /v2/c73d477ea608499eb117fb79b28bff80/os-keypairs HTTP/1.1" status: 400 len: >> 319 time: 3.4419072 > > So this is all the Nova logs show, it is just complaining that the keypair data is not valid. > > While we try to get some other ideas, does dmesg show any application segfaulting? > > Javier > > [... all other logs, snipping ...] ?the log issue, let's try this: -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From puthi at live.com Mon Jan 18 07:10:33 2016
From: puthi at live.com (Soputhi Sea)
Date: Mon, 18 Jan 2016 14:10:33 +0700
Subject: [Rdo-list] Openstack Juno Live Migration --block-migrate failed "ValueError: A NetworkModel is required here"
In-Reply-To: 
References: 
Message-ID: 

Hi,

I wonder if anybody has come across the same situation as me at all.

Puthi

From: puthi at live.com
To: rdo-list at redhat.com
Subject: Openstack Juno Live Migration --block-migrate failed "ValueError: A NetworkModel is required here"
Date: Thu, 14 Jan 2016 18:20:50 +0700

Hi,

Openstack Juno's live migration: I've been trying to get live migration to work on this version, but I keep getting the same error as below. I wonder if anybody can point me in the right direction on where to debug the problem, or if anybody has come across this problem before, please share some ideas. I've googled around for a few days already, but so far I haven't had any luck.

Note: the same nova, neutron and libvirt configuration works on Icehouse and Liberty on a different cluster, as I tested.

Thanks
Puthi

Nova Version tested: 2014.2.3 and 2014.2.4

Nova Error Log
==============
2016-01-14 17:34:08.818 6173 ERROR oslo.messaging.rpc.dispatcher [req-54581412-a194-40d5-9208-b1bf6d04f8d8 ] Exception during message handling: A NetworkModel is required here
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     payload)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 335, in decorated_function
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 323, in decorated_function
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4978, in live_migration
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     expected_attrs=expected)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 300, in _from_db_object
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     db_inst['info_cache'])
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/instance_info_cache.py", line 45, in _from_db_object
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     info_cache[field] = db_obj[field]
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 474, in __setitem__
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     setattr(self, name, value)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 75, in setter
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     field_value = field.coerce(self, name, value)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/fields.py", line 189, in coerce
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     return self._type.coerce(obj, attr, value)
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/fields.py", line 516, in coerce
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher     raise ValueError(_('A NetworkModel is required here'))
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher ValueError: A NetworkModel is required here
2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher

Nova Config
===========
[DEFAULT]
rpc_backend = qpid
qpid_hostname = management-host
auth_strategy = keystone
my_ip = 10.201.171.244
vnc_enabled = True
novncproxy_host=0.0.0.0
novncproxy_port=6080
novncproxy_base_url=http://management-host:6080/vnc_auto.html
network_api_class = nova.network.neutronv2.api.API
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.201.171.244
[baremetal]
[cells]
[cinder]
[conductor]
[database]
connection = mysql://nova:novadbpassword at db-host/nova
[ephemeral_storage_encryption]
[glance]
host = glance-host
port = 9292
api_servers=$host:$port
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri = http://management-host:5000/v2.0
identity_uri = http://management-host:35357
admin_user = nova
admin_tenant_name = service
admin_password = nova2014agprod2
[libvirt]
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE #, VIR_MIGRATE_TUNNELLED
block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_NON_SHARED_INC, VIR_MIGRATE_LIVE
[matchmaker_redis]
[matchmaker_ring]
[metrics]
[neutron]
url = http://management-host:9696
admin_username = neutron
admin_password = neutronpassword
admin_tenant_name = service
admin_auth_url = http://management-host:35357/v2.0
auth_strategy = keystone
[osapi_v3]
[rdp]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
compute=icehouse
conductor=icehouse
[vmware]
[xenserver]
[zookeeper]

Neutron Config
==============
[DEFAULT]
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = management-host
core_plugin = ml2
service_plugins = router
dhcp_lease_duration = 604800
dhcp_agents_per_network = 3
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://management-host:5000
identity_uri = http://management-host:35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutronpassword
auth_host = management-host
auth_protocol = http
auth_port = 35357
[database]
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

Neutron Plugin
==============
[ml2]
type_drivers = local,flat
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = physnet3
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
enable_security_group = False
[ovs]
enable_tunneling = False
local_ip = 10.201.171.244
network_vlan_ranges = physnet3
bridge_mappings = physnet3:br-bond0

Libvirt Config
==============
/etc/sysconfig/libvirtd
Uncomment:
LIBVIRTD_ARGS="--listen"

/etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ichavero at redhat.com Mon Jan 18 13:43:19 2016
From: ichavero at redhat.com (Ivan Chavero)
Date: Mon, 18 Jan 2016 08:43:19 -0500 (EST)
Subject: [Rdo-list] OPM downstream patches
In-Reply-To: <1452702990.1970.6.camel@redhat.com>
References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <1452702990.1970.6.camel@redhat.com>
Message-ID: <874525968.8636705.1453124599399.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Lukas Bezdicka"
> To: "Emilien Macchi" , "Rdo-list at redhat.com"
> Sent: Wednesday, January 13, 2016 10:36:30
> Subject: Re: [Rdo-list] OPM downstream patches
>
> On Tue, 2016-01-12 at 13:16 -0500, Emilien Macchi wrote:
> > Also, the way we're packaging OPM is really bad.
> >
> > * we have no SHA1 for each module we have in OPM
> /usr/share/openstack-puppet/Puppetfile
> > * we are not able to validate each module
> you have Puppetfile and our own patches as patches to tar
> > * package tarball is not pure. All other OpenStack RPMS take upstream
> > tarball so we can easily compare but in OPM... no way to do it.
> Tarballs are always taken from github releases:
> https://github.com/redhat-openstack/openstack-puppet-modules/releases
>
> And yes, dropping the single package and creating a metapackage is the
> way to go.

+1, this would simplify puppet module upgrading a lot.

> > Those issues are really critical, I would like to hear from OPM
> > folks, and find solutions that we will work on during the following
> > weeks.
> >
> > Thanks
> >
> > On 01/12/2016 12:37 PM, Emilien Macchi wrote:
> > > So I started an etherpad to discuss why we have so many downstream
> > > patches in Puppet modules.
> > >
> > > https://etherpad.openstack.org/p/opm-patches
> > >
> > > In my opinion, we should follow some best practices:
> > >
> > > * upstream first. If you find a bug, submit the patch upstream,
> > > wait for at least a positive review from a core and also successful
> > > CI jobs. Then you can backport it downstream if urgent.
> > > * backport it to stable branches when needed. The patch we want is
> > > in master and not stable? It's too easy to backport it in OPM. Do
> > > the backport in upstream/stable first, it will help to stay updated
> > > with upstream.
> > > * don't change default parameters, don't override them. Our
> > > installers are able to override any parameter, so do not hardcode
> > > this kind of change.
> > > * keep up with upstream: if you have an upstream patch under review
> > > that is already in OPM, keep it alive and make sure it lands as
> > > soon as possible.
> > >
> > > UPSTREAM FIRST please please please (I'll send you cookies if you
> > > want).
> > >
> > > If you have any question about an upstream patch, please join
> > > #puppet-openstack (freenode) and talk to the group. We're doing
> > > reviews every day and it's not difficult to land a patch.
> > >
> > > In the meantime, I would like to justify each of our backports in
> > > the etherpad and clean up a maximum of them.
> > >
> > > Thank you for reading so far,
> > >
> > >
> > > _______________________________________________
> > > Rdo-list mailing list
> > > Rdo-list at redhat.com
> > > https://www.redhat.com/mailman/listinfo/rdo-list
> > >
> > > To unsubscribe: rdo-list-unsubscribe at redhat.com
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

From ichavero at redhat.com Mon Jan 18 13:56:03 2016
From: ichavero at redhat.com (Ivan Chavero)
Date: Mon, 18 Jan 2016 08:56:03 -0500 (EST)
Subject: [Rdo-list] OPM downstream patches
In-Reply-To: 
References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> <1452732884.4030.120.camel@redhat.com> <20160114210044.GB12536@redhat.com>
Message-ID: <720802912.8647999.1453125363911.JavaMail.zimbra@redhat.com>

> I can only assume that consumers and users of RDO do not expect
> midstream patches in their packages.
> We don't ship anything custom for Nova, Cinder, etc - OPM shouldn't be
> any different.

You have a point here, the problem is that sometimes reviews take a lot of
time to be accepted and there are bugs that have to be solved really fast.
That's why, with some exceptions, there are a lot of patches in OPM's
patches repo; these patches stay there until the upstream patch is accepted.
In RDO there should be constant rebases since there's no code freeze
like in OSP, so patches that are already upstream are deleted more often.
Cheers, Ivan From ichavero at redhat.com Mon Jan 18 13:58:58 2016 From: ichavero at redhat.com (Ivan Chavero) Date: Mon, 18 Jan 2016 08:58:58 -0500 (EST) Subject: [Rdo-list] OPM downstream patches In-Reply-To: <1452813961.4030.155.camel@redhat.com> References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> <1452732884.4030.120.camel@redhat.com> <20160114210044.GB12536@redhat.com> <1452813961.4030.155.camel@redhat.com> Message-ID: <527588740.8652330.1453125538789.JavaMail.zimbra@redhat.com> ----- Mensaje original ----- > De: "Gabriele Cerami" > Para: rdo-list at redhat.com > Enviados: Jueves, 14 de Enero 2016 17:26:01 > Asunto: Re: [Rdo-list] OPM downstream patches > > On Thu, 2016-01-14 at 16:00 -0500, Jason Guiditta wrote: > > > I could be wrong, but I think this is consistent with what opm _used_ > > to do, which itself was _inconsistent_ with the rest of rdo (not sure > > how, or if this relates to pure upstream). So, for liberty, for > > example, we have upstream-liberty, which is _exactly_ what was in > > stable/liberty for the various openstack puppet modules at the time > > of > > the last (yes, manual) sync. Then we (opm) have stable/liberty, > > which > > has everything from upstream-liberty, with whatever patches have been > > added on top of it. If I am correct that is needed for rdo, but > > inconsistent with what you have described, how hard would it be to > > alter the rdo-puppet-modules stuff to work with the new reality? > > OPM has never used rdopkg for its update, if it's what you mean with > "this is needed for rdo": the patches are applied directly to the > midstream repo, and not saved in patch files to be applied to the > package. I don't know why OPM had this "privilege" and if we're going > to change it in the future, but this manual process of merging upstream > -liberty with the patches it's what has been automated in OPMCI. 
> rdopkg has been used in OPM for a while; I'm not sure if they are still
> using it, though...

> For example, the module puppet-nova in rdo-puppet-modules contains
> three branches for every upstream branch:
> - stable/liberty (tracks upstream/stable/liberty)
> - stable/liberty-patches
> - stable/liberty-tag (created with git merge liberty liberty-patches)
>
> What you call stable/liberty in rdo-puppet-modules/puppet-nova is
> called liberty-tag. It contains the upstream liberty branch at the
> latest revision, merged with all the patches present in liberty-patches
> (and if a patch added to liberty-patches is finally merged upstream, it
> is automatically removed from liberty-patches).
>
> The scripts that handle OPMCI have been modified recently (in the hope
> that they could be used to handle other repositories as well), to
> handle repositories much more closely to what rdopkg requires, so it
> may be possible to adapt to a new reality (if I understood what it
> means).
>
> (At the moment OPM CI is configured to follow only master branches from
> upstream, so you will not really find a liberty branch, but the concept
> is the same for any upstream branch.)
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

From ichavero at redhat.com Mon Jan 18 14:09:39 2016
From: ichavero at redhat.com (Ivan Chavero)
Date: Mon, 18 Jan 2016 09:09:39 -0500 (EST)
Subject: [Rdo-list] OPM downstream patches
In-Reply-To: <569875E0.5050909@redhat.com>
References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> <569875E0.5050909@redhat.com>
Message-ID: <410113261.8659705.1453126179065.JavaMail.zimbra@redhat.com>

> I think we would gain from having each Openstack Puppet module extend
> its current branch structure in a 'distro sub-branch' way, such as:
>
> master
> + +
> | |
> | +--> RDO9 +--> OSP9
> |
>
> back|port
> |
> +--> stable/liberty +--> RDO8 +--> OSP8
> |
> |
> +--> stable/kilo +--> RDO7 +--> OSP7

This should be done already for the official OpenStack puppet modules.

>
> The structural complexity could be reduced by benefiting from limiting
> forks, as described above, using a branch for each related Openstack
> distro (RDO/OSP/etc). If this is not possible, then each Openstack
> installer has to fork every Openstack module it needs.
>
> For the non-Openstack puppet modules, because there isn't any link to
> Openstack branches, a corresponding fork has to be created.
>
> Hence, a solution #3:
>
> - For each installer
> - Create a meta/wrapper package
> - For each required/desired Openstack Puppet module
> - If no 'distro sub-branches' available
> - Module's source repo is forked
> - Openstack equivalent branches structure are created
> - An RPM package is created
> - Non Puppet patches are applied if needed
> - Add module to list of meta/wrapper package
> - For each required/desired non Openstack Puppet module
> - Module's source repo is forked
> - Openstack equivalent branches structure are created
> - Puppet patches are applied in the corresponding branch if needed
> - An RPM package is created
> - Non Puppet patches are applied if needed
> - Add module to list of meta/wrapper package

There should be only one OPM package, and the installers should be modified
to work with it. To maintain an OPM package for each installer would be
a burden for the installer developers and could generate confusion for the
people that use only OPM without any installer.
Also, taking into account that every eight months or so there is a new
initiative to create a new installer, we will end up maintaining a slightly
different OPM package for who knows how many installers.

Cheers,
Ivan

From emilien at redhat.com Mon Jan 18 14:33:39 2016
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 18 Jan 2016 09:33:39 -0500
Subject: [Rdo-list] OPM downstream patches
In-Reply-To: <720802912.8647999.1453125363911.JavaMail.zimbra@redhat.com>
References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> <1452732884.4030.120.camel@redhat.com> <20160114210044.GB12536@redhat.com> <720802912.8647999.1453125363911.JavaMail.zimbra@redhat.com>
Message-ID: <569CF7C3.6000801@redhat.com>

On 01/18/2016 08:56 AM, Ivan Chavero wrote:
>
>> I can only assume that consumers and users of RDO do not expect
>> midstream patches in their packages.
>> We don't ship anything custom for Nova, Cinder, etc - OPM shouldn't be
>> any different.
>
> You have a point here, the problem is that sometimes reviews take a lot of
> time to be accepted and there are bugs that have to be solved really fast.

What? Are you saying Puppet OpenStack patches take time to be merged?
You can ask the contributors: if your patch is passing CI, I'm sure
it's landed the same day or in the next 2 days maximum.

By the way, we have a weekly meeting where we do bug and review triage.
If you have any patch that needs attention, please come up and we'll
help it make it.

So please, don't say it takes time to merge Puppet patches upstream.

> That's why, with some exceptions, there are a lot of patches in OPM's
> patches repo; these patches stay there until the upstream patch is accepted.
> In RDO there should be constant rebases since there's no code freeze
> From my point of view we should not have any fork of the puppet modules;
> we should be using the upstream release of the module and add patches
> when needed.
>
> Cheers,
> Ivan
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

--
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: 

From hguemar at fedoraproject.org Mon Jan 18 15:00:03 2016
From: hguemar at fedoraproject.org (hguemar at fedoraproject.org)
Date: Mon, 18 Jan 2016 15:00:03 +0000 (UTC)
Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO meeting
Message-ID: <20160118150003.8F15060A4009@fedocal02.phx2.fedoraproject.org>

Dear all,

You are kindly invited to the meeting:
RDO meeting on 2016-01-20 from 15:00:00 to 16:00:00 UTC
At rdo at irc.freenode.net

The meeting will be about:
RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Packaging ](https://etherpad.openstack.org/p/RDO-Packaging)

Every Wednesday on #rdo on Freenode IRC

Source: https://apps.fedoraproject.org/calendar/meeting/2017/

From ichavero at redhat.com Mon Jan 18 15:22:56 2016
From: ichavero at redhat.com (Ivan Chavero)
Date: Mon, 18 Jan 2016 10:22:56 -0500 (EST)
Subject: [Rdo-list] OPM downstream patches
In-Reply-To: <569CF7C3.6000801@redhat.com>
References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> <1452732884.4030.120.camel@redhat.com> <20160114210044.GB12536@redhat.com> <720802912.8647999.1453125363911.JavaMail.zimbra@redhat.com> <569CF7C3.6000801@redhat.com>
Message-ID: <369100653.8704682.1453130576701.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Emilien Macchi"
> To: rdo-list at redhat.com
> Sent: Monday, January 18, 2016 8:33:39
> Subject: Re:
[Rdo-list] OPM downstream patches
>
> On 01/18/2016 08:56 AM, Ivan Chavero wrote:
> >
> >> I can only assume that consumers and users of RDO do not expect
> >> midstream patches in their packages.
> >> We don't ship anything custom for Nova, Cinder, etc - OPM shouldn't be
> >> any different.
> >
> > You have a point here, the problem is that sometimes reviews take a lot of
> > time to be accepted and there are bugs that have to be solved really fast.
>
> What? Are you saying Puppet OpenStack patches take time to be merged?
> You can ask the contributors: if your patch is passing CI, I'm sure
> it's landed the same day or in the next 2 days maximum.

I wasn't criticizing the review process, I was justifying the existence of
patches on the RPM package. My point was that sometimes you can't wait for
the review process to finish.

> By the way, we have a weekly meeting where we do bug and review triage.
> If you have any patch that needs attention, please come up and we'll
> help it make it.
>
> So please, don't say it takes time to merge Puppet patches upstream.
>

I think that you misunderstood my comment, I said **sometimes**.

Cheers,
Ivan

From rbowen at redhat.com Mon Jan 18 15:54:20 2016
From: rbowen at redhat.com (Rich Bowen)
Date: Mon, 18 Jan 2016 10:54:20 -0500
Subject: [Rdo-list] RDO blog roundup, Jan 18, 2016
Message-ID: <569D0AAC.8060105@redhat.com>

It's been a slow few weeks on the RDO blog front. Keep those posts coming. Tell us what you've been working on!

NFV and Open Networking with RHEL OpenStack Platform, by Nir Yechiel

I was honored to be invited to speak at a local Intel event about Red Hat and what we are doing in the NFV space. I only had 30 minutes, so I tried to provide a high-level overview of our offering, covering some main points: … read more at http://tm3.org/4h

RDO Community Day @ FOSDEM, by Rich Bowen

The schedule has now been published at https://www.rdoproject.org/events/rdo-day-fosdem-2016/ …
read more at http://tm3.org/3z

Openstack Neutron: troubleshooting and solving common problems, by Arie Bergman

Important note: this post is based on the great sessions "I Can't Ping My VM! Learn How to Debug Neutron and Solve Common Problems" by Rossella Sblendido and "OpenStack Neutron Troubleshooting" by Assaf Muller, so the credit goes to them. I simply gathered it here in written form and added a little bit of description and examples. Enjoy =) … read more at http://tm3.org/4i

RDO doc day and test day, by Rich Bowen

With the Mitaka milestone 2 release due very soon, the RDO community has two events in the coming days. … read more at http://tm3.org/4j

Hackery setting up RDO Kilo on CentOS 7.2 with Mongodb && Nagios up and running as of 01/08/2016, by Boris Derzhavets

I have noticed several questions (ask.openstack.org, stackoverflow.com) regarding the mentioned ongoing issue with mongodb-server and nagios when installing RDO Kilo 2015.1.1 on CentOS 7.2 via packstack. At the moment I see a hack, provided below, which might be applied as a pre-installation step or a fix after the initial packstack crash. Bug submitted to bugzilla.redhat.com. … read more at http://tm3.org/4k

AIO RDO Liberty && several external networks VLAN provider setup, by Boris Derzhavets

The post below addresses the question of when an AIO RDO Liberty node has to have external networks of VLAN type with predefined vlan tags. A straightforward packstack --allinone install doesn't allow one to achieve the desired network configuration; an external network provider of vlan type appears to be required. In this particular case, office networks 10.10.10.0/24 vlan tagged (157), 10.10.57.0/24 vlan tagged (172), 10.10.32.0/24 vlan tagged (200) already exist when the RDO install is running. If demo_provision was "y", then delete router1 and the created external network of VXLAN type …
read more at http://tm3.org/4l

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From rbowen at redhat.com Mon Jan 18 16:15:51 2016
From: rbowen at redhat.com (Rich Bowen)
Date: Mon, 18 Jan 2016 11:15:51 -0500
Subject: [Rdo-list] Unanswered ask.openstack.org "RDO" questions (Jan 18, 2016)
Message-ID: <569D0FB7.7020407@redhat.com>

Thanks to everyone who helped work on the list of unanswered RDO questions last week. This week we have 62 unanswered questions:

Create a new dashboard Error
https://ask.openstack.org/en/question/85593/why-is-usrbinopenstack-domain-list-hanging/ Tags: puppet, keystone, kilo [ RDO ] Could not find declared class ::remote::db https://ask.openstack.org/en/question/84820/rdo-could-not-find-declared-class-remotedb/ Tags: rdo Sahara SSHException: Error reading SSH protocol banner https://ask.openstack.org/en/question/84710/sahara-sshexception-error-reading-ssh-protocol-banner/ Tags: sahara, icehouse, ssh, vanila Error Sahara create cluster: 'Error attach volume to instance https://ask.openstack.org/en/question/84651/error-sahara-create-cluster-error-attach-volume-to-instance/ Tags: sahara, attach-volume, vanila, icehouse Creating Sahara cluster: Error attach volume to instance https://ask.openstack.org/en/question/84650/creating-sahara-cluster-error-attach-volume-to-instance/ Tags: sahara, attach-volume, hadoop, icehouse, vanilla Routing between two tenants https://ask.openstack.org/en/question/84645/routing-between-two-tenants/ Tags: kilo, fuel, rdo, routing Freeing IP from FLAT network setup https://ask.openstack.org/en/question/84063/freeing-ip-from-flat-network-setup/ Tags: juno, existing-network, rdo, neutron, flat How to deploy Virtual network function (VNF) in Opnstack integrated Opendaylight https://ask.openstack.org/en/question/84061/how-to-deploy-virtual-network-function-vnf-in-opnstack-integrated-opendaylight/ Tags: vnf, kilo, opendaylight, nfv cann't install python-keystone-auth-token [Close Duplicate] https://ask.openstack.org/en/question/83942/cannt-install-python-keystone-auth-token-close-duplicate/ Tags: python-keystone, openstack-swift RDO kilo installation metadata widget doesn't work https://ask.openstack.org/en/question/83870/rdo-kilo-installation-metadata-widget-doesnt-work/ Tags: kilo, flavor, metadata Not able to ssh into RDO Kilo instance https://ask.openstack.org/en/question/83707/not-able-to-ssh-into-rdo-kilo-instance/ Tags: rdo, instance-ssh No able to create an instance in odl integrated RDO Kilo 
openstack https://ask.openstack.org/en/question/83700/no-able-to-create-an-instance-in-odl-integrated-rdo-kilo-openstack/ Tags: kilo, rdo, opendaylight, kilo-neutron, integration redhat RDO enable access to swift via S3 https://ask.openstack.org/en/question/83607/redhat-rdo-enable-access-to-swift-via-s3/ Tags: swift, s3 openstack baremetal introspection internal server error https://ask.openstack.org/en/question/82790/openstack-baremetal-introspection-internal-server-error/ Tags: rdo, ironic-inspector, tripleo glance\nova command line SSL failure https://ask.openstack.org/en/question/82692/glancenova-command-line-ssl-failure/ Tags: glance, kilo-openstack, ssl Cannot create/update flavor metadata from horizon https://ask.openstack.org/en/question/82477/cannot-createupdate-flavor-metadata-from-horizon/ Tags: rdo, kilo, flavor, metadata Installing openstack using packstack (rdo) failed https://ask.openstack.org/en/question/82473/installing-openstack-using-packstack-rdo-failed/ Tags: rdo, packstack, installation-error, keystone can't start instances after upgrade/reboot https://ask.openstack.org/en/question/82205/cant-start-instances-after-upgradereboot/ Tags: cinder, iscsi, rdo, juno_rdo Cinder LVM iSCSI can't attach https://ask.openstack.org/en/question/82031/cinder-lvm-iscsi-cant-attach/ Tags: lvmiscsi, cinder, kilo, cento7, rdo external "NFS"-Network for all vms https://ask.openstack.org/en/question/81709/external-nfs-network-for-all-vms/ Tags: external-network, juno-neutron, ovs RDO - Qrouters lose IP on public network https://ask.openstack.org/en/question/80761/rdo-qrouters-lose-ip-on-public-network/ Tags: rdo, juno_rdo, floating-ip, qrouter Missing veth pair bond and wrong/superfluous physical interface? https://ask.openstack.org/en/question/80556/missing-veth-pair-bond-and-wrongsuperfluous-physical-interface/ Tags: rdo, kilo-neutron, neutron-openvswitch, brctl, packstack VMware Host Backend causes No valid host was found. Bug ??? 
https://ask.openstack.org/en/question/79738/vmware-host-backend-causes-no-valid-host-was-found-bug/ Tags: vmware, rdo -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From rbowen at redhat.com Mon Jan 18 19:27:30 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 18 Jan 2016 14:27:30 -0500 Subject: [Rdo-list] OPNFV B Release RC0 In-Reply-To: <56987C79.4090105@redhat.com> References: <56987C79.4090105@redhat.com> Message-ID: <569D3CA2.2070907@redhat.com> Where would I find the release announcement? On 01/14/2016 11:58 PM, Dan Radez wrote: > Project Apex is the OPNFV project that is using RDO manager to deploy > OFNFV. > > We posted Release Brahmaputra RCO ( release 2 RCO ) today: > > RPM to use on a virutalization host install of CentOS7: > http://artifacts.opnfv.org/apex/brahmaputra/opnfv-apex-2.7-brahmaputra.1.rc0.noarch.rpm > > > ISO install including CentOS 7: > http://artifacts.opnfv.org/apex/brahmaputra/opnfv-brahmaputra.1.rc0.iso > > Installation Documentation: > http://artifacts.opnfv.org/apex/docs/installation-instructions/installation-instructions.html > > > One correction to the documentation, the settings files referenced in > the docs have not yet been moved to /etc and can be found in > /var/opt/opnfv/ > > Please join #opnfv-apex or email opnfv-users at lists.opnfv.org to provide > feedback or ask questions if you have a chance to take this for a spin. > We would enjoy hearing from you. 
>
> Dan Radez
> irc: radez
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From rbowen at redhat.com Mon Jan 18 19:45:03 2016
From: rbowen at redhat.com (Rich Bowen)
Date: Mon, 18 Jan 2016 14:45:03 -0500
Subject: [Rdo-list] RDO/OpenStack Meetups, week of January 18th
Message-ID: <569D40BF.1010403@redhat.com>

The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/events

If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered.

--Rich

* Monday January 18 in Guadalajara, MX: snap - A modern and performant telemetry framework - http://www.meetup.com/OpenStack-GDL/events/227277059/
* Tuesday January 19 in Tehran, IR: Mastering OpenStack - Step 16 - Network Design / Docker on OpenStack - Step 8 - http://www.meetup.com/Iran-OpenStack/events/228131053/
* Tuesday January 19 in Houston, TX, US: HP talks OpenStack - http://www.meetup.com/openstackhoustonmeetup/events/225092390/
* Tuesday January 19 in Manchester, 18, GB: Using new clouds, private clouds, and how things can go wrong - http://www.meetup.com/Manchester-OpenStack-Meetup/events/227459376/
* Tuesday January 19 in Chesterfield, MO, US: January meetup - http://www.meetup.com/OpenStack-STL/events/227517187/
* Wednesday January 20 in São Paulo, BR: 9º Hangout OpenStack Brasil - http://www.meetup.com/Openstack-Brasil/events/227936853/
* Thursday January 21 in Pasadena, CA, US: Join us at SCaLE 14x!
- http://www.meetup.com/OpenStack-LA/events/228076480/
* Thursday January 21 in Austin, TX, US: OpenStack Meetup Jan 21 - Coming soon! - http://www.meetup.com/OpenStack-Austin/events/227830355/
* Thursday January 21 in San Francisco, CA, US: SFBay OpenStack Advanced Track #OSSFO Topic: OpenStack Security - http://www.meetup.com/openstack/events/227192009/
* Thursday January 21 in Atlanta, GA, US: OpenStack Meetup (Topic TBD) - http://www.meetup.com/openstack-atlanta/events/226994579/

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From gilles at redhat.com Tue Jan 19 02:13:59 2016
From: gilles at redhat.com (Gilles Dubreuil)
Date: Tue, 19 Jan 2016 13:13:59 +1100
Subject: [Rdo-list] OPM downstream patches
In-Reply-To: <369100653.8704682.1453130576701.JavaMail.zimbra@redhat.com>
References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> <1452732884.4030.120.camel@redhat.com> <20160114210044.GB12536@redhat.com> <720802912.8647999.1453125363911.JavaMail.zimbra@redhat.com> <569CF7C3.6000801@redhat.com> <369100653.8704682.1453130576701.JavaMail.zimbra@redhat.com>
Message-ID: <569D9BE7.8010008@redhat.com>

On 19/01/16 02:22, Ivan Chavero wrote:
>
> ----- Original Message -----
>> From: "Emilien Macchi"
>> To: rdo-list at redhat.com
>> Sent: Monday, January 18, 2016 8:33:39
>> Subject: Re: [Rdo-list] OPM downstream patches
>>
>> On 01/18/2016 08:56 AM, Ivan Chavero wrote:
>>>
>>>> I can only assume that consumers and users of RDO do not expect
>>>> midstream patches in their packages.
>>>> We don't ship anything custom for Nova, Cinder, etc - OPM shouldn't be
>>>> any different.
>>>
>>> You have a point here, the problem is that sometimes reviews take a lot of
>>> time to be accepted and there are bugs that have to be solved really fast.
>>
>> What? Are you saying Puppet OpenStack patches take time to be merged?
>> You can ask the contributors; if your patch is passing CI, I'm sure >> it's landed the same day or within the next 2 days maximum. > I wasn't criticizing the review process; I was justifying the existence of > patches on the RPM package. > My point was that sometimes you can't wait for the review process to finish > Thank you Ivan, this is very important feedback. We all know Emilien is working very hard to make sure things are moving. Meanwhile, sometimes patches take longer than 2 days, that's for sure! >> By the way, we have a weekly meeting where we do bug and review triage. >> If you have any patch that needs attention, please come up and we'll >> help it make it. > > > > >> >> So please, don't say it takes time to merge Puppet patches upstream. >> > I think that you misunderstood my comment; I said **sometimes** > > Cheers, > Ivan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From gilles at redhat.com Tue Jan 19 02:20:48 2016 From: gilles at redhat.com (Gilles Dubreuil) Date: Tue, 19 Jan 2016 13:20:48 +1100 Subject: [Rdo-list] OPM downstream patches In-Reply-To: <410113261.8659705.1453126179065.JavaMail.zimbra@redhat.com> References: <569539CA.9060806@redhat.com> <56954313.1010001@redhat.com> <569575DA.9050101@redhat.com> <569875E0.5050909@redhat.com> <410113261.8659705.1453126179065.JavaMail.zimbra@redhat.com> Message-ID: <569D9D80.2000801@redhat.com> On 19/01/16 01:09, Ivan Chavero wrote: > >> I think we would gain from having each Openstack Puppet module extend >> its current branch structure in a 'distro sub-branch' way, such as:
>>
>> master
>>   +                    +
>>   |                    |
>>   |                    +--> RDO9 +--> OSP9
>>   |
>> back|port
>>   |
>>   +--> stable/liberty  +--> RDO8 +--> OSP8
>>   |
>>   |
>>   +--> stable/kilo     +--> RDO7 +--> OSP7
> This should be done already for the official OpenStack puppet modules > Really, I wish.
Do you have any pointers? >> >> >> The structural complexity could be reduced by benefiting from limiting >> forks, as described above, using a branch for each related Openstack distro >> (RDO/OSP/etc). If this is not possible, then each Openstack installer has >> to fork every Openstack module it needs. >> >> For the non Openstack puppet modules, since there isn't any link to >> Openstack branches, a corresponding fork has to be created. >> >> Hence, a solution #3:
>>
>> - For each installer
>>   - Create a meta/wrapper package
>>   - For each required/desired Openstack Puppet module
>>     - If no 'distro sub-branches' available
>>       - Module's source repo is forked
>>       - Openstack equivalent branch structure is created
>>       - An RPM package is created
>>       - Non Puppet patches are applied if needed
>>     - Add module to list of meta/wrapper package
>>   - For each required/desired non Openstack Puppet module
>>     - Module's source repo is forked
>>     - Openstack equivalent branch structure is created
>>     - Puppet patches are applied in the corresponding branch if needed
>>     - An RPM package is created
>>     - Non Puppet patches are applied if needed
>>     - Add module to list of meta/wrapper package
> > > There should be only one OPM package and the installers should be modified > to work with it. To maintain an OPM package for each installer would be > a burden for the installer developers and could generate confusion for the > people who use only OPM without any installer.
> Also, taking into account that every eight months or so there is a new > initiative to create a new installer, we will end up maintaining a slightly > different OPM package for who knows how many installers > > Cheers, > Ivan > From javier.pena at redhat.com Tue Jan 19 08:58:22 2016 From: javier.pena at redhat.com (Javier Pena) Date: Tue, 19 Jan 2016 03:58:22 -0500 (EST) Subject: [Rdo-list] [delorean] Delorean planned outage on January 21 In-Reply-To: <1723605311.14978220.1453193649575.JavaMail.zimbra@redhat.com> Message-ID: <972612602.14979085.1453193902004.JavaMail.zimbra@redhat.com> Dear rdo-list, Due to planned maintenance on the underlying infrastructure, the Delorean server will be under maintenance on January 21, from 9:00 UTC to 23:00 UTC. During the maintenance window, the Delorean repositories will be available (served from the backup system), but no new commits will be processed. If you have any questions or concerns, please do not hesitate to contact us. Regards, Javier Peña From trown at redhat.com Tue Jan 19 11:49:10 2016 From: trown at redhat.com (John Trowbridge) Date: Tue, 19 Jan 2016 06:49:10 -0500 Subject: [Rdo-list] What should be RDO Definition of Done? In-Reply-To: References: Message-ID: <569E22B6.1050002@redhat.com> On 01/12/2016 05:09 AM, Haïkel wrote: > Hello, > > In an effort to improve the RDO release process, we came across the idea > of having a defined definition of done. > What are the criteria to decide if a release of RDO is DONE? > > * RDO installs w/ packstack > * RDO installs w/ RDO Manager For RDO Manager, I would propose the following detailed criteria: * Deploys with 3 controllers in HA with at least 1 ceph node and 1 compute node * The above setup passes 100% of tempest smoke tagged tests (w/ exception below) * A skipfile is allowed for tempest, but every entry in the skipfile must be associated with a Bugzilla. In order to be allowed under DoD, the associated Bugzilla must be deemed a non-blocker.
* There must be documentation on rdoproject.org that can be followed without workarounds to get to this setup outside of CI. > * Documentation is up to date > etc .... > > I added the topic to the RDO meeting agenda, but I'd like to broaden > the discussion beyond the pool of people coming > to the meetings and even technical contributors. > > Regards, > H. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From rbowen at redhat.com Tue Jan 19 20:09:41 2016 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 19 Jan 2016 15:09:41 -0500 Subject: [Rdo-list] Scheduled downtime of RDO website Message-ID: <569E9805.3000603@redhat.com> OS1, the cloud on which the RDO website runs, has scheduled downtime on Thursday, January 21: Scheduled Date: 21-Jan-2016 15:00 UTC/10:00 EST Estimated Time Required: 8 hours This will impact our planned Docs Day. My apologies for the late notice. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From chkumar246 at gmail.com Wed Jan 20 10:11:08 2016 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 20 Jan 2016 15:41:08 +0530 Subject: [Rdo-list] RDO Bug Statistics [2016-01-20] Message-ID: # RDO Bugs on 2016-01-20 This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at . To report a new bug against RDO, go to: ## Summary - Open (NEW, ASSIGNED, ON_DEV): 381 - Fixed (MODIFIED, POST, ON_QA): 211 ## Number of open bugs by component dib-utils [ 2] diskimage-builder [ 2] distribution [ 14] ++++++ dnsmasq [ 1] Documentation [ 4] + instack [ 4] + instack-undercloud [ 28] ++++++++++++ iproute [ 1] openstack-ceilometer [ 2] openstack-cinder [ 12] +++++ openstack-foreman-inst... [ 2] openstack-glance [ 2] openstack-heat [ 5] ++ openstack-horizon [ 2] openstack-ironic [ 2] openstack-ironic-disco...
[ 1] openstack-keystone [ 10] ++++ openstack-manila [ 10] ++++ openstack-neutron [ 12] +++++ openstack-nova [ 20] ++++++++ openstack-packstack [ 89] ++++++++++++++++++++++++++++++++++++++++ openstack-puppet-modules [ 18] ++++++++ openstack-selinux [ 11] ++++ openstack-swift [ 3] + openstack-tripleo [ 27] ++++++++++++ openstack-tripleo-heat... [ 5] ++ openstack-tripleo-imag... [ 2] openstack-trove [ 1] openstack-tuskar [ 2] openstack-utils [ 1] Package Review [ 10] ++++ python-glanceclient [ 2] python-keystonemiddleware [ 1] python-neutronclient [ 3] + python-novaclient [ 1] python-openstackclient [ 5] ++ python-oslo-config [ 2] rdo-manager [ 52] +++++++++++++++++++++++ rdo-manager-cli [ 6] ++ rdopkg [ 1] RFEs [ 2] tempest [ 1] ## Open bugs This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed. (381 bugs) ### dib-utils (2 bugs) [1263779 ] http://bugzilla.redhat.com/1263779 (NEW) Component: dib-utils Last change: 2015-12-07 Summary: Packstack Ironic admin_url misconfigured in nova.conf [1283812 ] http://bugzilla.redhat.com/1283812 (NEW) Component: dib-utils Last change: 2015-12-10 Summary: local_interface=bond0.120 in undercloud.conf create broken network configuration ### diskimage-builder (2 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version ### distribution (14 bugs) [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1271169 ] http://bugzilla.redhat.com/1271169 (NEW) Component: distribution Last change: 2015-10-13 Summary: [doc] virtual environment setup [1290163 ] 
http://bugzilla.redhat.com/1290163 (NEW) Component: distribution Last change: 2015-12-10 Summary: Tracker: Review requests for new RDO Mitaka packages [1300013 ] http://bugzilla.redhat.com/1300013 (NEW) Component: distribution Last change: 2016-01-19 Summary: openstack-aodh now requires python-gnocchiclient [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2016-01-04 Summary: python-backports: /usr/lib/python2.6/site-packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site-packages/backports/__init__.pyc, but /usr/lib/python2.6/site-packages is being added to sys.path [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1275608 ] http://bugzilla.redhat.com/1275608 (NEW) Component: distribution Last change: 2015-10-27 Summary: EOL'ed rpm file URL not up to date [1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution Last change: 2015-09-16 Summary: Memcached not built with SASL support [1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last
change: 2015-12-10 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### Documentation (4 bugs) [1272108 ] http://bugzilla.redhat.com/1272108 (NEW) Component: Documentation Last change: 2015-10-15 Summary: [DOC] External network should be documents in RDO manager installation [1271793 ] http://bugzilla.redhat.com/1271793 (NEW) Component: Documentation Last change: 2015-10-14 Summary: rdo-manager doc has incomplete /etc/hosts configuration [1271888 ] http://bugzilla.redhat.com/1271888 (NEW) Component: Documentation Last change: 2015-10-15 Summary: step required to build images for overcloud [1272111 ] http://bugzilla.redhat.com/1272111 (NEW) Component: Documentation Last change: 2015-10-15 Summary: RFE : document how to access horizon in RDO manager VIRT setup ### instack (4 bugs) [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (28 bugs) [1271200 ] http://bugzilla.redhat.com/1271200 (ASSIGNED) Component: instack-undercloud Last change: 2015-10-20 Summary: Overcloud images contain Kilo repos [1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last 
change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2015-09-23 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager-oscplugin'. [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six."
[1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1270585 ] http://bugzilla.redhat.com/1270585 (NEW) Component: instack-undercloud Last change: 2015-10-19 Summary: instack isntallation fails with parse error: Invalid string liberty on CentOS [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. (4048492 < 4G)" [1266101 ] http://bugzilla.redhat.com/1266101 (NEW) Component: instack-undercloud Last change: 2015-09-29 Summary: instack-virt-setup fails on CentOS7 [1299958 ] http://bugzilla.redhat.com/1299958 (NEW) Component: instack-undercloud Last change: 2016-01-19 Summary: instack-virt-setup does not set explicit path, can't find binaries [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. 
See log for details.', 'os-refresh-config')" [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. [1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python-flask-babel package: "Error: Package: openstack-tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo-management) Requires: python-flask-babel" [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output [1266451 ] http://bugzilla.redhat.com/1266451 (NEW) Component: instack-undercloud Last change:
2015-09-30 Summary: instack-undercloud fails to setup seed vm, parse error while creating ssh key [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build-images script ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2016-01-18 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (2 bugs) [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2016-01-04 Summary: python-redis is not installed with packstack allinone [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Wrong alarms order on 'severity' field ### openstack-cinder (12 bugs) [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1268182 ] http://bugzilla.redhat.com/1268182 (NEW) Component: openstack-cinder Last change: 2015-10-02 Summary: cinder spontaneously sets instance root device to 'available' [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1229551 ] http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder
Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2016-01-04 Summary: should use new names in cinder-dist.conf [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2015-11-25 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] ### openstack-foreman-installer (2 bugs) [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance-common: python-glance ### openstack-heat (5 bugs) [1291047 ] http://bugzilla.redhat.com/1291047 (NEW) Component: openstack-heat Last change: 2016-01-07 Summary: (RDO Mitaka)
Overcloud deployment failed: Exceeded max scheduling attempts [1293961 ] http://bugzilla.redhat.com/1293961 (ASSIGNED) Component: openstack-heat Last change: 2016-01-07 Summary: [SFCI] Heat template failed to start because Property error: ... net_cidr (constraint not found) [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message ### openstack-horizon (2 bugs) [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable [1275656 ] http://bugzilla.redhat.com/1275656 (NEW) Component: openstack-horizon Last change: 2015-10-28 Summary: FontAwesome lib bad path ### openstack-ironic (2 bugs) [1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: openstack-ironic Last change: 2016-01-04 Summary: IPMI driver for Ironic should support RAID for operating system/root parition [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. 
(HTTP 409) ### openstack-ironic-discoverd (1 bug) [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (10 bugs) [1289267 ] http://bugzilla.redhat.com/1289267 (NEW) Component: openstack-keystone Last change: 2015-12-09 Summary: Mitaka: keystone.py is deprecated for WSGI implementation [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack-keystone RPM [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-10-26 Summary: keystone-all process reaches 100% CPU consumption [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-12-07 Summary: keystone: add token flush cronjob script to keystone package [1280530 ] http://bugzilla.redhat.com/1280530 (NEW) Component: openstack-keystone Last change: 2016-01-19 Summary: Fernet tokens cannot read key files with SELInuxz enabeld [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1284871 ] http://bugzilla.redhat.com/1284871 (NEW) Component: openstack-keystone Last change: 2015-11-24 Summary: /usr/share/keystone/wsgi-keystone.conf is missing group=keystone [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non-existent class [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component:
openstack-keystone Last change: 2015-11-24 Summary: wrong log directories in /usr/share/keystone/wsgi-keystone.conf ### openstack-manila (10 bugs) [1278918 ] http://bugzilla.redhat.com/1278918 (NEW) Component: openstack-manila Last change: 2015-12-06 Summary: manila-api fails to start without updates from upstream stable/liberty [1272957 ] http://bugzilla.redhat.com/1272957 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver: same volumes are re-used with vol mapped layout after restarting manila services [1277787 ] http://bugzilla.redhat.com/1277787 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: Glusterfs_driver: Export location for Glusterfs NFS-Ganesha is incorrect [1272960 ] http://bugzilla.redhat.com/1272960 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Glusterfs NFS-Ganesha share's export location should be uniform for both nfsv3 & nfsv4 protocols [1277792 ] http://bugzilla.redhat.com/1277792 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: glusterfs_driver: Access-deny for glusterfs driver should be dynamic [1278919 ] http://bugzilla.redhat.com/1278919 (NEW) Component: openstack-manila Last change: 2015-12-06 Summary: AvailabilityZoneFilter is not working in manila-scheduler [1272962 ] http://bugzilla.redhat.com/1272962 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Attempt to create share fails ungracefully when backend gluster volumes aren't exported [1272970 ] http://bugzilla.redhat.com/1272970 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_native: cannot connect via SSH using password authentication to multiple gluster clusters with different passwords [1272968 ] http://bugzilla.redhat.com/1272968 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs vol based layout: Deleting a share created from snapshot should also delete its backend gluster volume
[1272958 ] http://bugzilla.redhat.com/1272958 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver - vol based layout: share size may be misleading ### openstack-neutron (12 bugs) [1282403 ] http://bugzilla.redhat.com/1282403 (NEW) Component: openstack-neutron Last change: 2016-01-11 Summary: Errors when running tempest.api.network.test_ports with IPAM reference driver enabled [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1269610 ] http://bugzilla.redhat.com/1269610 (ASSIGNED) Component: openstack-neutron Last change: 2015-11-19 Summary: Overcloud deployment fails - openvswitch agent is not running and nova instances end up in error state [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". 
[1266381 ] http://bugzilla.redhat.com/1266381 (NEW) Component: openstack-neutron Last change: 2015-12-22 Summary: OpenStack Liberty QoS feature is not working on EL7 as is need MySQL-python-1.2.5 [1281308 ] http://bugzilla.redhat.com/1281308 (NEW) Component: openstack-neutron Last change: 2015-12-30 Summary: QoS policy is not enforced when using a previously used port [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks [1280258 ] http://bugzilla.redhat.com/1280258 (NEW) Component: openstack-neutron Last change: 2015-11-11 Summary: tenants seem like they are able to detach admin enforced QoS policies from ports or networks [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https [1065826 ] http://bugzilla.redhat.com/1065826 (ASSIGNED) Component: openstack-neutron Last change: 2015-12-15 Summary: [RFE] [neutron] neutron services needs more RPM granularity ### openstack-nova (20 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Kilo assigning ipv6 address, even though its disabled. 
[1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: horizon console uses http when horizon is set to use ssl
[1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly
[1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: [nova] hw:numa_nodes=0 causes divide by zero
[1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: novnc init script doesnt write to log
[1271033 ] http://bugzilla.redhat.com/1271033 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: nova.conf.sample is out of date
[1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given)
[1278808 ] http://bugzilla.redhat.com/1278808 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Guest fails to use more than 1 vCPU with smpboot: do_boot_cpu failed(-1) to wakeup
[1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Nova - db connection string present on compute nodes
[1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: nova object store allow get object after date exires
[1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: nova: fail to edit project quota with DataError from nova
[1294747 ] http://bugzilla.redhat.com/1294747 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Migration fails when the SRIOV PF is not online
[1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-10-17 Summary: Ensure translations are installed correctly and picked up at runtime
[1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Nova AVC messages
[1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2016-01-19 Summary: logrotate should copytruncate to avoid openstack logging to deleted files
[1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Installation of openstack-nova-compute fails on PowerKVM
[1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: v4-fixed-ip= not working with juno nova networking

### openstack-packstack (89 bugs)

[1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any tenant
[1284182 ] http://bugzilla.redhat.com/1284182 (NEW) Component: openstack-packstack Last change: 2015-11-21 Summary: Unable start Keystone, core dump
[1296844 ] http://bugzilla.redhat.com/1296844 (NEW) Component: openstack-packstack Last change: 2016-01-08 Summary: RDO Kilo packstack AIO install fails on CentOS 7.2. Error: Unable to connect to mongodb server! (192.169.142.54:27017)
[1297692 ] http://bugzilla.redhat.com/1297692 (ON_DEV) Component: openstack-packstack Last change: 2016-01-18 Summary: Raise MariaDB max connections limit
[1201612 ] http://bugzilla.redhat.com/1201612 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-03 Summary: Interactive - Packstack asks for Tempest details even when Tempest install is declined
[1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm)
[982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way
[1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt
[1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required
[1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network-scripts/ifcfg-br-ex
[1298364 ] http://bugzilla.redhat.com/1298364 (NEW) Component: openstack-packstack Last change: 2016-01-13 Summary: rdo liberty install centos 7 nova-network error:CONFIG_NEUTRON_METADATA_PW_UNQUOTED
[1292271 ] http://bugzilla.redhat.com/1292271 (NEW) Component: openstack-packstack Last change: 2015-12-18 Summary: Receive Msg 'Error: Could not find user glance'
[1275803 ] http://bugzilla.redhat.com/1275803 (NEW) Component: openstack-packstack Last change: 2015-12-03 Summary: packstack --allinone fails on Fedora 22-3 during _keystone.pp
[1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack
[1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails
[1255369 ] http://bugzilla.redhat.com/1255369 (NEW) Component: openstack-packstack Last change: 2015-12-03 Summary: Improve session settings for horizon
[1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 Summary: swift: Admin user does not have permissions to see containers created by glance service
[1254389 ] http://bugzilla.redhat.com/1254389 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-13 Summary: Can no longer run packstack to maintain cluster
[1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver
[953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon
[1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation
[1232455 ] http://bugzilla.redhat.com/1232455 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Errors install kilo on fedora21
[1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config
[1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands'
[1063393 ] http://bugzilla.redhat.com/1063393 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-02 Summary: RFE: Provide option to set bind_host/bind_port for API services
[1291492 ] http://bugzilla.redhat.com/1291492 (NEW) Component: openstack-packstack Last change: 2016-01-18 Summary: Unfriendly behavior of IP filtering for VXLAN with EXCLUDE_SERVERS
[1290415 ] http://bugzilla.redhat.com/1290415 (NEW) Component: openstack-packstack Last change: 2016-01-09 Summary: Error: Unable to retrieve volume limit information when accessing System Defaults in Horizon
[1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail
[1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can
[1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start
[1269535 ] http://bugzilla.redhat.com/1269535 (NEW) Component: openstack-packstack Last change: 2015-10-07 Summary: packstack script does not test to see if the rc files *were* created.
[1282746 ] http://bugzilla.redhat.com/1282746 (NEW) Component: openstack-packstack Last change: 2016-01-08 Summary: Swift's proxy-server is not configured to use ceilometer
[1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance
[1242647 ] http://bugzilla.redhat.com/1242647 (NEW) Component: openstack-packstack Last change: 2015-12-07 Summary: Nova keypair doesn't work with Nova Networking
[1239027 ] http://bugzilla.redhat.com/1239027 (NEW) Component: openstack-packstack Last change: 2015-12-07 Summary: please move httpd log files to corresponding dirs
[1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200
[1116019 ] http://bugzilla.redhat.com/1116019 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-02 Summary: AMQP1.0 server configurations needed
[1266196 ] http://bugzilla.redhat.com/1266196 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Packstack Fails on prescript.pp with "undefined method 'unsafe_load_file' for Psych:Module"
[1184806 ] http://bugzilla.redhat.com/1184806 (NEW) Component: openstack-packstack Last change: 2015-12-02 Summary: [RFE] Packstack should support deploying Nova and Glance with RBD images and Ceph as a backend
[1270770 ] http://bugzilla.redhat.com/1270770 (NEW) Component: openstack-packstack Last change: 2015-10-12 Summary: Packstack generated CONFIG_MANILA_SERVICE_IMAGE_LOCATION points to a dropbox link
[1279642 ] http://bugzilla.redhat.com/1279642 (NEW) Component: openstack-packstack Last change: 2015-11-09 Summary: Packstack run fails when running with DEMO
[1200129 ] http://bugzilla.redhat.com/1200129 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-03 Summary: [RFE] add support for ceilometer workload partitioning via tooz/redis
[1194678 ] http://bugzilla.redhat.com/1194678 (NEW) Component: openstack-packstack Last change: 2015-12-03 Summary: On aarch64, nova.conf should default to vnc_enabled=False
[1293693 ] http://bugzilla.redhat.com/1293693 (NEW) Component: openstack-packstack Last change: 2016-01-18 Summary: Keystone setup fails on missing required parameter
[1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest
[1286995 ] http://bugzilla.redhat.com/1286995 (NEW) Component: openstack-packstack Last change: 2015-12-07 Summary: PackStack should configure LVM filtering with LVM/iSCSI
[1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred at during setup Ironic via packstack. Invalid parameter rabbit_user
[1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd]
[1279641 ] http://bugzilla.redhat.com/1279641 (NEW) Component: openstack-packstack Last change: 2015-11-09 Summary: Packstack run does not install keystoneauth1
[1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-11-21 Summary: Packstack --allinone fails while starting HTTPD service
[1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp
[1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tmp/packstack/b77f37620d9f4794b6f38730442962b6/manifests/xxx.xxx.xxx.xxx_swift.pp:90
[1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found
[1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image
[1276277 ] http://bugzilla.redhat.com/1276277 (NEW) Component: openstack-packstack Last change: 2015-10-31 Summary: packstack --allinone fails on CentOS 7 x86_64 1503-01
[1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting
[903645 ] http://bugzilla.redhat.com/903645 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-02 Summary: RFE: Include the ability in PackStack to support SSL for all REST services and message bus communication
[1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend.
[1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced
[1265816 ] http://bugzilla.redhat.com/1265816 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Manila Puppet Module Expects Glance Endpoint to Be Available for Upload of Service Image
[1289761 ] http://bugzilla.redhat.com/1289761 (NEW) Component: openstack-packstack Last change: 2015-12-10 Summary: PackStack installs Nova crontab that nova user can't run
[1286828 ] http://bugzilla.redhat.com/1286828 (NEW) Component: openstack-packstack Last change: 2015-12-04 Summary: Packstack should have the option to install QoS (neutron)
[1172467 ] http://bugzilla.redhat.com/1172467 (NEW) Component: openstack-packstack Last change: 2016-01-18 Summary: New user cannot retrieve container listing
[1283261 ] http://bugzilla.redhat.com/1283261 (NEW) Component: openstack-packstack Last change: 2016-01-18 Summary: ceilometer-nova is not configured
[1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service
[1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service"
[1264843 ] http://bugzilla.redhat.com/1264843 (NEW) Component: openstack-packstack Last change: 2016-01-09 Summary: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list iptables-ipv6' returned 1: Error: No matching Packages to list
[1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack,when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eno50:eno50,encounters an error: "ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp".
[1285494 ] http://bugzilla.redhat.com/1285494 (NEW) Component: openstack-packstack Last change: 2015-11-25 Summary: openstack-packstack-7.0.0-0.5.dev1661.gaf13b7e.el7.noarch cripples(?) httpd.conf
[1227298 ] http://bugzilla.redhat.com/1227298 (NEW) Component: openstack-packstack Last change: 2015-12-03 Summary: Packstack should support MTU settings
[1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on
[1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters
[1005073 ] http://bugzilla.redhat.com/1005073 (NEW) Component: openstack-packstack Last change: 2015-12-02 Summary: [RFE] Please add glance and nova lib folder config
[1296899 ] http://bugzilla.redhat.com/1296899 (NEW) Component: openstack-packstack Last change: 2016-01-18 Summary: Swift's proxy-server is not configured to use ceilometer
[1297833 ] http://bugzilla.redhat.com/1297833 (NEW) Component: openstack-packstack Last change: 2016-01-18 Summary: VPNaaS should use libreswan driver instead of openswan by default
[1168113 ] http://bugzilla.redhat.com/1168113 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-03 Summary: The warning message " NetworkManager is active " appears even when the NetworkManager is inactive
[1172310 ] http://bugzilla.redhat.com/1172310 (ASSIGNED) Component: openstack-packstack Last change: 2016-01-20 Summary: support Keystone LDAP
[1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/172.16.32.71_mariadb.pp:28 on node
[1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in " failed " status when CONFIG_KEYSTONE_SERVICE_NAME=httpd
[1202922 ] http://bugzilla.redhat.com/1202922 (NEW) Component: openstack-packstack Last change: 2015-12-03 Summary: packstack key injection fails with legacy networking (Nova networking)
[1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server]
[1282928 ] http://bugzilla.redhat.com/1282928 (ASSIGNED) Component: openstack-packstack Last change: 2016-01-13 Summary: Trove-api fails to start when deployed using packstack on RHEL 7.2 RC1.1
[1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail
[1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel
[1271246 ] http://bugzilla.redhat.com/1271246 (NEW) Component: openstack-packstack Last change: 2015-10-13 Summary: packstack failed to start nova.api
[1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest
[1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server
[1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-11-06 Summary: Error: service-update is not currently supported by the keystone sql driver
[1188491 ] http://bugzilla.redhat.com/1188491 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-03 Summary: Packstack wording is unclear for demo and testing provisioning.

### openstack-puppet-modules (18 bugs)

[1288533 ] http://bugzilla.redhat.com/1288533 (NEW) Component: openstack-puppet-modules Last change: 2015-12-04 Summary: packstack fails on installing mongodb
[1289309 ] http://bugzilla.redhat.com/1289309 (NEW) Component: openstack-puppet-modules Last change: 2015-12-07 Summary: Neutron module needs updating in OPM
[1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering
[1298245 ] http://bugzilla.redhat.com/1298245 (NEW) Component: openstack-puppet-modules Last change: 2016-01-13 Summary: Add possibility to change DEFAULT/api_paste_config in trove.conf
[1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm
[1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API.
[1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled
[1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication
[1271138 ] http://bugzilla.redhat.com/1271138 (NEW) Component: openstack-puppet-modules Last change: 2015-12-16 Summary: puppet module for manila should include service type - shareV2
[1285900 ] http://bugzilla.redhat.com/1285900 (NEW) Component: openstack-puppet-modules Last change: 2015-11-26 Summary: Typo in log file name for trove-guestagent
[1297535 ] http://bugzilla.redhat.com/1297535 (ASSIGNED) Component: openstack-puppet-modules Last change: 2016-01-13 Summary: Undercloud installation fails ::aodh::keystone::auth not found for instack
[1285897 ] http://bugzilla.redhat.com/1285897 (NEW) Component: openstack-puppet-modules Last change: 2015-11-26 Summary: trove-guestagent.conf should define the configuration for backups
[1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size
[1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200
[1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm
[1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log
[1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing
[1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start

### openstack-selinux (11 bugs)

[1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail
[1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2016-01-04 Summary: keystone fails to start: raise exception.ConfigFileNotFound(config_file=paste_config_value)
[1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux
[1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages
[1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000
[1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal;
[1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC
[1284879 ] http://bugzilla.redhat.com/1284879 (NEW) Component: openstack-selinux Last change: 2015-11-24 Summary: Keystone via mod_wsgi is missing permission to read /etc/keystone/fernet-keys
[1268124 ] http://bugzilla.redhat.com/1268124 (NEW) Component: openstack-selinux Last change: 2016-01-04 Summary: Nova rootwrap-daemon requires a selinux exception
[1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux
[1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied

### openstack-swift (3 bugs)

[1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files
[1274308 ] http://bugzilla.redhat.com/1274308 (NEW) Component: openstack-swift Last change: 2015-12-22 Summary: Consistently occurring swift related failures in RDO with a HA deployment
[1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations

### openstack-tripleo (27 bugs)

[1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA
[1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos
[1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever
[1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic in to TripleO
[1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever
[1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix
[1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted
[1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-10-09 Summary: Node registration fails silently if instackenv.json is badly formatted
[1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful
[1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead
[1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse
[1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect
[1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes
[1277980 ] http://bugzilla.redhat.com/1277980 (NEW) Component: openstack-tripleo Last change: 2015-12-11 Summary: missing python-proliantutils
[1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created
[1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API
[1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar
[1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials
[1284664 ] http://bugzilla.redhat.com/1284664 (NEW) Component: openstack-tripleo Last change: 2015-11-23 Summary: NtpServer is passed as string by "openstack overcloud deploy"
[1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI
[1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack-build-images
[1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter
[1277990 ] http://bugzilla.redhat.com/1277990 (NEW) Component: openstack-tripleo Last change: 2015-11-04 Summary: openstack-ironic-inspector-dnsmasq.service: failed to start during undercloud installation
[1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6
[1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints
[1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack"
[1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf

### openstack-tripleo-heat-templates (5 bugs)

[1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template
[1266027 ] http://bugzilla.redhat.com/1266027 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-08 Summary: TripleO should use pymysql database driver since Liberty
[1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1
[1271411 ] http://bugzilla.redhat.com/1271411 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-13 Summary: Unable to deploy internal api endpoint for keystone on a different network to admin api
[1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template

### openstack-tripleo-image-elements (2 bugs)

[1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux
[1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist

### openstack-trove (1 bug)

[1290156 ] http://bugzilla.redhat.com/1290156 (NEW) Component: openstack-trove Last change: 2015-12-09 Summary: Move guestagent settings to default section

### openstack-tuskar (2 bugs)

[1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails
[1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state

### openstack-utils (1 bug)

[1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2016-01-04 Summary: Can't enable OpenStack service after openstack-service disable

### Package Review (10 bugs)

[1283295 ] http://bugzilla.redhat.com/1283295 (NEW) Component: Package Review Last change: 2015-11-18 Summary: Review Request: CloudKitty - Rating as a Service
[1272524 ] http://bugzilla.redhat.com/1272524 (ASSIGNED) Component: Package Review Last change: 2015-12-03 Summary: Review Request: openstack-mistral - workflow Service for OpenStack cloud
[1290090 ] http://bugzilla.redhat.com/1290090 (ASSIGNED) Component: Package Review Last change: 2015-12-10 Summary: Review Request: python-networking-midonet
[1299959 ] http://bugzilla.redhat.com/1299959 (NEW) Component: Package Review Last change: 2016-01-19 Summary: Package Review: python-ironic-cisco
[1290308 ] http://bugzilla.redhat.com/1290308 (NEW) Component: Package Review Last change: 2015-12-10 Summary: Review Request: python-midonetclient
[1288149 ] http://bugzilla.redhat.com/1288149 (NEW) Component: Package Review Last change: 2015-12-07 Summary: Review Request: python-os-win - Windows / Hyper-V library for OpenStack projects
[1272513 ] http://bugzilla.redhat.com/1272513 (ASSIGNED) Component: Package Review Last change: 2016-01-13 Summary: Review Request: Murano - is an application catalog for OpenStack
[1293948 ] http://bugzilla.redhat.com/1293948 (NEW) Component: Package Review Last change: 2015-12-23 Summary: Review Request: python-kuryr
[1292794 ] http://bugzilla.redhat.com/1292794 (NEW) Component: Package Review Last change: 2016-01-13 Summary: Review Request: openstack-magnum - Container Management project for OpenStack
[1279513 ] http://bugzilla.redhat.com/1279513 (ASSIGNED) Component: Package Review Last change: 2015-11-13 Summary: New Package: python-dracclient

### python-glanceclient (2 bugs)

[1244291 ] http://bugzilla.redhat.com/1244291 (ASSIGNED) Component: python-glanceclient Last change: 2015-10-21 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py
[1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0

### python-keystonemiddleware (1 bug)

[1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-10-26 Summary: Rebase python-keystonemiddleware to version 1.3

### python-neutronclient (3 bugs)

[1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility
[1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long
[1281352 ] http://bugzilla.redhat.com/1281352 (NEW) Component: python-neutronclient Last change: 2015-11-12 Summary: Internal server error when running qos-bandwidth-limit-rule-update as a tenant

### python-novaclient (1 bug)

[1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six

### python-openstackclient (5 bugs)

[1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: Usage is not described accurately for 99% of openstack baremetal
[1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: `openstack ip floating delete` fails if we specify IP address as input
[1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp
[1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: Add --user to project list command to filter projects by user
[1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: appdirs requirement

### python-oslo-config (2 bugs)

[1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2016-01-04 Summary: oslo_config != oslo.config
[1282093 ] http://bugzilla.redhat.com/1282093 (NEW) Component: python-oslo-config Last change: 2016-01-04 Summary: please rebase oslo.log to 1.12.0

### rdo-manager (52 bugs)

[1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment
[1269657 ] http://bugzilla.redhat.com/1269657 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support configuration of default subnet pools
[1264526 ] http://bugzilla.redhat.com/1264526 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Deployment of Undercloud
[1273574 ] http://bugzilla.redhat.com/1273574 (ASSIGNED) Component: rdo-manager Last change: 2015-10-22 Summary: rdo-manager liberty, delete node is failing
[1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built
[1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS.
[1274060 ] http://bugzilla.redhat.com/1274060 (NEW) Component: rdo-manager Last change: 2015-10-23 Summary: [SELinux][RHEL7] openstack-ironic-inspector-dnsmasq.service fails to start with SELinux enabled
[1294599 ] http://bugzilla.redhat.com/1294599 (NEW) Component: rdo-manager Last change: 2015-12-29 Summary: Virtual environment overcloud deploy fails with default memory allocation
[1269655 ] http://bugzilla.redhat.com/1269655 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support deploying VPNaaS
[1271336 ] http://bugzilla.redhat.com/1271336 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Enable configuration of OVS ARP Responder
[1269890 ] http://bugzilla.redhat.com/1269890 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support IPv6
[1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles
[1270818 ] http://bugzilla.redhat.com/1270818 (NEW) Component: rdo-manager Last change: 2015-11-25 Summary: Two ironic-inspector processes are running on the undercloud, breaking the introspection
[1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP)
[1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment
[1270370 ] http://bugzilla.redhat.com/1270370 (NEW) Component: rdo-manager Last change: 2015-11-25 Summary: [RDO-Manager] bulk introspection moving the nodes from available to manageable too quickly [getting: NodeLocked:]
[1269002 ] http://bugzilla.redhat.com/1269002 (ASSIGNED) Component: rdo-manager Last change: 2015-10-14 Summary: instack-undercloud: overcloud HA deployment fails - the rabbitmq doesn't run on the controllers.
[1271232 ] http://bugzilla.redhat.com/1271232 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: tempest_lib.exceptions.Conflict: An object with that identifier already exists
[1270805 ] http://bugzilla.redhat.com/1270805 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: Glance client returning 'Expected endpoint'
[1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start
[1271317 ] http://bugzilla.redhat.com/1271317 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: instack-virt-setup fails: error Running install-packages install
[1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone
[1272376 ] http://bugzilla.redhat.com/1272376 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: Duplicate nova hypervisors after rebooting compute nodes
[1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection
[1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. There are not enough hosts available., Code: 500"
[1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-11-25 Summary: rfe: support Keystone HTTPD
[1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Read bit set for others for Openstack services directories in /etc
[1273121 ] http://bugzilla.redhat.com/1273121 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: openstack help returns errors
[1270910 ] http://bugzilla.redhat.com/1270910 (ASSIGNED) Component: rdo-manager Last change: 2015-10-15 Summary: IP address from external subnet gets assigned to br-ex when using default single-nic-vlans templates
[1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree"
[1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon
[1294085 ] http://bugzilla.redhat.com/1294085 (NEW) Component: rdo-manager Last change: 2016-01-04 Summary: Creating an instance on RDO overcloud, errors out
[1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack
[1296475 ] http://bugzilla.redhat.com/1296475 (NEW) Component: rdo-manager Last change: 2016-01-07 Summary: Deploying Manila is not possible due to missing template
[1272167 ] http://bugzilla.redhat.com/1272167 (NEW) Component: rdo-manager Last change: 2015-11-16 Summary: [RFE] Support enabling the port security extension
[1294683 ] http://bugzilla.redhat.com/1294683 (NEW) Component: rdo-manager Last change: 2016-01-01 Summary: instack-undercloud: "openstack undercloud install" throws errors and then gets stuck due to selinux.
[1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment.
[1269622 ] http://bugzilla.redhat.com/1269622 (NEW) Component: rdo-manager Last change: 2015-11-16 Summary: [RFE] support override of API and RPC worker counts
[1271289 ] http://bugzilla.redhat.com/1271289 (NEW) Component: rdo-manager Last change: 2015-11-18 Summary: overcloud-novacompute stuck in spawning state
[1269894 ] http://bugzilla.redhat.com/1269894 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Add creation of demo tenant, network and installation of demo images
[1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure
[1269661 ] http://bugzilla.redhat.com/1269661 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Supporting SR-IOV enabled deployments
[1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url"
[1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images
[1273541 ] http://bugzilla.redhat.com/1273541 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: RDO-Manager needs epel.repo enabled (otherwise undercloud deployment fails.)
[1292253 ] http://bugzilla.redhat.com/1292253 (NEW) Component: rdo-manager Last change: 2016-01-01 Summary: Production + EPEL + yum-plugin-priorities results in wrong version of hiera
[1271726 ] http://bugzilla.redhat.com/1271726 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: 1 of the overcloud VMs (nova) is stuck in spawning state
[1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device-mapper*
[1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci
[1273680 ] http://bugzilla.redhat.com/1273680 (ASSIGNED) Component: rdo-manager Last change: 2015-10-21 Summary: HA overcloud with network isolation deployment fails
[1276097 ] http://bugzilla.redhat.com/1276097 (NEW) Component: rdo-manager Last change: 2015-10-31 Summary: dnsmasq-dhcp: DHCPDISCOVER no address available
[1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable

### rdo-manager-cli (6 bugs)

[1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step
[1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the output of openstack management plan show --long command is not readable
[1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value
[1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL)
[1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command
[1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling

### rdopkg (1 bug)

[1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download

### RFEs (2 bugs)

[1193886 ] http://bugzilla.redhat.com/1193886 (ASSIGNED) Component: RFEs Last change: 2016-01-17 Summary: RFE: wait for DB after boot
[1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool

### tempest (1 bug)

[1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo-manager

## Fixed bugs

This is a list of "fixed" bugs by component. A "fixed" bug is in state MODIFIED, POST, or ON_QA: a fix exists but has not yet been verified. You can help out by testing the fix to make sure it works as intended.
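Reports like this one can be assembled mechanically from Bugzilla query results. A minimal sketch of that filtering and formatting step, assuming bug data has already been fetched (for example via the Bugzilla REST API) as dicts with `id`, `status`, `component`, `last_change_time`, and `summary` fields; the sample entries below are illustrative, not live query output:

```python
# Sketch: select bugs in the "fixed but unverified" states described
# above (MODIFIED, POST, ON_QA) and render them in this report's entry
# format. Field names mirror Bugzilla REST API bug objects; the sample
# data is illustrative.

FIXED_STATES = {"MODIFIED", "POST", "ON_QA"}

def is_fixed(bug):
    """True for bugs whose fix is committed or queued for QA."""
    return bug["status"] in FIXED_STATES

def format_entry(bug):
    """Render one bug the way entries appear in this report."""
    return ("[{id} ] http://bugzilla.redhat.com/{id} ({status}) "
            "Component: {component} Last change: {last_change_time} "
            "Summary: {summary}").format(**bug)

bugs = [
    {"id": 1228761, "status": "MODIFIED", "component": "diskimage-builder",
     "last_change_time": "2015-09-23",
     "summary": "DIB_YUM_REPO_CONF points to two files and that breaks image building"},
    {"id": 1234467, "status": "NEW", "component": "rdo-manager",
     "last_change_time": "2015-06-22",
     "summary": "cannot access instance vnc console on horizon after overcloud deployment"},
]

for bug in filter(is_fixed, bugs):
    print(format_entry(bug))
```

Fetching the data itself is a single authenticated-or-anonymous GET against the Bugzilla REST endpoint with `bug_status` repeated for each state; the rendering step above is independent of how the bugs were retrieved.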
(211 bugs)

### diskimage-builder (1 bug)

[1228761 ] http://bugzilla.redhat.com/1228761 (MODIFIED) Component: diskimage-builder Last change: 2015-09-23 Summary: DIB_YUM_REPO_CONF points to two files and that breaks image building

### distribution (6 bugs)

[1265690 ] http://bugzilla.redhat.com/1265690 (ON_QA) Component: distribution Last change: 2015-09-28 Summary: Update python-networkx to 1.10
[1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2016-01-04 Summary: update el6 icehouse kombu packages for improved performance
[1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api
[1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr
[1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO
[1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack-neutron-*aas

### instack-undercloud (2 bugs)

[1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six"
[1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined

### openstack-ceilometer (10 bugs)

[1265708 ] http://bugzilla.redhat.com/1265708 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Ceilometer requires pymongo>=3.0.2
[1265721 ] http://bugzilla.redhat.com/1265721 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: File /etc/ceilometer/meters.yaml missing
[1263839 ] http://bugzilla.redhat.com/1263839 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: openstack-ceilometer should requires python-oslo-policy in kilo
[1265746 ] http://bugzilla.redhat.com/1265746 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Options 'disable_non_metric_meters' and 'meter_definitions_cfg_file' are missing from ceilometer.conf
[1194230 ] http://bugzilla.redhat.com/1194230 (POST) Component: openstack-ceilometer Last change: 2016-01-04 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions
[1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: openstack-ceilometer-common missing python-babel dependency
[1287252 ] http://bugzilla.redhat.com/1287252 (POST) Component: openstack-ceilometer Last change: 2016-01-04 Summary: openstack-ceilometer-alarm-notifier does not start: unit file is missing
[1271002 ] http://bugzilla.redhat.com/1271002 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Ceilometer dbsync failing during HA deployment
[1265818 ] http://bugzilla.redhat.com/1265818 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: ceilometer polling agent does not start
[1214928 ] http://bugzilla.redhat.com/1214928 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: package ceilometermiddleware missing

### openstack-cinder (5 bugs)

[1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0]
[1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder
[1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO)
[994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo]
[1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume)

### openstack-glance (4 bugs)

[1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests
[1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue
[1278962 ] http://bugzilla.redhat.com/1278962 (ON_QA) Component: openstack-glance Last change: 2015-11-13 Summary: python-cryptography requires pyasn1>=0.1.8 but only 0.1.6 is available in Centos
[1268146 ] http://bugzilla.redhat.com/1268146 (ON_QA) Component: openstack-glance Last change: 2015-10-02 Summary: openstack-glance-registry will not start: missing systemd dependency

### openstack-heat (3 bugs)

[1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack-heat
[1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE
[1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: missing dependency in Heat delorean build

### openstack-horizon (1 bug)

[1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing

### openstack-ironic-discoverd (1 bug)
[1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery

### openstack-neutron (14 bugs)

[1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network
[1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron-plugin-vmware
[1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2016-01-04 Summary: neutron should not specify signing_dir in neutron-dist.conf
[1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron
[1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials
[1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini
[1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. neutron-db-manage trying to import systemd
[1281920 ] http://bugzilla.redhat.com/1281920 (POST) Component: openstack-neutron Last change: 2015-11-16 Summary: neutron-server will not start: fails with pbr version issue
[1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev
[1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack-neutron-openvswitch installed
[1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service
[1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id'
[1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini
[1270325 ] http://bugzilla.redhat.com/1270325 (MODIFIED) Component: openstack-neutron Last change: 2015-10-19 Summary: neutron-ovs-cleanup fails to start with bad path to ovs plugin configuration

### openstack-nova (5 bugs)

[1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2016-01-04 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail
[1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2016-01-04 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options
[1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2016-01-04 Summary: openstack-nova-compute fails to start because python-psutil is missing after installing with packstack
[958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence.
[1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2016-01-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all

### openstack-packstack (70 bugs)

[1252483 ] http://bugzilla.redhat.com/1252483 (POST) Component: openstack-packstack Last change: 2015-12-07 Summary: Demo network provisioning: public and private are shared, private has no tenant
[1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db.
[1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error
[1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/manifests/192.168.122.82_api_nova.pp:41:3
[976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-10-07 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local)
[1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed
[1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled
[1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: openstack-packstack Last change: 2016-01-04 Summary: Cannot start nova-network on juno - Centos7
[1288179 ] http://bugzilla.redhat.com/1288179 (POST) Component: openstack-packstack Last change: 2015-12-08 Summary: Mitaka: Packstack image provisioning fails with "Store filesystem could not be configured correctly"
[1297733 ] http://bugzilla.redhat.com/1297733 (POST) Component: openstack-packstack Last change: 2016-01-19 Summary: No VPN tab in Horizon after deploy OSP-8 with packstack - CONFIG_NEUTRON_VPNAAS=y
[1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email
[1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils
[1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes
[1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error
[958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails
[1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails
[1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway.
[1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack
[1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in
[1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning
[979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules
[1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non-localhost connections to 'guest' user
[1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5
[957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios
[995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2016-01-04 Summary: RFE: support setting up apache to serve keystone requests
[1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1
[1259354 ] http://bugzilla.redhat.com/1259354 (MODIFIED) Component: openstack-packstack Last change: 2015-11-10 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error
[990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2016-01-04 Summary: rdo release RPM not installed on all fedora hosts
[1266028 ] http://bugzilla.redhat.com/1266028 (POST) Component: openstack-packstack Last change: 2015-12-15 Summary: Packstack should use pymysql database driver since Liberty
[1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required
[1290429 ] http://bugzilla.redhat.com/1290429 (POST) Component: openstack-packstack Last change: 2015-12-10 Summary: Packstack does not correctly configure Nova notifications for Neutron in Mitaka-1
[1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"?
[1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre
[1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed
[1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5
[1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl
[1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2016-01-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp
[1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value
[1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available
[1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2016-01-04 Summary: packstack requires 2 runs to install ceilometer
[1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface"
[1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive
[1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2016-01-04 Summary: packstack configures br-ex to use gateway ip
[1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/Nova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required
[1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added
[1150652 ] http://bugzilla.redhat.com/1150652 (POST) Component: openstack-packstack Last change: 2015-12-07 Summary: PackStack does not provide an option to register hosts to Red Hat Satellite 6
[1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time
[1295503 ] http://bugzilla.redhat.com/1295503 (MODIFIED) Component: openstack-packstack Last change: 2016-01-08 Summary: Packstack master branch is in the liberty repositories (was: Packstack installation fails with unsupported db backend)
[956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond
[1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance
[1265661 ] http://bugzilla.redhat.com/1265661 (POST) Component: openstack-packstack Last change: 2016-01-13 Summary: Packstack does not install Sahara services (RDO Liberty)
[1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-10-23 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7
[1124982 ] http://bugzilla.redhat.com/1124982 (POST) Component: openstack-packstack Last change: 2015-12-09 Summary: Help text for SSL is incorrect regarding passphrase on the cert
[974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) Component: openstack-packstack Last change: 2016-01-04 Summary: please give greater control over use of EPEL
[1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl
[1297518 ] http://bugzilla.redhat.com/1297518 (POST) Component: openstack-packstack Last change: 2016-01-12 Summary: Sahara installation fails with ArgumentError: Could not find declared class ::sahara::notify::rabbitmq
[1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default
[1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent
[1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface
[1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL
[1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32
[1285314 ] http://bugzilla.redhat.com/1285314 (POST) Component: openstack-packstack Last change: 2015-12-09 Summary: Packstack needs to support aodh services since Mitaka
[991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted
[1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7)
[1187412 ] http://bugzilla.redhat.com/1187412 (POST) Component: openstack-packstack Last change: 2015-12-09 Summary: Script wording for service installation should be consistent
[1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels
[1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution
[964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user
[1269158 ] http://bugzilla.redhat.com/1269158 (POST) Component: openstack-packstack Last change: 2015-10-19 Summary: Sahara configuration should be affected by heat availability (broken by default right now)
[1003959 ]
http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher ### openstack-puppet-modules (20 bugs) [1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2016-01-04 Summary: cinder modules require glance installed [1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables- services [1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure" [1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support [1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2016-01-04 Summary: explicit check for pymongo is incorrect [1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors [1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation [1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD configuration breaks GSSAPI [1270957 ] http://bugzilla.redhat.com/1270957 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-10-13 Summary: Undercloud install fails on Error: Could not find class ::ironic::inspector for instack on node instack [1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by 
packstack for demo tenant is wrongly marked as external [1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: Could not find command 'restorecon' [1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator [1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno [1297052 ] http://bugzilla.redhat.com/1297052 (MODIFIED) Component: openstack-puppet-modules Last change: 2016-01-13 Summary: openstack-puppet-modules build is out of date and wrong branch in Delorean repos [1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6 [1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2016-01-04 Summary: netns.py syntax error [1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance [1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance [1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig /network-scripts/ifcfg-br-{int,tun} [1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2016-01-04 Summary: prescript.pp does not ensure iptables-services package installation ### openstack-sahara (2 bugs) [1290387 ] http://bugzilla.redhat.com/1290387 (POST) Component: openstack-sahara Last change: 2015-12-10 Summary: 
openstack-sahara-api fails to start in Mitaka-1, cannot find api-paste.ini [1268235 ] http://bugzilla.redhat.com/1268235 (MODIFIED) Component: openstack-sahara Last change: 2015-10-02 Summary: rootwrap filter not included in Sahara RPM ### openstack-selinux (13 bugs) [1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7) [1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2016-01-04 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicO ps fails to launch instance w/ selinux enforcing [1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications [1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors [1219406 ] http://bugzilla.redhat.com/1219406 (MODIFIED) Component: openstack-selinux Last change: 2015-11-06 Summary: Glance over nfs fails due to selinux [1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp [1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances, [1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service [1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?" 
[1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux policies set for neutron-dhcp-agent [1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later [1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access [1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors ### openstack-swift (1 bug) [997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages ### openstack-tripleo-heat-templates (2 bugs) [1235508 ] http://bugzilla.redhat.com/1235508 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-09-29 Summary: Package update does not take puppet managed packages into account [1272572 ] http://bugzilla.redhat.com/1272572 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-12-10 Summary: Error: Unable to retrieve volume limit information when accessing System Defaults in Horizon ### openstack-trove (2 bugs) [1278608 ] http://bugzilla.redhat.com/1278608 (MODIFIED) Component: openstack-trove Last change: 2015-11-06 Summary: trove-api fails to start [1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies ### openstack-tuskar (1 bug) [1229493 ] http://bugzilla.redhat.com/1229493 (POST) Component: openstack-tuskar Last change: 2015-12-04 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates ### 
openstack-tuskar-ui (3 bugs) [1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails [1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py [1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path ### openstack-utils (3 bugs) [1211989 ] http://bugzilla.redhat.com/1211989 (POST) Component: openstack-utils Last change: 2016-01-05 Summary: openstack-status shows 'disabled on boot' for the mysqld service [1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2016-01-04 Summary: openstack-status as admin falsely shows zero instances [1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2016-01-04 Summary: update openstack-status for rdo-manager ### python-cinderclient (1 bug) [1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run ### python-django-horizon (3 bugs) [1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack- dashboard/static/dashboard/ [1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA) Component: python-django-horizon Last change: 2015-06-24 Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one [1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in 
openstack-dashboard.conf to show CSS content ### python-glanceclient (2 bugs) [1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch [1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock ### python-heatclient (3 bugs) [1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr [1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO [1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed ### python-keystoneclient (3 bugs) [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs [1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion [971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2016-01-04 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO] ### python-neutronclient (3 bugs) [1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info() [1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient [1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] 
python-neutronclient new version request ### python-openstackclient (1 bug) [1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2016-01-04 Summary: Rebase python-openstackclient to version 1.0.0 ### python-oslo-config (1 bug) [1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2016-01-04 Summary: oslo.config >=1.2.1 is required for trove-manage ### python-pecan (1 bug) [1265365 ] http://bugzilla.redhat.com/1265365 (MODIFIED) Component: python-pecan Last change: 2016-01-04 Summary: Neutron missing pecan dependency ### python-swiftclient (1 bug) [1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation ### python-tuskarclient (2 bugs) [1209395 ] http://bugzilla.redhat.com/1209395 (POST) Component: python-tuskarclient Last change: 2015-06-04 Summary: `tuskar help` is missing a description next to plan- templates [1209431 ] http://bugzilla.redhat.com/1209431 (POST) Component: python-tuskarclient Last change: 2015-06-18 Summary: creating a tuskar plan with the exact name gives the user a traceback ### rdo-manager (10 bugs) [1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-15 Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails [1270033 ] http://bugzilla.redhat.com/1270033 (POST) Component: rdo-manager Last change: 2015-10-14 Summary: [RDO-Manager] Node inspection fails when changing the default 'inspection_iprange' value in undecloud.conf. 
[1271335 ] http://bugzilla.redhat.com/1271335 (POST) Component: rdo-manager Last change: 2015-12-30 Summary: [RFE] Support explicit configuration of L2 population [1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED) Component: rdo-manager Last change: 2015-05-25 Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable [1271433 ] http://bugzilla.redhat.com/1271433 (MODIFIED) Component: rdo-manager Last change: 2015-10-20 Summary: Horizon fails to load [1272180 ] http://bugzilla.redhat.com/1272180 (POST) Component: rdo-manager Last change: 2015-12-04 Summary: Horizon doesn't load when deploying without pacemaker [1251267 ] http://bugzilla.redhat.com/1251267 (POST) Component: rdo-manager Last change: 2015-08-12 Summary: Overcloud deployment fails for unspecified reason [1268990 ] http://bugzilla.redhat.com/1268990 (POST) Component: rdo-manager Last change: 2015-10-07 Summary: missing from docs Build images fails without : export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo" [1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED) Component: rdo-manager Last change: 2015-11-04 Summary: rdo-manager: fail to discover nodes with "instack- ironic-deployment --discover-nodes": ERROR: Data pre- processing failed [1212351 ] http://bugzilla.redhat.com/1212351 (POST) Component: rdo-manager Last change: 2015-06-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command ### rdo-manager-cli (10 bugs) [1273197 ] http://bugzilla.redhat.com/1273197 (POST) Component: rdo-manager-cli Last change: 2015-10-20 Summary: VXLAN should be default neutron network type [1233429 ] http://bugzilla.redhat.com/1233429 (POST) Component: rdo-manager-cli Last change: 2015-06-20 Summary: Lack of consistency in specifying plan argument for openstack overcloud commands [1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED) Component: rdo-manager-cli Last change: 2015-08-03 Summary: Node show of unified CLI 
has bad formatting [1229912 ] http://bugzilla.redhat.com/1229912 (POST) Component: rdo-manager-cli Last change: 2015-06-10 Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once) . [1219053 ] http://bugzilla.redhat.com/1219053 (POST) Component: rdo-manager-cli Last change: 2015-06-18 Summary: "list" command doesn't display nodes in some cases [1211190 ] http://bugzilla.redhat.com/1211190 (POST) Component: rdo-manager-cli Last change: 2015-06-04 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI [1230265 ] http://bugzilla.redhat.com/1230265 (POST) Component: rdo-manager-cli Last change: 2015-06-26 Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated. [1278972 ] http://bugzilla.redhat.com/1278972 (POST) Component: rdo-manager-cli Last change: 2015-11-08 Summary: rdo-manager liberty delorean dib failing w/ "No module named passlib.utils" [1232838 ] http://bugzilla.redhat.com/1232838 (POST) Component: rdo-manager-cli Last change: 2015-09-04 Summary: OSC plugin isn't saving plan configuration values [1212367 ] http://bugzilla.redhat.com/1212367 (POST) Component: rdo-manager-cli Last change: 2015-06-16 Summary: Ensure proper nodes states after enroll and before deployment ### rdopkg (1 bug) [1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA) Component: rdopkg Last change: 2015-08-06 Summary: python-manilaclient is missing from kilo RDO repository Thanks, Chandan Kumar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jruzicka at redhat.com Wed Jan 20 21:54:52 2016 From: jruzicka at redhat.com (Jakub Ruzicka) Date: Wed, 20 Jan 2016 22:54:52 +0100 Subject: [Rdo-list] [meeting] RDO meeting (2016-01-20) Message-ID: <56A0022C.7010804@redhat.com> ============================== #rdo: RDO meeting (2016-01-20) ============================== Meeting started by jruzicka at 14:59:59 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2016-01-20/rdo_meeting_(2016-01-20).2016-01-20-14.59.log.html . Meeting summary --------------- * LINK: https://etherpad.openstack.org/p/RDO-bettername (rbowen, 15:02:51) * python3 effort status (jruzicka, 15:02:57) * LINK: https://review.gerrithub.io/#/q/topic:py3 (number80, 15:03:19) * most of python3 reviews are merged, rest are trivial issues (jruzicka, 15:05:11) * rdoinfo database format update (jruzicka, 15:09:29) * jruzicka is implementing rdoinfo metadata filtering in rdopkg (number80, 15:14:36) * ACTION: jruzicka to deliver rdopkg tags support including filtering (jruzicka, 15:14:39) * dropping early F22/F23 support in Fedora (jruzicka, 15:15:20) * LINK: https://trello.com/c/dGFWubRQ/123-early-retirement-of-fedora-support (jruzicka, 15:15:28) * ACTION: number80 draft announcement about early retirement for OpenStack F22/23 packages (number80, 15:20:03) * Magnum integration in Packstack? 
(jruzicka, 15:21:01) * LINK: https://www.rdoproject.org/rdo/projectsinrdo/ (rbowen, 15:27:16) * doc day today (jruzicka, 15:27:53) * LINK: https://github.com/redhat-openstack/website/issues (jruzicka, 15:28:24) * ACTION: trown hack on rdoproject.org rdo-manager docs (trown, 15:33:07) * LINK: https://github.com/redhat-openstack/openstack-packaging-doc (apevec, 15:34:02) * ACTION: number80 submit PR to import packaging doc in website (number80, 15:35:49) * Volunteers needed to work the OpenStack table at FOSDEM (jruzicka, 15:41:57) * LINK: https://etherpad.openstack.org/p/fosdem-2016 (jruzicka, 15:42:10) * Other upcoming events (jruzicka, 15:43:53) * RDO Day @ FOSDEM Fringe - Jan 29 - https://www.rdoproject.org/events/rdo-day-fosdem-2016/ (jruzicka, 15:44:15) * RDO BOF at DevConf.cz, Feb 6 @15:00 (jruzicka, 15:44:29) * Delorean outage on January 21. Also affects RDO website. (jruzicka, 15:47:46) * 21-Jan-2016 15:00 UTC/10:00 EST, estimated 8 hours downtime. (jruzicka, 15:47:50) * delorean repositories will be available during downtime (number80, 15:48:42) * ACTION: rbowen to get website setup details from csim (apevec, 15:52:43) * migrate RDO-packaging etherpad to a new one RDO-Meeting in order to reflect that this is not limited to packaging (jruzicka, 15:52:56) * we decided to USE RDO-Meeting etherpad (jruzicka, 15:54:28) * RDO CI update (jruzicka, 15:58:24) * Packstack upstream integration gate jobs are almost there: https://review.openstack.org/#/q/topic:puppet-and-packstack-jobs (jruzicka, 16:01:22) * open floor & next chair (jruzicka, 16:01:47) * ACTION: trown to chair next meeting (jruzicka, 16:02:15) Meeting ended at 16:03:08 UTC. 
Action Items ------------ * jruzicka to deliver rdopkg tags support including filtering * number80 draft announcement about early retirement for OpenStack F22/23 packages * trown hack on rdoproject.org rdo-manager docs * number80 submit PR to import packaging doc in website * rbowen to get website setup details from csim * trown to chair next meeting Action Items, by person ----------------------- * csim * rbowen to get website setup details from csim * jruzicka * jruzicka to deliver rdopkg tags support including filtering * number80 * number80 draft announcement about early retirement for OpenStack F22/23 packages * number80 submit PR to import packaging doc in website * rbowen * rbowen to get website setup details from csim * trown * trown hack on rdoproject.org rdo-manager docs * trown to chair next meeting * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * rbowen (71) * number80 (65) * jruzicka (62) * apevec (54) * dmsimard (31) * trown (26) * snecklifter (15) * jpena (10) * imcsk8 (10) * leifmadsen (9) * zodbot (8) * gkadam (5) * elmiko (4) * csim (3) * pradk (1) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot From jalway at gmail.com Thu Jan 21 05:04:05 2016 From: jalway at gmail.com (John Alway) Date: Wed, 20 Jan 2016 23:04:05 -0600 Subject: [Rdo-list] Bounced Emails Message-ID: I'm sorry that my yahoo (thaleslv at yahoo.com) account bounced emails to the list. That wasn't my doing. I don't know why that happened. I'm going to try this gmail account. I put this group in a white list by using "[Rdo-list]" as a flag for the filter. I hope that works! Regards, ....John -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michele at acksyn.org Thu Jan 21 07:27:03 2016 From: michele at acksyn.org (Michele Baldessari) Date: Thu, 21 Jan 2016 08:27:03 +0100 Subject: [Rdo-list] RDO-Manager Mitaka - Successful HA deployment Message-ID: <20160121072703.GD5582@palahniuk.int.rhx> Hi all, here are some notes to maybe spare some time for whoever is trying to deploy Mitaka via RDO Manager with an HA setup from the delorean trunk repos. Two BZs needed to be worked around: 1) Could not find resource 'Service[mysqld]' for relationship from 'File[mysql-config-file]' https://bugzilla.redhat.com/show_bug.cgi?id=1300562 2) missing python-gnocchiclient https://bugzilla.redhat.com/show_bug.cgi?id=1300013 After that, deployment (3 controllers, 2 computes and 1 ceph node) was successful, albeit I did not do a whole lot of testing ;) Hope this helps a bit with the upcoming test days. cheers, Michele -- Michele Baldessari C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D From mohammed.arafa at gmail.com Thu Jan 21 11:59:51 2016 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Thu, 21 Jan 2016 06:59:51 -0500 Subject: [Rdo-list] [rdo-manager] introspection failures Message-ID: hello about 2 months back i set up my undercloud on a VM and my overcloud on 2 physical nodes. recently i needed to replicate that feat of installation. so i set up a brand new undercloud on another vm (RDO2). now here is my problem: rdo2 will go thru the entire process without complaining and then fail when setting up the compute node. so in the end i have the undercloud vm (RDO2) and my controller node. so i went back to my 1st vm (RDO1) and did a heat stack-delete overcloud and then was able to deploy the 2 node overcloud. so i went back in the process and did an "ironic node-delete" on each of the nodes and attempted a bulk introspection on the 2. the 1st node passed quickly but the 2nd node, the same one that was giving problems on rdo2, failed with "locked by rdo1" http 409, then it sets the node to available.
here's a funny aside: overcloud can still deploy. i need help troubleshooting the issue; i attempted to look in the /var/log/ironic/* logs but i am not exactly sure what i am looking for, plus there were quite a lot of debug messages and it is quite difficult to watch a log for 30 minutes while the introspection on the 2nd node goes on. help would be great thank you -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Thu Jan 21 12:53:56 2016 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 21 Jan 2016 07:53:56 -0500 Subject: [Rdo-list] Bounced Emails In-Reply-To: References: Message-ID: <56A0D4E4.4060407@redhat.com> We've been seeing an increased number of bounces lately, particularly to Yahoo and Hotmail accounts. I haven't yet figured out any reason for this, but the evidence suggests that it's on our end, not yours. --Rich On 01/21/2016 12:04 AM, John Alway wrote: > I'm sorry that my yahoo (thaleslv at yahoo.com ) > account bounced emails to the list. That wasn't my doing. I don't > know why that happened. > > I'm going to try this gmail account. I put this group in a white list > by using "[Rdo-list]" as a flag for the filter. I hope that works!
> > > Regards, > ....John > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From ltoscano at redhat.com Thu Jan 21 13:00:09 2016 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 21 Jan 2016 14:00:09 +0100 Subject: [Rdo-list] Bounced Emails In-Reply-To: <56A0D4E4.4060407@redhat.com> References: <56A0D4E4.4060407@redhat.com> Message-ID: <3335823.hpLfj6O6Gh@whitebase.usersys.redhat.com> On Thursday 21 of January 2016 07:53:56 Rich Bowen wrote: > We've been seeing an increased number of bounces lately, particularly to > Yahoo and Hotmail accounts. I haven't yet figured out any reason for > this, but the evidence suggests that it's on our end, not yours. Uhm, maybe they are enforcing SPF. Ciao -- Luigi From javier.pena at redhat.com Thu Jan 21 17:25:49 2016 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 21 Jan 2016 12:25:49 -0500 (EST) Subject: [Rdo-list] [delorean] Delorean planned outage on January 21 In-Reply-To: <972612602.14979085.1453193902004.JavaMail.zimbra@redhat.com> References: <972612602.14979085.1453193902004.JavaMail.zimbra@redhat.com> Message-ID: <1765635824.17364983.1453397149534.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Dear rdo-list, > > Due to a planned maintenance on the underlying infrastructure, the Delorean > server will be under maintenance on January 21, from 9:00 UTC to 23:00 UTC > time. > > During the maintenance window, the Delorean repositories will be available > (served from the backup system), but no new commits will be processed. > Dear all, The maintenance is now finished (earlier than expected!), and the Delorean instance is processing commits. 
Regards, Javier From dms at redhat.com Thu Jan 21 20:34:53 2016 From: dms at redhat.com (David Moreau Simard) Date: Thu, 21 Jan 2016 15:34:53 -0500 Subject: [Rdo-list] [delorean] Delorean planned outage on January 21 In-Reply-To: <1765635824.17364983.1453397149534.JavaMail.zimbra@redhat.com> References: <972612602.14979085.1453193902004.JavaMail.zimbra@redhat.com> <1765635824.17364983.1453397149534.JavaMail.zimbra@redhat.com> Message-ID: There was also an impact on the delorean CI jenkins slave due to the maintenance. If you have submitted a review or patchset to delorean-related repositories (packages, delorean), Jenkins did not pick it up. Feel free to submit a new patchset (i.e, slight commit message edit) to trigger a CI run. David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Thu, Jan 21, 2016 at 12:25 PM, Javier Pena wrote: > > > ----- Original Message ----- >> Dear rdo-list, >> >> Due to a planned maintenance on the underlying infrastructure, the Delorean >> server will be under maintenance on January 21, from 9:00 UTC to 23:00 UTC >> time. >> >> During the maintenance window, the Delorean repositories will be available >> (served from the backup system), but no new commits will be processed. >> > > Dear all, > > The maintenance is now finished (earlier than expected!), and the Delorean instance is processing commits. 
> > Regards, > Javier > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From andrius at cumulusnetworks.com Thu Jan 21 23:12:31 2016 From: andrius at cumulusnetworks.com (Andrius Benokraitis) Date: Thu, 21 Jan 2016 18:12:31 -0500 Subject: [Rdo-list] RFE: Explicit Configuration of L2 Population Message-ID: <77806F3D-DCA2-4F51-97F5-463D406B3C36@cumulusnetworks.com> Hi All, Just wanted to confirm I'm reading the following Bugzilla correctly, in that L2 Population will be included as part of a rebase for a future version of RDO (and therefore RHEL OSP 8): RDO: https://bugzilla.redhat.com/show_bug.cgi?id=1271335 RHEL OSP: https://bugzilla.redhat.com/show_bug.cgi?id=1279615 Thanks! Andrius Benokraitis ISV Partner Alliance Manager Cumulus Networks m: +1 919 741 0141 From amuller at redhat.com Thu Jan 21 23:15:39 2016 From: amuller at redhat.com (Assaf Muller) Date: Thu, 21 Jan 2016 18:15:39 -0500 Subject: [Rdo-list] RFE: Explicit Configuration of L2 Population In-Reply-To: <77806F3D-DCA2-4F51-97F5-463D406B3C36@cumulusnetworks.com> References: <77806F3D-DCA2-4F51-97F5-463D406B3C36@cumulusnetworks.com> Message-ID: The patches mean that RDO director will now support a flag in its .yaml files that will let users easily turn l2pop on or off, but it won't change any defaults or enable the feature implicitly. On Thu, Jan 21, 2016 at 6:12 PM, Andrius Benokraitis wrote: > Hi All, > > Just wanted to confirm I'm reading the following Bugzilla correctly, in that L2 Population will be included as part of a rebase for a future version of RDO (and therefore RHEL OSP 8): > > RDO: https://bugzilla.redhat.com/show_bug.cgi?id=1271335 > RHEL OSP: https://bugzilla.redhat.com/show_bug.cgi?id=1279615 > > Thanks!
> > Andrius Benokraitis > ISV Partner Alliance Manager > Cumulus Networks > m: +1 919 741 0141 > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From hguemar at fedoraproject.org Fri Jan 22 08:11:13 2016 From: hguemar at fedoraproject.org (Haïkel) Date: Fri, 22 Jan 2016 09:11:13 +0100 Subject: [Rdo-list] [delorean] Delorean planned outage on January 21 In-Reply-To: References: <972612602.14979085.1453193902004.JavaMail.zimbra@redhat.com> <1765635824.17364983.1453397149534.JavaMail.zimbra@redhat.com> Message-ID: 2016-01-21 21:34 GMT+01:00 David Moreau Simard : > There was also an impact on the delorean CI jenkins slave due to the > maintenance. > > If you have submitted a review or patchset to delorean-related > repositories (packages, delorean), Jenkins did not pick it up. > Feel free to submit a new patchset (i.e, slight commit message edit) > to trigger a CI run. > git review -F should do the trick too > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Thu, Jan 21, 2016 at 12:25 PM, Javier Pena wrote: >> >> >> ----- Original Message ----- >>> Dear rdo-list, >>> >>> Due to a planned maintenance on the underlying infrastructure, the Delorean >>> server will be under maintenance on January 21, from 9:00 UTC to 23:00 UTC >>> time. >>> >>> During the maintenance window, the Delorean repositories will be available >>> (served from the backup system), but no new commits will be processed. >>> >> >> Dear all, >> >> The maintenance is now finished (earlier than expected!), and the Delorean instance is processing commits.
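[Editor's note] The CI retrigger trick David and Haïkel describe can be sketched as below. This is a demo in a throwaway repo with made-up commit messages; in a real Gerrit checkout the final step would be `git review`, which pushes the amended commit as a new patchset because the Change-Id footer is unchanged while the SHA is new.

```shell
set -e
# Demo in a throwaway repo: amending a commit (even just the message)
# produces a new SHA, which Gerrit treats as a new patchset, so CI re-runs.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
g() { git -c user.email=dev@example.com -c user.name=dev "$@"; }
g commit -q --allow-empty -m "Fix delorean job"
old=$(git rev-parse HEAD)
g commit -q --amend --allow-empty -m "Fix delorean job (retrigger CI)"
new=$(git rev-parse HEAD)
echo "old=$old"
echo "new=$new"
# In a real checkout, now run: git review
```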
>> >> Regards, >> Javier >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From bderzhavets at hotmail.com Fri Jan 22 18:43:47 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 22 Jan 2016 18:43:47 +0000 Subject: [Rdo-list] EXCLUDE_SERVERS BUG AFFECTING RDO Kilo 2015.1.1 In-Reply-To: References: <972612602.14979085.1453193902004.JavaMail.zimbra@redhat.com> <1765635824.17364983.1453397149534.JavaMail.zimbra@redhat.com> , Message-ID: Please see https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=1254389 The commit containing the fix from https://review.openstack.org/#/c/257033/ is https://review.openstack.org/gitweb?p=openstack/packstack.git;a=commitdiff;h=04e3572e618713828ffafb1ce24790f26499719e I attempted to rebuild openstack-packstack-2015.1-0.16.dev1637.g2bb5c1d.el7.src.rpm (the most recent available), adding a fourth patch, https://review.openstack.org/gitweb?p=openstack/packstack.git;a=patch;h=04e3572e618713828ffafb1ce24790f26499719e, as 004-Fix-exclude-servers.patch. The build failed with errors: + echo 'Patch #1 (0001-Do-not-enable-Keystone-in-httpd-by-default.patch):' Patch #1 (0001-Do-not-enable-Keystone-in-httpd-by-default.patch): + /usr/bin/cat /root/rpmbuild/SOURCES/0001-Do-not-enable-Keystone-in-httpd-by-default.patch + /usr/bin/patch -p1 --fuzz=0 patching file packstack/plugins/keystone_100.py Hunk #1 succeeded at 168 (offset 16 lines).
+ echo 'Patch #2 (0002-Do-not-enable-EPEL-when-installing-RDO.patch):' Patch #2 (0002-Do-not-enable-EPEL-when-installing-RDO.patch): + /usr/bin/cat /root/rpmbuild/SOURCES/0002-Do-not-enable-EPEL-when-installing-RDO.patch + /usr/bin/patch -p1 --fuzz=0 patching file packstack/plugins/prescript_000.py Hunk #1 succeeded at 1113 (offset 21 lines). + echo 'Patch #3 (0003-Fix-nagios-service-configuration.patch):' Patch #3 (0003-Fix-nagios-service-configuration.patch): + /usr/bin/cat /root/rpmbuild/SOURCES/0003-Fix-nagios-service-configuration.patch + /usr/bin/patch -p1 --fuzz=0 patching file packstack/puppet/modules/packstack/manifests/nagios_config_wrapper.pp + echo 'Patch #4 (004-Fix-exclude-servers.patch):' Patch #4 (004-Fix-exclude-servers.patch): <==== my patch placed in SOURCES + /usr/bin/cat /root/rpmbuild/SOURCES/004-Fix-exclude-servers.patch + /usr/bin/patch -p1 --fuzz=0 patching file docs/packstack.rst Hunk #1 succeeded at 838 (offset -19 lines). patching file packstack/plugins/neutron_350.py Hunk #2 succeeded at 515 (offset -59 lines). Hunk #3 FAILED at 670. 1 out of 3 hunks FAILED -- saving rejects to file packstack/plugins/neutron_350.py.rej error: Bad exit status from /var/tmp/rpm-tmp.ZVNRrt (%prep) So Hunks #1 and #2 applied, but Hunk #3 failed against the most recent openstack-packstack build for RDO Kilo. I am aware that `yum downgrade openstack-packstack openstack-packstack-doc openstack-packstack-puppet`, which takes the packages back to openstack-packstack-puppet-2015.1-0.1.dev1537.gba5183c.el7.noarch and openstack-packstack-2015.1-0.1.dev1537.gba5183c.el7.noarch, allows the EXCLUDE_SERVERS directive to be used (already tested). The version installed via `rpm -iv rdo-release-kilo-1.noarch.rpm` was 2015.1-0.14. Please advise. Boris From rbowen at redhat.com Fri Jan 22 19:07:53 2016 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 22 Jan 2016 14:07:53 -0500 Subject: [Rdo-list] Doc day: Thanks!
Message-ID: <56A27E09.5040002@redhat.com> Many thanks to all who participated in the doc day over the last few days. Your improvements to the website and documentation are enormously appreciated. Watch https://www.rdoproject.org/events/docdays/ for upcoming doc days in February! --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From rbowen at redhat.com Fri Jan 22 19:32:41 2016 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 22 Jan 2016 14:32:41 -0500 Subject: [Rdo-list] Mitaka Milestone 2 test day, January 27th, 28th Message-ID: <56A283D9.6070900@redhat.com> We will be holding a test day for Mitaka Milestone 2 packages that have passed CI, on January 27th and 28th, which is next week. You can join the conversation on #rdo, on the Freenode IRC network, and here on rdo-list. Further details may be found at https://www.rdoproject.org/testday/mitaka/milestone2/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From apevec at gmail.com Sat Jan 23 00:30:25 2016 From: apevec at gmail.com (Alan Pevec) Date: Sat, 23 Jan 2016 01:30:25 +0100 Subject: [Rdo-list] EXCLUDE_SERVERS BUG AFFECTING RDO Kilo 2015.1.1 In-Reply-To: References: <972612602.14979085.1453193902004.JavaMail.zimbra@redhat.com> <1765635824.17364983.1453397149534.JavaMail.zimbra@redhat.com> Message-ID: > Commit contains fix per https://review.openstack.org/#/c/257033/ is This was only recently merged to the packstack upstream master branch and needs to be cherry-picked to the stable/kilo branch used to build the package in RDO Kilo. > Version installed via `rpm -iv rdo-release-kilo-1.noarch.rpm` was 2015.1-0.14 Kilo updates have been pushed to the CentOS Cloud SIG repo http://mirror.centos.org/centos/7/cloud/x86_64/openstack-kilo/ and rdo-release-kilo-1 still points to the old repository location on fedorapeople.org. I'll push an rdo-release update for Kilo to point to the new location.
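[Editor's note] The backport Alan describes, cherry-picking the master fix onto stable/kilo, might look like the sketch below. This uses a throwaway repo with made-up commits rather than the real openstack/packstack checkout; the `-x` flag records the original commit SHA in the backported commit's message, which is the usual convention for stable-branch backports.

```shell
set -e
# Throwaway-repo sketch of a stable-branch backport via cherry-pick.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
g() { git -c user.email=dev@example.com -c user.name=dev "$@"; }
g commit -q --allow-empty -m "base"          # common ancestor
g branch stable/kilo                         # stable branch forks here
g commit -q --allow-empty -m "Fix EXCLUDE_SERVERS handling"  # fix lands on master
fix=$(git rev-parse HEAD)
g checkout -q stable/kilo
g cherry-pick --allow-empty -x "$fix"        # -x appends "(cherry picked from commit ...)"
git log -1 --format=%B
```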
Cheers, Alan From dradez at redhat.com Sat Jan 23 05:50:31 2016 From: dradez at redhat.com (Dan Radez) Date: Sat, 23 Jan 2016 00:50:31 -0500 Subject: [Rdo-list] RC1 Message-ID: <56A314A7.70009@redhat.com> I've pushed the tag to the git repo and moved the RC1 artifacts into the brahmaputra directory on artifacts.opnfv.org. Tim, would you like to type up an announcement and send it in the morning? Please include rdo-list at redhat.com and address the RDO and RDO Manager communities there if you do the write-up. If you want me to write it up let me know. Dan From dradez at redhat.com Sat Jan 23 06:03:24 2016 From: dradez at redhat.com (Dan Radez) Date: Sat, 23 Jan 2016 01:03:24 -0500 Subject: [Rdo-list] RC1 In-Reply-To: <56A314A7.70009@redhat.com> References: <56A314A7.70009@redhat.com> Message-ID: <56A317AC.2030305@redhat.com> On 01/23/2016 12:50 AM, Dan Radez wrote: > I've pushed the tag to the git repo and moved the RC1 artifacts into the > brahmaputra directory on artifacts.opnfv.org > > Tim would you like to type up an announcement and send it in the morning? > > Please include rdo-list at redhat.com and address the RDO and RDO Manager > communities there if you do write up. > > If you want me to write it up let me know. > > Dan Oops, sorry RDO list, didn't mean for that to hit the list yet.
Hope I didn't ruin the surprise :) From bderzhavets at hotmail.com Sat Jan 23 06:42:03 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sat, 23 Jan 2016 06:42:03 +0000 Subject: [Rdo-list] EXCLUDE_SERVERS BUG AFFECTING RDO Kilo 2015.1.1 In-Reply-To: References: <972612602.14979085.1453193902004.JavaMail.zimbra@redhat.com> <1765635824.17364983.1453397149534.JavaMail.zimbra@redhat.com> , Message-ID: ________________________________________ From: Alan Pevec Sent: Friday, January 22, 2016 7:30 PM To: Boris Derzhavets Cc: rdo-list Subject: Re: EXCLUDE_SERVERS BUG AFFECTING RDO Kilo 2015.1.1 > Commit contains fix per https://review.openstack.org/#/c/257033/ is This was only recently merged to packstack upstream master branch and needs to be cherry-picked to stable/kilo branch used to build the package in RDO Kilo. Could I expect openstack-packstack-2015.1-0.17 to have the mentioned patch applied? I believe this issue to be critical for a landscape where 10 new Compute Nodes
It does make sense to backport this fix, I've requested it in Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1254389#c9 Cheers, Alan From dradez at redhat.com Sat Jan 23 18:53:58 2016 From: dradez at redhat.com (Dan Radez) Date: Sat, 23 Jan 2016 13:53:58 -0500 Subject: [Rdo-list] Fwd: [apex] RC1 now available In-Reply-To: <1980555140.16378832.1453568277739.JavaMail.zimbra@redhat.com> References: <1980555140.16378832.1453568277739.JavaMail.zimbra@redhat.com> Message-ID: <56A3CC46.2060403@redhat.com> Hello RDO, As leaked earlier: Here is our announcement for OPNFV Apex Brahmaputra RC1. OPNFV Apex uses RDO Manager and RDO under the covers to deploy an NFV platform. We launch a virtual instack on a jump host and then deploy either a virtual overcloud on the same jump host or a bare-metal overcloud. Documentation can be found here: http://artifacts.opnfv.org/apex/docs/installation-instructions/ Contact us via freenode in #opnfv-apex or on the opnfv users or tech mailing lists. opnfv-tech-discuss at lists.opnfv.org opnfv-users at lists.opnfv.org Dan -------- Forwarded Message -------- Subject: [apex] RC1 now available Date: Sat, 23 Jan 2016 11:57:57 -0500 (EST) From: Tim Rozet To: Debra Scott CC: opnfv-tech-discuss at lists.opnfv.org, Dan Radez , Michael Chapman , Dave Neary , Chris Wright , Alvaro Lopez Ortega Hello, I am pleased to announce that we have released another candidate build for Brahmaputra: artifacts.opnfv.org apex/brahmaputra/opnfv-apex-2.1-brahmaputra.1.rc1.noarch.rpm apex/brahmaputra/opnfv-apex-common-2.1-brahmaputra.1.rc1.noarch.rpm apex/brahmaputra/opnfv-apex-opendaylight-sfc-2.1-brahmaputra.1.rc1.noarch.rpm apex/brahmaputra/opnfv-apex-undercloud-2.1-brahmaputra.1.rc1.noarch.rpm apex/brahmaputra/opnfv-brahmaputra.1.rc1.iso This candidate build includes support for: - OpenStack HA Liberty - OpenDaylight Lithium 3.3 with L2 and L3 plugin deployment scenarios - ONOS - SFC (NSH OVS with OpenDaylight Beryllium) - Ceilometer Aodh support for Doctor Several bug
fixes have gone in as well, and thanks to Functest team and others in the community for testing out RC0 for us and finding these issues. Virtual deployments in our CI environment take as little as 32 minutes, and bare metal takes about 43 minutes to deploy a 5 controller/compute HA deployment. Huge thanks to Dan and Michael for their hard work and sacrificing their nights and weekends to get us this far! Tim Rozet Red Hat SDN Team ----- Original Message ----- From: "Tim Rozet" To: "Debra Scott" Cc: opnfv-tech-discuss at lists.opnfv.org Sent: Thursday, January 21, 2016 1:57:26 AM Subject: Re: [opnfv-tech-discuss] [apex] RC1 readiness/status report Hi Debra, Latest status. Progress since last update: 1. Michael has gotten AODH to work locally in his build environment and submitted a patch to OPNFV. We should then have support for Doctor: https://gerrit.opnfv.org/gerrit/#/c/7455/ 2. We have patch in gerrit to support SFC (ODL Be, NSH OVS, Kernel Upgrade to 3.13): https://gerrit.opnfv.org/gerrit/#/c/7497/ 3. ODL L3 has deployed on baremetal, and is now added to our Jenkins verify. 4. We have fixed 2 bugs that were causing failures in functest: https://jira.opnfv.org/browse/APEX-61, https://jira.opnfv.org/browse/APEX-59 (thanks Jose for debugging them with me) TODO: 1. Dan is working on OpenContrail integration. 3. Functest is broken. Would like to get good functest results on our daily before we build RC1. 4. Yardstick was integrated into Apex, but is failing during run. Will debug that with Yardstick team. 5. 
Docs Thanks, Tim Rozet Red Hat SDN Team ----- Original Message ----- From: "Tim Rozet" To: "Debra Scott" Cc: opnfv-tech-discuss at lists.opnfv.org, "Dan Radez" , "Michael Chapman" , "zhoubo (X)" , "Chris Wright" Sent: Monday, January 18, 2016 12:51:26 AM Subject: Re: [opnfv-tech-discuss] [apex] RC1 readiness/status report Hi Debra, I have not heard what officially RC1 is supposed to contain, but for us we will build another candidate sometime within the next few days. Progress since last update: 1. Awesome work by Dan and Michael to get the RPM split done, our RPMs are now split into 3: apex/opnfv-apex-2.1-20160117.noarch.rpm apex/opnfv-apex-common-2.1-20160117.noarch.rpm apex/opnfv-apex-undercloud-2.1-20160117.noarch.rpm 2. ONOS code is fully merged and works in virtual deployments (baremetal and functest results pending) - thanks Bob and ONOS team for all the help integrating 3. Scenarios were all added Friday to Apex Jenkins: https://build.opnfv.org/ci/view/apex/ (ONOS and ODL L2 are now part of our CI verify and daily) 4. ODL L3 patch has been pushed and passed initial round of CI: https://gerrit.opnfv.org/gerrit/#/c/6995/6 TODO: 1. Dan is working on OpenContrail integration. 2. Michael is working on getting AODH support for Doctor into Apex. Patch WIP: https://gerrit.opnfv.org/gerrit/#/c/6631/ 3. I'm working on getting SFC support. Plan is to build Apex RPM with Be version of ODL included, and I'm writing up some ansible modules to update kernel, install NSH OVS, and Tacker. 4. Yardstick was integrated into Apex, but is failing during run. Will debug that with Yardstick team. For our RC1 build my goal is to include ONOS and ODL L3. As soon as we feel those are both stable, with good functest results, we will build RC1. 
Thanks, Tim Rozet Red Hat SDN Team ----- Original Message ----- From: "Tim Rozet" To: "Debra Scott" Cc: opnfv-tech-discuss at lists.opnfv.org, "Dan Radez" , "Michael Chapman" , "Sam Hague" Sent: Thursday, January 14, 2016 5:34:10 PM Subject: [opnfv-tech-discuss] [apex] RC0 readiness/status report Hi Debra, Glad to announce we have built an RC0 candidate artifact! This build supports ODL Lithium 3.3, OpenStack Liberty HA, network isolation, Ceph in a virtual or baremetal environment. http://artifacts.opnfv.org/index.html apex/brahmaputra/opnfv-apex-2.7-brahmaputra.1.rc0.noarch.rpm apex/brahmaputra/opnfv-apex-2.7-brahmaputra.1.rc0.src.rpm apex/brahmaputra/opnfv-brahmaputra.1.rc0.iso apex/brahmaputra/opnfv-brahmaputra.1.rc0.properties Functest results on artifact: Tempest cases: 110 Pass/8 Fail ODL cases: 15 Pass/3 Fail VPING: 1 Pass/0 Fail Rally: 6 Pass/2 Fail VIMS: Aborted Functest caught a bug for us that we will fix post RC0: https://jira.opnfv.org/browse/APEX-59, which also caused VIMS to abort. TODO looking past RC0: 1. Found an issue where ONOS build artifact was not being included in our build. Patch currently up to fix that. (Waiting on #2) 2. However, because of #1, our artifact (rpm) size grows over 4GB which is the limit of cpio, and breaks the rpm build. Dan is working on splitting our rpm into separate rpms which will fix this problem. Patch currently being tested: https://gerrit.opnfv.org/gerrit/#/c/6227/ 3. Test ONOS and make sure it works with our integration. (Depends on #1, #2) 4. Dan was able to get OpenContrail to build as an RPM. Stuart has provided us configuration info. Next step is to see if the RPM works and if we can install/configure OpenContrail. 5. Michael is working on getting AODH support for Doctor into Apex. Patch WIP: https://gerrit.opnfv.org/gerrit/#/c/6631/ 6. I'm working on getting SFC support. 
Plan is to build Apex RPM with Be version of ODL included, and I'm writing up some ansible modules to update kernel, install NSH OVS, and Tacker. 7. Yardstick was integrated into Apex, but is failing to run. Will debug that with Yardstick team. Thanks, Tim Rozet Red Hat SDN Team ----- Original Message ----- From: "Tim Rozet" To: "Debra Scott" Cc: opnfv-tech-discuss at lists.opnfv.org Sent: Wednesday, January 13, 2016 8:27:45 AM Subject: Re: [opnfv-tech-discuss] [apex] RC0 readiness/status report Hi Debra, Will be late again today so here is current status: Accomplishments since Monday: 1. Ceph is working and merged. 2. Network isolation is fully working (admin, private, public, storage networks created and used in deployment). 3. Was able to get functest to run with 118 pass/27 fail (Thanks Jose and Morgan for the help). There was a slight error in functest where some of our keystone users got deleted, so I am hoping on the next run there will be more passes: http://213.77.62.197/results?project=functest&case=Tempest&installer=apex Current status for RC0: I believe all the patches are in place to provide us an RC0 artifact. The only remaining issue is functest failed to start again on our daily last night, so we need to fix that so we can get a valid artifact. TODO for RC0: 1. Fix functest execution in our daily. 2. Merge current master code to stable/b and produce RC0 candidate via daily job. TODO looking past RC0: 1. Found an issue where ONOS build artifact was not being included in our build. Patch currently up to fix that. 2. However, because of #1, our artifact (rpm) size grows over 4GB which is the limit of cpio, and breaks the rpm build. Dan is working on splitting our rpm into separate rpms which will fix this problem. 3. Test ONOS and make sure it works with our integration. 4. Dan was able to get OpenContrail to build as an RPM. Waiting on Stuart for more information on what we need to do to integrate. 5. 
Michael is working on getting AODH support for Doctor into Apex. 6. I'll work on getting SFC supported. Tim Rozet Red Hat SDN Team ----- Original Message ----- From: "Tim Rozet" To: "Debra Scott" Cc: "Dan Radez" , "Michael Chapman" , opnfv-tech-discuss at lists.opnfv.org Sent: Sunday, January 10, 2016 11:19:40 PM Subject: [apex] RC0 readiness/status report Hi Debra, I will be late to the daily release ops meeting tomorrow, so I wanted to relay our status. Accomplishments this past week: 1. Upstream artifacts are frozen for B-release, this way we cannot be affected anymore by upstream breakage 2. OPNFV Apex CI pipeline is stable now for builds and virtual deployments (last 4 builds in a row successful) 3. LF POD1 is up and stable, and we completed our first baremetal deployment there today: https://build.opnfv.org/ci/view/apex/job/apex-deploy-baremetal-master/12/ 4. Our installer now takes deploy, network, and inventory settings as separate yaml inputs and parses them correctly 5. ODL Lithium with L2 support is what is currently being deployed in CI 6. ONOS code has been merged into Apex and included in latest artifact build TODO this week: 1. Get functest executing again on our daily jobs (working on that now) 2. Get Ceph working on controllers and part of our deploy (patch in place https://gerrit.opnfv.org/gerrit/#/c/5521/) 3. Create jenkins job to test ONOS 4. Meet with Stuart and figure out how to get OpenContrail support asap If we can get #1 and #2 (with good functest results) completed then we have an RC0 candidate build. I am hopeful we can achieve that by EOB Monday.
Thanks, Tim Rozet Red Hat SDN Team _______________________________________________ opnfv-tech-discuss mailing list opnfv-tech-discuss at lists.opnfv.org https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss _______________________________________________ opnfv-tech-discuss mailing list opnfv-tech-discuss at lists.opnfv.org https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss From hguemar at fedoraproject.org Sat Jan 23 20:08:35 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Sat, 23 Jan 2016 21:08:35 +0100 Subject: [Rdo-list] Fwd: [apex] RC1 now available In-Reply-To: <56A3CC46.2060403@redhat.com> References: <1980555140.16378832.1453568277739.JavaMail.zimbra@redhat.com> <56A3CC46.2060403@redhat.com> Message-ID: Great job for this Release Candidate and no worries for the spoilers :) Regards, H. From mohammed.arafa at gmail.com Sun Jan 24 20:15:37 2016 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Sun, 24 Jan 2016 22:15:37 +0200 Subject: [Rdo-list] [rdo-manager] bug while building images Message-ID: ### i was attempting to upload the images generated when i got this error [stack at rdo3 ~]$ openstack overcloud image upload Required file "./deploy-ramdisk-ironic.initramfs" does not exist. 
### so i tried to build the image by itself and got this error [stack at rdo3 images]$ openstack overcloud image build --type ironic-python-agent usage: openstack overcloud image build [-h] [--all] [--type ] [--base-image BASE_IMAGE] [--instack-undercloud-elements INSTACK_UNDERCLOUD_ELEMENTS] [--tripleo-puppet-elements TRIPLEO_PUPPET_ELEMENTS] [--elements-path ELEMENTS_PATH] [--tmp-dir TMP_DIR] [--node-arch NODE_ARCH] [--node-dist NODE_DIST] [--registration-method REG_METHOD] [--use-delorean-trunk] [--delorean-trunk-repo DELOREAN_TRUNK_REPO] [--delorean-repo-file DELOREAN_REPO_FILE] [--overcloud-full-dib-extra-args OVERCLOUD_FULL_DIB_EXTRA_ARGS] [--overcloud-full-name OVERCLOUD_FULL_NAME] [--fedora-user-name FEDORA_USER_NAME] [--agent-name AGENT_NAME] [--deploy-name DEPLOY_NAME] [--discovery-name DISCOVERY_NAME] [--agent-image-element AGENT_IMAGE_ELEMENT] [--deploy-image-element DEPLOY_IMAGE_ELEMENT] [--discovery-image-element DISCOVERY_IMAGE_ELEMENT] [--builder ] openstack overcloud image build: error: argument --type: invalid choice: 'ironic-python-agent' (choose from 'agent-ramdisk', 'deploy-ramdisk', 'discovery-ramdisk', 'fedora-user', 'overcloud-full') [stack at rdo3 images]$ ### ok thanks for the clarification (which wasnt in the --help!) so i ran the _proper_ command and got this error openstack overcloud image build --type agent-ramdisk Running install-packages install. Package list: openstack-ironic-python-agent Loading "fastestmirror" plugin Config time: 0.015 Yum version: 3.4.3 rpmdb time: 0.000 Setting up Package Sacks Loading mirror speeds from cached hostfile * base: ba.mirror.garr.it * epel: fr2.rpmfind.net * extras: centos.fastbull.org * updates: mirrors.prometeus.net pkgsack time: 6.760 Checking for virtual provide or file-provide for openstack-ironic-python-agent No package openstack-ironic-python-agent available. 
Error: Nothing to do [stack at rdo3 images]$ yum repolist Loaded plugins: fastestmirror Determining fastest mirrors * base: mi.mirror.garr.it * epel: mirror.imt-systems.com * extras: mirror.crazynetwork.it * updates: ba.mirror.garr.it repo id repo name status base/7/x86_64 CentOS-7 - Base 9,007 epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 9,276+1 extras/7/x86_64 CentOS-7 - Extras 191 openstack-liberty/x86_64 OpenStack Liberty Repository 870 updates/7/x86_64 CentOS-7 - Updates 497 repolist: 19,841 [stack at rdo3 images]$ -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Mon Jan 25 14:53:17 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 25 Jan 2016 09:53:17 -0500 Subject: [Rdo-list] Fwd: [apex] RC1 now available In-Reply-To: <56A3CC46.2060403@redhat.com> References: <1980555140.16378832.1453568277739.JavaMail.zimbra@redhat.com> <56A3CC46.2060403@redhat.com> Message-ID: <56A636DD.7030806@redhat.com> Congratulations on this release! On 01/23/2016 01:53 PM, Dan Radez wrote: > Hello RDO, > As leaked earlier: Here is our announcement for OPNFV Apex Brahmaputra RC1. > > OPNFV Apex uses RDO Manager and RDO under the covers to deploy an NFV > platform. We launch a Virtual instack on a jump host and then deploy > either virtual on the same jumphost or a baremetal overcloud. > > Documentation can be found here: > http://artifacts.opnfv.org/apex/docs/installation-instructions/ > > Contact us via freenode in #opnfv-apex or on the opnfv users or tech > mailing lists. 
> opnfv-tech-discuss at lists.opnfv.org > opnfv-users at lists.opnfv.org > > Dan > > > -------- Forwarded Message -------- > Subject: [apex] RC1 now available > Date: Sat, 23 Jan 2016 11:57:57 -0500 (EST) > From: Tim Rozet > To: Debra Scott > CC: opnfv-tech-discuss at lists.opnfv.org, Dan Radez , > Michael Chapman , Dave Neary , > Chris Wright , Alvaro Lopez Ortega > > Hello, > I am pleased to announce that we have released another candidate build > for Brahmaputra: > artifacts.opnfv.org > apex/brahmaputra/opnfv-apex-2.1-brahmaputra.1.rc1.noarch.rpm > apex/brahmaputra/opnfv-apex-common-2.1-brahmaputra.1.rc1.noarch.rpm > apex/brahmaputra/opnfv-apex-opendaylight-sfc-2.1-brahmaputra.1.rc1.noarch.rpm > > apex/brahmaputra/opnfv-apex-undercloud-2.1-brahmaputra.1.rc1.noarch.rpm > apex/brahmaputra/opnfv-brahmaputra.1.rc1.iso > > This candidate build includes support for: > - OpenStack HA Liberty > - OpenDaylight Lithium 3.3 with L2 and L3 plugin deployment scenarios > - ONOS > - SFC (NSH OVS with OpenDaylight Beryllium) > - Ceilometer Aodh support for Doctor > > Several bug fixes have gone in as well, and thanks to Functest team and > others in the community for testing out RC0 for us and finding these > issues. Virtual deployments in our CI environment take as little as 32 > minutes, and bare metal takes about 43 minutes to deploy a 5 > controller/compute HA deployment. > > Huge thanks to Dan and Michael for their hard work and sacrificing their > nights and weekends to get us this far! > > Tim Rozet > Red Hat SDN Team > > ----- Original Message ----- > From: "Tim Rozet" > To: "Debra Scott" > Cc: opnfv-tech-discuss at lists.opnfv.org > Sent: Thursday, January 21, 2016 1:57:26 AM > Subject: Re: [opnfv-tech-discuss] [apex] RC1 readiness/status report > > Hi Debra, > Latest status. > > Progress since last update: > 1. Michael has gotten AODH to work locally in his build environment and > submitted a patch to OPNFV. 
We should then have support for Doctor: > https://gerrit.opnfv.org/gerrit/#/c/7455/ > 2. We have patch in gerrit to support SFC (ODL Be, NSH OVS, Kernel > Upgrade to 3.13): https://gerrit.opnfv.org/gerrit/#/c/7497/ > 3. ODL L3 has deployed on baremetal, and is now added to our Jenkins > verify. > 4. We have fixed 2 bugs that were causing failures in functest: > https://jira.opnfv.org/browse/APEX-61, > https://jira.opnfv.org/browse/APEX-59 (thanks Jose for debugging them > with me) > > TODO: > 1. Dan is working on OpenContrail integration. > 3. Functest is broken. Would like to get good functest results on our > daily before we build RC1. > 4. Yardstick was integrated into Apex, but is failing during run. Will > debug that with Yardstick team. > 5. Docs > > Thanks, > > Tim Rozet > Red Hat SDN Team > > ----- Original Message ----- > From: "Tim Rozet" > To: "Debra Scott" > Cc: opnfv-tech-discuss at lists.opnfv.org, "Dan Radez" , > "Michael Chapman" , "zhoubo (X)" > , "Chris Wright" > Sent: Monday, January 18, 2016 12:51:26 AM > Subject: Re: [opnfv-tech-discuss] [apex] RC1 readiness/status report > > Hi Debra, > I have not heard what officially RC1 is supposed to contain, but for us > we will build another candidate sometime within the next few days. > > Progress since last update: > 1. Awesome work by Dan and Michael to get the RPM split done, our RPMs > are now split into 3: > apex/opnfv-apex-2.1-20160117.noarch.rpm > apex/opnfv-apex-common-2.1-20160117.noarch.rpm > apex/opnfv-apex-undercloud-2.1-20160117.noarch.rpm > 2. ONOS code is fully merged and works in virtual deployments > (baremetal and functest results pending) - thanks Bob and ONOS team for > all the help integrating > 3. Scenarios were all added Friday to Apex Jenkins: > https://build.opnfv.org/ci/view/apex/ (ONOS and ODL L2 are now part of > our CI verify and daily) > 4. ODL L3 patch has been pushed and passed initial round of CI: > https://gerrit.opnfv.org/gerrit/#/c/6995/6 > > TODO: > 1. 
Dan is working on OpenContrail integration. > 2. Michael is working on getting AODH support for Doctor into Apex. > Patch WIP: https://gerrit.opnfv.org/gerrit/#/c/6631/ > 3. I'm working on getting SFC support. Plan is to build Apex RPM with > Be version of ODL included, and I'm writing up some ansible modules to > update kernel, install NSH OVS, and Tacker. > 4. Yardstick was integrated into Apex, but is failing during run. Will > debug that with Yardstick team. > > For our RC1 build my goal is to include ONOS and ODL L3. As soon as we > feel those are both stable, with good functest results, we will build RC1. > > Thanks, > > Tim Rozet > Red Hat SDN Team > > ----- Original Message ----- > From: "Tim Rozet" > To: "Debra Scott" > Cc: opnfv-tech-discuss at lists.opnfv.org, "Dan Radez" , > "Michael Chapman" , "Sam Hague" > Sent: Thursday, January 14, 2016 5:34:10 PM > Subject: [opnfv-tech-discuss] [apex] RC0 readiness/status report > > Hi Debra, > Glad to announce we have built an RC0 candidate artifact! This build > supports ODL Lithium 3.3, OpenStack Liberty HA, network isolation, Ceph > in a virtual or baremetal environment. > > http://artifacts.opnfv.org/index.html > apex/brahmaputra/opnfv-apex-2.7-brahmaputra.1.rc0.noarch.rpm > apex/brahmaputra/opnfv-apex-2.7-brahmaputra.1.rc0.src.rpm > apex/brahmaputra/opnfv-brahmaputra.1.rc0.iso > apex/brahmaputra/opnfv-brahmaputra.1.rc0.properties > > Functest results on artifact: > Tempest cases: 110 Pass/8 Fail > ODL cases: 15 Pass/3 Fail > VPING: 1 Pass/0 Fail > Rally: 6 Pass/2 Fail > VIMS: Aborted > > Functest caught a bug for us that we will fix post RC0: > https://jira.opnfv.org/browse/APEX-59, which also caused VIMS to abort. > > TODO looking past RC0: > 1. Found an issue where ONOS build artifact was not being included in > our build. Patch currently up to fix that. (Waiting on #2) > 2. However, because of #1, our artifact (rpm) size grows over 4GB which > is the limit of cpio, and breaks the rpm build. 
Dan is working on > splitting our rpm into separate rpms which will fix this problem. Patch > currently being tested: https://gerrit.opnfv.org/gerrit/#/c/6227/ > 3. Test ONOS and make sure it works with our integration. (Depends on > #1, #2) > 4. Dan was able to get OpenContrail to build as an RPM. Stuart has > provided us configuration info. Next step is to see if the RPM works > and if we can install/configure OpenContrail. > 5. Michael is working on getting AODH support for Doctor into Apex. > Patch WIP: https://gerrit.opnfv.org/gerrit/#/c/6631/ > 6. I'm working on getting SFC support. Plan is to build Apex RPM with > Be version of ODL included, and I'm writing up some ansible modules to > update kernel, install NSH OVS, and Tacker. > 7. Yardstick was integrated into Apex, but is failing to run. Will > debug that with Yardstick team. > > Thanks, > > Tim Rozet > Red Hat SDN Team > > ----- Original Message ----- > From: "Tim Rozet" > To: "Debra Scott" > Cc: opnfv-tech-discuss at lists.opnfv.org > Sent: Wednesday, January 13, 2016 8:27:45 AM > Subject: Re: [opnfv-tech-discuss] [apex] RC0 readiness/status report > > Hi Debra, > Will be late again today so here is current status: > Accomplishments since Monday: > 1. Ceph is working and merged. > 2. Network isolation is fully working (admin, private, public, storage > networks created and used in deployment). > 3. Was able to get functest to run with 118 pass/27 fail (Thanks Jose > and Morgan for the help). There was a slight error in functest where > some of our keystone users got deleted, so I am hoping on the next run > there will be more passes: > http://213.77.62.197/results?project=functest&case=Tempest&installer=apex > > Current status for RC0: > I believe all the patches are in place to provide us an RC0 artifact. > The only remaining issue is functest failed to start again on our daily > last night, so we need to fix that so we can get a valid artifact. > > TODO for RC0: > 1. 
Fix functest execution in our daily. > 2. Merge current master code to stable/b and produce RC0 candidate via > daily job. > > TODO looking past RC0: > 1. Found an issue where ONOS build artifact was not being included in > our build. Patch currently up to fix that. > 2. However, because of #1, our artifact (rpm) size grows over 4GB which > is the limit of cpio, and breaks the rpm build. Dan is working on > splitting our rpm into separate rpms which will fix this problem. > 3. Test ONOS and make sure it works with our integration. > 4. Dan was able to get OpenContrail to build as an RPM. Waiting on > Stuart for more information on what we need to do to integrate. > 5. Michael is working on getting AODH support for Doctor into Apex. > 6. I'll work on getting SFC supported. > > Tim Rozet > Red Hat SDN Team > > ----- Original Message ----- > From: "Tim Rozet" > To: "Debra Scott" > Cc: "Dan Radez" , "Michael Chapman" > , opnfv-tech-discuss at lists.opnfv.org > Sent: Sunday, January 10, 2016 11:19:40 PM > Subject: [apex] RC0 readiness/status report > > Hi Debra, > I will be late to the daily release ops meeting tomorrow, so I wanted to > relay our status. > > Accomplishments this past week: > 1. Upstream artifacts are frozen for B-release, this way we cannot be > affected anymore by upstream breakage > 2. OPNFV Apex CI pipeline is stable now for builds and virtual > deployments (last 4 builds in a row successful) > 3. LF POD1 is up and stable, and we completed our first baremetal > deployment there today: > https://build.opnfv.org/ci/view/apex/job/apex-deploy-baremetal-master/12/ > 4. Our installer now takes deploy, network, and inventory settings as > separate yaml inputs and parses them correctly > 5. ODL Lithium with L2 support is what is currently being deployed in CI > 6. ONOS code has been merged into Apex and included in latest artifact > build > > TODO this week: > 1. Get functest executing again on our daily jobs (working on that now) > 2.
Get Ceph working on controllers and part of our deploy (patch in > place https://gerrit.opnfv.org/gerrit/#/c/5521/) > 3. Create jenkins job to test ONOS > 4. Meet with Stuart and figure out how to get OpenContrail support asap > > If we can get #1 and #2 (with good functest results) completed then we > have an RC0 candidate build. I am hopeful we can achieve that by EOB > Monday. > > Thanks, > > Tim Rozet > Red Hat SDN Team > _______________________________________________ > opnfv-tech-discuss mailing list > opnfv-tech-discuss at lists.opnfv.org > https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss > _______________________________________________ > opnfv-tech-discuss mailing list > opnfv-tech-discuss at lists.opnfv.org > https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From hguemar at fedoraproject.org Mon Jan 25 15:00:03 2016 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 25 Jan 2016 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20160125150003.9D55060A4004@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2016-01-27 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Packaging ](https://etherpad.openstack.org/p/RDO-Packaging) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From rbowen at redhat.com Mon Jan 25 17:46:38 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 25 Jan 2016 12:46:38 -0500 Subject: [Rdo-list] Unanswered ask.openstack.org "RDO" questions (Jan 25, 2016) 
Message-ID: <56A65F7E.2050907@redhat.com> Many thanks to all the folks who have been helping work down the backlog of unanswered questions. Here's this week's batch. 61 unanswered questions: socket.error: [Errno 111] Connection refused https://ask.openstack.org/en/question/87800/socketerror-errno-111-connection-refused/ Tags: access, ovs-bridge, multi-tenant, liberty-neutron Is it possible to upgrade my Red Hat Enterprise Linux OpenStack Platform to an RDO based OpenStack? https://ask.openstack.org/en/question/87776/is-it-possible-to-upgrade-my-red-hat-enterprise-linux-openstack-platform-to-an-rdo-based-openstack/ Tags: upgrade, rdo, rhelosp connectivity chain diagnose https://ask.openstack.org/en/question/87757/connectivity-chain-diagnose/ Tags: ovs, neutron, rdo, liberty Create a new dashboard Error https://ask.openstack.org/en/question/87549/create-a-new-dashboard-error/ Tags: dashboard, command, startdash, manage.py Hiera 3.0.1 on CentOS 7.2 errors out https://ask.openstack.org/en/question/87474/hiera-301-on-centos-72-errors-out/ Tags: undercloud OpenStack-Docker driver failed https://ask.openstack.org/en/question/87243/openstack-docker-driver-failed/ Tags: docker, openstack, liberty Clarification on docs for self service connectivity https://ask.openstack.org/en/question/87183/clarification-on-docs-for-self-service-connectivity/ Tags: liberty, neutron, connectivity, router Can't create volume with cinder https://ask.openstack.org/en/question/86670/cant-create-volume-with-cinder/ Tags: cinder, glusterfs, nfs error installing rdo kilo with proxy https://ask.openstack.org/en/question/85703/error-installing-rdo-kilo-with-proxy/ Tags: rdo, packstack, centos, proxy Why is /usr/bin/openstack domain list ... hanging? 
https://ask.openstack.org/en/question/85593/why-is-usrbinopenstack-domain-list-hanging/ Tags: puppet, keystone, kilo [ RDO ] Could not find declared class ::remote::db https://ask.openstack.org/en/question/84820/rdo-could-not-find-declared-class-remotedb/ Tags: rdo Sahara SSHException: Error reading SSH protocol banner https://ask.openstack.org/en/question/84710/sahara-sshexception-error-reading-ssh-protocol-banner/ Tags: sahara, icehouse, ssh, vanila Error Sahara create cluster: 'Error attach volume to instance https://ask.openstack.org/en/question/84651/error-sahara-create-cluster-error-attach-volume-to-instance/ Tags: sahara, attach-volume, vanila, icehouse Creating Sahara cluster: Error attach volume to instance https://ask.openstack.org/en/question/84650/creating-sahara-cluster-error-attach-volume-to-instance/ Tags: sahara, attach-volume, hadoop, icehouse, vanilla Routing between two tenants https://ask.openstack.org/en/question/84645/routing-between-two-tenants/ Tags: kilo, fuel, rdo, routing Freeing IP from FLAT network setup https://ask.openstack.org/en/question/84063/freeing-ip-from-flat-network-setup/ Tags: juno, existing-network, rdo, neutron, flat How to deploy Virtual network function (VNF) in Opnstack integrated Opendaylight https://ask.openstack.org/en/question/84061/how-to-deploy-virtual-network-function-vnf-in-opnstack-integrated-opendaylight/ Tags: vnf, kilo, opendaylight, nfv cann't install python-keystone-auth-token [Close Duplicate] https://ask.openstack.org/en/question/83942/cannt-install-python-keystone-auth-token-close-duplicate/ Tags: python-keystone, openstack-swift RDO kilo installation metadata widget doesn't work https://ask.openstack.org/en/question/83870/rdo-kilo-installation-metadata-widget-doesnt-work/ Tags: kilo, flavor, metadata Not able to ssh into RDO Kilo instance https://ask.openstack.org/en/question/83707/not-able-to-ssh-into-rdo-kilo-instance/ Tags: rdo, instance-ssh No able to create an instance in odl integrated RDO Kilo 
openstack https://ask.openstack.org/en/question/83700/no-able-to-create-an-instance-in-odl-integrated-rdo-kilo-openstack/ Tags: kilo, rdo, opendaylight, kilo-neutron, integration redhat RDO enable access to swift via S3 https://ask.openstack.org/en/question/83607/redhat-rdo-enable-access-to-swift-via-s3/ Tags: swift, s3 openstack baremetal introspection internal server error https://ask.openstack.org/en/question/82790/openstack-baremetal-introspection-internal-server-error/ Tags: rdo, ironic-inspector, tripleo glance\nova command line SSL failure https://ask.openstack.org/en/question/82692/glancenova-command-line-ssl-failure/ Tags: glance, kilo-openstack, ssl Cannot create/update flavor metadata from horizon https://ask.openstack.org/en/question/82477/cannot-createupdate-flavor-metadata-from-horizon/ Tags: rdo, kilo, flavor, metadata Installing openstack using packstack (rdo) failed https://ask.openstack.org/en/question/82473/installing-openstack-using-packstack-rdo-failed/ Tags: rdo, packstack, installation-error, keystone can't start instances after upgrade/reboot https://ask.openstack.org/en/question/82205/cant-start-instances-after-upgradereboot/ Tags: cinder, iscsi, rdo, juno_rdo Cinder LVM iSCSI can't attach https://ask.openstack.org/en/question/82031/cinder-lvm-iscsi-cant-attach/ Tags: lvmiscsi, cinder, kilo, cento7, rdo external "NFS"-Network for all vms https://ask.openstack.org/en/question/81709/external-nfs-network-for-all-vms/ Tags: external-network, juno-neutron, ovs RDO - Qrouters lose IP on public network https://ask.openstack.org/en/question/80761/rdo-qrouters-lose-ip-on-public-network/ Tags: rdo, juno_rdo, floating-ip, qrouter -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From rbowen at redhat.com Mon Jan 25 18:14:00 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 25 Jan 2016 13:14:00 -0500 Subject: [Rdo-list] RDO/OpenStack Meetups, week of 25 Jan, 2016 Message-ID: <56A665E8.5020501@redhat.com> The 
following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/events If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered. Also, don't forget that on Friday we have an all-day RDO Community Day at FOSDEM. Details at https://www.rdoproject.org/events/rdo-day-fosdem-2016/ Hope to see you there! --Rich * Monday January 25 in Saint Paul, MN, US: OpenStack Update and IBM Bluemix Demo - http://www.meetup.com/Minnesota-OpenStack-Meetup/events/227739133/ * Tuesday January 26 in Tel Aviv-Yafo, IL: Neutron Done the SDN Way - http://www.meetup.com/OpenStack-Israel/events/227823764/ * Tuesday January 26 in San Jose, CA, US: Come learn about building OpenStack Clouds with Maas and Juju - http://www.meetup.com/Silicon-Valley-OpenStack-Ops-Meetup/events/228102705/ * Wednesday January 27 in London, 17, GB: London Openstack January Meetup - http://www.meetup.com/Openstack-London/events/227588289/ * Wednesday January 27 in Orlando, FL, US: Red Hat and OpenStack in the Enterprise - http://www.meetup.com/Orlando-Central-Florida-OpenStack-Meetup/events/228183631/ * Thursday January 28 in Oslo, NO: *Partner Promo* Dell lanserer OpenStack Forum i Oslo: Automation For The Win! 
- http://www.meetup.com/RedHatOslo/events/227888127/ * Thursday January 28 in Harrisburg, PA, US: OpenStack Harrisburg kickoff - http://www.meetup.com/OpenStack-Harrisburg/events/228110502/ * Thursday January 28 in Herriman, UT, US: Bios 2 Boot 2 OpenStack Deploying a Straight Opensource OpenStack Infrastructure - http://www.meetup.com/openstack-utah/events/227290248/ * Thursday January 28 in Littleton, CO, US: Building Distributed Service Rich SDNs - http://www.meetup.com/OpenStack-Denver/events/226962447/ * Thursday January 28 in Pasadena, CA, US: Deployment Experience From Large-Scale Neutron Networks - January OpenStack L.A. - http://www.meetup.com/OpenStack-LA/events/227931609/ * Thursday January 28 in Morgantown, WV, US: OpenStack / RDO Q&A with Perry Myers - http://www.meetup.com/Morgantown-Linux-User-Group/events/227482742/ * Thursday January 28 in Budapest, HU: Cloud Budapest 2016/01 - http://www.meetup.com/Cloud-Budapest/events/227922125/ * Monday February 01 in São Paulo, BR: OpenStack Cloud Security Workshop CANCELADO - http://www.meetup.com/Openstack-Brasil/events/227950266/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From pmyers at redhat.com Mon Jan 25 18:44:16 2016 From: pmyers at redhat.com (Perry Myers) Date: Mon, 25 Jan 2016 13:44:16 -0500 Subject: [Rdo-list] RDO/OpenStack Meetups, week of 25 Jan, 2016 In-Reply-To: <56A665E8.5020501@redhat.com> References: <56A665E8.5020501@redhat.com> Message-ID: <56A66D00.1020706@redhat.com> On 01/25/2016 01:14 PM, Rich Bowen wrote: > The following are the meetups I'm aware of in the coming week where > OpenStack and/or RDO enthusiasts are likely to be present. If you know > of others, please let me know, and/or add them to > http://rdoproject.org/events > > If there's a meetup in your area, please consider attending. If you > attend, please consider taking a few photos, and possibly even writing > up a brief summary of what was covered.
> > Also, don't forget that on Friday we have an all-day RDO Community Day > at FOSDEM. Details at https://www.rdoproject.org/events/rdo-day-fosdem-2016/ > > Hope to see you there! > > --Rich > > * Monday January 25 in Saint Paul, MN, US: OpenStack Update and IBM > Bluemix Demo - > http://www.meetup.com/Minnesota-OpenStack-Meetup/events/227739133/ > > * Tuesday January 26 in Tel Aviv-Yafo, IL: Neutron Done the SDN Way - > http://www.meetup.com/OpenStack-Israel/events/227823764/ > > * Tuesday January 26 in San Jose, CA, US: Come learn about building > OpenStack Clouds with Maas and Juju - > http://www.meetup.com/Silicon-Valley-OpenStack-Ops-Meetup/events/228102705/ > > * Wednesday January 27 in London, 17, GB: London Openstack January > Meetup - http://www.meetup.com/Openstack-London/events/227588289/ > > * Wednesday January 27 in Orlando, FL, US: Red Hat and OpenStack in the > Enterprise - > http://www.meetup.com/Orlando-Central-Florida-OpenStack-Meetup/events/228183631/ > > * Thursday January 28 in Oslo, NO: *Partner Promo* Dell lanserer > OpenStack Forum i Oslo: Automation For The Win! - > http://www.meetup.com/RedHatOslo/events/227888127/ > > * Thursday January 28 in Harrisburg, PA, US: OpenStack Harrisburg > kickoff - http://www.meetup.com/OpenStack-Harrisburg/events/228110502/ Harrisburg? That's not too far away, I should go to one of these :) > * Thursday January 28 in Herriman, UT, US: Bios 2 Boot 2 OpenStack > Deploying a Straight Opensource OpenStack Infrastructure - > http://www.meetup.com/openstack-utah/events/227290248/ > > * Thursday January 28 in Littleton, CO, US: Building Distributed Service > Rich SDNs - http://www.meetup.com/OpenStack-Denver/events/226962447/ > > * Thursday January 28 in Pasadena, CA, US: Deployment Experience From > Large-Scale Neutron Networks - January OpenStack L.A. 
- > http://www.meetup.com/OpenStack-LA/events/227931609/ > > * Thursday January 28 in Morgantown, WV, US: OpenStack / RDO Q&A with > Perry Myers - > http://www.meetup.com/Morgantown-Linux-User-Group/events/227482742/ Note, this has to be canceled as I have been pulled away on unexpected travel and cannot make the meetup. My apologies to the community members. > * Thursday January 28 in Budapest, HU: Cloud Budapest 2016/01 - > http://www.meetup.com/Cloud-Budapest/events/227922125/ > > * Monday February 01 in São Paulo, BR: OpenStack Cloud Security Workshop > CANCELADO - http://www.meetup.com/Openstack-Brasil/events/227950266/ > > > From rbowen at redhat.com Mon Jan 25 18:51:25 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 25 Jan 2016 13:51:25 -0500 Subject: [Rdo-list] Reminder: RDO Community Day at FOSDEM - this Friday Message-ID: <56A66EAD.8020805@redhat.com> A final reminder: We're having the all-day RDO Community Day at FOSDEM this Friday. It will be held at IBM Client Center Brussels Avenue du Bourget/ Bourgetlaan 42 B - 1130 Brussels You can find more details about the venue at their microsite at http://www-05.ibm.com/be/clientcenter/contact.html The full schedule for the event is at https://www.rdoproject.org/events/rdo-day-fosdem-2016/ This event is being held in conjunction with the CentOS Dojo, and the details for that event are here: https://wiki.centos.org/Events/Dojo/Brussels2016 Hope to see some of you there. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From rbowen at redhat.com Mon Jan 25 20:12:52 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 25 Jan 2016 15:12:52 -0500 Subject: [Rdo-list] RDO blog roundup, 25 Jan 2016 Message-ID: <56A681C4.4000407@redhat.com> Here's what RDO enthusiasts have been writing about in the last week: Deploying an OpenStack undercloud/overcloud on a single server from my laptop with Ansible.
by Harry Rybacki During the summer of 2014 I worked on the OpenStack Keystone component while interning at Red Hat. Fast forward to the end of October 2015 and I once again find myself working on OpenStack for Red Hat - this time on the RDO Continuous Integration (CI) team. Since re-joining Red Hat I've developed a whole new level of respect not only for the wide breadth of knowledge required to work on this team but for deploying OpenStack in general. ... read more at http://tm3.org/4q Ceilometer Polling Performance Improvement, by Julien Danjou During the OpenStack summit of May 2015 in Vancouver, the OpenStack Telemetry community team ran a session for operators to provide feedback. One of the main issues operators relayed was the polling that Ceilometer was running on Nova to gather instance information. It had a highly negative impact on the Nova API CPU usage, as it retrieves all the information about instances on regular intervals. ... read more at http://tm3.org/4m AIO RDO Liberty && several external networks VLAN provider setup by Boris Derzhavets Post below is addressing the question when AIO RDO Liberty Node has to have external networks of VLAN type with predefined vlan tags. Straightforward packstack --allinone install doesn't allow to achieve desired network configuration. External network provider of vlan type appears to be required. In particular case, office networks 10.10.10.0/24 vlan tagged (157) ,10.10.57.0/24 vlan tagged (172), 10.10.32.0/24 vlan tagged (200) already exist when RDO install is running. If demo_provision was "y" , then delete router1 and created external network of VXLAN type ... read more at http://tm3.org/4l Caching in Horizon with Redis by Matthias Runge Redis is an in-memory data structure store, which can be used as cache and session backend. I thought to give it a try for Horizon. Installation is quite simple, either pip install django-redis or dnf --enablerepo=rawhide install python-django-redis. ...
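As a sketch of what that post describes, a Horizon local_settings.py fragment pointing the cache and session backend at Redis might look like the following. The backend and client paths follow django-redis's documented configuration; the host, port, and database number are placeholders to adjust for your deployment:

```python
# Hypothetical Horizon local_settings.py fragment (assumes the
# django-redis package is installed; host/port/db are placeholders).
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}

# Store Django sessions in the same cache.
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
```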
read more at http://tm3.org/4n Red Hat Cloud Infrastructure Cited as a Leader Among Private Cloud Software Suites by Independent Research Firm by Gordon Tillmore The Forrester report states that Red Hat "leads the evaluation with its powerful portal, top governance capabilities, and a strategy built around integration, open source, and interoperability. Rather than trying to build a custom approach for completing functions around operations, governance, or automation, Red Hat provides a very composable package by leveraging a mix of market standards and open source in addition to its own development." ... read more at http://tm3.org/4o Disable "Resource Usage"-dashboard in Horizon by Matthias Runge When using Horizon as Admin user, you probably saw the metering dashboard, also known as "Resource Usage". It internally uses Ceilometer; Ceilometer continuously collects data from configured data sources. In a cloud environment, this can quickly grow enormously. When someone visits the metering dashboard in Horizon, Ceilometer then will accumulate requested data on the fly. ...
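Removing a panel like that is usually done through Horizon's pluggable-settings mechanism. A sketch of such an enabled-file follows; the file name, the 'metering' panel slug, and its placement under the 'admin' dashboard are assumptions about the Liberty-era Horizon layout, not taken from the post itself:

```python
# Hypothetical file: openstack_dashboard/enabled/_99_disable_metering.py
# (assumption: the metering panel lives in the 'admin' dashboard/group).
PANEL = 'metering'
PANEL_DASHBOARD = 'admin'
PANEL_GROUP = 'admin'
# Ask Horizon to unregister the panel instead of registering it.
REMOVE_PANEL = True
```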
-- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgordon at redhat.com Tue Jan 26 15:21:19 2016 From: sgordon at redhat.com (Steve Gordon) Date: Tue, 26 Jan 2016 10:21:19 -0500 (EST) Subject: [Rdo-list] Mitaka Milestone 2 test day, January 27th, 28th In-Reply-To: <56A283D9.6070900@redhat.com> References: <56A283D9.6070900@redhat.com> Message-ID: <225091752.20064479.1453821679417.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Rich Bowen" > To: rdo-list at redhat.com > > We will be holding a test day for Mitaka Milestone 2 packages that have > passed CI, on January 27th and 28th, which is next week. > > You can join the conversation on #rdo, on the Freenode IRC network, and > here on rdo-list > > Further details may be found at > https://www.rdoproject.org/testday/mitaka/milestone2/ Just looking at packages in the latest repo, a lot of packages (aodh, nova, etc.) are carrying the b3 suffix which would seem to correspond to milestone 3, but milestone 3 for these projects hasn't been cut (they are up to b2): http://docs.openstack.org/releases/releases/mitaka.html Is the intent to reflect that we're picking up the "latest" from master and therefore it's closer to b3 than b2? 
Thanks, Steve From ibravo at ltgfederal.com Tue Jan 26 15:16:39 2016 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Tue, 26 Jan 2016 10:16:39 -0500 Subject: [Rdo-list] [rdo-manager] node locked by localhost.localdomain In-Reply-To: References: Message-ID: <0f6801d1584c$8e6186d0$ab249470$@ltgfederal.com> This is what I have in my undercloud VM /etc/hosts file: 127.0.0.1 undercloud.mydomain undercloud localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 -- Ignacio Bravo LTG Federal ibravo at ltgfederal.com From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Mohammed Arafa Sent: Tuesday, January 26, 2016 7:52 AM To: rdo-list at redhat.com Subject: [Rdo-list] [rdo-manager] node locked by localhost.localdomain hi i am attempting introspection (for the umpteenth time) and i keep getting the error that says node is locked by localhost.localdomain. it is extremely puzzling because i have a line in /etc/hosts just for my undercloud. (hmm.. does my 127.0.0.1 undercloud.hostname need to be on its own line or with the 127.0.0.1 locahost.localdomain line?) i found an rhosp bug since last year with no activity (1232997) that indicate the node may be a duplicate. however i am performing an ironic node-delete uuid before beginning introspection. anybody got any any ideas? -- 805010942448935 GR750055912MA Link to me on LinkedIn -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dtantsur at redhat.com Tue Jan 26 15:31:09 2016 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 26 Jan 2016 16:31:09 +0100 Subject: [Rdo-list] Mitaka Milestone 2 test day, January 27th, 28th In-Reply-To: <225091752.20064479.1453821679417.JavaMail.zimbra@redhat.com> References: <56A283D9.6070900@redhat.com> <225091752.20064479.1453821679417.JavaMail.zimbra@redhat.com> Message-ID: <56A7913D.6040403@redhat.com> On 01/26/2016 04:21 PM, Steve Gordon wrote: > ----- Original Message ----- >> From: "Rich Bowen" >> To: rdo-list at redhat.com >> >> We will be holding a test day for Mitaka Milestone 2 packages that have >> passed CI, on January 27th and 28th, which is next week. >> >> You can join the conversation on #rdo, on the Freenode IRC network, and >> here on rdo-list >> >> Further details may be found at >> https://www.rdoproject.org/testday/mitaka/milestone2/ > > Just looking at packages in the latest repo, a lot of packages (aodh, nova, etc.) are carrying the b3 suffix which would seem to correspond to milestone 3, but milestone 3 for these projects hasn't been cut (they are up to b2): > > http://docs.openstack.org/releases/releases/mitaka.html > > Is the intent to reflect that we're picking up the "latest" from master and therefore it's closer to b3 than b2? Versions are generated by pbr, which bumps the version after the release. So if you release 1.0.0b1, the next commit becomes 1.0.0b2-dev1 aka the 1st commit towards b2. Hope that helps. 
> > Thanks, > > Steve > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From mohammed.arafa at gmail.com Tue Jan 26 16:46:37 2016 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Tue, 26 Jan 2016 18:46:37 +0200 Subject: [Rdo-list] [rdo-manager] node locked by localhost.localdomain In-Reply-To: <0f6801d1584c$8e6186d0$ab249470$@ltgfederal.com> References: <0f6801d1584c$8e6186d0$ab249470$@ltgfederal.com> Message-ID: I did an ironic node delete * and did everything over again. No lock. I did not change my hosts file. Strange behaviour though On Jan 26, 2016 10:22 AM, "Ignacio Bravo" wrote: > This is what I have in my undercloud VM /etc/hosts file: > > > > 127.0.0.1 undercloud.mydomain undercloud localhost localhost.localdomain > localhost4 localhost4.localdomain4 > > ::1 localhost localhost.localdomain localhost6 > localhost6.localdomain6 > > > > > > -- > > *Ignacio Bravo* > > *LTG Federal* > > ibravo at ltgfederal.com > > > > *From:* rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] *On > Behalf Of *Mohammed Arafa > *Sent:* Tuesday, January 26, 2016 7:52 AM > *To:* rdo-list at redhat.com > *Subject:* [Rdo-list] [rdo-manager] node locked by localhost.localdomain > > > > hi > > i am attempting introspection (for the umpteenth time) and i keep getting > the error that says node is locked by localhost.localdomain. > > it is extremely puzzling because i have a line in /etc/hosts just for my > undercloud. > > (hmm.. does my 127.0.0.1 undercloud.hostname need to be on its own line or > with the 127.0.0.1 locahost.localdomain line?) > > i found an rhosp bug since last year with no activity (1232997) that > indicate the node may be a duplicate. however i am performing an ironic > node-delete uuid before beginning introspection. > > > anybody got any any ideas? 
> > -- > > > > > *805010942448935* > > > *GR750055912MA* > > > *Link to me on LinkedIn * > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Tue Jan 26 20:14:46 2016 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 26 Jan 2016 15:14:46 -0500 Subject: [Rdo-list] Reminder: RDO test day tomorrow Message-ID: <56A7D3B6.2060409@redhat.com> A reminder that we'll be holding the RDO Mitaka 2 test day Tomorrow and Thursday - January 27-28. Details, and test instructions, may be found here: https://www.rdoproject.org/testday/mitaka/milestone2/ I will be traveling tomorrow, so I request that folks be particularly aware on #rdo, so that beginners and others new to RDO have the support that they need when things go wrong. Thank you all, in advance, for the time that you're willing to invest to make RDO better for everyone. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From ekuris at redhat.com Wed Jan 27 06:43:19 2016 From: ekuris at redhat.com (Eran Kuris) Date: Wed, 27 Jan 2016 01:43:19 -0500 (EST) Subject: [Rdo-list] Reminder: RDO test day tomorrow In-Reply-To: <56A7D3B6.2060409@redhat.com> References: <56A7D3B6.2060409@redhat.com> Message-ID: <1824360520.17390198.1453876999235.JavaMail.zimbra@redhat.com> Hi , In the table written to use rhel7.1 Why don't we need use rhel 7.2 which is the latest version of rhel ? ----- Original Message ----- > From: "Rich Bowen" > To: rdo-list at redhat.com > Sent: Tuesday, January 26, 2016 10:14:46 PM > Subject: [Rdo-list] Reminder: RDO test day tomorrow > > A reminder that we'll be holding the RDO Mitaka 2 test day Tomorrow and > Thursday - January 27-28. 
Details, and test instructions, may be found > here: https://www.rdoproject.org/testday/mitaka/milestone2/ > > I will be traveling tomorrow, so I request that folks be particularly > aware on #rdo, so that beginners and others new to RDO have the support > that they need when things go wrong. > > Thank you all, in advance, for the time that you're willing to invest to > make RDO better for everyone. > > -- > Rich Bowen - rbowen at redhat.com > OpenStack Community Liaison > http://rdoproject.org/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Best Regards , Eran Kuris From ekuris at redhat.com Wed Jan 27 07:41:25 2016 From: ekuris at redhat.com (Eran Kuris) Date: Wed, 27 Jan 2016 02:41:25 -0500 (EST) Subject: [Rdo-list] RDO installation : no packages available In-Reply-To: <324426887.17400410.1453880478820.JavaMail.zimbra@redhat.com> Message-ID: <743738226.17400428.1453880485465.JavaMail.zimbra@redhat.com> Hi all , someone saw it ? https://www.rdoproject.org/testday/mitaka/milestone2/ # yum -y install yum-plugin-priorities Loaded plugins: search-disabled-repos Bad id for repo: rhel7.2 optional, byte = 7 No package yum-plugin-priorities available. Error: Nothing to do -- Best Regards , Eran Kuris From trown at redhat.com Wed Jan 27 16:04:59 2016 From: trown at redhat.com (John Trowbridge) Date: Wed, 27 Jan 2016 11:04:59 -0500 Subject: [Rdo-list] [meeting] RDO meeting (2016-01-27) Message-ID: <56A8EAAB.8060909@redhat.com> ============================== #rdo: RDO meeting (2016-01-27) ============================== Meeting started by trown at 15:01:12 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2016-01-27/rdo_meeting_(2016-01-27).2016-01-27-15.01.log.html . 
Meeting summary
---------------
* testday progress (trown, 15:04:46)
  * LINK: https://etherpad.openstack.org/p/RDO-Meeting (dmsimard, 15:06:02)
  * LINK: https://etherpad.openstack.org/p/rdo-test-days-mitaka-m2 (jpena, 15:06:18)
  * ACTION: trown make test matrix for rdo-manager at https://www.rdoproject.org/testday/mitaka/testedsetups2/ (trown, 15:08:17)
  * LINK: https://review.gerrithub.io/254505 is meant to address it (trown, 15:15:39)
  * LINK: http://logs.openstack.org/90/272890/2/check/gate-puppet-openstack-integration-scenario001-tempest-dsvm-centos7/ccf6da2/logs/rpm-qa.txt.gz (EmilienM, 15:16:10)
* rdopkg tags support (trown, 15:22:53)
  * LINK: https://trello.com/c/KxKtVkTz/62-restructure-rdoinfo-for-new-needs (jruzicka, 15:23:51)
  * LINK: https://trello.com/c/KxKtVkTz/62-restructure-rdoinfo-for-new-needs (jruzicka, 15:25:12)
  * tags/overrides are supported in current rdoinfo and rdopkg >= 0.34 (jruzicka, 15:25:30)
  * apevec is going to integrate tags into delorean (jruzicka, 15:26:06)
* Create M2Tesday symlinks (trown, 15:29:28)
  * LINK: http://trunk.rdoproject.org/centos7-liberty/61/71/61712212b9ed9a1d76233c874875400fccd8cae8_dd86149b/ (trown, 15:30:18)
  * LINK: http://trunk.rdoproject.org/centos7/55/17/5517b8e9aea3ded1052209384b4194d2caa97541_673a78a2 (trown, 15:30:29)
* Proposal: Only promote delorean if there are no FTBFS (trown, 15:36:55)
  * LINK: https://jenkins04.openstack.org/job/gate-puppet-gnocchi-puppet-beaker-rspec-dsvm-centos7/33/consoleFull (EmilienM, 15:44:41)
  * AGREED: change the name of the 'coherent' symlink to 'consistent' and start using the 'consistent' symlink for the basis of promotion to 'current-passed-ci' (trown, 15:58:02)
  * ACTION: number90 rename symlink from coherent to consistent (number80, 15:59:21)
  * ACTION: trown put up review for rdo-infra to change base repo to 'consistent' instead of 'current' (trown, 15:59:24)
* open discussion (trown, 16:00:29)

Meeting ended at 16:02:00 UTC.
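The AGREED item above — renaming the 'coherent' promotion symlink to 'consistent' — amounts to repointing a new name at the same target directory and retiring the old name. A minimal sketch with placeholder paths (`repo` and `abc123` are made up for illustration; the real trunk.rdoproject.org layout is not shown in these minutes):

```shell
# Placeholder layout only: "repo" and "abc123" are invented names,
# not the actual Delorean repository structure.
cd "$(mktemp -d)"
mkdir -p repo/abc123
ln -s abc123 repo/coherent            # the old promotion symlink
# Point the new name at the same target, then retire the old name.
ln -sfn "$(readlink repo/coherent)" repo/consistent
rm repo/coherent
readlink repo/consistent              # prints: abc123
```

The `-n` flag stops `ln` from dereferencing an existing `consistent` symlink into its target directory, so re-running the command simply repoints the link instead of creating a link inside it.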
Action Items
------------
* trown make test matrix for rdo-manager at https://www.rdoproject.org/testday/mitaka/testedsetups2/
* number90 rename symlink from coherent to consistent
* trown put up review for rdo-infra to change base repo to 'consistent' instead of 'current'

Action Items, by person
-----------------------
* trown
  * trown make test matrix for rdo-manager at https://www.rdoproject.org/testday/mitaka/testedsetups2/
  * trown put up review for rdo-infra to change base repo to 'consistent' instead of 'current'
* **UNASSIGNED**
  * number90 rename symlink from coherent to consistent

People Present (lines said)
---------------------------
* trown (82)
* number80 (46)
* dmsimard (37)
* apevec (31)
* EmilienM (20)
* jruzicka (12)
* dmellado (11)
* zodbot (10)
* jpena (7)
* social (4)
* nmagnezi (3)
* ukalifon1 (3)
* pradk_ (2)
* sshnaidm (2)
* mflobo (1)
* ibravo (1)
* gkadam (1)
* slagle (1)
* jschlueter (1)
* degorenko (1)

Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot

From edannon at redhat.com Wed Jan 27 16:58:57 2016 From: edannon at redhat.com (Eyal Dannon) Date: Wed, 27 Jan 2016 11:58:57 -0500 (EST) Subject: [Rdo-list] MITAKA-RDO DAY INSTALLATION ERROR In-Reply-To: <1389118650.26535920.1453908983936.JavaMail.zimbra@redhat.com> References: <1389118650.26535920.1453908983936.JavaMail.zimbra@redhat.com> Message-ID: <1707644246.26629052.1453913937339.JavaMail.zimbra@redhat.com> Hi, During rdo installation with packstack on multi-node env setup, I got an error message: https://bugzilla.redhat.com/show_bug.cgi?id=1302321 the problem can be solved by modifying the file: /usr/share/openstack-puppet/modules/vswitch/lib/puppet/provider/vs_port/ovs_redhat.rb as described here: https://review.openstack.org/265005 Best Regards , Eyal Dannon From javier.pena at redhat.com Wed Jan 27 18:01:50 2016 From: javier.pena at redhat.com (Javier Pena) Date: Wed, 27 Jan 2016 13:01:50 -0500 (EST) Subject: [Rdo-list] MITAKA-RDO DAY INSTALLATION ERROR
In-Reply-To: <1707644246.26629052.1453913937339.JavaMail.zimbra@redhat.com> References: <1389118650.26535920.1453908983936.JavaMail.zimbra@redhat.com> <1707644246.26629052.1453913937339.JavaMail.zimbra@redhat.com> Message-ID: <85774158.20957435.1453917710209.JavaMail.zimbra@redhat.com> ----- Original Message ----- > > Hi, > > During rdo installation with packstack on multi-node env setup, I got an > error message: https://bugzilla.redhat.com/show_bug.cgi?id=1302321 > the problem can be solved by modifying the file: > /usr/share/openstack-puppet/modules/vswitch/lib/puppet/provider/vs_port/ovs_redhat.rb > as described here: https://review.openstack.org/265005 > Hi, Actually, the fix is to revert the changes created by the review mentioned above. Javier > Best Regards , > Eyal Dannon > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From ukalifon at redhat.com Wed Jan 27 18:25:05 2016 From: ukalifon at redhat.com (Udi Kalifon) Date: Wed, 27 Jan 2016 20:25:05 +0200 Subject: [Rdo-list] First Mitaka test day summary Message-ID: Hello. The good news is that I succeeded to deploy :). I haven't yet tried to test the overcloud for any sanity, but in past test days I was never able to report any success - so maybe it's a sign that things are stabilizing. I deployed with rdo-manager on a virtual setup according to the instructions in https://www.rdoproject.org/rdo-manager/. I wasn't able to deploy with network isolation, because I assume that my templates from 7.x require changes, but I haven't seen any documentation on what's changed. If you can point me in the right direction to get network isolation working for this environment I will test it tomorrow. 
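On the network-isolation question above: in TripleO/rdo-manager, network isolation is enabled by passing extra Heat environment files to the deploy command. The sketch below is an assumption based on the stock tripleo-heat-templates layout, not a recipe confirmed in this thread; `~/templates/network-environment.yaml` stands in for a hypothetical site-specific file carrying your network parameters and NIC configs:

```shell
# Sketch only: the environment file paths are assumptions, and 7.x-era
# NIC-config templates generally need rebasing onto the newer stock
# templates rather than being reused as-is.
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml
```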
Some of the problems I hit today: 1) The link to the quickstart guide from the testday page https://www.rdoproject.org/testday/mitaka/milestone2/ points to a very old github page. The correct link should be the one I already mentioned: https://www.rdoproject.org/rdo-manager/ 2) The prerequisites for installing ansible are not documented. On a fresh CentOS 7 I had to install python-virtualenv, git, and gcc. I then ran "easy_install pip" and "pip install git+https://github.com/ansible/ansible.git@v2.0.0-0.6.rc1#egg=ansible" to be able to run the playbook which installs the virtual environment. Thanks, Udi. From weiler at soe.ucsc.edu Wed Jan 27 20:30:51 2016 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Wed, 27 Jan 2016 12:30:51 -0800 Subject: [Rdo-list] Slow network performance on Kilo? Message-ID: <56A928FB.6000206@soe.ucsc.edu> Hi Y'all, I've seen several folks on the net with this problem, but I'm still flailing a bit as to what is really going on. We are running RHEL 7 with RDO OpenStack Kilo. We are setting this environment up still, not quite done yet. But in our testing, we are experiencing very slow network performance when downloading or uploading to and from VMs. We get like 300Kb/s or so. We are using Neutron, MTU 9000 everywhere. I've tried disabling GSO, LRO, TSO, GRO on the neutron interfaces, as well as the VM server interfaces, still no improvement. I've tried lowering the VM MTU to 1500, still no improvement. It's really strange. We do get connectivity, I can ssh to the instances, but the network performance is just really, really slow. It appears the instances can talk to each other very quickly however. They just get slow network to the internet (i.e. when packets go through the network node). We are using VLAN tenant network isolation. Can anyone point me in the right direction? I've been beating my head against a wall and googling without avail for a week...
Many thanks, erich From trown at redhat.com Wed Jan 27 21:17:57 2016 From: trown at redhat.com (John Trowbridge) Date: Wed, 27 Jan 2016 16:17:57 -0500 Subject: [Rdo-list] First Mitaka test day summary In-Reply-To: References: Message-ID: <56A93405.5080605@redhat.com> On 01/27/2016 01:25 PM, Udi Kalifon wrote: > Hello. > > The good news is that I succeeded to deploy :). I haven't yet tried to > test the overcloud for any sanity, but in past test days I was never > able to report any success - so maybe it's a sign that things are > stabilizing. That's awesome! > > I deployed with rdo-manager on a virtual setup according to the > instructions in https://www.rdoproject.org/rdo-manager/. I wasn't able > to deploy with network isolation, because I assume that my templates > from 7.x require changes, but I haven't seen any documentation on > what's changed. If you can point me in the right direction to get > network isolation working for this environment I will test it > tomorrow. > > Some of the problems I hit today: > > 1) The link to the quickstart guide from the testday page > https://www.rdoproject.org/testday/mitaka/milestone2/ points to a very > old github page. The correct link should be the one I already > mentioned: https://www.rdoproject.org/rdo-manager/ I fixed that link, thanks! > > 2) The prerequisites to installing ansible are not documented. On a > fresh CentOS 7 I had to install python-virtualenv, git, and gcc. I > then ran "easy_install pip" and "pip install > git+https://github.com/ansible/ansible.git at v2.0.0-0.6.rc1#egg=ansible" > to be able to run the playbook which installs the virtual environment. The quickstart.sh will do all of that for you, but if you wanted to submit a patch for more detailed instructions for manually setting up the virtualenv, that would great. For tripleo-quickstart, gerrit is setup and follows the same gerrit workflow as everything else. > > Thanks, > Udi. 
> > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From bderzhavets at hotmail.com Wed Jan 27 21:22:04 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 27 Jan 2016 21:22:04 +0000 Subject: [Rdo-list] Slow network performance on Kilo? In-Reply-To: <56A928FB.6000206@soe.ucsc.edu> References: <56A928FB.6000206@soe.ucsc.edu> Message-ID: ________________________________________ From: rdo-list-bounces at redhat.com on behalf of Erich Weiler Sent: Wednesday, January 27, 2016 3:30 PM To: rdo-list at redhat.com Subject: [Rdo-list] Slow network performance on Kilo? Hi Y'all, I've seen several folks on the net with this problem, but I'm still flailing a bit as to what is really going on. We are running RHEL 7 with RDO OpenStack Kilo. We are setting this environment up still, not quite done yet. But in our testing, we are experiencing very slow network performance when downloading or uploading to and from VMs. We get like 300Kb/s or so. We are using Neutron, MTU 9000 everywhere. I've tried disabling GSO, LRO, TSO, GRO on the neutron interfaces, as well as the VM server interfaces, still no improvement. I've tried lowing the VM MTU to 1500, still no improvement. It's really strange. We do get connectivity, I can ssh to the instances, but the network performance is just really, really slow. It appears the instances can talk to each other very quickly however. They just get slow network to the internet (i.e. when packets go through the network node). We are using VLAN tenant network isolation. 1. Switch to VXLAN tunneling 2. Activate DVR . It's already stable on RDO Kilo. It will result routing of North-South && East-West traffic avoiding Network Node. Boris. Can anyone point me in the right direction? I've been beating my head against a wall and googling without avail for a week... 
Many thanks, erich _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com From ayoung at redhat.com Thu Jan 28 03:48:57 2016 From: ayoung at redhat.com (Adam Young) Date: Wed, 27 Jan 2016 22:48:57 -0500 Subject: [Rdo-list] Reminder: RDO test day tomorrow In-Reply-To: <56A7D3B6.2060409@redhat.com> References: <56A7D3B6.2060409@redhat.com> Message-ID: <56A98FA9.9010400@redhat.com> On 01/26/2016 03:14 PM, Rich Bowen wrote: > A reminder that we'll be holding the RDO Mitaka 2 test day Tomorrow and > Thursday - January 27-28. Details, and test instructions, may be found > here: https://www.rdoproject.org/testday/mitaka/milestone2/ > > I will be traveling tomorrow, so I request that folks be particularly > aware on #rdo, so that beginners and others new to RDO have the support > that they need when things go wrong. > > Thank you all, in advance, for the time that you're willing to invest to > make RDO better for everyone. > I am not certain if it is common knowledge, but if you are trying to install via Tripleo: I've had the best success with: http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/tripleo.sh.html do each stage manually, don't run --all, as there is a race condition. Is there a way to run this script specifically for RDO manager? From brandon.james at sunayu.com Thu Jan 28 04:03:57 2016 From: brandon.james at sunayu.com (Brandon James) Date: Wed, 27 Jan 2016 23:03:57 -0500 Subject: [Rdo-list] Reminder: RDO test day tomorrow In-Reply-To: <56A98FA9.9010400@redhat.com> References: <56A7D3B6.2060409@redhat.com> <56A98FA9.9010400@redhat.com> Message-ID: Adam, I ran it today and I did not run into any race conditions. Have you tried using the quickstart.sh? 
Brandon On Wed, Jan 27, 2016 at 10:48 PM, Adam Young wrote: > On 01/26/2016 03:14 PM, Rich Bowen wrote: > >> A reminder that we'll be holding the RDO Mitaka 2 test day Tomorrow and >> Thursday - January 27-28. Details, and test instructions, may be found >> here: https://www.rdoproject.org/testday/mitaka/milestone2/ >> >> I will be traveling tomorrow, so I request that folks be particularly >> aware on #rdo, so that beginners and others new to RDO have the support >> that they need when things go wrong. >> >> Thank you all, in advance, for the time that you're willing to invest to >> make RDO better for everyone. >> >> I am not certain if it is common knowledge, but if you are trying to > install via Tripleo: > > > I've had the best success with: > > > http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/tripleo.sh.html > > do each stage manually, don't run --all, as there is a race condition. > > Is there a way to run this script specifically for RDO manager? > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- From chkumar246 at gmail.com Thu Jan 28 05:46:13 2016 From: chkumar246 at gmail.com (Chandan kumar) Date: Thu, 28 Jan 2016 11:16:13 +0530 Subject: [Rdo-list] Bug Statistics for 2016-01-28 Message-ID: # RDO Bugs on 2016-01-28 This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at .
To report a new bug against RDO, go to:

## Summary

- Open (NEW, ASSIGNED, ON_DEV): 389
- Fixed (MODIFIED, POST, ON_QA): 217

## Number of open bugs by component

dib-utils                 [  2]
diskimage-builder         [  3]  +
distribution              [ 14]  ++++++
dnsmasq                   [  1]
Documentation             [  4]  +
instack                   [  4]  +
instack-undercloud        [ 28]  ++++++++++++
iproute                   [  1]
openstack-ceilometer      [  2]
openstack-cinder          [ 13]  +++++
openstack-foreman-inst... [  2]
openstack-glance          [  2]
openstack-heat            [  5]  ++
openstack-horizon         [  2]
openstack-ironic          [  4]  +
openstack-ironic-disco... [  1]
openstack-keystone        [ 10]  ++++
openstack-manila          [ 10]  ++++
openstack-neutron         [ 13]  +++++
openstack-nova            [ 21]  +++++++++
openstack-packstack       [ 89]  ++++++++++++++++++++++++++++++++++++++++
openstack-puppet-modules  [ 17]  +++++++
openstack-selinux         [ 11]  ++++
openstack-swift           [  3]  +
openstack-tripleo         [ 27]  ++++++++++++
openstack-tripleo-heat... [  6]  ++
openstack-tripleo-imag... [  2]
openstack-trove           [  1]
openstack-tuskar          [  2]
openstack-utils           [  1]
Package Review            [  9]  ++++
python-glanceclient       [  2]
python-keystonemiddleware [  1]
python-neutronclient      [  3]  +
python-novaclient         [  1]
python-openstackclient    [  5]  ++
python-oslo-config        [  2]
rdo-manager               [ 55]  ++++++++++++++++++++++++
rdo-manager-cli           [  6]  ++
rdopkg                    [  1]
RFEs                      [  2]
tempest                   [  1]

## Open bugs

This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed.
(389 bugs) ### dib-utils (2 bugs) [1263779 ] http://bugzilla.redhat.com/1263779 (NEW) Component: dib-utils Last change: 2015-12-07 Summary: Packstack Ironic admin_url misconfigured in nova.conf [1283812 ] http://bugzilla.redhat.com/1283812 (NEW) Component: dib-utils Last change: 2015-12-10 Summary: local_interface=bond0.120 in undercloud.conf create broken network configuration ### diskimage-builder (3 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version [1302176 ] http://bugzilla.redhat.com/1302176 (NEW) Component: diskimage-builder Last change: 2016-01-27 Summary: add support for deltarpm ### distribution (14 bugs) [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1271169 ] http://bugzilla.redhat.com/1271169 (NEW) Component: distribution Last change: 2015-10-13 Summary: [doc] virtual environment setup [1290163 ] http://bugzilla.redhat.com/1290163 (NEW) Component: distribution Last change: 2016-01-25 Summary: Tracker: Review requests for new RDO Mitaka packages [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2016-01-04 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1301751 ] 
http://bugzilla.redhat.com/1301751 (NEW) Component: distribution Last change: 2016-01-25 Summary: Move all logging to stdout/err to allow systemd throttling logging of errors [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1275608 ] http://bugzilla.redhat.com/1275608 (NEW) Component: distribution Last change: 2015-10-27 Summary: EOL'ed rpm file URL not up to date [1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution Last change: 2015-09-16 Summary: Memcached not built with SASL support [1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-12-10 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### Documentation (4 bugs) [1272108 ] http://bugzilla.redhat.com/1272108 (NEW) Component: Documentation Last change: 2015-10-15 Summary: [DOC] External network should be documents in RDO manager installation [1271793 ] http://bugzilla.redhat.com/1271793 (NEW) Component: Documentation Last change: 2015-10-14 Summary: rdo-manager doc has incomplete /etc/hosts 
configuration [1271888 ] http://bugzilla.redhat.com/1271888 (NEW) Component: Documentation Last change: 2015-10-15 Summary: step required to build images for overcloud [1272111 ] http://bugzilla.redhat.com/1272111 (NEW) Component: Documentation Last change: 2015-10-15 Summary: RFE : document how to access horizon in RDO manager VIRT setup ### instack (4 bugs) [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (28 bugs) [1271200 ] http://bugzilla.redhat.com/1271200 (ASSIGNED) Component: instack-undercloud Last change: 2015-10-20 Summary: Overcloud images contain Kilo repos [1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2015-09-23 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the 
epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six." [1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1270585 ] http://bugzilla.redhat.com/1270585 (NEW) Component: instack-undercloud Last change: 2015-10-19 Summary: instack isntallation fails with parse error: Invalid string liberty on CentOS [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. 
(4048492 < 4G)" [1266101 ] http://bugzilla.redhat.com/1266101 (NEW) Component: instack-undercloud Last change: 2015-09-29 Summary: instack-virt-setup fails on CentOS7 [1299958 ] http://bugzilla.redhat.com/1299958 (NEW) Component: instack-undercloud Last change: 2016-01-19 Summary: instack-virt-setup does not set explicit path, can't find binaries [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. See log for details.', 'os-refresh-config')" [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. 
[1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output [1266451 ] http://bugzilla.redhat.com/1266451 (NEW) Component: instack-undercloud Last change: 2015-09-30 Summary: instack-undercloud fails to setup seed vm, parse error while creating ssh key [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build- 
images script ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2016-01-18 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (2 bugs) [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2016-01-04 Summary: python-redis is not installed with packstack allinone [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Wrong alarms order on 'severity' field ### openstack-cinder (13 bugs) [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1268182 ] http://bugzilla.redhat.com/1268182 (NEW) Component: openstack-cinder Last change: 2015-10-02 Summary: cinder spontaneously sets instance root device to 'available' [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1229551 ] http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching 
[1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2016-01-04 Summary: should use new names in cinder-dist.conf [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage [1301158 ] http://bugzilla.redhat.com/1301158 (NEW) Component: openstack-cinder Last change: 2016-01-22 Summary: openstack-cinder now requires google-api-python- client>=1.4.2 [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2015-11-25 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] ### openstack-foreman-installer (2 bugs) [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance- common: python-glance ### openstack-heat (5 bugs) [1291047 ] http://bugzilla.redhat.com/1291047 (NEW) Component: openstack-heat Last change: 2016-01-07 Summary: (RDO Mitaka) Overcloud deployment failed: Exceeded max scheduling attempts [1293961 ] 
http://bugzilla.redhat.com/1293961 (ASSIGNED) Component: openstack-heat Last change: 2016-01-07 Summary: [SFCI] Heat template failed to start because Property error: ... net_cidr (constraint not found) [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message ### openstack-horizon (2 bugs) [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable [1275656 ] http://bugzilla.redhat.com/1275656 (NEW) Component: openstack-horizon Last change: 2015-10-28 Summary: FontAwesome lib bad path ### openstack-ironic (4 bugs) [1300509 ] http://bugzilla.redhat.com/1300509 (NEW) Component: openstack-ironic Last change: 2016-01-21 Summary: ironic should have its own log file [1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: openstack-ironic Last change: 2016-01-04 Summary: IPMI driver for Ironic should support RAID for operating system/root parition [1301153 ] http://bugzilla.redhat.com/1301153 (NEW) Component: openstack-ironic Last change: 2016-01-22 Summary: ironic logs huge [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. 
(HTTP 409) ### openstack-ironic-discoverd (1 bug) [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (10 bugs) [1289267 ] http://bugzilla.redhat.com/1289267 (NEW) Component: openstack-keystone Last change: 2015-12-09 Summary: Mitaka: keystone.py is deprecated for WSGI implementation [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack-keystone RPM [1280530 ] http://bugzilla.redhat.com/1280530 (NEW) Component: openstack-keystone Last change: 2016-01-21 Summary: Fernet tokens cannot read key files with SELinux enabled [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1284871 ] http://bugzilla.redhat.com/1284871 (NEW) Component: openstack-keystone Last change: 2015-11-24 Summary: /usr/share/keystone/wsgi-keystone.conf is missing group=keystone [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non-existent class [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-11-24 Summary: wrong log directories in /usr/share/keystone/wsgi-keystone.conf [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-10-26 Summary: keystone-all process reaches 100% CPU consumption [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) 
Component: openstack-keystone Last change: 2015-12-07 Summary: keystone: add token flush cronjob script to keystone package ### openstack-manila (10 bugs) [1278918 ] http://bugzilla.redhat.com/1278918 (NEW) Component: openstack-manila Last change: 2015-12-06 Summary: manila-api fails to start without updates from upstream stable/liberty [1272957 ] http://bugzilla.redhat.com/1272957 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver: same volumes are re-used with vol mapped layout after restarting manila services [1277787 ] http://bugzilla.redhat.com/1277787 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: Glusterfs_driver: Export location for Glusterfs NFS-Ganesha is incorrect [1272960 ] http://bugzilla.redhat.com/1272960 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Glusterfs NFS-Ganesha share's export location should be uniform for both nfsv3 & nfsv4 protocols [1277792 ] http://bugzilla.redhat.com/1277792 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: glusterfs_driver: Access-deny for glusterfs driver should be dynamic [1278919 ] http://bugzilla.redhat.com/1278919 (NEW) Component: openstack-manila Last change: 2015-12-06 Summary: AvailabilityZoneFilter is not working in manila-scheduler [1272962 ] http://bugzilla.redhat.com/1272962 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Attempt to create share fails ungracefully when backend gluster volumes aren't exported [1272970 ] http://bugzilla.redhat.com/1272970 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_native: cannot connect via SSH using password authentication to multiple gluster clusters with different passwords [1272968 ] http://bugzilla.redhat.com/1272968 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs vol based layout: Deleting a share created from snapshot should also delete its backend gluster 
volume [1272958 ] http://bugzilla.redhat.com/1272958 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver - vol based layout: share size may be misleading ### openstack-neutron (13 bugs) [1282403 ] http://bugzilla.redhat.com/1282403 (NEW) Component: openstack-neutron Last change: 2016-01-11 Summary: Errors when running tempest.api.network.test_ports with IPAM reference driver enabled [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1269610 ] http://bugzilla.redhat.com/1269610 (ASSIGNED) Component: openstack-neutron Last change: 2015-11-19 Summary: Overcloud deployment fails - openvswitch agent is not running and nova instances end up in error state [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". 
[1266381 ] http://bugzilla.redhat.com/1266381 (NEW) Component: openstack-neutron Last change: 2015-12-22 Summary: OpenStack Liberty QoS feature is not working on EL7 as is need MySQL-python-1.2.5 [1281308 ] http://bugzilla.redhat.com/1281308 (NEW) Component: openstack-neutron Last change: 2015-12-30 Summary: QoS policy is not enforced when using a previously used port [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks [1280258 ] http://bugzilla.redhat.com/1280258 (NEW) Component: openstack-neutron Last change: 2015-11-11 Summary: tenants seem like they are able to detach admin enforced QoS policies from ports or networks [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https [1302416 ] http://bugzilla.redhat.com/1302416 (NEW) Component: openstack-neutron Last change: 2016-01-27 Summary: nova unable to delete floating-ip with neutron server error AttributeError: 'Query' object has no attribute 'one_or_none' [1065826 ] http://bugzilla.redhat.com/1065826 (ASSIGNED) Component: openstack-neutron Last change: 2015-12-15 Summary: [RFE] [neutron] neutron services needs more RPM granularity ### openstack-nova (21 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] 
http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Kilo assigning ipv6 address, even though it's disabled. [1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: horizon console uses http when horizon is set to use ssl [1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: novnc init script doesn't write to log [1271033 ] http://bugzilla.redhat.com/1271033 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: nova.conf.sample is out of date [1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1278808 ] http://bugzilla.redhat.com/1278808 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Guest fails to use more than 1 vCPU with smpboot: do_boot_cpu failed(-1) to wakeup [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Nova - db connection string present on compute nodes [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: nova object store allow get object after date expires [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: nova: fail to edit project quota with DataError from nova [1294747 ] http://bugzilla.redhat.com/1294747 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Migration fails when the SRIOV 
PF is not online [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2016-01-27 Summary: Ensure translations are installed correctly and picked up at runtime [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Nova AVC messages [1300611 ] http://bugzilla.redhat.com/1300611 (NEW) Component: openstack-nova Last change: 2016-01-21 Summary: filter instances by ip not work [1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2016-01-19 Summary: logrotate should copytruncate to avoid openstack logging to deleted files [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: Installation of openstack-nova-compute fails on PowerKVM [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2016-01-04 Summary: v4-fixed-ip= not working with juno nova networking ### openstack-packstack (89 bugs) [1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any tenant [1284182 ] http://bugzilla.redhat.com/1284182 (NEW) Component: openstack-packstack Last change: 2015-11-21 Summary: Unable start Keystone, core dump [1296844 ] http://bugzilla.redhat.com/1296844 (NEW) Component: openstack-packstack Last change: 2016-01-08 Summary: RDO Kilo packstack AIO install fails on CentOS 7.2. Error: Unable to connect to mongodb server! 
(192.169.142.54:27017) [1297692 ] http://bugzilla.redhat.com/1297692 (ON_DEV) Component: openstack-packstack Last change: 2016-01-18 Summary: Raise MariaDB max connections limit [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way [1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required [1298364 ] http://bugzilla.redhat.com/1298364 (NEW) Component: openstack-packstack Last change: 2016-01-13 Summary: rdo liberty install centos 7 nova-network error:CONFIG_NEUTRON_METADATA_PW_UNQUOTED [1292271 ] http://bugzilla.redhat.com/1292271 (NEW) Component: openstack-packstack Last change: 2015-12-18 Summary: Receive Msg 'Error: Could not find user glance' [1275803 ] http://bugzilla.redhat.com/1275803 (NEW) Component: openstack-packstack Last change: 2015-12-03 Summary: packstack --allinone fails on Fedora 22-3 during _keystone.pp [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack [1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails [1255369 ] http://bugzilla.redhat.com/1255369 (NEW) Component: openstack-packstack Last change: 2015-12-03 Summary: Improve session settings for horizon [1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 Summary: swift: Admin user does not have permissions to see containers created by glance service [1254389 ] http://bugzilla.redhat.com/1254389 (ASSIGNED) Component: openstack-packstack Last change: 2016-01-23 
Summary: Can no longer run packstack to maintain cluster [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1232455 ] http://bugzilla.redhat.com/1232455 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Errors install kilo on fedora21 [1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config [1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands' [1063393 ] http://bugzilla.redhat.com/1063393 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-02 Summary: RFE: Provide option to set bind_host/bind_port for API services [1291492 ] http://bugzilla.redhat.com/1291492 (NEW) Component: openstack-packstack Last change: 2016-01-18 Summary: Unfriendly behavior of IP filtering for VXLAN with EXCLUDE_SERVERS [1290415 ] http://bugzilla.redhat.com/1290415 (NEW) Component: openstack-packstack Last change: 2016-01-09 Summary: Error: Unable to retrieve volume limit information when accessing System Defaults in Horizon [1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail [1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 
Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can [1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start [1269535 ] http://bugzilla.redhat.com/1269535 (NEW) Component: openstack-packstack Last change: 2015-10-07 Summary: packstack script does not test to see if the rc files *were* created. [1282746 ] http://bugzilla.redhat.com/1282746 (NEW) Component: openstack-packstack Last change: 2016-01-08 Summary: Swift's proxy-server is not configured to use ceilometer [1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance [1242647 ] http://bugzilla.redhat.com/1242647 (NEW) Component: openstack-packstack Last change: 2015-12-07 Summary: Nova keypair doesn't work with Nova Networking [1239027 ] http://bugzilla.redhat.com/1239027 (NEW) Component: openstack-packstack Last change: 2015-12-07 Summary: please move httpd log files to corresponding dirs [1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1116019 ] http://bugzilla.redhat.com/1116019 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-02 Summary: AMQP1.0 server configurations needed [1266196 ] http://bugzilla.redhat.com/1266196 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Packstack Fails on prescript.pp with "undefined method 'unsafe_load_file' for Psych:Module" [1184806 ] http://bugzilla.redhat.com/1184806 (NEW) Component: openstack-packstack Last change: 2015-12-02 Summary: [RFE] Packstack should support deploying Nova and Glance with RBD images and Ceph as a backend [1270770 ] http://bugzilla.redhat.com/1270770 (NEW) Component: openstack-packstack Last change: 2015-10-12 Summary: Packstack generated CONFIG_MANILA_SERVICE_IMAGE_LOCATION 
points to a dropbox link [1279642 ] http://bugzilla.redhat.com/1279642 (NEW) Component: openstack-packstack Last change: 2015-11-09 Summary: Packstack run fails when running with DEMO [1200129 ] http://bugzilla.redhat.com/1200129 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-03 Summary: [RFE] add support for ceilometer workload partitioning via tooz/redis [1194678 ] http://bugzilla.redhat.com/1194678 (NEW) Component: openstack-packstack Last change: 2015-12-03 Summary: On aarch64, nova.conf should default to vnc_enabled=False [1293693 ] http://bugzilla.redhat.com/1293693 (NEW) Component: openstack-packstack Last change: 2016-01-18 Summary: Keystone setup fails on missing required parameter [1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest [1286995 ] http://bugzilla.redhat.com/1286995 (NEW) Component: openstack-packstack Last change: 2015-12-07 Summary: PackStack should configure LVM filtering with LVM/iSCSI [1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred at during setup Ironic via packstack. 
Invalid parameter rabbit_user [1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd] [1279641 ] http://bugzilla.redhat.com/1279641 (NEW) Component: openstack-packstack Last change: 2015-11-09 Summary: Packstack run does not install keystoneauth1 [1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-11-21 Summary: Packstack --allinone fails while starting HTTPD service [1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp [1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tmp/packstack/b77f37620d9f4794b6f38730442962b6/manifests/xxx.xxx.xxx.xxx_swift.pp:90 [1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found [1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image [1276277 ] http://bugzilla.redhat.com/1276277 (NEW) Component: openstack-packstack Last change: 2015-10-31 Summary: packstack --allinone fails on CentOS 7 x86_64 1503-01 [1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting [903645 ] http://bugzilla.redhat.com/903645 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-02 Summary: RFE: Include the ability in PackStack to support SSL for all REST services and message bus 
communication [1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend. [1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced [1265816 ] http://bugzilla.redhat.com/1265816 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Manila Puppet Module Expects Glance Endpoint to Be Available for Upload of Service Image [1289761 ] http://bugzilla.redhat.com/1289761 (NEW) Component: openstack-packstack Last change: 2015-12-10 Summary: PackStack installs Nova crontab that nova user can't run [1286828 ] http://bugzilla.redhat.com/1286828 (NEW) Component: openstack-packstack Last change: 2015-12-04 Summary: Packstack should have the option to install QoS (neutron) [1172467 ] http://bugzilla.redhat.com/1172467 (NEW) Component: openstack-packstack Last change: 2016-01-18 Summary: New user cannot retrieve container listing [1283261 ] http://bugzilla.redhat.com/1283261 (NEW) Component: openstack-packstack Last change: 2016-01-26 Summary: ceilometer-nova is not configured [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service [1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service" [1264843 ] http://bugzilla.redhat.com/1264843 (NEW) Component: openstack-packstack Last change: 2016-01-09 Summary: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list iptables-ipv6' returned 1: Error: No matching Packages to list [1203131 ] 
http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack, when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eno50:eno50, encounters an error "ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp". [1285494 ] http://bugzilla.redhat.com/1285494 (NEW) Component: openstack-packstack Last change: 2015-11-25 Summary: openstack-packstack-7.0.0-0.5.dev1661.gaf13b7e.el7.noarch cripples(?) httpd.conf [1227298 ] http://bugzilla.redhat.com/1227298 (NEW) Component: openstack-packstack Last change: 2015-12-03 Summary: Packstack should support MTU settings [1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on [1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters [1296899 ] http://bugzilla.redhat.com/1296899 (NEW) Component: openstack-packstack Last change: 2016-01-18 Summary: Swift's proxy-server is not configured to use ceilometer [1005073 ] http://bugzilla.redhat.com/1005073 (NEW) Component: openstack-packstack Last change: 2015-12-02 Summary: [RFE] Please add glance and nova lib folder config [1297833 ] http://bugzilla.redhat.com/1297833 (NEW) Component: openstack-packstack Last change: 2016-01-18 Summary: VPNaaS should use libreswan driver instead of openswan by default [1168113 ] http://bugzilla.redhat.com/1168113 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-03 Summary: The warning message " NetworkManager is active " appears even when the NetworkManager is inactive [1172310 ] http://bugzilla.redhat.com/1172310 (ASSIGNED) Component: openstack-packstack Last change: 2016-01-20 Summary: support Keystone LDAP [1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: 
Invalid resource type database_user at /var/tmp/packstack//manifests/172.16.32.71_mariadb.pp:28 on node [1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in " failed " status when CONFIG_KEYSTONE_SERVICE_NAME=httpd [1202922 ] http://bugzilla.redhat.com/1202922 (NEW) Component: openstack-packstack Last change: 2015-12-03 Summary: packstack key injection fails with legacy networking (Nova networking) [1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server] [1282928 ] http://bugzilla.redhat.com/1282928 (ASSIGNED) Component: openstack-packstack Last change: 2016-01-13 Summary: Trove-api fails to start when deployed using packstack on RHEL 7.2 RC1.1 [1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail [1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel [1271246 ] http://bugzilla.redhat.com/1271246 (NEW) Component: openstack-packstack Last change: 2015-10-13 Summary: packstack failed to start nova.api [1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest [1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-11-06 Summary: Error: service-update is not currently supported by the keystone sql driver [1188491 ] http://bugzilla.redhat.com/1188491 (ASSIGNED) Component: 
openstack-packstack Last change: 2015-12-03 Summary: Packstack wording is unclear for demo and testing provisioning. [1201612 ] http://bugzilla.redhat.com/1201612 (ASSIGNED) Component: openstack-packstack Last change: 2015-12-03 Summary: Interactive - Packstack asks for Tempest details even when Tempest install is declined [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network-scripts/ifcfg-br-ex ### openstack-puppet-modules (17 bugs) [1288533 ] http://bugzilla.redhat.com/1288533 (NEW) Component: openstack-puppet-modules Last change: 2015-12-04 Summary: packstack fails on installing mongodb [1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering [1298245 ] http://bugzilla.redhat.com/1298245 (NEW) Component: openstack-puppet-modules Last change: 2016-01-13 Summary: Add possibility to change DEFAULT/api_paste_config in trove.conf [1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm [1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API. 
[1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled [1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication [1285900 ] http://bugzilla.redhat.com/1285900 (NEW) Component: openstack-puppet-modules Last change: 2015-11-26 Summary: Typo in log file name for trove-guestagent [1297535 ] http://bugzilla.redhat.com/1297535 (ASSIGNED) Component: openstack-puppet-modules Last change: 2016-01-13 Summary: Undercloud installation fails ::aodh::keystone::auth not found for instack [1285897 ] http://bugzilla.redhat.com/1285897 (NEW) Component: openstack-puppet-modules Last change: 2015-11-26 Summary: trove-guestagent.conf should define the configuration for backups [1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing [1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start [1289309 ] http://bugzilla.redhat.com/1289309 (NEW) Component: openstack-puppet-modules Last change: 2015-12-07 Summary: Neutron 
module needs updating in OPM ### openstack-selinux (11 bugs) [1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2016-01-04 Summary: keystone fails to start: raise exception.ConfigFileNotFound(config_file=paste_config_value) [1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux [1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages [1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000 [1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal; [1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC [1284879 ] http://bugzilla.redhat.com/1284879 (NEW) Component: openstack-selinux Last change: 2015-11-24 Summary: Keystone via mod_wsgi is missing permission to read /etc/keystone/fernet-keys [1268124 ] http://bugzilla.redhat.com/1268124 (NEW) Component: openstack-selinux Last change: 2016-01-04 Summary: Nova rootwrap-daemon requires a selinux exception [1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux [1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all process raised avc denied ### 
openstack-swift (3 bugs) [1274308 ] http://bugzilla.redhat.com/1274308 (NEW) Component: openstack-swift Last change: 2015-12-22 Summary: Consistently occurring swift related failures in RDO with a HA deployment [1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations [1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files ### openstack-tripleo (27 bugs) [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic in to TripleO [1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever [1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix [1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted [1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-10-09 Summary: Node registration fails silently if instackenv.json is badly formatted [1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful [1223424 ] 
http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead
[1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse
[1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect
[1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes
[1277980 ] http://bugzilla.redhat.com/1277980 (NEW) Component: openstack-tripleo Last change: 2015-12-11 Summary: missing python-proliantutils
[1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created
[1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API
[1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar
[1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials
[1284664 ] http://bugzilla.redhat.com/1284664 (NEW) Component: openstack-tripleo Last change: 2015-11-23 Summary: NtpServer is passed as string by "openstack overcloud deploy"
[1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI
[1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack-build-images
[1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter
[1277990 ] http://bugzilla.redhat.com/1277990 (NEW) Component: openstack-tripleo Last change: 2015-11-04 Summary: openstack-ironic-inspector-dnsmasq.service: failed to start during undercloud installation
[1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6
[1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints
[1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack"
[1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf
[1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos
[1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever

### openstack-tripleo-heat-templates (6 bugs)

[1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template
[1266027 ]
http://bugzilla.redhat.com/1266027 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-08 Summary: TripleO should use pymysql database driver since Liberty
[1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1
[1301290 ] http://bugzilla.redhat.com/1301290 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2016-01-25 Summary: Mitaka - overcloud deploy gives: [ERROR] /usr/libexec/mysqld: option '--wsrep_notify_cmd' requires an argument
[1271411 ] http://bugzilla.redhat.com/1271411 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-13 Summary: Unable to deploy internal api endpoint for keystone on a different network to admin api
[1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template

### openstack-tripleo-image-elements (2 bugs)

[1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist
[1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux

### openstack-trove (1 bug)

[1290156 ] http://bugzilla.redhat.com/1290156 (NEW) Component: openstack-trove Last change: 2015-12-09 Summary: Move guestagent settings to default section

### openstack-tuskar (2 bugs)

[1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails
[1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state

### openstack-utils (1 bug)

[1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2016-01-04 Summary: Can't enable OpenStack service after openstack-service disable

### Package Review (9 bugs)

[1283295 ] http://bugzilla.redhat.com/1283295 (NEW) Component: Package Review Last change: 2015-11-18 Summary: Review Request: CloudKitty - Rating as a Service
[1272524 ] http://bugzilla.redhat.com/1272524 (ASSIGNED) Component: Package Review Last change: 2015-12-03 Summary: Review Request: openstack-mistral - workflow Service for OpenStack cloud
[1290090 ] http://bugzilla.redhat.com/1290090 (ASSIGNED) Component: Package Review Last change: 2015-12-10 Summary: Review Request: python-networking-midonet
[1299959 ] http://bugzilla.redhat.com/1299959 (NEW) Component: Package Review Last change: 2016-01-22 Summary: Package Review: python-ironic-cisco
[1290308 ] http://bugzilla.redhat.com/1290308 (NEW) Component: Package Review Last change: 2015-12-10 Summary: Review Request: python-midonetclient
[1272513 ] http://bugzilla.redhat.com/1272513 (ASSIGNED) Component: Package Review Last change: 2016-01-13 Summary: Review Request: Murano - is an application catalog for OpenStack
[1293948 ] http://bugzilla.redhat.com/1293948 (NEW) Component: Package Review Last change: 2015-12-23 Summary: Review Request: python-kuryr
[1292794 ] http://bugzilla.redhat.com/1292794 (ASSIGNED) Component: Package Review Last change: 2016-01-27 Summary: Review Request: openstack-magnum - Container Management project for OpenStack
[1279513 ] http://bugzilla.redhat.com/1279513 (ASSIGNED) Component: Package Review Last change: 2015-11-13 Summary: New Package: python-dracclient

### python-glanceclient (2 bugs)

[1244291 ] http://bugzilla.redhat.com/1244291 (ASSIGNED) Component: python-glanceclient Last change: 2015-10-21 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py
[1164349 ]
http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0

### python-keystonemiddleware (1 bug)

[1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-10-26 Summary: Rebase python-keystonemiddleware to version 1.3

### python-neutronclient (3 bugs)

[1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility
[1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long
[1281352 ] http://bugzilla.redhat.com/1281352 (NEW) Component: python-neutronclient Last change: 2015-11-12 Summary: Internal server error when running qos-bandwidth-limit-rule-update as a tenant

### python-novaclient (1 bug)

[1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six

### python-openstackclient (5 bugs)

[1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: Usage is not described accurately for 99% of openstack baremetal
[1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: `openstack ip floating delete` fails if we specify IP address as input
[1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp
[1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: Add --user to project list command to filter projects by user
[1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2016-01-04 Summary: appdirs requirement

### python-oslo-config (2 bugs)

[1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2016-01-04 Summary: oslo_config != oslo.config
[1282093 ] http://bugzilla.redhat.com/1282093 (NEW) Component: python-oslo-config Last change: 2016-01-04 Summary: please rebase oslo.log to 1.12.0

### rdo-manager (55 bugs)

[1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment
[1269657 ] http://bugzilla.redhat.com/1269657 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support configuration of default subnet pools
[1264526 ] http://bugzilla.redhat.com/1264526 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Deployment of Undercloud
[1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built
[1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS.
[1274060 ] http://bugzilla.redhat.com/1274060 (NEW) Component: rdo-manager Last change: 2015-10-23 Summary: [SELinux][RHEL7] openstack-ironic-inspector-dnsmasq.service fails to start with SELinux enabled
[1294599 ] http://bugzilla.redhat.com/1294599 (NEW) Component: rdo-manager Last change: 2015-12-29 Summary: Virtual environment overcloud deploy fails with default memory allocation
[1269655 ] http://bugzilla.redhat.com/1269655 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support deploying VPNaaS
[1271336 ] http://bugzilla.redhat.com/1271336 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Enable configuration of OVS ARP Responder
[1300444 ] http://bugzilla.redhat.com/1300444 (NEW) Component: rdo-manager Last change: 2016-01-20 Summary: RDO Manager is using deprecated nova options
[1269890 ] http://bugzilla.redhat.com/1269890 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support IPv6
[1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles
[1270818 ] http://bugzilla.redhat.com/1270818 (NEW) Component: rdo-manager Last change: 2015-11-25 Summary: Two ironic-inspector processes are running on the undercloud, breaking the introspection
[1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP)
[1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment
[1270370 ] http://bugzilla.redhat.com/1270370 (NEW) Component: rdo-manager Last change: 2015-11-25 Summary: [RDO-Manager] bulk introspection moving the nodes from available to manageable too quickly [getting: NodeLocked:]
[1269002 ] http://bugzilla.redhat.com/1269002 (ASSIGNED) Component: rdo-manager Last change: 2015-10-14 Summary: instack-undercloud: overcloud HA deployment fails - the rabbitmq doesn't run on the controllers.
[1271232 ] http://bugzilla.redhat.com/1271232 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: tempest_lib.exceptions.Conflict: An object with that identifier already exists
[1270805 ] http://bugzilla.redhat.com/1270805 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: Glance client returning 'Expected endpoint'
[1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start
[1271317 ] http://bugzilla.redhat.com/1271317 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: instack-virt-setup fails: error Running install-packages install
[1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone
[1272376 ] http://bugzilla.redhat.com/1272376 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: Duplicate nova hypervisors after rebooting compute nodes
[1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection
[1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. There are not enough hosts available., Code: 500"
[1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-11-25 Summary: rfe: support Keystone HTTPD
[1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Read bit set for others for Openstack services directories in /etc
[1273121 ] http://bugzilla.redhat.com/1273121 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: openstack help returns errors
[1270910 ] http://bugzilla.redhat.com/1270910 (ASSIGNED) Component: rdo-manager Last change: 2015-10-15 Summary: IP address from external subnet gets assigned to br-ex when using default single-nic-vlans templates
[1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree"
[1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon
[1294085 ] http://bugzilla.redhat.com/1294085 (NEW) Component: rdo-manager Last change: 2016-01-04 Summary: Creating an instance on RDO overcloud, errors out
[1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack
[1296475 ] http://bugzilla.redhat.com/1296475 (NEW) Component: rdo-manager Last change: 2016-01-07 Summary: Deploying Manila is not possible due to missing template
[1272167 ] http://bugzilla.redhat.com/1272167 (NEW) Component: rdo-manager Last change: 2015-11-16 Summary: [RFE] Support enabling the port security extension
[1294683 ] http://bugzilla.redhat.com/1294683 (NEW) Component: rdo-manager Last change: 2016-01-01 Summary: instack-undercloud: "openstack undercloud install" throws errors and then gets stuck due to selinux.
[1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment.
[1269622 ] http://bugzilla.redhat.com/1269622 (NEW) Component: rdo-manager Last change: 2015-11-16 Summary: [RFE] support override of API and RPC worker counts
[1271289 ] http://bugzilla.redhat.com/1271289 (NEW) Component: rdo-manager Last change: 2015-11-18 Summary: overcloud-novacompute stuck in spawning state
[1269894 ] http://bugzilla.redhat.com/1269894 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Add creation of demo tenant, network and installation of demo images
[1301009 ] http://bugzilla.redhat.com/1301009 (NEW) Component: rdo-manager Last change: 2016-01-27 Summary: Undercloud install failing
[1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure
[1269661 ] http://bugzilla.redhat.com/1269661 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Supporting SR-IOV enabled deployments
[1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url"
[1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images
[1292253 ] http://bugzilla.redhat.com/1292253 (NEW) Component: rdo-manager Last change: 2016-01-01 Summary: Production + EPEL + yum-plugin-priorities results in wrong version of hiera
[1273541 ] http://bugzilla.redhat.com/1273541 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: RDO-Manager needs epel.repo enabled (otherwise undercloud deployment fails.)
[1271726 ] http://bugzilla.redhat.com/1271726 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: 1 of the overcloud VMs (nova) is stack in spawning state
[1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device-mapper*
[1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci
[1273680 ] http://bugzilla.redhat.com/1273680 (ASSIGNED) Component: rdo-manager Last change: 2015-10-21 Summary: HA overcloud with network isolation deployment fails
[1300445 ] http://bugzilla.redhat.com/1300445 (NEW) Component: rdo-manager Last change: 2016-01-20 Summary: RDO Manager is using deprecated neutron options
[1276097 ] http://bugzilla.redhat.com/1276097 (NEW) Component: rdo-manager Last change: 2015-10-31 Summary: dnsmasq-dhcp: DHCPDISCOVER no address available
[1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable
[1273574 ] http://bugzilla.redhat.com/1273574 (ASSIGNED) Component: rdo-manager Last change: 2015-10-22 Summary: rdo-manager liberty, delete node is failing

### rdo-manager-cli (6 bugs)

[1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step
[1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the output of openstack management plan show --long command is not readable
[1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value
[1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL)
[1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command
[1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling

### rdopkg (1 bug)

[1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download

### RFEs (2 bugs)

[1193886 ] http://bugzilla.redhat.com/1193886 (ASSIGNED) Component: RFEs Last change: 2016-01-17 Summary: RFE: wait for DB after boot
[1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2016-01-22 Summary: [RFE] Provide easy to use upgrade tool

### tempest (1 bug)

[1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo-manager

## Fixed bugs

This is a list of "fixed" bugs by component. A "fixed" bug is in state MODIFIED, POST, or ON_QA, and has been fixed. You can help out by testing the fix to make sure it works as intended.
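The split between the two halves of this report comes down to a simple partition on bug status: NEW/ASSIGNED bugs are still open, while MODIFIED/POST/ON_QA bugs have a fix awaiting verification. A minimal sketch of that classification, using made-up bug records rather than live Bugzilla data:

```python
# Sketch of the open/fixed classification this report uses. A bug is
# "open" in state NEW or ASSIGNED, and "fixed" (awaiting verification)
# in state MODIFIED, POST, or ON_QA. The bug dicts below are
# illustrative examples, not pulled from Bugzilla.
OPEN_STATES = {"NEW", "ASSIGNED"}
FIXED_STATES = {"MODIFIED", "POST", "ON_QA"}

def partition_bugs(bugs):
    """Split a list of bug records into (open, fixed) by status."""
    open_bugs = [b for b in bugs if b["status"] in OPEN_STATES]
    fixed_bugs = [b for b in bugs if b["status"] in FIXED_STATES]
    return open_bugs, fixed_bugs

bugs = [
    {"id": 1202944, "component": "openstack-selinux", "status": "NEW"},
    {"id": 1228761, "component": "diskimage-builder", "status": "MODIFIED"},
    {"id": 1204218, "component": "openstack-ironic-discoverd", "status": "ON_QA"},
]
open_bugs, fixed_bugs = partition_bugs(bugs)
```

With the sample records above, 1202944 lands in the open list and the other two in the fixed list; testing a fixed bug and reporting back moves it on toward VERIFIED/CLOSED.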
(217 bugs)

### diskimage-builder (1 bug)

[1228761 ] http://bugzilla.redhat.com/1228761 (MODIFIED) Component: diskimage-builder Last change: 2015-09-23 Summary: DIB_YUM_REPO_CONF points to two files and that breaks image building

### distribution (7 bugs)

[1265690 ] http://bugzilla.redhat.com/1265690 (ON_QA) Component: distribution Last change: 2015-09-28 Summary: Update python-networkx to 1.10
[1300013 ] http://bugzilla.redhat.com/1300013 (MODIFIED) Component: distribution Last change: 2016-01-21 Summary: openstack-aodh now requires python-gnocchiclient
[1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2016-01-04 Summary: update el6 icehouse kombu packages for improved performance
[1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api
[1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr
[1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO
[1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack-neutron-*aas

### instack-undercloud (2 bugs)

[1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six"
[1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined

### openstack-ceilometer (10 bugs)

[1265708 ] http://bugzilla.redhat.com/1265708 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Ceilometer requires pymongo>=3.0.2
[1265721 ] http://bugzilla.redhat.com/1265721 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: File /etc/ceilometer/meters.yaml missing
[1263839 ] http://bugzilla.redhat.com/1263839 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: openstack-ceilometer should requires python-oslo-policy in kilo
[1265746 ] http://bugzilla.redhat.com/1265746 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Options 'disable_non_metric_meters' and 'meter_definitions_cfg_file' are missing from ceilometer.conf
[1194230 ] http://bugzilla.redhat.com/1194230 (POST) Component: openstack-ceilometer Last change: 2016-01-04 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions
[1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: openstack-ceilometer-common missing python-babel dependency
[1287252 ] http://bugzilla.redhat.com/1287252 (POST) Component: openstack-ceilometer Last change: 2016-01-04 Summary: openstack-ceilometer-alarm-notifier does not start: unit file is missing
[1271002 ] http://bugzilla.redhat.com/1271002 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: Ceilometer dbsync failing during HA deployment
[1265818 ] http://bugzilla.redhat.com/1265818 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: ceilometer polling agent does not start
[1214928 ] http://bugzilla.redhat.com/1214928 (MODIFIED) Component: openstack-ceilometer Last change: 2016-01-04 Summary: package ceilometermiddleware missing

### openstack-cinder (5 bugs)

[1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0]
[1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder
[1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO)
[994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo]
[1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume)

### openstack-glance (4 bugs)

[1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests
[1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue
[1278962 ] http://bugzilla.redhat.com/1278962 (ON_QA) Component: openstack-glance Last change: 2015-11-13 Summary: python-cryptography requires pyasn1>=0.1.8 but only 0.1.6 is available in Centos
[1268146 ] http://bugzilla.redhat.com/1268146 (ON_QA) Component: openstack-glance Last change: 2015-10-02 Summary: openstack-glance-registry will not start: missing systemd dependency

### openstack-heat (3 bugs)

[1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack-heat
[1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listing stacks with status DELETE_COMPLETE
[1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: missing dependency in Heat delorean build

### openstack-horizon (1 bug)
[1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing ### openstack-ironic-discoverd (1 bug) [1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-neutron (14 bugs) [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2016-01-04 Summary: neutron should not specify signing_dir in neutron- dist.conf [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. 
neutron-db-manage trying to import systemd

[1281920 ] http://bugzilla.redhat.com/1281920 (POST)
   Component: openstack-neutron
   Last change: 2015-11-16
   Summary: neutron-server will not start: fails with pbr version issue
[1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED)
   Component: openstack-neutron
   Last change: 2014-04-16
   Summary: neutron-ml2 package requires python-pyudev
[1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED)
   Component: openstack-neutron
   Last change: 2014-07-17
   Summary: neutron-dhcp-agent fails to start without openstack-neutron-openvswitch installed
[1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED)
   Component: openstack-neutron
   Last change: 2015-04-10
   Summary: Packstack installation failed with Neutron-server Could not start Service
[1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA)
   Component: openstack-neutron
   Last change: 2014-11-25
   Summary: fresh neutron install fails due unknown database column 'id'
[1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED)
   Component: openstack-neutron
   Last change: 2014-05-16
   Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini
[1270325 ] http://bugzilla.redhat.com/1270325 (MODIFIED)
   Component: openstack-neutron
   Last change: 2015-10-19
   Summary: neutron-ovs-cleanup fails to start with bad path to ovs plugin configuration

### openstack-nova (6 bugs)

[1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA)
   Component: openstack-nova
   Last change: 2016-01-04
   Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail
[1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA)
   Component: openstack-nova
   Last change: 2016-01-04
   Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options
[1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED)
   Component: openstack-nova
   Last change: 2016-01-04
   Summary: openstack-nova-compute fails to start because python-psutil is missing after installing with packstack
[1301156 ] http://bugzilla.redhat.com/1301156 (POST)
   Component: openstack-nova
   Last change: 2016-01-22
   Summary: openstack-nova missing specfile requires on castellan>=0.3.1
[958411 ] http://bugzilla.redhat.com/958411 (ON_QA)
   Component: openstack-nova
   Last change: 2015-01-07
   Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence.
[1189347 ] http://bugzilla.redhat.com/1189347 (POST)
   Component: openstack-nova
   Last change: 2016-01-04
   Summary: openstack-nova-* systemd unit files need NotifyAccess=all

### openstack-packstack (72 bugs)

[1252483 ] http://bugzilla.redhat.com/1252483 (POST)
   Component: openstack-packstack
   Last change: 2015-12-07
   Summary: Demo network provisioning: public and private are shared, private has no tenant
[1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Openstack Installer: packstack does not create tables in Heat db.
[1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error
[1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-08-05
   Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/manifests/192.168.122.82_api_nova.pp:41:3
[976394 ] http://bugzilla.redhat.com/976394 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-10-07
   Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local)
[1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: packstack prescript fails if NetworkManager is disabled, but still installed
[1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Packstack neutron plugin does not check if Nova is disabled
[1153128 ] http://bugzilla.redhat.com/1153128 (POST)
   Component: openstack-packstack
   Last change: 2016-01-04
   Summary: Cannot start nova-network on juno - Centos7
[1288179 ] http://bugzilla.redhat.com/1288179 (POST)
   Component: openstack-packstack
   Last change: 2015-12-08
   Summary: Mitaka: Packstack image provisioning fails with "Store filesystem could not be configured correctly"
[1297733 ] http://bugzilla.redhat.com/1297733 (POST)
   Component: openstack-packstack
   Last change: 2016-01-19
   Summary: No VPN tab in Horizon after deploy OSP-8 with packstack - CONFIG_NEUTRON_VPNAAS=y
[1205912 ] http://bugzilla.redhat.com/1205912 (POST)
   Component: openstack-packstack
   Last change: 2015-07-27
   Summary: allow to specify admin name and email
[958587 ] http://bugzilla.redhat.com/958587 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: packstack install succeeds even when puppet completely fails
[1101665 ] http://bugzilla.redhat.com/1101665 (POST)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: el7 Icehouse: Nagios installation fails
[1148949 ] http://bugzilla.redhat.com/1148949 (POST)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway.
[1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Horizon SSL is disabled by Nagios configuration via packstack
[1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in
[1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Disabling glance deployment does not work if you don't disable demo provisioning
[979041 ] http://bugzilla.redhat.com/979041 (ON_QA)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules
[1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: packstack doesn't configure rabbitmq to allow non-localhost connections to 'guest' user
[1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED)
   Component: openstack-packstack
   Last change: 2014-08-18
   Summary: pakcstack: mysql fails to restart on CentOS6.5
[957006 ] http://bugzilla.redhat.com/957006 (ON_QA)
   Component: openstack-packstack
   Last change: 2015-01-07
   Summary: packstack reinstall fails trying to start nagios
[995570 ] http://bugzilla.redhat.com/995570 (POST)
   Component: openstack-packstack
   Last change: 2016-01-04
   Summary: RFE: support setting up apache to serve keystone requests
[1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1
[1259354 ] http://bugzilla.redhat.com/1259354 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-11-10
   Summary: When pre-creating a vg of cinder-volumes packstack fails with an error
[990642 ] http://bugzilla.redhat.com/990642 (MODIFIED)
   Component: openstack-packstack
   Last change: 2016-01-04
   Summary: rdo release RPM not installed on all fedora hosts
[1266028 ] http://bugzilla.redhat.com/1266028 (POST)
   Component: openstack-packstack
   Last change: 2015-12-15
   Summary: Packstack should use pymysql database driver since Liberty
[1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Packstack configures nova/neutron for qpid username/password when none is required
[1290429 ] http://bugzilla.redhat.com/1290429 (POST)
   Component: openstack-packstack
   Last change: 2015-12-10
   Summary: Packstack does not correctly configure Nova notifications for Neutron in Mitaka-1
[1249482 ] http://bugzilla.redhat.com/1249482 (POST)
   Component: openstack-packstack
   Last change: 2015-08-05
   Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"?
[1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED)
   Component: openstack-packstack
   Last change: 2014-04-08
   Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre
[1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: packstack reports installation completed successfully but nothing installed
[1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5
[1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: packstack fails on centos6 with missing systemctl
[1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED)
   Component: openstack-packstack
   Last change: 2016-01-04
   Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp
[1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value
[1028690 ] http://bugzilla.redhat.com/1028690 (POST)
   Component: openstack-packstack
   Last change: 2016-01-04
   Summary: packstack requires 2 runs to install ceilometer
[1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: packstack fails if iptables.service is not available
[1018900 ] http://bugzilla.redhat.com/1018900 (POST)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Packstack fails with "The iptables provider can not handle attribute outiface"
[1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Fedora20: packstack gives traceback when SElinux permissive
[1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED)
   Component: openstack-packstack
   Last change: 2016-01-04
   Summary: packstack configures br-ex to use gateway ip
[1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: ERROR : Error during puppet run : Error: /Stage[main]/Nova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required
[1080369 ] http://bugzilla.redhat.com/1080369 (POST)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added
[1302275 ] http://bugzilla.redhat.com/1302275 (POST)
   Component: openstack-packstack
   Last change: 2016-01-27
   Summary: neutron-l3-agent does not start on Mitaka-2 when enabling FWaaS
[1302256 ] http://bugzilla.redhat.com/1302256 (POST)
   Component: openstack-packstack
   Last change: 2016-01-27
   Summary: neutron-server does not start on Mitaka-2 when enabling LBaaS
[1150652 ] http://bugzilla.redhat.com/1150652 (POST)
   Component: openstack-packstack
   Last change: 2015-12-07
   Summary: PackStack does not provide an option to register hosts to Red Hat Satellite 6
[1295503 ] http://bugzilla.redhat.com/1295503 (MODIFIED)
   Component: openstack-packstack
   Last change: 2016-01-08
   Summary: Packstack master branch is in the liberty repositories (was: Packstack installation fails with unsupported db backend)
[1082729 ] http://bugzilla.redhat.com/1082729 (POST)
   Component: openstack-packstack
   Last change: 2015-02-27
   Summary: [RFE] allow for Keystone/LDAP configuration at deployment time
[956939 ] http://bugzilla.redhat.com/956939 (ON_QA)
   Component: openstack-packstack
   Last change: 2015-01-07
   Summary: packstack install fails if ntp server does not respond
[1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Packstack creates duplicate cirros images in glance
[1265661 ] http://bugzilla.redhat.com/1265661 (POST)
   Component: openstack-packstack
   Last change: 2016-01-13
   Summary: Packstack does not install Sahara services (RDO Liberty)
[1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-10-23
   Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7
[1124982 ] http://bugzilla.redhat.com/1124982 (POST)
   Component: openstack-packstack
   Last change: 2015-12-09
   Summary: Help text for SSL is incorrect regarding passphrase on the cert
[974971 ] http://bugzilla.redhat.com/974971 (MODIFIED)
   Component: openstack-packstack
   Last change: 2016-01-04
   Summary: please give greater control over use of EPEL
[1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: RabbitMQ fails to start if configured with ssl
[1297518 ] http://bugzilla.redhat.com/1297518 (POST)
   Component: openstack-packstack
   Last change: 2016-01-12
   Summary: Sahara installation fails with ArgumentError: Could not find declared class ::sahara::notify::rabbitmq
[1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED)
   Component: openstack-packstack
   Last change: 2013-10-23
   Summary: Allow overlapping ips by default
[1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Dashboard port firewall rule is not permanent
[1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED)
   Component: openstack-packstack
   Last change: 2014-06-17
   Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface
[1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: qpid should enable SSL
[1175450 ] http://bugzilla.redhat.com/1175450 (POST)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32
[1285314 ] http://bugzilla.redhat.com/1285314 (POST)
   Component: openstack-packstack
   Last change: 2015-12-09
   Summary: Packstack needs to support aodh services since Mitaka
[991801 ] http://bugzilla.redhat.com/991801 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Warning message for installing RDO kernel needs to be adjusted
[1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7)
[1187412 ] http://bugzilla.redhat.com/1187412 (POST)
   Component: openstack-packstack
   Last change: 2015-12-09
   Summary: Script wording for service installation should be consistent
[1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED)
   Component: openstack-packstack
   Last change: 2014-02-05
   Summary: packstack generates invalid configuration when using GRE tunnels
[1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: openstack-dashboard django dependency conflict stops packstack execution
[964005 ] http://bugzilla.redhat.com/964005 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user
[1269158 ] http://bugzilla.redhat.com/1269158 (POST)
   Component: openstack-packstack
   Last change: 2015-10-19
   Summary: Sahara configuration should be affected by heat availability (broken by default right now)
[1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher
[1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: packstack package should depend on yum-utils
[1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Configure neutron correctly to be able to notify nova about port changes
[1088964 ] http://bugzilla.redhat.com/1088964 (POST)
   Component: openstack-packstack
   Last change: 2015-06-04
   Summary: Havana Fedora 19, packstack fails w/ mysql error

### openstack-puppet-modules (22 bugs)

[1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2016-01-04
   Summary: cinder modules require glance installed
[1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2015-06-02
   Summary: prescript puppet - missing dependency package iptables-services
[1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2014-09-05
   Summary: Packstack execution fails with "Could not set 'present' on ensure"
[1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2015-03-19
   Summary: problems with puppet-keystone LDAP support
[1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2016-01-04
   Summary: explicit check for pymongo is incorrect
[1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2015-06-04
   Summary: horizon log errors
[1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2015-06-04
   Summary: Horizon help url in RDO points to the RHOS documentation
[1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2015-07-02
   Summary: SSHD configuration breaks GSSAPI
[1270957 ] http://bugzilla.redhat.com/1270957 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2015-10-13
   Summary: Undercloud install fails on Error: Could not find class ::ironic::inspector for instack on node instack
[1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2015-06-04
   Summary: The private network created by packstack for demo tenant is wrongly marked as external
[1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2015-06-04
   Summary: swift.pp: Could not find command 'restorecon'
[1300562 ] http://bugzilla.redhat.com/1300562 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2016-01-22
   Summary: Mitaka - Could not find resource 'Service[mysqld]' for relationship from 'File[mysql-config-file]'
[1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2015-06-04
   Summary: add aviator
[1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2015-06-04
   Summary: packstack chokes on ironic - centos7 + juno
[1297052 ] http://bugzilla.redhat.com/1297052 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2016-01-13
   Summary: openstack-puppet-modules build is out of date and wrong branch in Delorean repos
[1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2015-06-04
   Summary: prescript.pp fails with '/sbin/service iptables start' returning 6
[1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2016-01-04
   Summary: netns.py syntax error
[1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA)
   Component: openstack-puppet-modules
   Last change: 2015-06-04
   Summary: Unable to attach cinder volume to instance
[1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2015-09-01
   Summary: RDO liberty packstack --allinone fails on demo provision of glance
[1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2014-08-01
   Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig/network-scripts/ifcfg-br-{int,tun}
[1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED)
   Component: openstack-puppet-modules
   Last change: 2016-01-04
   Summary: prescript.pp does not ensure iptables-services package installation
[1302321 ] http://bugzilla.redhat.com/1302321 (POST)
   Component: openstack-puppet-modules
   Last change: 2016-01-27
   Summary: During RDO packstack install Error: Could not set 'present' on ensure: uninitialized constant DEFAULT

### openstack-sahara (2 bugs)

[1290387 ] http://bugzilla.redhat.com/1290387 (POST)
   Component: openstack-sahara
   Last change: 2015-12-10
   Summary: openstack-sahara-api fails to start in Mitaka-1, cannot find api-paste.ini
[1268235 ] http://bugzilla.redhat.com/1268235 (MODIFIED)
   Component: openstack-sahara
   Last change: 2015-10-02
   Summary: rootwrap filter not included in Sahara RPM

### openstack-selinux (13 bugs)

[1144539 ] http://bugzilla.redhat.com/1144539 (POST)
   Component: openstack-selinux
   Last change: 2014-10-29
   Summary: selinux preventing Horizon access (IceHouse, CentOS 7)
[1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA)
   Component: openstack-selinux
   Last change: 2016-01-04
   Summary: tempest.scenario.test_server_basic_ops.TestServerBasicOps fails to launch instance w/ selinux enforcing
[1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED)
   Component: openstack-selinux
   Last change: 2015-01-22
   Summary: Keystone cannot send notifications
[1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED)
   Component: openstack-selinux
   Last change: 2014-05-15
   Summary: neutron L3 agent RPC errors
[1219406 ] http://bugzilla.redhat.com/1219406 (MODIFIED)
   Component: openstack-selinux
   Last change: 2015-11-06
   Summary: Glance over nfs fails due to selinux
[1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED)
   Component: openstack-selinux
   Last change: 2014-06-27
   Summary: Neutron is unable to create directory in /tmp
[1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED)
   Component: openstack-selinux
   Last change: 2014-06-24
   Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances,
[1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED)
   Component: openstack-selinux
   Last change: 2014-06-24
   Summary: openstack-selinux blocks communication from dashboard to identity service
[1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED)
   Component: openstack-selinux
   Last change: 2015-03-10
   Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?"
[1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED)
   Component: openstack-selinux
   Last change: 2014-04-18
   Summary: Wrong SELinux policies set for neutron-dhcp-agent
[1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA)
   Component: openstack-selinux
   Last change: 2015-01-11
   Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later
[1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED)
   Component: openstack-selinux
   Last change: 2014-10-08
   Summary: nova-api service denied tmpfs access
[1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED)
   Component: openstack-selinux
   Last change: 2015-04-06
   Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors

### openstack-swift (1 bug)

[997983 ] http://bugzilla.redhat.com/997983 (MODIFIED)
   Component: openstack-swift
   Last change: 2015-01-07
   Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages

### openstack-tripleo-heat-templates (1 bug)

[1235508 ] http://bugzilla.redhat.com/1235508 (POST)
   Component: openstack-tripleo-heat-templates
   Last change: 2015-09-29
   Summary: Package update does not take puppet managed packages into account

### openstack-trove (2 bugs)

[1278608 ] http://bugzilla.redhat.com/1278608 (MODIFIED)
   Component: openstack-trove
   Last change: 2015-11-06
   Summary: trove-api fails to start
[1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA)
   Component: openstack-trove
   Last change: 2015-08-19
   Summary: Trove has missing dependencies

### openstack-tuskar (1 bug)

[1229493 ] http://bugzilla.redhat.com/1229493 (POST)
   Component: openstack-tuskar
   Last change: 2015-12-04
   Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates

### openstack-tuskar-ui (3 bugs)

[1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED)
   Component: openstack-tuskar-ui
   Last change: 2015-06-04
   Summary: Registering nodes with the IPMI driver always fails
[1203859 ] http://bugzilla.redhat.com/1203859 (POST)
   Component: openstack-tuskar-ui
   Last change: 2015-06-04
   Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py
[1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED)
   Component: openstack-tuskar-ui
   Last change: 2015-06-04
   Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path

### openstack-utils (3 bugs)

[1211989 ] http://bugzilla.redhat.com/1211989 (POST)
   Component: openstack-utils
   Last change: 2016-01-05
   Summary: openstack-status shows 'disabled on boot' for the mysqld service
[1213150 ] http://bugzilla.redhat.com/1213150 (POST)
   Component: openstack-utils
   Last change: 2016-01-04
   Summary: openstack-status as admin falsely shows zero instances
[1214044 ] http://bugzilla.redhat.com/1214044 (POST)
   Component: openstack-utils
   Last change: 2016-01-04
   Summary: update openstack-status for rdo-manager

### python-cinderclient (1 bug)

[1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED)
   Component: python-cinderclient
   Last change: 2014-01-13
   Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run

### python-django-horizon (3 bugs)

[1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA)
   Component: python-django-horizon
   Last change: 2015-05-08
   Summary: Wrong permissions for directory /usr/share/openstack-dashboard/static/dashboard/
[1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA)
   Component: python-django-horizon
   Last change: 2015-06-24
   Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one
[1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED)
   Component: python-django-horizon
   Last change: 2015-04-14
   Summary: Need to add alias in openstack-dashboard.conf to show CSS content

### python-glanceclient (2 bugs)

[1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA)
   Component: python-glanceclient
   Last change: 2015-04-03
   Summary: Missing requires of python-jsonpatch
[1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA)
   Component: python-glanceclient
   Last change: 2015-04-03
   Summary: Missing requires of python-warlock

### python-heatclient (3 bugs)

[1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED)
   Component: python-heatclient
   Last change: 2015-02-01
   Summary: python-heatclient needs a dependency on python-pbr
[1087089 ] http://bugzilla.redhat.com/1087089 (POST)
   Component: python-heatclient
   Last change: 2015-02-01
   Summary: python-heatclient 0.2.9 requires packaging in RDO
[1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED)
   Component: python-heatclient
   Last change: 2015-02-01
   Summary: heat.bash_completion not installed

### python-keystoneclient (3 bugs)

[973263 ] http://bugzilla.redhat.com/973263 (POST)
   Component: python-keystoneclient
   Last change: 2015-06-04
   Summary: user-get fails when using IDs which are not UUIDs
[1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED)
   Component: python-keystoneclient
   Last change: 2015-06-04
   Summary: keystone missing tab completion
[971746 ] http://bugzilla.redhat.com/971746 (MODIFIED)
   Component: python-keystoneclient
   Last change: 2016-01-04
   Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO]

### python-neutronclient (3 bugs)

[1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA)
   Component: python-neutronclient
   Last change: 2014-03-26
   Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info()
[1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED)
   Component: python-neutronclient
   Last change: 2014-06-24
   Summary: Neutronclient should not obsolete quantumclient
[1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED)
   Component: python-neutronclient
   Last change: 2014-02-12
   Summary: [RFE] python-neutronclient new version request

### python-openstackclient (2 bugs)

[1171191 ] http://bugzilla.redhat.com/1171191 (POST)
   Component: python-openstackclient
   Last change: 2016-01-04
   Summary: Rebase python-openstackclient to version 1.0.0
[1302379 ] http://bugzilla.redhat.com/1302379 (MODIFIED)
   Component: python-openstackclient
   Last change: 2016-01-27
   Summary: rebase python-openstackclient to 1.7.2

### python-oslo-config (1 bug)

[1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA)
   Component: python-oslo-config
   Last change: 2016-01-04
   Summary: oslo.config >=1.2.1 is required for trove-manage

### python-pecan (1 bug)

[1265365 ] http://bugzilla.redhat.com/1265365 (MODIFIED)
   Component: python-pecan
   Last change: 2016-01-04
   Summary: Neutron missing pecan dependency

### python-swiftclient (1 bug)

[1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED)
   Component: python-swiftclient
   Last change: 2014-09-16
   Summary: Swift pseudo-folder cannot be interacted with after creation

### python-tuskarclient (2 bugs)

[1209395 ] http://bugzilla.redhat.com/1209395 (POST)
   Component: python-tuskarclient
   Last change: 2015-06-04
   Summary: `tuskar help` is missing a description next to plan-templates
[1209431 ] http://bugzilla.redhat.com/1209431 (POST)
   Component: python-tuskarclient
   Last change: 2015-06-18
   Summary: creating a tuskar plan with the exact name gives the user a traceback

### rdo-manager (10 bugs)

[1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED)
   Component: rdo-manager
   Last change: 2015-04-15
   Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails
[1270033 ] http://bugzilla.redhat.com/1270033 (POST)
   Component: rdo-manager
   Last change: 2015-10-14
   Summary: [RDO-Manager] Node inspection fails when changing the default 'inspection_iprange' value in undecloud.conf.
[1271335 ] http://bugzilla.redhat.com/1271335 (POST)
   Component: rdo-manager
   Last change: 2015-12-30
   Summary: [RFE] Support explicit configuration of L2 population
[1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED)
   Component: rdo-manager
   Last change: 2015-05-25
   Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable
[1271433 ] http://bugzilla.redhat.com/1271433 (MODIFIED)
   Component: rdo-manager
   Last change: 2015-10-20
   Summary: Horizon fails to load
[1272180 ] http://bugzilla.redhat.com/1272180 (POST)
   Component: rdo-manager
   Last change: 2015-12-04
   Summary: Horizon doesn't load when deploying without pacemaker
[1251267 ] http://bugzilla.redhat.com/1251267 (POST)
   Component: rdo-manager
   Last change: 2015-08-12
   Summary: Overcloud deployment fails for unspecified reason
[1268990 ] http://bugzilla.redhat.com/1268990 (POST)
   Component: rdo-manager
   Last change: 2015-10-07
   Summary: missing from docs Build images fails without : export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"
[1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED)
   Component: rdo-manager
   Last change: 2015-11-04
   Summary: rdo-manager: fail to discover nodes with "instack-ironic-deployment --discover-nodes": ERROR: Data pre-processing failed
[1212351 ] http://bugzilla.redhat.com/1212351 (POST)
   Component: rdo-manager
   Last change: 2015-06-18
   Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command

### rdo-manager-cli (10 bugs)

[1273197 ] http://bugzilla.redhat.com/1273197 (POST)
   Component: rdo-manager-cli
   Last change: 2015-10-20
   Summary: VXLAN should be default neutron network type
[1233429 ] http://bugzilla.redhat.com/1233429 (POST)
   Component: rdo-manager-cli
   Last change: 2015-06-20
   Summary: Lack of consistency in specifying plan argument for openstack overcloud commands
[1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED)
   Component: rdo-manager-cli
   Last change: 2015-08-03
   Summary: Node show of unified CLI has bad formatting
[1229912 ] http://bugzilla.redhat.com/1229912 (POST)
   Component: rdo-manager-cli
   Last change: 2015-06-10
   Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once) .
[1219053 ] http://bugzilla.redhat.com/1219053 (POST)
   Component: rdo-manager-cli
   Last change: 2015-06-18
   Summary: "list" command doesn't display nodes in some cases
[1211190 ] http://bugzilla.redhat.com/1211190 (POST)
   Component: rdo-manager-cli
   Last change: 2015-06-04
   Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI
[1230265 ] http://bugzilla.redhat.com/1230265 (POST)
   Component: rdo-manager-cli
   Last change: 2015-06-26
   Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated.
[1278972 ] http://bugzilla.redhat.com/1278972 (POST)
   Component: rdo-manager-cli
   Last change: 2015-11-08
   Summary: rdo-manager liberty delorean dib failing w/ "No module named passlib.utils"
[1232838 ] http://bugzilla.redhat.com/1232838 (POST)
   Component: rdo-manager-cli
   Last change: 2015-09-04
   Summary: OSC plugin isn't saving plan configuration values
[1212367 ] http://bugzilla.redhat.com/1212367 (POST)
   Component: rdo-manager-cli
   Last change: 2015-06-16
   Summary: Ensure proper nodes states after enroll and before deployment

### rdopkg (1 bug)

[1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA)
   Component: rdopkg
   Last change: 2015-08-06
   Summary: python-manilaclient is missing from kilo RDO repository

Thanks,
Chandan Kumar
From ukalifon at redhat.com  Thu Jan 28 09:35:00 2016
From: ukalifon at redhat.com (Udi Kalifon)
Date: Thu, 28 Jan 2016 11:35:00 +0200
Subject: [Rdo-list] First Mitaka test day summary
In-Reply-To: <56A93405.5080605@redhat.com>
References: <56A93405.5080605@redhat.com>
Message-ID:

I can't launch instances in Mitaka on a virtual environment. I made sure
that nested virtualization is enabled on the host. If I do nova show on
the failed instance I see this:

{"message": "No valid host was found. There are not enough hosts available.", "code": 500, "details": "
  File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line 372, in build_instances
    context, request_spec, filter_properties)
  File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line 416, in _schedule_instances
    hosts = self.scheduler_client.select_destinations(context, spec_obj)
  File \"/usr/lib/python2.7/site-packages/nova/scheduler/utils.py\", line 372, in wrapped
    return func(*args, **kwargs)
  File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", line 51, in select_destinations
    return self.queryclient.select_destinations(context, spec_obj)
  File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", line 37, in __run_method
    return getattr(self.instance, __name)(*args, **kwargs)
  File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py\", line 32, in select_destinations
    return self.scheduler_rpcapi.select_destinations(context, spec_obj)
  File \"/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py\", line 121, in select_destinations
    return cctxt.call(ctxt, 'select_destinations', **msg_args)
  File \"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py\", line 158, in call
    retry=self.retry)
  File \"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py\", line 90, in _send
    timeout=timeout, retry=retry)
  File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", line 466, in send
    retry=retry)
  File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", line 457, in _send
    raise result
", "created": "2016-01-28T09:20:47Z"}

On the controller, I see a lot of db-related errors in nova-scheduler.log:

2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", line 88, in _report_state
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db service.service_ref.save()
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 221, in wrapper
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db return fn(self, *args, **kwargs)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, in save
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db self._check_minimum_version()
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, in _check_minimum_version
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db minver = self.get_minimum_version(self._context, self.binary)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 179, in wrapper
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db result = fn(cls, context, *args, **kwargs)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, in get_minimum_version
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db use_slave=use_slave)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in service_get_minimum_version
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db use_slave=use_slave)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 468, in service_get_minimum_version
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db filter(models.Service.forced_down == false()).\
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2503, in scalar
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db ret = self.one()
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2472, in one
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db ret = list(self)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2515, in __iter__
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db return self._execute_and_instances(context)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2528, in _execute_and_instances
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db close_with_result=True)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2519, in _connection_from_session
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db **kw)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 882, in connection
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db execution_options=execution_options)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 887, in _connection_for_bind
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db engine, execution_options)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 334, in _connection_for_bind
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db conn = bind.contextual_connect()
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 2036, in contextual_connect
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db **kwargs)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 92, in __init__
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db self.dispatch.engine_connect(self, self.__branch)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/event/attr.py", line 258, in __call__
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db fn(*args, **kw)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 80, in _connect_ping_listener
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db connection.scalar(select([1]))
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 844, in scalar
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db return self.execute(object, *multiparams, **params).scalar()
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db return meth(self, multiparams, params)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db return connection._execute_clauseelement(self, multiparams, params)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db compiled_sql, distilled_params
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1078, in _execute_context
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db None, None)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1335, in _handle_dbapi_exception
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db util.raise_from_cause(newraise, exc_info)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db reraise(type(exception), exception, tb=exc_tb)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1071, in _execute_context
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db conn = self._revalidate_connection()
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 393, in _revalidate_connection
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db self.__connection = self.engine.raw_connection(_connection=self)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 2099, in raw_connection
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db self.pool.unique_connection, _connection)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 2075, in _wrap_pool_connect
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db util.reraise(*sys.exc_info())
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db return fn()
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 318, in unique_connection
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db return _ConnectionFairy._checkout(self)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 708, in _checkout
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db fairy = _ConnectionRecord.checkout(pool)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 485, in checkout
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db rec.checkin()
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db compat.reraise(exc_type, exc_value, exc_tb)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 482, in checkout
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db dbapi_connection = rec.get_connection()
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 590, in get_connection
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db self.connection = self.__connect()
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 602, in __connect
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db connection = self.__pool._invoke_creator(self)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 97, in connect
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db return dialect.connect(*cargs, **cparams)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 377, in connect
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db return self.dbapi.connect(*cargs, **cparams)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/pymysql/__init__.py", line 88, in Connect
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db return Connection(*args, **kwargs)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 657, in __init__
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db self.connect()
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 850, in connect
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db self._get_server_information()
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1061, in _get_server_information
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db packet = self._read_packet()
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 895, in _read_packet
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db packet_header = self._read_bytes(4)
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 922, in _read_bytes
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db 2013, "Lost connection to MySQL server during query")
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: u'SELECT 1']
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", line 88, in _report_state
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db service.service_ref.save()
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 221, in wrapper
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db return fn(self, *args, **kwargs)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, in save
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db self._check_minimum_version()
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, in _check_minimum_version
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db minver = self.get_minimum_version(self._context, self.binary)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 179, in wrapper
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db result = fn(cls, context, *args, **kwargs)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, in get_minimum_version
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db use_slave=use_slave)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in service_get_minimum_version
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db use_slave=use_slave)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 468, in service_get_minimum_version
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db filter(models.Service.forced_down == false()).\
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2503, in scalar
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db ret = self.one()
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2472, in one
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db ret = list(self)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2515, in __iter__
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db return self._execute_and_instances(context)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2528, in _execute_and_instances
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db close_with_result=True)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2519, in _connection_from_session
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db **kw)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 882, in connection
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db execution_options=execution_options)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 887, in _connection_for_bind
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db engine, execution_options)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 334, in _connection_for_bind
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db conn = bind.contextual_connect()
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 2034, in contextual_connect
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db self._wrap_pool_connect(self.pool.connect, None),
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 2073, in _wrap_pool_connect
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db e, dialect, self)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1399, in _handle_dbapi_exception_noconnection
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db util.raise_from_cause(newraise, exc_info)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db reraise(type(exception), exception, tb=exc_tb)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db return fn()
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 376, in connect
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db return _ConnectionFairy._checkout(self)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 708, in _checkout
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db fairy = _ConnectionRecord.checkout(pool)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 485, in checkout
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db rec.checkin()
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db compat.reraise(exc_type, exc_value, exc_tb)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 482, in checkout
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db dbapi_connection = rec.get_connection()
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 563, in get_connection
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db self.connection = self.__connect()
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 602, in __connect
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db connection = self.__pool._invoke_creator(self)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 97, in connect
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db return dialect.connect(*cargs, **cparams)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 377, in connect
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db return self.dbapi.connect(*cargs, **cparams)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/pymysql/__init__.py", line 88, in Connect
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db return Connection(*args, **kwargs)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 657, in __init__
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db self.connect()
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 850, in connect
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db self._get_server_information()
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1061, in _get_server_information
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db packet = self._read_packet()
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 895, in _read_packet
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db packet_header = self._read_bytes(4)
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 922, in _read_bytes
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db 2013, "Lost connection to MySQL server during query")
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", line 88, in _report_state
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db service.service_ref.save()
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 221, in wrapper
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db return fn(self, *args, **kwargs)
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, in save
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db self._check_minimum_version()
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, in _check_minimum_version
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db minver = self.get_minimum_version(self._context, self.binary)
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 179, in wrapper
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db result = fn(cls, context, *args, **kwargs)
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, in get_minimum_version
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db use_slave=use_slave)
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in service_get_minimum_version
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db use_slave=use_slave)
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 468, in service_get_minimum_version
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db filter(models.Service.forced_down == false()).\
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2503, in scalar
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db ret = self.one()
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2472, in one
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db ret = list(self)
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2515, in __iter__
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db return self._execute_and_instances(context)
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2528, in _execute_and_instances
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db close_with_result=True)
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2519, in _connection_from_session
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db **kw)
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 882, in connection
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db execution_options=execution_options)
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 887, in _connection_for_bind
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db engine, execution_options)
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 334, in _connection_for_bind
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db conn = bind.contextual_connect()
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 2034, in contextual_connect
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db self._wrap_pool_connect(self.pool.connect, None),
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db return fn()
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 376, in connect
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db return _ConnectionFairy._checkout(self)
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 729, in _checkout
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db fairy)
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib64/python2.7/site-packages/sqlalchemy/event/attr.py", line 258, in __call__
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db fn(*args, **kw)
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 352, in checkout
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db if connection_record.info['pid'] != pid:
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db KeyError: 'pid'
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db
2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool [-] Exception during reset or similar
2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool Traceback (most recent call last):
2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 631, in _finalize_fairy
2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool fairy._reset(pool)
2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 765, in _reset
2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool pool._dialect.do_rollback(self)
2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File "/usr/lib64/python2.7/site-packages/sqlalchemy/dialects/mysql/base.py", line 2519, in do_rollback
2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool dbapi_connection.rollback()
2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 723, in rollback
2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool self._execute_command(COMMAND.COM_QUERY, "ROLLBACK")
2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 956, in _execute_command
2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool raise err.InterfaceError("(0, '')")
2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool InterfaceError: (0, '')
2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool
2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool [-] Exception closing connection
2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool Traceback (most recent call last):
2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 290, in _close_connection
2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool self._dialect.do_close(connection)
2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 418, in do_close
2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool dbapi_connection.close()
2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 662, in close
2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool raise err.Error("Already closed")
2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool Error: Already closed
2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool
2016-01-28 08:25:27.009 1121 INFO nova.servicegroup.drivers.db [-] Recovered from being unable to report status.
2016-01-28 08:27:21.165 31979 WARNING oslo_reports.guru_meditation_report [-] Guru mediation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
2016-01-28 08:27:21.544 31979 WARNING oslo_config.cfg [req-d3d0b357-c23b-4e8b-b322-32f5b225dc19 - - - - -] Option "use_local" from group "conductor" is deprecated for removal. Its value may be silently ignored in the future.
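[Editorial aside, not part of the original mail: the repeated `SELECT 1` in the traces above is oslo.db's connection-liveness ping (`_connect_ping_listener` in the first traceback). Before handing a pooled connection to nova, oslo.db runs a trivial query and treats any failure as a dead connection, which is why a database that has gone away surfaces as `DBConnectionError ... [SQL: u'SELECT 1']`. A minimal sketch of that ping pattern, using the stdlib `sqlite3` module as a stand-in for MySQL/pymysql; the `is_alive` helper is illustrative and is not oslo.db's actual API.]

```python
import sqlite3

def is_alive(conn):
    # Liveness ping in the style of oslo.db's _connect_ping_listener:
    # run a trivial "SELECT 1" and treat any DB-API error as a dead
    # connection that should be discarded from the pool.
    try:
        return conn.execute("SELECT 1").fetchone() == (1,)
    except sqlite3.Error:
        return False

conn = sqlite3.connect(":memory:")
print(is_alive(conn))  # a healthy connection answers the ping
conn.close()
print(is_alive(conn))  # a dead one fails it, like the lost MySQL connection above
```

In the real deployment the same check runs against MySQL via pymysql, so a flapping database on the HA controllers makes the ping itself raise, exactly as seen in nova-scheduler.log.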
2016-01-28 08:27:21.606 31979 INFO oslo_service.periodic_task [req-d3d0b357-c23b-4e8b-b322-32f5b225dc19 - - - - -] Skipping periodic task _periodic_update_dns because its interval is negative
2016-01-28 08:27:21.635 31979 INFO nova.service [-] Starting scheduler node (version 13.0.0-dev80.el7.centos)

Was anyone successful in launching instances in Mitaka? I installed with
the director, it's an HA deployment without network isolation in a
virtualized environment.

Thanks a lot,
Udi.

On Wed, Jan 27, 2016 at 11:17 PM, John Trowbridge wrote:
>
>
> On 01/27/2016 01:25 PM, Udi Kalifon wrote:
>> Hello.
>>
>> The good news is that I succeeded to deploy :). I haven't yet tried to
>> test the overcloud for any sanity, but in past test days I was never
>> able to report any success - so maybe it's a sign that things are
>> stabilizing.
>
> The quickstart.sh will do all of that for you, but if you wanted to
> submit a patch for more detailed instructions for manually setting up
> the virtualenv, that would be great. For tripleo-quickstart, gerrit is
> set up and follows the same gerrit workflow as everything else.
>
>>
>> Thanks,
>> Udi.
>>
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From trown at redhat.com  Thu Jan 28 11:48:37 2016
From: trown at redhat.com (John Trowbridge)
Date: Thu, 28 Jan 2016 06:48:37 -0500
Subject: [Rdo-list] Reminder: RDO test day tomorrow
In-Reply-To: <56A98FA9.9010400@redhat.com>
References: <56A7D3B6.2060409@redhat.com> <56A98FA9.9010400@redhat.com>
Message-ID: <56AA0015.40509@redhat.com>

On 01/27/2016 10:48 PM, Adam Young wrote:
> On 01/26/2016 03:14 PM, Rich Bowen wrote:
>> A reminder that we'll be holding the RDO Mitaka 2 test day Tomorrow and
>> Thursday - January 27-28. Details, and test instructions, may be found
>> here: https://www.rdoproject.org/testday/mitaka/milestone2/
>>
>> I will be traveling tomorrow, so I request that folks be particularly
>> aware on #rdo, so that beginners and others new to RDO have the support
>> that they need when things go wrong.
>>
>> Thank you all, in advance, for the time that you're willing to invest to
>> make RDO better for everyone.
>>
> I am not certain if it is common knowledge, but if you are trying to
> install via Tripleo:
>
>
> I've had the best success with:
>
> http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/tripleo.sh.html
>
>
> do each stage manually, don't run --all, as there is a race condition.
> > Is there a way to run this script specifically for RDO manager? > I would prefer we don't use that in RDO. It is meant to be a dev/CI tool, and it is not supported by upstream TripleO. We are actually in the process of moving it from the openstack/tripleo-common repo to the openstack-infra/tripleoci repo to make this more clear. If the upstream documentation cannot be followed to a successful deploy without using tripleo.sh, then we need to fix upstream documentation. We have www.github.com/redhat-openstack/tripleo-quickstart for the RDO-supported way to quickly stand up a virtual environment. This still relies on upstream documentation for everything after the undercloud install. So again, if there are flaws with the upstream docs, we need to be submitting bugs/patches there. From trown at redhat.com Thu Jan 28 12:01:35 2016 From: trown at redhat.com (John Trowbridge) Date: Thu, 28 Jan 2016 07:01:35 -0500 Subject: [Rdo-list] First Mitaka test day summary In-Reply-To: References: <56A93405.5080605@redhat.com> Message-ID: <56AA031F.7080004@redhat.com> On 01/28/2016 04:35 AM, Udi Kalifon wrote: > I can't launch instances in Mitaka on a virtual environment. I made > sure that nested virtualization is enabled on the host. If I do nova > show on the failed instance I see this: > > {"message": "No valid host was found. 
There are not enough hosts > available.", "code": 500, "details": " File > \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line > 372, in build_instances > context, request_spec, filter_properties) > File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", > line 416, in _schedule_instances > hosts = self.scheduler_client.select_destinations(context, > spec_obj) > File \"/usr/lib/python2.7/site-packages/nova/scheduler/utils.py\", > line 372, in wrapped > return func(*args, **kwargs) > File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", > line 51, in select_destinations > return self.queryclient.select_destinations(context, spec_obj) > File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", > line 37, in __run_method > return getattr(self.instance, __name)(*args, **kwargs) > File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py\", > line 32, in select_destinations > return self.scheduler_rpcapi.select_destinations(context, spec_obj) > File \"/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py\", > line 121, in select_destinations > return cctxt.call(ctxt, 'select_destinations', **msg_args) > File \"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py\", > line 158, in call > retry=self.retry) > File \"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py\", > line 90, in _send > timeout=timeout, retry=retry) > File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", > line 466, in send > retry=retry) > File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", > line 457, in _send > raise result > ", "created": "2016-01-28T09:20:47Z"} > > > On the controller, I see a lot of db-related errors in nova-scheduler.log: > > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db [-] > Unexpected error while reporting service status > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > Traceback 
(most recent call last): > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", > line 88, in _report_state > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > service.service_ref.save() > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line > 221, in wrapper > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > return fn(self, *args, **kwargs) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, > in save > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > self._check_minimum_version() > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, > in _check_minimum_version > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > minver = self.get_minimum_version(self._context, self.binary) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line > 179, in wrapper > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > result = fn(cls, context, *args, **kwargs) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, > in get_minimum_version > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > use_slave=use_slave) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in > service_get_minimum_version > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > use_slave=use_slave) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > 
"/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line > 468, in service_get_minimum_version > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > filter(models.Service.forced_down == false()).\ > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > 2503, in scalar > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > ret = self.one() > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > 2472, in one > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > ret = list(self) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > 2515, in __iter__ > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > return self._execute_and_instances(context) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > 2528, in _execute_and_instances > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > close_with_result=True) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > 2519, in _connection_from_session > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db **kw) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > 882, in connection > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > execution_options=execution_options) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > 887, in _connection_for_bind > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > 
engine, execution_options) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > 334, in _connection_for_bind > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > conn = bind.contextual_connect() > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 2036, in contextual_connect > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db **kwargs) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 92, in __init__ > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > self.dispatch.engine_connect(self, self.__branch) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/event/attr.py", line > 258, in __call__ > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > fn(*args, **kw) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line > 80, in _connect_ping_listener > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > connection.scalar(select([1])) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 844, in scalar > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > return self.execute(object, *multiparams, **params).scalar() > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 914, in execute > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > return meth(self, multiparams, params) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > 
"/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line > 323, in _execute_on_connection > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > return connection._execute_clauseelement(self, multiparams, params) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 1010, in _execute_clauseelement > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > compiled_sql, distilled_params > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 1078, in _execute_context > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db None, None) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 1335, in _handle_dbapi_exception > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > util.raise_from_cause(newraise, exc_info) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line > 199, in raise_from_cause > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > reraise(type(exception), exception, tb=exc_tb) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 1071, in _execute_context > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > conn = self._revalidate_connection() > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 393, in _revalidate_connection > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > self.__connection = self.engine.raw_connection(_connection=self) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 2099, in raw_connection > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > self.pool.unique_connection, _connection) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 2075, in _wrap_pool_connect > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > util.reraise(*sys.exc_info()) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 2069, in _wrap_pool_connect > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db return fn() > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 318, in > unique_connection > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > return _ConnectionFairy._checkout(self) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 708, in > _checkout > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > fairy = _ConnectionRecord.checkout(pool) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 485, in > checkout > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > rec.checkin() > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", > line 60, in __exit__ > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > compat.reraise(exc_type, exc_value, exc_tb) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 482, in > checkout > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > 
dbapi_connection = rec.get_connection() > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 590, in > get_connection > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > self.connection = self.__connect() > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 602, in > __connect > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > connection = self.__pool._invoke_creator(self) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", > line 97, in connect > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > return dialect.connect(*cargs, **cparams) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", > line 377, in connect > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > return self.dbapi.connect(*cargs, **cparams) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/pymysql/__init__.py", line 88, in > Connect > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > return Connection(*args, **kwargs) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 657, > in __init__ > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > self.connect() > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 850, > in connect > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > self._get_server_information() > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > 
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1061, > in _get_server_information > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > packet = self._read_packet() > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 895, > in _read_packet > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > packet_header = self._read_bytes(4) > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 922, > in _read_bytes > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > 2013, "Lost connection to MySQL server during query") > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost > connection to MySQL server during query') [SQL: u'SELECT 1'] > 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db [-] > Unexpected error while reporting service status > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > Traceback (most recent call last): > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", > line 88, in _report_state > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > service.service_ref.save() > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line > 221, in wrapper > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > return fn(self, *args, **kwargs) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, > in save > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > 
self._check_minimum_version() > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, > in _check_minimum_version > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > minver = self.get_minimum_version(self._context, self.binary) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line > 179, in wrapper > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > result = fn(cls, context, *args, **kwargs) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, > in get_minimum_version > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > use_slave=use_slave) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in > service_get_minimum_version > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > use_slave=use_slave) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line > 468, in service_get_minimum_version > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > filter(models.Service.forced_down == false()).\ > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > 2503, in scalar > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > ret = self.one() > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > 2472, in one > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > ret = list(self) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > 2515, in __iter__ > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > return self._execute_and_instances(context) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > 2528, in _execute_and_instances > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > close_with_result=True) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > 2519, in _connection_from_session > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db **kw) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > 882, in connection > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > execution_options=execution_options) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > 887, in _connection_for_bind > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > engine, execution_options) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > 334, in _connection_for_bind > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > conn = bind.contextual_connect() > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 2034, in contextual_connect > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > self._wrap_pool_connect(self.pool.connect, None), > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 2073, in _wrap_pool_connect > 2016-01-28 
08:25:08.191 1121 ERROR nova.servicegroup.drivers.db e, > dialect, self) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 1399, in _handle_dbapi_exception_noconnection > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > util.raise_from_cause(newraise, exc_info) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line > 199, in raise_from_cause > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > reraise(type(exception), exception, tb=exc_tb) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 2069, in _wrap_pool_connect > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db return fn() > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 376, in > connect > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > return _ConnectionFairy._checkout(self) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 708, in > _checkout > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > fairy = _ConnectionRecord.checkout(pool) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 485, in > checkout > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > rec.checkin() > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", > line 60, in __exit__ > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > compat.reraise(exc_type, exc_value, exc_tb) > 2016-01-28 08:25:08.191 1121 ERROR 
nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 482, in > checkout > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > dbapi_connection = rec.get_connection() > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 563, in > get_connection > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > self.connection = self.__connect() > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 602, in > __connect > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > connection = self.__pool._invoke_creator(self) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", > line 97, in connect > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > return dialect.connect(*cargs, **cparams) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", > line 377, in connect > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > return self.dbapi.connect(*cargs, **cparams) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/pymysql/__init__.py", line 88, in > Connect > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > return Connection(*args, **kwargs) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 657, > in __init__ > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > self.connect() > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 850, > in connect > 2016-01-28 08:25:08.191 1121 ERROR 
nova.servicegroup.drivers.db > self._get_server_information() > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1061, > in _get_server_information > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > packet = self._read_packet() > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 895, > in _read_packet > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > packet_header = self._read_bytes(4) > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 922, > in _read_bytes > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > 2013, "Lost connection to MySQL server during query") > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost > connection to MySQL server during query') > 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db [-] > Unexpected error while reporting service status > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > Traceback (most recent call last): > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", > line 88, in _report_state > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > service.service_ref.save() > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line > 221, in wrapper > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > return fn(self, *args, **kwargs) > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/objects/service.py", 
line 282, > in save > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > self._check_minimum_version() > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, > in _check_minimum_version > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > minver = self.get_minimum_version(self._context, self.binary) > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line > 179, in wrapper > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > result = fn(cls, context, *args, **kwargs) > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, > in get_minimum_version > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > use_slave=use_slave) > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in > service_get_minimum_version > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > use_slave=use_slave) > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line > 468, in service_get_minimum_version > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > filter(models.Service.forced_down == false()).\ > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > 2503, in scalar > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > ret = self.one() > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > 2472, in one > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > ret = list(self) > 2016-01-28 
08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > 2515, in __iter__ > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > return self._execute_and_instances(context) > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > 2528, in _execute_and_instances > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > close_with_result=True) > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > 2519, in _connection_from_session > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db **kw) > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > 882, in connection > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > execution_options=execution_options) > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > 887, in _connection_for_bind > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > engine, execution_options) > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > 334, in _connection_for_bind > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > conn = bind.contextual_connect() > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 2034, in contextual_connect > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > self._wrap_pool_connect(self.pool.connect, None), > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > 2069, in _wrap_pool_connect > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db return fn() > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 376, in > connect > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > return _ConnectionFairy._checkout(self) > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 729, in > _checkout > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db fairy) > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib64/python2.7/site-packages/sqlalchemy/event/attr.py", line > 258, in __call__ > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > fn(*args, **kw) > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line > 352, in checkout > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db if > connection_record.info['pid'] != pid: > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db KeyError: 'pid' > 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool [-] > Exception during reset or similar > 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool Traceback > (most recent call last): > 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 631, in > _finalize_fairy > 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool > fairy._reset(pool) > 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 765, in > _reset > 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool > 
pool._dialect.do_rollback(self) > 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File > "/usr/lib64/python2.7/site-packages/sqlalchemy/dialects/mysql/base.py", > line 2519, in do_rollback > 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool > dbapi_connection.rollback() > 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File > "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 723, > in rollback > 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool > self._execute_command(COMMAND.COM_QUERY, "ROLLBACK") > 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File > "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 956, > in _execute_command > 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool raise > err.InterfaceError("(0, '')") > 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool > InterfaceError: (0, '') > 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool > 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool [-] > Exception closing connection 0x510f550> > 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool Traceback > (most recent call last): > 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File > "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 290, in > _close_connection > 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool > self._dialect.do_close(connection) > 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File > "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", > line 418, in do_close > 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool > dbapi_connection.close() > 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File > "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 662, > in close > 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool raise > err.Error("Already closed") > 2016-01-28 08:25:16.971 1121 ERROR 
sqlalchemy.pool.QueuePool Error: > Already closed > 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool > 2016-01-28 08:25:27.009 1121 INFO nova.servicegroup.drivers.db [-] > Recovered from being unable to report status. > 2016-01-28 08:27:21.165 31979 WARNING > oslo_reports.guru_meditation_report [-] Guru mediation now registers > SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 > will no longer be registered in a future release, so please use > SIGUSR2 to generate reports. > 2016-01-28 08:27:21.544 31979 WARNING oslo_config.cfg > [req-d3d0b357-c23b-4e8b-b322-32f5b225dc19 - - - - -] Option > "use_local" from group "conductor" is deprecated for removal. Its > value may be silently ignored in the future. > 2016-01-28 08:27:21.606 31979 INFO oslo_service.periodic_task > [req-d3d0b357-c23b-4e8b-b322-32f5b225dc19 - - - - -] Skipping periodic > task _periodic_update_dns because its interval is negative > 2016-01-28 08:27:21.635 31979 INFO nova.service [-] Starting scheduler > node (version 13.0.0-dev80.el7.centos) > > Was anyone successful in launching instances in Mitaka? I installed > with the director, it's an HA deployment without network isolation in > a virtualized environment. > CI is passing on that configuration, and I have had success with it manually. Could you post your deploy command? > Thanks a lot, > Udi. > > On Wed, Jan 27, 2016 at 11:17 PM, John Trowbridge wrote: >> >> >> On 01/27/2016 01:25 PM, Udi Kalifon wrote: >>> Hello. >>> >>> The good news is that I succeeded to deploy :). I haven't yet tried to >>> test the overcloud for any sanity, but in past test days I was never >>> able to report any success - so maybe it's a sign that things are >>> stabilizing. >> >> That's awesome! >> >>> >>> I deployed with rdo-manager on a virtual setup according to the >>> instructions in https://www.rdoproject.org/rdo-manager/. 
I wasn't able >>> to deploy with network isolation, because I assume that my templates >>> from 7.x require changes, but I haven't seen any documentation on >>> what's changed. If you can point me in the right direction to get >>> network isolation working for this environment I will test it >>> tomorrow. >>> >>> Some of the problems I hit today: >>> >>> 1) The link to the quickstart guide from the testday page >>> https://www.rdoproject.org/testday/mitaka/milestone2/ points to a very >>> old github page. The correct link should be the one I already >>> mentioned: https://www.rdoproject.org/rdo-manager/ >> >> I fixed that link, thanks! >> >>> >>> 2) The prerequisites to installing ansible are not documented. On a >>> fresh CentOS 7 I had to install python-virtualenv, git, and gcc. I >>> then ran "easy_install pip" and "pip install >>> git+https://github.com/ansible/ansible.git at v2.0.0-0.6.rc1#egg=ansible" >>> to be able to run the playbook which installs the virtual environment. >> >> The quickstart.sh will do all of that for you, but if you wanted to >> submit a patch for more detailed instructions for manually setting up >> the virtualenv, that would be great. For tripleo-quickstart, gerrit is >> set up and follows the same gerrit workflow as everything else. >> >>> >>> Thanks, >>> Udi. 
>>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com From mohammed.arafa at gmail.com Thu Jan 28 12:12:51 2016 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Thu, 28 Jan 2016 14:12:51 +0200 Subject: [Rdo-list] [rdo-manager] - frustrating node Message-ID: hi all i am attempting to build a 2 node basic overcloud. my previous emails have been talking about the problems i encountered. what i have : - 1 vm called rdo with undercloud AND overcloud. this one has not been updated since november and i keep restoring snapshots to that date. - a 2nd vm called rdo2, fully updated, overcloud fails to deploy to a specific physical node observations: (unscientific!) the 2 physical nodes are both good. i tested by redeploying on rdo again and again. i even swapped their order in instackenv.json and redeployed successfully from the instackenv.json step. however i have a particular machine that refuses to deploy. it doesn't matter what order. if it is the controller, it fails, if it is the compute it fails. i am using the same flavour on both rdo vms. but again, i believe i have ruled out that variable. how far did i reach? over the past few days i have opened the console and watched this particular machine pxe boot, get an ip, reboot, change its hostname to reflect the ip, reboot to localhost.localdomain (?) and then power off. 
i am not saying i sat down and watched it for the entire 209 minutes but i have observed it unscientifically last error: Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates Stack failed with status: Resource CREATE failed: resources.Controller: ResourceInError: resources[0].resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" Heat Stack create failed. real 209m19.252s user 0m21.695s sys 0m2.402s what am i looking for?: what do i look for in the logs? and my logs are huge; they don't get rotated for some reason i would like to know the reason this particular physical machine refuses to deploy, so i can fix it. i believe i have eliminated all variables except the machine itself and it has me puzzled and frustrated as i need to move on to the next stage of network isolation. any ideas? thanks -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From trown at redhat.com Thu Jan 28 13:32:06 2016 From: trown at redhat.com (John Trowbridge) Date: Thu, 28 Jan 2016 08:32:06 -0500 Subject: [Rdo-list] [rdo-manager] - frustrating node In-Reply-To: References: Message-ID: <56AA1856.6000608@redhat.com> On 01/28/2016 07:12 AM, Mohammed Arafa wrote: > hi all > > i am attempting to build a 2 node basic overcloud. my previous emails have > been talking about the problems i encountered. > > what i have : > - 1 vm called rdo with undercloud AND overcloud. this one is has not been > updated since november and i keep restoring snapshots to that date. > - a 2nd vm called rdo2, full updated, overcloud fails to deploy to a > specific physical node > > observations: (unscientific!) > the 2 physical nodes are both good. i tested by redeploying on rdo again > and again. i even swapped their order in instackenv.json and redeploying > succesfully from instackenv.json step. 
> however i have a particular machine that refuses to deploy. it doesnt > matter what order. if it is the controller, it fails, if it is the compute > it fails. > i am using the same flavour on both rdo vms. but again, i believe i have > ruled out that variable. > > how far did i reach? > over the past few days i have opened the console and watched this > particular machine pxe boot, get an ip, reboot, change its hostname to > reflect the ip, reboot to localhost.localdomain (?) and the power off. i am > not saying i sat down and watched it for the entire 209 minutes but i have > observed it unscientifically > > last error: > Deploying templates in the directory > /usr/share/openstack-tripleo-heat-templates > Stack failed with status: Resource CREATE failed: resources.Controller: > ResourceInError: resources[0].resources.Controller: Went to status ERROR > due to "Message: No valid host was found. There are not enough hosts > available., Code: 500" > Heat Stack create failed. > > real 209m19.252s > user 0m21.695s > sys 0m2.402s > > what am i looking for?: > what do i look for in the logs? and my logs are huge; they dont get rotated > for some reason > i would like to know the reason this particular physical machine refuses to > deploy, so i can fix it. i believe i have eliminated all variables except > the machine itself and it has me puzzled and frustrated as i need to move > on to the next stage of network isolation. > > any ideas? > The point at which it is failing seems to be before the node is fully deployed. Which is to say, before we start doing puppet applies on it to configure it. This is a helpful distinction, because we can limit the search space for possible issues. This is almost certainly a Nova/Ironic issue. The best log to look at for Nova in this case would be the scheduler log at /var/log/nova/nova-scheduler.log, while the best log to look at for Ironic would be the conductor log at /var/log/ironic/ironic-conductor.log. 
If your logs are very large, it may be better to delete them and reproduce the issue, in order to further limit the search space. Note that the issue is most likely reproduced within the first 30 minutes of that test, so you won't need to wait the full 200+ minutes, which I am guessing just hits the deploy timeout. > thanks > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From afazekas at redhat.com Thu Jan 28 13:50:35 2016 From: afazekas at redhat.com (Attila Fazekas) Date: Thu, 28 Jan 2016 08:50:35 -0500 (EST) Subject: [Rdo-list] Slow network performance on Kilo? In-Reply-To: References: <56A928FB.6000206@soe.ucsc.edu> Message-ID: <1389580690.15405313.1453989035284.JavaMail.zimbra@redhat.com> My first wild guess: it is something not OK around VLAN splinters. It can be either a config or a driver-related issue. Just for the record, can you share this info: - kernel version - ovs version (build) - nic (lspci -nn | grep Eth) Do you use some kind of bonding? ----- Original Message ----- > From: "Boris Derzhavets" > To: "Erich Weiler" , rdo-list at redhat.com > Sent: Wednesday, January 27, 2016 10:22:04 PM > Subject: Re: [Rdo-list] Slow network performance on Kilo? > > > > ________________________________________ > From: rdo-list-bounces at redhat.com on behalf of > Erich Weiler > Sent: Wednesday, January 27, 2016 3:30 PM > To: rdo-list at redhat.com > Subject: [Rdo-list] Slow network performance on Kilo? > > Hi Y'all, > > I've seen several folks on the net with this problem, but I'm still > flailing a bit as to what is really going on. > > We are running RHEL 7 with RDO OpenStack Kilo. > > We are setting this environment up still, not quite done yet. But in > our testing, we are experiencing very slow network performance when > downloading or uploading to and from VMs. We get like 300Kb/s or so. 
> > We are using Neutron, MTU 9000 everywhere. I've tried disabling GSO, > LRO, TSO, GRO on the neutron interfaces, as well as the VM server > interfaces, still no improvement. I've tried lowering the VM MTU to 1500, > still no improvement. It's really strange. We do get connectivity, I > can ssh to the instances, but the network performance is just really, > really slow. It appears the instances can talk to each other very > quickly however. They just get slow network to the internet (i.e. when > packets go through the network node). > > We are using VLAN tenant network isolation. > > 1. Switch to VXLAN tunneling > 2. Activate DVR. It's already stable on RDO Kilo. > It will route North-South and East-West traffic, avoiding the > Network Node. > > Boris. > > Can anyone point me in the right direction? I've been beating my head > against a wall and googling to no avail for a week... > > Many thanks, > erich > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From ukalifon at redhat.com Thu Jan 28 15:00:05 2016 From: ukalifon at redhat.com (Udi Kalifon) Date: Thu, 28 Jan 2016 17:00:05 +0200 Subject: [Rdo-list] Troubleshooting services after reboot of the overcloud Message-ID: Hello. I rebooted all my overcloud nodes. This is a Mitaka installation with rdo-manager on a virtual environment. The keystone service is not answering any more, and I have no clue what to do about it now that it's running under Apache. The httpd service itself is running. How do I troubleshoot this? Thanks, Udi. 
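[Editor's note: the symptom above — keystone unresponsive while httpd itself is running — can be narrowed with a quick connectivity probe before digging into logs. A minimal sketch, not from the thread; the controller address is a placeholder and the ports are Keystone's conventional public/admin defaults (5000 and 35357), which may differ in your deployment:]

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical controller address; replace with your overcloud VIP.
    controller = "192.0.2.10"
    for port in (5000, 35357):  # Keystone public / admin ports (assumed defaults)
        state = "listening" if port_open(controller, port) else "unreachable"
        print("keystone port %d: %s" % (port, state))
```

[If the ports are listening but requests still fail, the next place to look is the keystone WSGI vhost logs under Apache on the controller (paths vary by deployment).]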
From ukalifon at redhat.com Thu Jan 28 15:08:46 2016 From: ukalifon at redhat.com (Udi Kalifon) Date: Thu, 28 Jan 2016 17:08:46 +0200 Subject: [Rdo-list] First Mitaka test day summary In-Reply-To: <56AA031F.7080004@redhat.com> References: <56A93405.5080605@redhat.com> <56AA031F.7080004@redhat.com> Message-ID: The deploy command was: openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 --ntp-server clock.redhat.com -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml To try and investigate this subject further, I enabled nested virtualization on the host, and changed the libvirt type to qemu. Unfortunately that also required turning off all the VMs on the host for the kvm module to be reloaded - and now I have problems with keystone that doesn't want to come back up. I mailed the list about this problem in a separate thread. Thanks. Udi. On Thu, Jan 28, 2016 at 2:01 PM, John Trowbridge wrote: > > > On 01/28/2016 04:35 AM, Udi Kalifon wrote: >> I can't launch instances in Mitaka on a virtual environment. I made >> sure that nested virtualization is enabled on the host. If I do nova >> show on the failed instance I see this: >> >> {"message": "No valid host was found. 
There are not enough hosts >> available.", "code": 500, "details": " File >> \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line >> 372, in build_instances >> context, request_spec, filter_properties) >> File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", >> line 416, in _schedule_instances >> hosts = self.scheduler_client.select_destinations(context, >> spec_obj) >> File \"/usr/lib/python2.7/site-packages/nova/scheduler/utils.py\", >> line 372, in wrapped >> return func(*args, **kwargs) >> File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", >> line 51, in select_destinations >> return self.queryclient.select_destinations(context, spec_obj) >> File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", >> line 37, in __run_method >> return getattr(self.instance, __name)(*args, **kwargs) >> File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py\", >> line 32, in select_destinations >> return self.scheduler_rpcapi.select_destinations(context, spec_obj) >> File \"/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py\", >> line 121, in select_destinations >> return cctxt.call(ctxt, 'select_destinations', **msg_args) >> File \"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py\", >> line 158, in call >> retry=self.retry) >> File \"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py\", >> line 90, in _send >> timeout=timeout, retry=retry) >> File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", >> line 466, in send >> retry=retry) >> File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", >> line 457, in _send >> raise result >> ", "created": "2016-01-28T09:20:47Z"} >> >> >> On the controller, I see a lot of db-related errors in nova-scheduler.log: >> >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db [-] >> Unexpected error while reporting service status >> 2016-01-28 08:25:02.645 1121 
ERROR nova.servicegroup.drivers.db >> Traceback (most recent call last): >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", >> line 88, in _report_state >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> service.service_ref.save() >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >> 221, in wrapper >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> return fn(self, *args, **kwargs) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, >> in save >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> self._check_minimum_version() >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, >> in _check_minimum_version >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> minver = self.get_minimum_version(self._context, self.binary) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >> 179, in wrapper >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> result = fn(cls, context, *args, **kwargs) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, >> in get_minimum_version >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> use_slave=use_slave) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in >> service_get_minimum_version >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> use_slave=use_slave) >> 2016-01-28 08:25:02.645 
1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line >> 468, in service_get_minimum_version >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> filter(models.Service.forced_down == false()).\ >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >> 2503, in scalar >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> ret = self.one() >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >> 2472, in one >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> ret = list(self) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >> 2515, in __iter__ >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> return self._execute_and_instances(context) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >> 2528, in _execute_and_instances >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> close_with_result=True) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >> 2519, in _connection_from_session >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db **kw) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >> 882, in connection >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> execution_options=execution_options) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >> 887, in 
_connection_for_bind >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> engine, execution_options) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >> 334, in _connection_for_bind >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> conn = bind.contextual_connect() >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 2036, in contextual_connect >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db **kwargs) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 92, in __init__ >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> self.dispatch.engine_connect(self, self.__branch) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/event/attr.py", line >> 258, in __call__ >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> fn(*args, **kw) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line >> 80, in _connect_ping_listener >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> connection.scalar(select([1])) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 844, in scalar >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> return self.execute(object, *multiparams, **params).scalar() >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 914, in execute >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> return 
meth(self, multiparams, params) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line >> 323, in _execute_on_connection >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> return connection._execute_clauseelement(self, multiparams, params) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 1010, in _execute_clauseelement >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> compiled_sql, distilled_params >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 1078, in _execute_context >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db None, None) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 1335, in _handle_dbapi_exception >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> util.raise_from_cause(newraise, exc_info) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line >> 199, in raise_from_cause >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> reraise(type(exception), exception, tb=exc_tb) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 1071, in _execute_context >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> conn = self._revalidate_connection() >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 393, in _revalidate_connection >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> 
self.__connection = self.engine.raw_connection(_connection=self) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 2099, in raw_connection >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> self.pool.unique_connection, _connection) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 2075, in _wrap_pool_connect >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> util.reraise(*sys.exc_info()) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 2069, in _wrap_pool_connect >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db return fn() >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 318, in >> unique_connection >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> return _ConnectionFairy._checkout(self) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 708, in >> _checkout >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> fairy = _ConnectionRecord.checkout(pool) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 485, in >> checkout >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> rec.checkin() >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", >> line 60, in __exit__ >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> compat.reraise(exc_type, exc_value, exc_tb) >> 2016-01-28 08:25:02.645 1121 ERROR 
nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 482, in >> checkout >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> dbapi_connection = rec.get_connection() >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 590, in >> get_connection >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> self.connection = self.__connect() >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 602, in >> __connect >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> connection = self.__pool._invoke_creator(self) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", >> line 97, in connect >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> return dialect.connect(*cargs, **cparams) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", >> line 377, in connect >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> return self.dbapi.connect(*cargs, **cparams) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/pymysql/__init__.py", line 88, in >> Connect >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> return Connection(*args, **kwargs) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 657, >> in __init__ >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> self.connect() >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 850, >> in connect >> 
2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> self._get_server_information() >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1061, >> in _get_server_information >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> packet = self._read_packet() >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 895, >> in _read_packet >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> packet_header = self._read_bytes(4) >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 922, >> in _read_bytes >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> 2013, "Lost connection to MySQL server during query") >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost >> connection to MySQL server during query') [SQL: u'SELECT 1'] >> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db [-] >> Unexpected error while reporting service status >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> Traceback (most recent call last): >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", >> line 88, in _report_state >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> service.service_ref.save() >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >> 221, in wrapper >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> return fn(self, *args, **kwargs) >> 2016-01-28 08:25:08.191 1121 ERROR 
nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, >> in save >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> self._check_minimum_version() >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, >> in _check_minimum_version >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> minver = self.get_minimum_version(self._context, self.binary) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >> 179, in wrapper >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> result = fn(cls, context, *args, **kwargs) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, >> in get_minimum_version >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> use_slave=use_slave) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in >> service_get_minimum_version >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> use_slave=use_slave) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line >> 468, in service_get_minimum_version >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> filter(models.Service.forced_down == false()).\ >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >> 2503, in scalar >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> ret = self.one() >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >> 2472, in one >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> ret = list(self) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >> 2515, in __iter__ >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> return self._execute_and_instances(context) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >> 2528, in _execute_and_instances >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> close_with_result=True) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >> 2519, in _connection_from_session >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db **kw) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >> 882, in connection >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> execution_options=execution_options) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >> 887, in _connection_for_bind >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> engine, execution_options) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >> 334, in _connection_for_bind >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> conn = bind.contextual_connect() >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 2034, in contextual_connect >> 2016-01-28 08:25:08.191 
1121 ERROR nova.servicegroup.drivers.db >> self._wrap_pool_connect(self.pool.connect, None), >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 2073, in _wrap_pool_connect >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db e, >> dialect, self) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 1399, in _handle_dbapi_exception_noconnection >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> util.raise_from_cause(newraise, exc_info) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line >> 199, in raise_from_cause >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> reraise(type(exception), exception, tb=exc_tb) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 2069, in _wrap_pool_connect >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db return fn() >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 376, in >> connect >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> return _ConnectionFairy._checkout(self) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 708, in >> _checkout >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> fairy = _ConnectionRecord.checkout(pool) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 485, in >> checkout >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> rec.checkin() >> 2016-01-28 
08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", >> line 60, in __exit__ >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> compat.reraise(exc_type, exc_value, exc_tb) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 482, in >> checkout >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> dbapi_connection = rec.get_connection() >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 563, in >> get_connection >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> self.connection = self.__connect() >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 602, in >> __connect >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> connection = self.__pool._invoke_creator(self) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", >> line 97, in connect >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> return dialect.connect(*cargs, **cparams) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", >> line 377, in connect >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> return self.dbapi.connect(*cargs, **cparams) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/pymysql/__init__.py", line 88, in >> Connect >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> return Connection(*args, **kwargs) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> 
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 657, >> in __init__ >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> self.connect() >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 850, >> in connect >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> self._get_server_information() >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1061, >> in _get_server_information >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> packet = self._read_packet() >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 895, >> in _read_packet >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> packet_header = self._read_bytes(4) >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 922, >> in _read_bytes >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> 2013, "Lost connection to MySQL server during query") >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost >> connection to MySQL server during query') >> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db [-] >> Unexpected error while reporting service status >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> Traceback (most recent call last): >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", >> line 88, in _report_state >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> service.service_ref.save() 
>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >> 221, in wrapper >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> return fn(self, *args, **kwargs) >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, >> in save >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> self._check_minimum_version() >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, >> in _check_minimum_version >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> minver = self.get_minimum_version(self._context, self.binary) >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >> 179, in wrapper >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> result = fn(cls, context, *args, **kwargs) >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, >> in get_minimum_version >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> use_slave=use_slave) >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in >> service_get_minimum_version >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> use_slave=use_slave) >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line >> 468, in service_get_minimum_version >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> filter(models.Service.forced_down == false()).\ >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db 
File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >> 2503, in scalar >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> ret = self.one() >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >> 2472, in one >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> ret = list(self) >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >> 2515, in __iter__ >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> return self._execute_and_instances(context) >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >> 2528, in _execute_and_instances >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> close_with_result=True) >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >> 2519, in _connection_from_session >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db **kw) >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >> 882, in connection >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> execution_options=execution_options) >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >> 887, in _connection_for_bind >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> engine, execution_options) >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >> 334, in _connection_for_bind >> 2016-01-28 08:25:16.966 1121 ERROR 
nova.servicegroup.drivers.db >> conn = bind.contextual_connect() >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 2034, in contextual_connect >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> self._wrap_pool_connect(self.pool.connect, None), >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >> 2069, in _wrap_pool_connect >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db return fn() >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 376, in >> connect >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> return _ConnectionFairy._checkout(self) >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 729, in >> _checkout >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db fairy) >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/event/attr.py", line >> 258, in __call__ >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> fn(*args, **kw) >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >> "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line >> 352, in checkout >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db if >> connection_record.info['pid'] != pid: >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db KeyError: 'pid' >> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool [-] >> Exception during reset or similar >> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool Traceback >> (most recent call last): >> 
2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 631, in >> _finalize_fairy >> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >> fairy._reset(pool) >> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 765, in >> _reset >> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >> pool._dialect.do_rollback(self) >> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/dialects/mysql/base.py", >> line 2519, in do_rollback >> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >> dbapi_connection.rollback() >> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File >> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 723, >> in rollback >> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >> self._execute_command(COMMAND.COM_QUERY, "ROLLBACK") >> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File >> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 956, >> in _execute_command >> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool raise >> err.InterfaceError("(0, '')") >> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >> InterfaceError: (0, '') >> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool [-] >> Exception closing connection > 0x510f550> >> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool Traceback >> (most recent call last): >> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 290, in >> _close_connection >> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool >> self._dialect.do_close(connection) >> 2016-01-28 08:25:16.971 1121 ERROR 
sqlalchemy.pool.QueuePool File >> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", >> line 418, in do_close >> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool >> dbapi_connection.close() >> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File >> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 662, >> in close >> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool raise >> err.Error("Already closed") >> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool Error: >> Already closed >> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool >> 2016-01-28 08:25:27.009 1121 INFO nova.servicegroup.drivers.db [-] >> Recovered from being unable to report status. >> 2016-01-28 08:27:21.165 31979 WARNING >> oslo_reports.guru_meditation_report [-] Guru mediation now registers >> SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 >> will no longer be registered in a future release, so please use >> SIGUSR2 to generate reports. >> 2016-01-28 08:27:21.544 31979 WARNING oslo_config.cfg >> [req-d3d0b357-c23b-4e8b-b322-32f5b225dc19 - - - - -] Option >> "use_local" from group "conductor" is deprecated for removal. Its >> value may be silently ignored in the future. >> 2016-01-28 08:27:21.606 31979 INFO oslo_service.periodic_task >> [req-d3d0b357-c23b-4e8b-b322-32f5b225dc19 - - - - -] Skipping periodic >> task _periodic_update_dns because its interval is negative >> 2016-01-28 08:27:21.635 31979 INFO nova.service [-] Starting scheduler >> node (version 13.0.0-dev80.el7.centos) >> >> Was anyone successful in launching instances in Mitaka? I installed >> with the director, it's an HA deployment without network isolation in >> a virtualized environment. >> > > CI is passing on that configuration, and I have had success with it > manually. Could you post your deploy command? > >> Thanks a lot, >> Udi. 
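[Editor's note: the "No valid host was found" failure discussed in this thread can usually be triaged with a few standard nova CLI commands. This is only a sketch, not from the thread itself; it assumes the overcloudrc credentials are sourced on the undercloud.]

```shell
# Triage sketch for "No valid host was found" (assumes overcloudrc is sourced).
# 1. Confirm nova-compute is up and enabled on every compute node:
nova service-list
# 2. Confirm the scheduler actually sees hypervisor resources (vCPUs, RAM, disk):
nova hypervisor-stats
# 3. On a controller, check which scheduler filter rejected the hosts:
grep -i 'filter' /var/log/nova/nova-scheduler.log | tail
```

If hypervisor-stats reports zero resources, the problem is on the compute side (as here, where nova-compute cannot reach MySQL) rather than in the scheduler filters.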
>> >> On Wed, Jan 27, 2016 at 11:17 PM, John Trowbridge wrote: >>> >>> >>> On 01/27/2016 01:25 PM, Udi Kalifon wrote: >>>> Hello. >>>> >>>> The good news is that I succeeded in deploying :). I haven't yet tried to >>>> run any sanity tests on the overcloud, but in past test days I was never >>>> able to report any success - so maybe it's a sign that things are >>>> stabilizing. >>> >>> That's awesome! >>> >>>> >>>> I deployed with rdo-manager on a virtual setup according to the >>>> instructions in https://www.rdoproject.org/rdo-manager/. I wasn't able >>>> to deploy with network isolation, because I assume that my templates >>>> from 7.x require changes, but I haven't seen any documentation on >>>> what's changed. If you can point me in the right direction to get >>>> network isolation working for this environment I will test it >>>> tomorrow. >>>> >>>> Some of the problems I hit today: >>>> >>>> 1) The link to the quickstart guide from the testday page >>>> https://www.rdoproject.org/testday/mitaka/milestone2/ points to a very >>>> old github page. The correct link should be the one I already >>>> mentioned: https://www.rdoproject.org/rdo-manager/ >>> >>> I fixed that link, thanks! >>> >>>> >>>> 2) The prerequisites for installing ansible are not documented. On a >>>> fresh CentOS 7 I had to install python-virtualenv, git, and gcc. I >>>> then ran "easy_install pip" and "pip install >>>> git+https://github.com/ansible/ansible.git@v2.0.0-0.6.rc1#egg=ansible" >>>> to be able to run the playbook which installs the virtual environment. >>> >>> The quickstart.sh will do all of that for you, but if you wanted to >>> submit a patch for more detailed instructions for manually setting up >>> the virtualenv, that would be great. For tripleo-quickstart, gerrit is >>> set up and follows the same gerrit workflow as everything else. >>> >>>> >>>> Thanks, >>>> Udi. 
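[Editor's note: the manual prerequisite steps listed in item 2 above can be collected into a short script. This is only a sketch of what the mail describes, assuming a fresh CentOS 7 box; the ansible ref is the one quoted in the mail, with the archive's " at " un-mangled to "@".]

```shell
# Sketch of the manual tripleo-quickstart prerequisites described above
# (fresh CentOS 7; package names and the ansible ref are taken from the mail).
sudo yum -y install python-virtualenv git gcc
sudo easy_install pip
sudo pip install \
  "git+https://github.com/ansible/ansible.git@v2.0.0-0.6.rc1#egg=ansible"
```

As John notes in his reply, quickstart.sh performs these steps automatically; the script is only useful for a fully manual setup.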
>>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com From trown at redhat.com Thu Jan 28 15:15:42 2016 From: trown at redhat.com (John Trowbridge) Date: Thu, 28 Jan 2016 10:15:42 -0500 Subject: [Rdo-list] First Mitaka test day summary In-Reply-To: References: <56A93405.5080605@redhat.com> <56AA031F.7080004@redhat.com> Message-ID: <56AA309E.4060506@redhat.com> On 01/28/2016 10:08 AM, Udi Kalifon wrote: > The deploy command was: > openstack overcloud deploy --templates --control-scale 3 > --compute-scale 1 --ntp-server clock.redhat.com -e > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > For virtual deployments, you also need to pass '--libvirt-type qemu'. It is probably worth trying a redeploy by deleting the overcloud and adding that argument. > To try to investigate this subject further, I enabled nested > virtualization on the host, and changed the libvirt type to qemu. > Unfortunately that also required turning off all the VMs on the host > for the kvm module to be reloaded - and now I have problems with > keystone that doesn't want to come back up. I mailed the list about > this problem in a separate thread. > > Thanks. > Udi. > > On Thu, Jan 28, 2016 at 2:01 PM, John Trowbridge wrote: >> >> >> On 01/28/2016 04:35 AM, Udi Kalifon wrote: >>> I can't launch instances in Mitaka on a virtual environment. I made >>> sure that nested virtualization is enabled on the host. If I do nova >>> show on the failed instance I see this: >>> >>> {"message": "No valid host was found. 
There are not enough hosts >>> available.", "code": 500, "details": " File >>> \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line >>> 372, in build_instances >>> context, request_spec, filter_properties) >>> File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", >>> line 416, in _schedule_instances >>> hosts = self.scheduler_client.select_destinations(context, >>> spec_obj) >>> File \"/usr/lib/python2.7/site-packages/nova/scheduler/utils.py\", >>> line 372, in wrapped >>> return func(*args, **kwargs) >>> File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", >>> line 51, in select_destinations >>> return self.queryclient.select_destinations(context, spec_obj) >>> File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", >>> line 37, in __run_method >>> return getattr(self.instance, __name)(*args, **kwargs) >>> File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py\", >>> line 32, in select_destinations >>> return self.scheduler_rpcapi.select_destinations(context, spec_obj) >>> File \"/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py\", >>> line 121, in select_destinations >>> return cctxt.call(ctxt, 'select_destinations', **msg_args) >>> File \"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py\", >>> line 158, in call >>> retry=self.retry) >>> File \"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py\", >>> line 90, in _send >>> timeout=timeout, retry=retry) >>> File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", >>> line 466, in send >>> retry=retry) >>> File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", >>> line 457, in _send >>> raise result >>> ", "created": "2016-01-28T09:20:47Z"} >>> >>> >>> On the controller, I see a lot of db-related errors in nova-scheduler.log: >>> >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db [-] >>> Unexpected error while reporting service 
status >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> Traceback (most recent call last): >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", >>> line 88, in _report_state >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> service.service_ref.save() >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >>> 221, in wrapper >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> return fn(self, *args, **kwargs) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, >>> in save >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> self._check_minimum_version() >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, >>> in _check_minimum_version >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> minver = self.get_minimum_version(self._context, self.binary) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >>> 179, in wrapper >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> result = fn(cls, context, *args, **kwargs) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, >>> in get_minimum_version >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> use_slave=use_slave) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in >>> service_get_minimum_version >>> 2016-01-28 08:25:02.645 1121 ERROR 
nova.servicegroup.drivers.db >>> use_slave=use_slave) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line >>> 468, in service_get_minimum_version >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> filter(models.Service.forced_down == false()).\ >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>> 2503, in scalar >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> ret = self.one() >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>> 2472, in one >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> ret = list(self) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>> 2515, in __iter__ >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> return self._execute_and_instances(context) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>> 2528, in _execute_and_instances >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> close_with_result=True) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>> 2519, in _connection_from_session >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db **kw) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>> 882, in connection >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> execution_options=execution_options) >>> 2016-01-28 08:25:02.645 1121 ERROR 
nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>> 887, in _connection_for_bind >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> engine, execution_options) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>> 334, in _connection_for_bind >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> conn = bind.contextual_connect() >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 2036, in contextual_connect >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db **kwargs) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 92, in __init__ >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> self.dispatch.engine_connect(self, self.__branch) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/event/attr.py", line >>> 258, in __call__ >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> fn(*args, **kw) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line >>> 80, in _connect_ping_listener >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> connection.scalar(select([1])) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 844, in scalar >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> return self.execute(object, *multiparams, **params).scalar() >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 914, in execute >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> return meth(self, multiparams, params) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line >>> 323, in _execute_on_connection >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> return connection._execute_clauseelement(self, multiparams, params) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 1010, in _execute_clauseelement >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> compiled_sql, distilled_params >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 1078, in _execute_context >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db None, None) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 1335, in _handle_dbapi_exception >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> util.raise_from_cause(newraise, exc_info) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line >>> 199, in raise_from_cause >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> reraise(type(exception), exception, tb=exc_tb) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 1071, in _execute_context >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> conn = self._revalidate_connection() >>> 2016-01-28 08:25:02.645 1121 ERROR 
nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 393, in _revalidate_connection >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> self.__connection = self.engine.raw_connection(_connection=self) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 2099, in raw_connection >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> self.pool.unique_connection, _connection) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 2075, in _wrap_pool_connect >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> util.reraise(*sys.exc_info()) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 2069, in _wrap_pool_connect >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db return fn() >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 318, in >>> unique_connection >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> return _ConnectionFairy._checkout(self) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 708, in >>> _checkout >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> fairy = _ConnectionRecord.checkout(pool) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 485, in >>> checkout >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> rec.checkin() >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> 
"/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", >>> line 60, in __exit__ >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> compat.reraise(exc_type, exc_value, exc_tb) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 482, in >>> checkout >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> dbapi_connection = rec.get_connection() >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 590, in >>> get_connection >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> self.connection = self.__connect() >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 602, in >>> __connect >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> connection = self.__pool._invoke_creator(self) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", >>> line 97, in connect >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> return dialect.connect(*cargs, **cparams) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", >>> line 377, in connect >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> return self.dbapi.connect(*cargs, **cparams) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/pymysql/__init__.py", line 88, in >>> Connect >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> return Connection(*args, **kwargs) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> 
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 657, >>> in __init__ >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> self.connect() >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 850, >>> in connect >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> self._get_server_information() >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1061, >>> in _get_server_information >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> packet = self._read_packet() >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 895, >>> in _read_packet >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> packet_header = self._read_bytes(4) >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 922, >>> in _read_bytes >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> 2013, "Lost connection to MySQL server during query") >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost >>> connection to MySQL server during query') [SQL: u'SELECT 1'] >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db [-] >>> Unexpected error while reporting service status >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> Traceback (most recent call last): >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", >>> line 88, in _report_state >>> 2016-01-28 08:25:08.191 1121 ERROR 
nova.servicegroup.drivers.db >>> service.service_ref.save() >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >>> 221, in wrapper >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> return fn(self, *args, **kwargs) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, >>> in save >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> self._check_minimum_version() >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, >>> in _check_minimum_version >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> minver = self.get_minimum_version(self._context, self.binary) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >>> 179, in wrapper >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> result = fn(cls, context, *args, **kwargs) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, >>> in get_minimum_version >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> use_slave=use_slave) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in >>> service_get_minimum_version >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> use_slave=use_slave) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line >>> 468, in service_get_minimum_version >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> 
filter(models.Service.forced_down == false()).\ >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>> 2503, in scalar >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> ret = self.one() >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>> 2472, in one >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> ret = list(self) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>> 2515, in __iter__ >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> return self._execute_and_instances(context) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>> 2528, in _execute_and_instances >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> close_with_result=True) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>> 2519, in _connection_from_session >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db **kw) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>> 882, in connection >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> execution_options=execution_options) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>> 887, in _connection_for_bind >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> engine, execution_options) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>> 334, in _connection_for_bind >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> conn = bind.contextual_connect() >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 2034, in contextual_connect >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> self._wrap_pool_connect(self.pool.connect, None), >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 2073, in _wrap_pool_connect >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db e, >>> dialect, self) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 1399, in _handle_dbapi_exception_noconnection >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> util.raise_from_cause(newraise, exc_info) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line >>> 199, in raise_from_cause >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> reraise(type(exception), exception, tb=exc_tb) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 2069, in _wrap_pool_connect >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db return fn() >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 376, in >>> connect >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> return _ConnectionFairy._checkout(self) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> 
"/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 708, in >>> _checkout >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> fairy = _ConnectionRecord.checkout(pool) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 485, in >>> checkout >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> rec.checkin() >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", >>> line 60, in __exit__ >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> compat.reraise(exc_type, exc_value, exc_tb) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 482, in >>> checkout >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> dbapi_connection = rec.get_connection() >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 563, in >>> get_connection >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> self.connection = self.__connect() >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 602, in >>> __connect >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> connection = self.__pool._invoke_creator(self) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", >>> line 97, in connect >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> return dialect.connect(*cargs, **cparams) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", >>> line 377, 
in connect >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> return self.dbapi.connect(*cargs, **cparams) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/pymysql/__init__.py", line 88, in >>> Connect >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> return Connection(*args, **kwargs) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 657, >>> in __init__ >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> self.connect() >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 850, >>> in connect >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> self._get_server_information() >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1061, >>> in _get_server_information >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> packet = self._read_packet() >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 895, >>> in _read_packet >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> packet_header = self._read_bytes(4) >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 922, >>> in _read_bytes >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> 2013, "Lost connection to MySQL server during query") >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>> DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost >>> connection to MySQL server during query') >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db 
>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db [-] >>> Unexpected error while reporting service status >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> Traceback (most recent call last): >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", >>> line 88, in _report_state >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> service.service_ref.save() >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >>> 221, in wrapper >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> return fn(self, *args, **kwargs) >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, >>> in save >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> self._check_minimum_version() >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, >>> in _check_minimum_version >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> minver = self.get_minimum_version(self._context, self.binary) >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >>> 179, in wrapper >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> result = fn(cls, context, *args, **kwargs) >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, >>> in get_minimum_version >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> use_slave=use_slave) >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> 
"/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in >>> service_get_minimum_version >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> use_slave=use_slave) >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line >>> 468, in service_get_minimum_version >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> filter(models.Service.forced_down == false()).\ >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>> 2503, in scalar >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> ret = self.one() >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>> 2472, in one >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> ret = list(self) >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>> 2515, in __iter__ >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> return self._execute_and_instances(context) >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>> 2528, in _execute_and_instances >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> close_with_result=True) >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>> 2519, in _connection_from_session >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db **kw) >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>> 882, in connection >>> 2016-01-28 
08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> execution_options=execution_options) >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>> 887, in _connection_for_bind >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> engine, execution_options) >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>> 334, in _connection_for_bind >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> conn = bind.contextual_connect() >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 2034, in contextual_connect >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> self._wrap_pool_connect(self.pool.connect, None), >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>> 2069, in _wrap_pool_connect >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db return fn() >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 376, in >>> connect >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> return _ConnectionFairy._checkout(self) >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 729, in >>> _checkout >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db fairy) >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/event/attr.py", line >>> 258, in __call__ >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> fn(*args, **kw) >>> 2016-01-28 
08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>> "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line >>> 352, in checkout >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db if >>> connection_record.info['pid'] != pid: >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db KeyError: 'pid' >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool [-] >>> Exception during reset or similar >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool Traceback >>> (most recent call last): >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 631, in >>> _finalize_fairy >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >>> fairy._reset(pool) >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 765, in >>> _reset >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >>> pool._dialect.do_rollback(self) >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/dialects/mysql/base.py", >>> line 2519, in do_rollback >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >>> dbapi_connection.rollback() >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 723, >>> in rollback >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >>> self._execute_command(COMMAND.COM_QUERY, "ROLLBACK") >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 956, >>> in _execute_command >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool raise >>> err.InterfaceError("(0, '')") >>> 2016-01-28 08:25:16.969 1121 
ERROR sqlalchemy.pool.QueuePool >>> InterfaceError: (0, '') >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool [-] >>> Exception closing connection >> 0x510f550> >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool Traceback >>> (most recent call last): >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 290, in >>> _close_connection >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool >>> self._dialect.do_close(connection) >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", >>> line 418, in do_close >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool >>> dbapi_connection.close() >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 662, >>> in close >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool raise >>> err.Error("Already closed") >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool Error: >>> Already closed >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool >>> 2016-01-28 08:25:27.009 1121 INFO nova.servicegroup.drivers.db [-] >>> Recovered from being unable to report status. >>> 2016-01-28 08:27:21.165 31979 WARNING >>> oslo_reports.guru_meditation_report [-] Guru mediation now registers >>> SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 >>> will no longer be registered in a future release, so please use >>> SIGUSR2 to generate reports. >>> 2016-01-28 08:27:21.544 31979 WARNING oslo_config.cfg >>> [req-d3d0b357-c23b-4e8b-b322-32f5b225dc19 - - - - -] Option >>> "use_local" from group "conductor" is deprecated for removal. Its >>> value may be silently ignored in the future. 
>>> 2016-01-28 08:27:21.606 31979 INFO oslo_service.periodic_task
>>> [req-d3d0b357-c23b-4e8b-b322-32f5b225dc19 - - - - -] Skipping periodic
>>> task _periodic_update_dns because its interval is negative
>>> 2016-01-28 08:27:21.635 31979 INFO nova.service [-] Starting scheduler
>>> node (version 13.0.0-dev80.el7.centos)
>>>
>>> Was anyone successful in launching instances in Mitaka? I installed
>>> with the director; it's an HA deployment without network isolation in
>>> a virtualized environment.
>>
>> CI is passing on that configuration, and I have had success with it
>> manually. Could you post your deploy command?
>>
>>> Thanks a lot,
>>> Udi.
>>>
>>> On Wed, Jan 27, 2016 at 11:17 PM, John Trowbridge wrote:
>>>>
>>>> On 01/27/2016 01:25 PM, Udi Kalifon wrote:
>>>>> Hello.
>>>>>
>>>>> The good news is that I succeeded in deploying :). I haven't yet tried
>>>>> to test the overcloud for any sanity, but in past test days I was never
>>>>> able to report any success - so maybe it's a sign that things are
>>>>> stabilizing.
>>>>
>>>> That's awesome!
>>>>
>>>>> I deployed with rdo-manager on a virtual setup according to the
>>>>> instructions in https://www.rdoproject.org/rdo-manager/. I wasn't able
>>>>> to deploy with network isolation, because I assume that my templates
>>>>> from 7.x require changes, but I haven't seen any documentation on
>>>>> what's changed. If you can point me in the right direction to get
>>>>> network isolation working for this environment, I will test it
>>>>> tomorrow.
>>>>>
>>>>> Some of the problems I hit today:
>>>>>
>>>>> 1) The link to the quickstart guide from the testday page
>>>>> https://www.rdoproject.org/testday/mitaka/milestone2/ points to a very
>>>>> old GitHub page. The correct link should be the one I already
>>>>> mentioned: https://www.rdoproject.org/rdo-manager/
>>>>
>>>> I fixed that link, thanks!
>>>>
>>>>> 2) The prerequisites for installing Ansible are not documented. On a
>>>>> fresh CentOS 7 I had to install python-virtualenv, git, and gcc. I
>>>>> then ran "easy_install pip" and "pip install
>>>>> git+https://github.com/ansible/ansible.git at v2.0.0-0.6.rc1#egg=ansible"
>>>>> to be able to run the playbook which installs the virtual environment.
>>>>
>>>> The quickstart.sh will do all of that for you, but if you wanted to
>>>> submit a patch with more detailed instructions for manually setting up
>>>> the virtualenv, that would be great. tripleo-quickstart is on Gerrit
>>>> and follows the same Gerrit workflow as everything else.
>>>>
>>>>> Thanks,
>>>>> Udi.
>>>>>
>>>>> _______________________________________________
>>>>> Rdo-list mailing list
>>>>> Rdo-list at redhat.com
>>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>>
>>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>>>
>>>> _______________________________________________
>>>> Rdo-list mailing list
>>>> Rdo-list at redhat.com
>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>
>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com

From iovadia at redhat.com  Thu Jan 28 15:31:05 2016
From: iovadia at redhat.com (Ido Ovadia)
Date: Thu, 28 Jan 2016 10:31:05 -0500 (EST)
Subject: [Rdo-list] Mitaka: Overcloud installation failed code 6
In-Reply-To: <407267512.13564360.1453992878734.JavaMail.zimbra@redhat.com>
Message-ID: <1903491730.13584026.1453995065797.JavaMail.zimbra@redhat.com>

Hello,

I deployed Mitaka with rdo-manager on a virtual setup (undercloud, ceph,
compute, 3*controller) according to the instructions in
https://www.rdoproject.org/rdo-manager/

Overcloud deployment failed with code: 6

I need some guidance to solve this.
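A status code 6 from a deployment resource generally means the configuration (puppet) run on the node exited non-zero. Before logging in to the nodes, Heat itself can say which nested resource failed, and the failed entries can be picked out of `heat resource-list` output with plain grep. A minimal sketch — the resource table below is a made-up sample; on a real undercloud you would pipe the actual `heat resource-list --nested-depth 5 overcloud` output instead:

```shell
# Made-up sample of 'heat resource-list --nested-depth 5 overcloud' output;
# on a real undercloud, run the heat command itself instead of this here-doc.
cat > /tmp/resources.txt <<'EOF'
| NetworkDeployment                      | CREATE_COMPLETE |
| ControllerDeployment                   | CREATE_COMPLETE |
| ControllerServicesBaseDeployment_Step2 | CREATE_FAILED   |
EOF

# Keep only the failed resources -- these are the ones worth inspecting.
grep FAILED /tmp/resources.txt

# For a failed deployment resource, its stdout/stderr can then be dumped
# with 'heat deployment-show <deployment-uuid>', taking the uuid from the
# failed resource's physical_resource_id on a real setup.
```

`heat deployment-show` is the same command used later in this thread to collect the failure output.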
openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 \
    --ceph-storage-scale 1 --ntp-server clock.redhat.com \
    -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
    --libvirt-type qemu

.......

2016-01-28 14:22:37 [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: CREATE_IN_PROGRESS Stack CREATE started
2016-01-28 14:22:37 [1]: CREATE_IN_PROGRESS state changed
2016-01-28 14:22:37 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:37 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:38 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:38 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:38 [0]: CREATE_IN_PROGRESS state changed
2016-01-28 14:22:38 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:39 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:39 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:39 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:39 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:39 [2]: CREATE_IN_PROGRESS state changed
2016-01-28 14:22:40 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:40 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:45 [2]: SIGNAL_IN_PROGRESS Signal: deployment failed (6)
2016-01-28 14:23:46 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:48 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:48 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:49 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:49 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:58 [1]: SIGNAL_IN_PROGRESS Signal: deployment failed (6)
2016-01-28 14:23:58 [1]: CREATE_FAILED Error: resources[1]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:01 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:02 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:02 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:15 [0]: SIGNAL_IN_PROGRESS Signal: deployment failed (6)
2016-01-28 14:24:15 [0]: CREATE_FAILED Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
2016-01-28 14:24:16 [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: CREATE_FAILED Resource CREATE failed: Error: resources[2]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
2016-01-28 14:24:16 [0]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:17 [ControllerServicesBaseDeployment_Step2]: CREATE_FAILED Error: resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:18 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:18 [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
2016-01-28 14:24:18 [0]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:19 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:19 [0]: SIGNAL_COMPLETE Unknown
2016-01-28 14:28:19 [ComputeNodesPostDeployment]: CREATE_FAILED CREATE aborted
2016-01-28 14:28:19 [overcloud]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
Stack overcloud CREATE_FAILED
Deployment failed: Heat Stack create failed.

------------------------------------------------------------------
more info
=========

heat deployment-show 91352911-6df8-4797-be2b-2789b3b5e066 output

http://pastebin.test.redhat.com/344518
-----------------------------------------------------------------

From sasha at redhat.com  Thu Jan 28 15:40:55 2016
From: sasha at redhat.com (Sasha Chuzhoy)
Date: Thu, 28 Jan 2016 10:40:55 -0500 (EST)
Subject: [Rdo-list] Mitaka: Overcloud installation failed code 6
In-Reply-To: <1903491730.13584026.1453995065797.JavaMail.zimbra@redhat.com>
References: <1903491730.13584026.1453995065797.JavaMail.zimbra@redhat.com>
Message-ID: <220029169.18366841.1453995655573.JavaMail.zimbra@redhat.com>

The error code suggests there was a puppet error. You can use heat to
debug it. It might also be useful to log in to the overcloud nodes and
check the logs.

Thanks.

Best regards,
Sasha Chuzhoy.

----- Original Message -----
> From: "Ido Ovadia"
> To: "rdo-list"
> Sent: Thursday, January 28, 2016 10:31:05 AM
> Subject: [Rdo-list] Mitaka: Overcloud installation failed code 6
>
> Hello,
>
> I deployed Mitaka with rdo-manager on a virtual setup (undercloud, ceph,
> compute, 3*controller) according to the
> instructions in https://www.rdoproject.org/rdo-manager/
>
> Overcloud deployment failed with code: 6
>
> I need some guidance to solve this.
>
> openstack overcloud deploy --templates --control-scale 3 --compute-scale 1
> --ceph-storage-scale 1 --ntp-server clock.redhat.com -e
> /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
> -e
> /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
> --libvirt-type qemu
>
> .......
> > 2016-01-28 14:22:37 > [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: > CREATE_IN_PROGRESS Stack CREATE started > 2016-01-28 14:22:37 [1]: CREATE_IN_PROGRESS state changed > 2016-01-28 14:22:37 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:37 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:38 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:38 [ControllerDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:38 [0]: CREATE_IN_PROGRESS state changed > 2016-01-28 14:22:38 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:39 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:39 [ControllerDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:39 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:39 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:39 [2]: CREATE_IN_PROGRESS state changed > 2016-01-28 14:22:40 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:40 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:45 [2]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) > 2016-01-28 14:23:46 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:48 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:48 [ControllerDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:49 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:49 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:58 [1]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) > 2016-01-28 14:23:58 [1]: CREATE_FAILED Error: resources[1]: Deployment to > server failed: deploy_status_code : Deployment exited with non-zero status > code: 6 > 2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:01 [ControllerDeployment]: 
SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:02 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:02 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:15 [0]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) > 2016-01-28 14:24:15 [0]: CREATE_FAILED Error: resources[0]: Deployment to > server failed: deploy_status_code : Deployment exited with non-zero status > code: 6 > 2016-01-28 14:24:16 > [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: > CREATE_FAILED Resource CREATE failed: Error: resources[2]: Deployment to > server failed: deploy_status_code : Deployment exited with non-zero status > code: 6 > 2016-01-28 14:24:16 [0]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:17 [ControllerServicesBaseDeployment_Step2]: CREATE_FAILED > Error: resources.ControllerServicesBaseDeployment_Step2.resources[2]: > Deployment to server failed: deploy_status_code: Deployment exited with > non-zero status code: 6 > 2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:18 [ControllerDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:18 [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y]: > CREATE_FAILED Resource CREATE failed: Error: > resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment to > server failed: deploy_status_code: Deployment exited with non-zero status > code: 6 > 2016-01-28 14:24:18 [0]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:19 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:19 [0]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:28:19 [ComputeNodesPostDeployment]: CREATE_FAILED CREATE > aborted > 2016-01-28 14:28:19 [overcloud]: CREATE_FAILED Resource CREATE failed: > Error: > resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[2]: > Deployment to server failed: deploy_status_code: Deployment exited with > non-zero status code: 6 > Stack overcloud 
CREATE_FAILED > Deployment failed: Heat Stack create failed. > > ------------------------------------------------------------------ > > more info > ========= > > heat deployment-show 91352911-6df8-4797-be2b-2789b3b5e066 output > http://pastebin.test.redhat.com/344518 > > ----------------------------------------------------------------- > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From sasha at redhat.com Thu Jan 28 15:43:04 2016 From: sasha at redhat.com (Sasha Chuzhoy) Date: Thu, 28 Jan 2016 10:43:04 -0500 (EST) Subject: [Rdo-list] First Mitaka test day summary In-Reply-To: <56AA309E.4060506@redhat.com> References: <56A93405.5080605@redhat.com> <56AA031F.7080004@redhat.com> <56AA309E.4060506@redhat.com> Message-ID: <1966145064.18367948.1453995784676.JavaMail.zimbra@redhat.com> With nested virtualization we don't need "--libvirt-type qemu". With that said, we noticed controller failures (one controller is down) on that setup. Thanks. Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "John Trowbridge" > To: "Udi Kalifon" > Cc: "rdo-list" > Sent: Thursday, January 28, 2016 10:15:42 AM > Subject: Re: [Rdo-list] First Mitaka test day summary > > > > On 01/28/2016 10:08 AM, Udi Kalifon wrote: > > The deploy command was: > > openstack overcloud deploy --templates --control-scale 3 > > --compute-scale 1 --ntp-server clock.redhat.com -e > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > > > > For virtual deployments, it is needed to pass also '--libvirt-type qemu'. > > It is probably worth trying a redeploy by deleting the overcloud and > adding that argument. > > > To try and investigate this subject further, I enabled nested > > virtualization on the host, and changed the libvirt type to qemu. 
> > Unfortunately that also required turning off all the VMs on teh host > > for the kvm module to be reloaded - and now I have problems with > > keystone that doesn't want to come back up. I mailed the list about > > this problem in a separate thread. > > > > Thanks. > > Udi. > > > > On Thu, Jan 28, 2016 at 2:01 PM, John Trowbridge wrote: > >> > >> > >> On 01/28/2016 04:35 AM, Udi Kalifon wrote: > >>> I can't launch instances in Mitaka on a virtual environment. I made > >>> sure that nested virtualization is enabled on the host. If I do nova > >>> show on the failed instance I see this: > >>> > >>> {"message": "No valid host was found. There are not enough hosts > >>> available.", "code": 500, "details": " File > >>> \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line > >>> 372, in build_instances > >>> context, request_spec, filter_properties) > >>> File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", > >>> line 416, in _schedule_instances > >>> hosts = self.scheduler_client.select_destinations(context, > >>> spec_obj) > >>> File \"/usr/lib/python2.7/site-packages/nova/scheduler/utils.py\", > >>> line 372, in wrapped > >>> return func(*args, **kwargs) > >>> File > >>> \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", > >>> line 51, in select_destinations > >>> return self.queryclient.select_destinations(context, spec_obj) > >>> File > >>> \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", > >>> line 37, in __run_method > >>> return getattr(self.instance, __name)(*args, **kwargs) > >>> File > >>> \"/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py\", > >>> line 32, in select_destinations > >>> return self.scheduler_rpcapi.select_destinations(context, spec_obj) > >>> File \"/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py\", > >>> line 121, in select_destinations > >>> return cctxt.call(ctxt, 'select_destinations', **msg_args) > >>> File > >>> 
\"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py\", > >>> line 158, in call > >>> retry=self.retry) > >>> File \"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py\", > >>> line 90, in _send > >>> timeout=timeout, retry=retry) > >>> File > >>> \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", > >>> line 466, in send > >>> retry=retry) > >>> File > >>> \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", > >>> line 457, in _send > >>> raise result > >>> ", "created": "2016-01-28T09:20:47Z"} > >>> > >>> > >>> On the controller, I see a lot of db-related errors in > >>> nova-scheduler.log: > >>> > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db [-] > >>> Unexpected error while reporting service status > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> Traceback (most recent call last): > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", > >>> line 88, in _report_state > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> service.service_ref.save() > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line > >>> 221, in wrapper > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> return fn(self, *args, **kwargs) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, > >>> in save > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> self._check_minimum_version() > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, > >>> in _check_minimum_version > >>> 2016-01-28 08:25:02.645 1121 ERROR 
nova.servicegroup.drivers.db > >>> minver = self.get_minimum_version(self._context, self.binary) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line > >>> 179, in wrapper > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> result = fn(cls, context, *args, **kwargs) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, > >>> in get_minimum_version > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> use_slave=use_slave) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in > >>> service_get_minimum_version > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> use_slave=use_slave) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line > >>> 468, in service_get_minimum_version > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> filter(models.Service.forced_down == false()).\ > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > >>> 2503, in scalar > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> ret = self.one() > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > >>> 2472, in one > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> ret = list(self) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > >>> 2515, in __iter__ > >>> 2016-01-28 08:25:02.645 1121 ERROR 
nova.servicegroup.drivers.db > >>> return self._execute_and_instances(context) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > >>> 2528, in _execute_and_instances > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> close_with_result=True) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > >>> 2519, in _connection_from_session > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db **kw) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > >>> 882, in connection > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> execution_options=execution_options) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > >>> 887, in _connection_for_bind > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> engine, execution_options) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > >>> 334, in _connection_for_bind > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> conn = bind.contextual_connect() > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 2036, in contextual_connect > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> **kwargs) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 92, in __init__ > >>> 2016-01-28 08:25:02.645 1121 ERROR 
nova.servicegroup.drivers.db > >>> self.dispatch.engine_connect(self, self.__branch) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/event/attr.py", line > >>> 258, in __call__ > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> fn(*args, **kw) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line > >>> 80, in _connect_ping_listener > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> connection.scalar(select([1])) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 844, in scalar > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> return self.execute(object, *multiparams, **params).scalar() > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 914, in execute > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> return meth(self, multiparams, params) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line > >>> 323, in _execute_on_connection > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> return connection._execute_clauseelement(self, multiparams, params) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 1010, in _execute_clauseelement > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> compiled_sql, distilled_params > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 1078, in _execute_context > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db None, > >>> None) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 1335, in _handle_dbapi_exception > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> util.raise_from_cause(newraise, exc_info) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line > >>> 199, in raise_from_cause > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> reraise(type(exception), exception, tb=exc_tb) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 1071, in _execute_context > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> conn = self._revalidate_connection() > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 393, in _revalidate_connection > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> self.__connection = self.engine.raw_connection(_connection=self) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 2099, in raw_connection > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> self.pool.unique_connection, _connection) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 2075, in _wrap_pool_connect > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> 
util.reraise(*sys.exc_info()) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 2069, in _wrap_pool_connect > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> return fn() > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 318, in > >>> unique_connection > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> return _ConnectionFairy._checkout(self) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 708, in > >>> _checkout > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> fairy = _ConnectionRecord.checkout(pool) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 485, in > >>> checkout > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> rec.checkin() > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", > >>> line 60, in __exit__ > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> compat.reraise(exc_type, exc_value, exc_tb) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 482, in > >>> checkout > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> dbapi_connection = rec.get_connection() > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 590, in > >>> get_connection > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> self.connection = self.__connect() > 
>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 602, in > >>> __connect > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> connection = self.__pool._invoke_creator(self) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", > >>> line 97, in connect > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> return dialect.connect(*cargs, **cparams) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", > >>> line 377, in connect > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> return self.dbapi.connect(*cargs, **cparams) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/pymysql/__init__.py", line 88, in > >>> Connect > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> return Connection(*args, **kwargs) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 657, > >>> in __init__ > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> self.connect() > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 850, > >>> in connect > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> self._get_server_information() > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1061, > >>> in _get_server_information > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> packet = self._read_packet() > >>> 2016-01-28 
08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 895, > >>> in _read_packet > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> packet_header = self._read_bytes(4) > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 922, > >>> in _read_bytes > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> 2013, "Lost connection to MySQL server during query") > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost > >>> connection to MySQL server during query') [SQL: u'SELECT 1'] > >>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db [-] > >>> Unexpected error while reporting service status > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> Traceback (most recent call last): > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", > >>> line 88, in _report_state > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> service.service_ref.save() > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line > >>> 221, in wrapper > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> return fn(self, *args, **kwargs) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, > >>> in save > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> self._check_minimum_version() > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > 
>>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, > >>> in _check_minimum_version > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> minver = self.get_minimum_version(self._context, self.binary) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line > >>> 179, in wrapper > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> result = fn(cls, context, *args, **kwargs) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, > >>> in get_minimum_version > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> use_slave=use_slave) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in > >>> service_get_minimum_version > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> use_slave=use_slave) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line > >>> 468, in service_get_minimum_version > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> filter(models.Service.forced_down == false()).\ > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > >>> 2503, in scalar > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> ret = self.one() > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > >>> 2472, in one > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> ret = list(self) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db 
File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > >>> 2515, in __iter__ > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> return self._execute_and_instances(context) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > >>> 2528, in _execute_and_instances > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> close_with_result=True) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > >>> 2519, in _connection_from_session > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db **kw) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > >>> 882, in connection > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> execution_options=execution_options) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > >>> 887, in _connection_for_bind > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> engine, execution_options) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > >>> 334, in _connection_for_bind > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> conn = bind.contextual_connect() > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 2034, in contextual_connect > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> self._wrap_pool_connect(self.pool.connect, None), > >>> 2016-01-28 08:25:08.191 1121 ERROR 
nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 2073, in _wrap_pool_connect > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db e, > >>> dialect, self) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 1399, in _handle_dbapi_exception_noconnection > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> util.raise_from_cause(newraise, exc_info) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line > >>> 199, in raise_from_cause > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> reraise(type(exception), exception, tb=exc_tb) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 2069, in _wrap_pool_connect > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> return fn() > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 376, in > >>> connect > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> return _ConnectionFairy._checkout(self) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 708, in > >>> _checkout > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> fairy = _ConnectionRecord.checkout(pool) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 485, in > >>> checkout > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> rec.checkin() > >>> 2016-01-28 08:25:08.191 1121 ERROR 
nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", > >>> line 60, in __exit__ > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> compat.reraise(exc_type, exc_value, exc_tb) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 482, in > >>> checkout > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> dbapi_connection = rec.get_connection() > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 563, in > >>> get_connection > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> self.connection = self.__connect() > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 602, in > >>> __connect > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> connection = self.__pool._invoke_creator(self) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", > >>> line 97, in connect > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> return dialect.connect(*cargs, **cparams) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", > >>> line 377, in connect > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> return self.dbapi.connect(*cargs, **cparams) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/pymysql/__init__.py", line 88, in > >>> Connect > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> return Connection(*args, **kwargs) > >>> 2016-01-28 
08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 657, > >>> in __init__ > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> self.connect() > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 850, > >>> in connect > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> self._get_server_information() > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1061, > >>> in _get_server_information > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> packet = self._read_packet() > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 895, > >>> in _read_packet > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> packet_header = self._read_bytes(4) > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 922, > >>> in _read_bytes > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> 2013, "Lost connection to MySQL server during query") > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost > >>> connection to MySQL server during query') > >>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db [-] > >>> Unexpected error while reporting service status > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> Traceback (most recent call last): > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> 
"/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", > >>> line 88, in _report_state > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> service.service_ref.save() > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line > >>> 221, in wrapper > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> return fn(self, *args, **kwargs) > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, > >>> in save > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> self._check_minimum_version() > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, > >>> in _check_minimum_version > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> minver = self.get_minimum_version(self._context, self.binary) > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line > >>> 179, in wrapper > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> result = fn(cls, context, *args, **kwargs) > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, > >>> in get_minimum_version > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> use_slave=use_slave) > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in > >>> service_get_minimum_version > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> use_slave=use_slave) > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File 
> >>> "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line > >>> 468, in service_get_minimum_version > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> filter(models.Service.forced_down == false()).\ > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > >>> 2503, in scalar > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> ret = self.one() > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > >>> 2472, in one > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> ret = list(self) > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > >>> 2515, in __iter__ > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> return self._execute_and_instances(context) > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > >>> 2528, in _execute_and_instances > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> close_with_result=True) > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line > >>> 2519, in _connection_from_session > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db **kw) > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > >>> 882, in connection > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> execution_options=execution_options) > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > >>> 887, in _connection_for_bind > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> engine, execution_options) > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line > >>> 334, in _connection_for_bind > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> conn = bind.contextual_connect() > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 2034, in contextual_connect > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> self._wrap_pool_connect(self.pool.connect, None), > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line > >>> 2069, in _wrap_pool_connect > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> return fn() > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 376, in > >>> connect > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> return _ConnectionFairy._checkout(self) > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 729, in > >>> _checkout > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> fairy) > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/event/attr.py", line > >>> 258, in __call__ > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> fn(*args, **kw) > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File > >>> 
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line > >>> 352, in checkout > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db if > >>> connection_record.info['pid'] != pid: > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db KeyError: > >>> 'pid' > >>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db > >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool [-] > >>> Exception during reset or similar > >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool Traceback > >>> (most recent call last): > >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 631, in > >>> _finalize_fairy > >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool > >>> fairy._reset(pool) > >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 765, in > >>> _reset > >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool > >>> pool._dialect.do_rollback(self) > >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/dialects/mysql/base.py", > >>> line 2519, in do_rollback > >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool > >>> dbapi_connection.rollback() > >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File > >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 723, > >>> in rollback > >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool > >>> self._execute_command(COMMAND.COM_QUERY, "ROLLBACK") > >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File > >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 956, > >>> in _execute_command > >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool raise > >>> err.InterfaceError("(0, '')") > >>> 2016-01-28 
08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool > >>> InterfaceError: (0, '') > >>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool > >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool [-] > >>> Exception closing connection >>> 0x510f550> > >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool Traceback > >>> (most recent call last): > >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 290, in > >>> _close_connection > >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool > >>> self._dialect.do_close(connection) > >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File > >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", > >>> line 418, in do_close > >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool > >>> dbapi_connection.close() > >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File > >>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 662, > >>> in close > >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool raise > >>> err.Error("Already closed") > >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool Error: > >>> Already closed > >>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool > >>> 2016-01-28 08:25:27.009 1121 INFO nova.servicegroup.drivers.db [-] > >>> Recovered from being unable to report status. > >>> 2016-01-28 08:27:21.165 31979 WARNING > >>> oslo_reports.guru_meditation_report [-] Guru mediation now registers > >>> SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 > >>> will no longer be registered in a future release, so please use > >>> SIGUSR2 to generate reports. > >>> 2016-01-28 08:27:21.544 31979 WARNING oslo_config.cfg > >>> [req-d3d0b357-c23b-4e8b-b322-32f5b225dc19 - - - - -] Option > >>> "use_local" from group "conductor" is deprecated for removal. 
Its > >>> value may be silently ignored in the future. > >>> 2016-01-28 08:27:21.606 31979 INFO oslo_service.periodic_task > >>> [req-d3d0b357-c23b-4e8b-b322-32f5b225dc19 - - - - -] Skipping periodic > >>> task _periodic_update_dns because its interval is negative > >>> 2016-01-28 08:27:21.635 31979 INFO nova.service [-] Starting scheduler > >>> node (version 13.0.0-dev80.el7.centos) > >>> > >>> Was anyone successful in launching instances in Mitaka? I installed > >>> with the director, it's an HA deployment without network isolation in > >>> a virtualized environment. > >>> > >> > >> CI is passing on that configuration, and I have had success with it > >> manually. Could you post your deploy command? > >> > >>> Thanks a lot, > >>> Udi. > >>> > >>> On Wed, Jan 27, 2016 at 11:17 PM, John Trowbridge > >>> wrote: > >>>> > >>>> > >>>> On 01/27/2016 01:25 PM, Udi Kalifon wrote: > >>>>> Hello. > >>>>> > >>>>> The good news is that I succeeded to deploy :). I haven't yet tried to > >>>>> test the overcloud for any sanity, but in past test days I was never > >>>>> able to report any success - so maybe it's a sign that things are > >>>>> stabilizing. > >>>> > >>>> That's awesome! > >>>> > >>>>> > >>>>> I deployed with rdo-manager on a virtual setup according to the > >>>>> instructions in https://www.rdoproject.org/rdo-manager/. I wasn't able > >>>>> to deploy with network isolation, because I assume that my templates > >>>>> from 7.x require changes, but I haven't seen any documentation on > >>>>> what's changed. If you can point me in the right direction to get > >>>>> network isolation working for this environment I will test it > >>>>> tomorrow. > >>>>> > >>>>> Some of the problems I hit today: > >>>>> > >>>>> 1) The link to the quickstart guide from the testday page > >>>>> https://www.rdoproject.org/testday/mitaka/milestone2/ points to a very > >>>>> old github page. 
The correct link should be the one I already > >>>>> mentioned: https://www.rdoproject.org/rdo-manager/ > >>>> > >>>> I fixed that link, thanks! > >>>> > >>>>> > >>>>> 2) The prerequisites for installing ansible are not documented. On a > >>>>> fresh CentOS 7 I had to install python-virtualenv, git, and gcc. I > >>>>> then ran "easy_install pip" and "pip install > >>>>> git+https://github.com/ansible/ansible.git at v2.0.0-0.6.rc1#egg=ansible" > >>>>> to be able to run the playbook which installs the virtual environment. > >>>> > >>>> The quickstart.sh will do all of that for you, but if you wanted to > >>>> submit a patch for more detailed instructions for manually setting up > >>>> the virtualenv, that would be great. For tripleo-quickstart, gerrit is > >>>> set up and follows the same gerrit workflow as everything else. > >>>> > >>>>> > >>>>> Thanks, > >>>>> Udi. > >>>>> > >>>>> _______________________________________________ > >>>>> Rdo-list mailing list > >>>>> Rdo-list at redhat.com > >>>>> https://www.redhat.com/mailman/listinfo/rdo-list > >>>>> > >>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >>>>> > >>>> > >>>> _______________________________________________ > >>>> Rdo-list mailing list > >>>> Rdo-list at redhat.com > >>>> https://www.redhat.com/mailman/listinfo/rdo-list > >>>> > >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From morazi at redhat.com Thu Jan 28 15:44:53 2016 From: morazi at redhat.com (Mike Orazi) Date: Thu, 28 Jan 2016 10:44:53 -0500 Subject: [Rdo-list] Mitaka: Overcloud installation failed code 6 In-Reply-To: <220029169.18366841.1453995655573.JavaMail.zimbra@redhat.com> References: <1903491730.13584026.1453995065797.JavaMail.zimbra@redhat.com> <220029169.18366841.1453995655573.JavaMail.zimbra@redhat.com> Message-ID:
<56AA3775.2040007@redhat.com> It looks like the most suspect part of the trace in the pastebin points to an issue with setting up ceph in particular. Might be worth double-checking ceph-specific parameters to see if they look correct. - Mike On 01/28/2016 10:40 AM, Sasha Chuzhoy wrote: > The error code suggests there was a puppet error. > You can use heat to debug it. It might also be useful to log in to the overcloud nodes and check logs. > Thanks. > > Best regards, > Sasha Chuzhoy. > > ----- Original Message ----- >> From: "Ido Ovadia" >> To: "rdo-list" >> Sent: Thursday, January 28, 2016 10:31:05 AM >> Subject: [Rdo-list] Mitaka: Overcloud installation failed code 6 >> >> Hello, >> >> I deployed Mitaka with rdo-manager on a virtual setup (undercloud, ceph, >> compute, 3*controller) according to the >> instructions in https://www.rdoproject.org/rdo-manager/ >> >> Overcloud deployment failed with code: 6 >> >> >> Need some guidance to solve this...... >> >> >> openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 >> --ceph-storage-scale 1 --ntp-server clock.redhat.com -e >> /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml >> --libvirt-type qemu >> >> .......
>> >> 2016-01-28 14:22:37 >> [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: >> CREATE_IN_PROGRESS Stack CREATE started >> 2016-01-28 14:22:37 [1]: CREATE_IN_PROGRESS state changed >> 2016-01-28 14:22:37 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:37 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:38 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:38 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:38 [0]: CREATE_IN_PROGRESS state changed >> 2016-01-28 14:22:38 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:39 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:39 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:39 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:39 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:39 [2]: CREATE_IN_PROGRESS state changed >> 2016-01-28 14:22:40 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:40 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:45 [2]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) >> 2016-01-28 14:23:46 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:48 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:48 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:49 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:49 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:58 [1]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) >> 2016-01-28 14:23:58 [1]: CREATE_FAILED Error: resources[1]: Deployment to >> server failed: deploy_status_code : Deployment exited with non-zero status >> code: 6 >> 2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 
14:24:01 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:02 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:02 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:15 [0]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) >> 2016-01-28 14:24:15 [0]: CREATE_FAILED Error: resources[0]: Deployment to >> server failed: deploy_status_code : Deployment exited with non-zero status >> code: 6 >> 2016-01-28 14:24:16 >> [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: >> CREATE_FAILED Resource CREATE failed: Error: resources[2]: Deployment to >> server failed: deploy_status_code : Deployment exited with non-zero status >> code: 6 >> 2016-01-28 14:24:16 [0]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:17 [ControllerServicesBaseDeployment_Step2]: CREATE_FAILED >> Error: resources.ControllerServicesBaseDeployment_Step2.resources[2]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 >> 2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:18 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:18 [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y]: >> CREATE_FAILED Resource CREATE failed: Error: >> resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment to >> server failed: deploy_status_code: Deployment exited with non-zero status >> code: 6 >> 2016-01-28 14:24:18 [0]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:19 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:19 [0]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:28:19 [ComputeNodesPostDeployment]: CREATE_FAILED CREATE >> aborted >> 2016-01-28 14:28:19 [overcloud]: CREATE_FAILED Resource CREATE failed: >> Error: >> resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[2]: >> Deployment to server failed: deploy_status_code: 
Deployment exited with >> non-zero status code: 6 >> Stack overcloud CREATE_FAILED >> Deployment failed: Heat Stack create failed. >> >> ------------------------------------------------------------------ >> >> more info >> ========= >> >> heat deployment-show 91352911-6df8-4797-be2b-2789b3b5e066 output >> http://pastebin.test.redhat.com/344518 >> >> ----------------------------------------------------------------- >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From trown at redhat.com Thu Jan 28 15:46:59 2016 From: trown at redhat.com (John Trowbridge) Date: Thu, 28 Jan 2016 10:46:59 -0500 Subject: [Rdo-list] First Mitaka test day summary In-Reply-To: <1966145064.18367948.1453995784676.JavaMail.zimbra@redhat.com> References: <56A93405.5080605@redhat.com> <56AA031F.7080004@redhat.com> <56AA309E.4060506@redhat.com> <1966145064.18367948.1453995784676.JavaMail.zimbra@redhat.com> Message-ID: <56AA37F3.7020303@redhat.com> On 01/28/2016 10:43 AM, Sasha Chuzhoy wrote: > With nested virtualization we don't need "--libvirt-type qemu". > With that said, we noticed controller failures (one controller is down) on that setup. > Thanks. > Hmm, is nested virtualization documented anywhere? It is not how we do CI, and in my experience it does not work well on RHEL/CentOS. I have heard that improvements have been made on the latest Fedoras, but I have never had much success with nested virt. I think for virtual setups in particular, it would be good to mirror exactly what is used in CI. This is the benefit of virtual after all. > Best regards, > Sasha Chuzhoy. 
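The "deployment failed (6)" signals in the deploy log quoted above follow a regular pattern, so the failing resources can be pulled out of a saved log mechanically. The following is an illustrative sketch, not part of the original thread; the regex is inferred from the heat event lines shown here and may need adjusting for other heat versions:

```python
import re

# Matches heat event lines such as:
#   2016-01-28 14:23:45 [2]: SIGNAL_IN_PROGRESS Signal: deployment failed (6)
# capturing the resource index and the non-zero deployment status code.
FAILED = re.compile(
    r"\[(?P<resource>[^\]]+)\]: SIGNAL_IN_PROGRESS Signal: "
    r"deployment failed \((?P<code>\d+)\)"
)

def failed_deployments(log_lines):
    """Return (resource, status_code) pairs for every failed-deployment signal."""
    hits = []
    for line in log_lines:
        match = FAILED.search(line)
        if match:
            hits.append((match.group("resource"), int(match.group("code"))))
    return hits

# Sample lines taken from the log quoted in this thread:
sample = [
    "2016-01-28 14:22:38 [1]: SIGNAL_COMPLETE Unknown",
    "2016-01-28 14:23:45 [2]: SIGNAL_IN_PROGRESS Signal: deployment failed (6)",
    "2016-01-28 14:23:58 [1]: SIGNAL_IN_PROGRESS Signal: deployment failed (6)",
]
print(failed_deployments(sample))  # [('2', 6), ('1', 6)]
```

Once the failing resource is identified, `heat deployment-show <id>` (as used elsewhere in this thread) gives the captured stdout/stderr for that deployment.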
> > ----- Original Message ----- >> From: "John Trowbridge" >> To: "Udi Kalifon" >> Cc: "rdo-list" >> Sent: Thursday, January 28, 2016 10:15:42 AM >> Subject: Re: [Rdo-list] First Mitaka test day summary >> >> >> >> On 01/28/2016 10:08 AM, Udi Kalifon wrote: >>> The deploy command was: >>> openstack overcloud deploy --templates --control-scale 3 >>> --compute-scale 1 --ntp-server clock.redhat.com -e >>> /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml >>> >> >> For virtual deployments, you also need to pass '--libvirt-type qemu'. >> >> It is probably worth trying a redeploy by deleting the overcloud and >> adding that argument. >> >>> To try and investigate this subject further, I enabled nested >>> virtualization on the host, and changed the libvirt type to qemu. >>> Unfortunately that also required turning off all the VMs on the host >>> for the kvm module to be reloaded - and now I have problems with >>> keystone that doesn't want to come back up. I mailed the list about >>> this problem in a separate thread. >>> >>> Thanks. >>> Udi. >>> >>> On Thu, Jan 28, 2016 at 2:01 PM, John Trowbridge wrote: >>>> >>>> >>>> On 01/28/2016 04:35 AM, Udi Kalifon wrote: >>>>> I can't launch instances in Mitaka on a virtual environment. I made >>>>> sure that nested virtualization is enabled on the host. If I do nova >>>>> show on the failed instance I see this: >>>>> >>>>> {"message": "No valid host was found.
There are not enough hosts >>>>> available.", "code": 500, "details": " File >>>>> \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line >>>>> 372, in build_instances >>>>> context, request_spec, filter_properties) >>>>> File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", >>>>> line 416, in _schedule_instances >>>>> hosts = self.scheduler_client.select_destinations(context, >>>>> spec_obj) >>>>> File \"/usr/lib/python2.7/site-packages/nova/scheduler/utils.py\", >>>>> line 372, in wrapped >>>>> return func(*args, **kwargs) >>>>> File >>>>> \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", >>>>> line 51, in select_destinations >>>>> return self.queryclient.select_destinations(context, spec_obj) >>>>> File >>>>> \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", >>>>> line 37, in __run_method >>>>> return getattr(self.instance, __name)(*args, **kwargs) >>>>> File >>>>> \"/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py\", >>>>> line 32, in select_destinations >>>>> return self.scheduler_rpcapi.select_destinations(context, spec_obj) >>>>> File \"/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py\", >>>>> line 121, in select_destinations >>>>> return cctxt.call(ctxt, 'select_destinations', **msg_args) >>>>> File >>>>> \"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py\", >>>>> line 158, in call >>>>> retry=self.retry) >>>>> File \"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py\", >>>>> line 90, in _send >>>>> timeout=timeout, retry=retry) >>>>> File >>>>> \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", >>>>> line 466, in send >>>>> retry=retry) >>>>> File >>>>> \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", >>>>> line 457, in _send >>>>> raise result >>>>> ", "created": "2016-01-28T09:20:47Z"} >>>>> >>>>> >>>>> On the controller, I see a lot of db-related errors in >>>>> nova-scheduler.log: 
>>>>> >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db [-] >>>>> Unexpected error while reporting service status >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> Traceback (most recent call last): >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", >>>>> line 88, in _report_state >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> service.service_ref.save() >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >>>>> 221, in wrapper >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> return fn(self, *args, **kwargs) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, >>>>> in save >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> self._check_minimum_version() >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, >>>>> in _check_minimum_version >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> minver = self.get_minimum_version(self._context, self.binary) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >>>>> 179, in wrapper >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> result = fn(cls, context, *args, **kwargs) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, >>>>> in get_minimum_version >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> use_slave=use_slave) >>>>> 2016-01-28 
08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in >>>>> service_get_minimum_version >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> use_slave=use_slave) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line >>>>> 468, in service_get_minimum_version >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> filter(models.Service.forced_down == false()).\ >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>>>> 2503, in scalar >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> ret = self.one() >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>>>> 2472, in one >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> ret = list(self) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>>>> 2515, in __iter__ >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> return self._execute_and_instances(context) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>>>> 2528, in _execute_and_instances >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> close_with_result=True) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>>>> 2519, in _connection_from_session >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db **kw) >>>>> 2016-01-28 08:25:02.645 1121 ERROR 
nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>>>> 882, in connection >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> execution_options=execution_options) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>>>> 887, in _connection_for_bind >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> engine, execution_options) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>>>> 334, in _connection_for_bind >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> conn = bind.contextual_connect() >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 2036, in contextual_connect >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> **kwargs) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 92, in __init__ >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> self.dispatch.engine_connect(self, self.__branch) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/event/attr.py", line >>>>> 258, in __call__ >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> fn(*args, **kw) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line >>>>> 80, in _connect_ping_listener >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> connection.scalar(select([1])) >>>>> 2016-01-28 08:25:02.645 1121 ERROR 
nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 844, in scalar >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> return self.execute(object, *multiparams, **params).scalar() >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 914, in execute >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> return meth(self, multiparams, params) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line >>>>> 323, in _execute_on_connection >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> return connection._execute_clauseelement(self, multiparams, params) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 1010, in _execute_clauseelement >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> compiled_sql, distilled_params >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 1078, in _execute_context >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db None, >>>>> None) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 1335, in _handle_dbapi_exception >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> util.raise_from_cause(newraise, exc_info) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line >>>>> 199, in raise_from_cause >>>>> 2016-01-28 08:25:02.645 1121 ERROR 
nova.servicegroup.drivers.db >>>>> reraise(type(exception), exception, tb=exc_tb) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 1071, in _execute_context >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> conn = self._revalidate_connection() >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 393, in _revalidate_connection >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> self.__connection = self.engine.raw_connection(_connection=self) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 2099, in raw_connection >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> self.pool.unique_connection, _connection) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 2075, in _wrap_pool_connect >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> util.reraise(*sys.exc_info()) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 2069, in _wrap_pool_connect >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> return fn() >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 318, in >>>>> unique_connection >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> return _ConnectionFairy._checkout(self) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 
708, in >>>>> _checkout >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> fairy = _ConnectionRecord.checkout(pool) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 485, in >>>>> checkout >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> rec.checkin() >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", >>>>> line 60, in __exit__ >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> compat.reraise(exc_type, exc_value, exc_tb) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 482, in >>>>> checkout >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> dbapi_connection = rec.get_connection() >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 590, in >>>>> get_connection >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> self.connection = self.__connect() >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 602, in >>>>> __connect >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> connection = self.__pool._invoke_creator(self) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", >>>>> line 97, in connect >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> return dialect.connect(*cargs, **cparams) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", >>>>> 
line 377, in connect >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> return self.dbapi.connect(*cargs, **cparams) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/pymysql/__init__.py", line 88, in >>>>> Connect >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> return Connection(*args, **kwargs) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 657, >>>>> in __init__ >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> self.connect() >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 850, >>>>> in connect >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> self._get_server_information() >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1061, >>>>> in _get_server_information >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> packet = self._read_packet() >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 895, >>>>> in _read_packet >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> packet_header = self._read_bytes(4) >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 922, >>>>> in _read_bytes >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> 2013, "Lost connection to MySQL server during query") >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost >>>>> connection to MySQL server 
during query') [SQL: u'SELECT 1'] >>>>> 2016-01-28 08:25:02.645 1121 ERROR nova.servicegroup.drivers.db >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db [-] >>>>> Unexpected error while reporting service status >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> Traceback (most recent call last): >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", >>>>> line 88, in _report_state >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> service.service_ref.save() >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >>>>> 221, in wrapper >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> return fn(self, *args, **kwargs) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, >>>>> in save >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> self._check_minimum_version() >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, >>>>> in _check_minimum_version >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> minver = self.get_minimum_version(self._context, self.binary) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >>>>> 179, in wrapper >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> result = fn(cls, context, *args, **kwargs) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, >>>>> in get_minimum_version >>>>> 2016-01-28 
08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> use_slave=use_slave) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in >>>>> service_get_minimum_version >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> use_slave=use_slave) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line >>>>> 468, in service_get_minimum_version >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> filter(models.Service.forced_down == false()).\ >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>>>> 2503, in scalar >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> ret = self.one() >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>>>> 2472, in one >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> ret = list(self) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>>>> 2515, in __iter__ >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> return self._execute_and_instances(context) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>>>> 2528, in _execute_and_instances >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> close_with_result=True) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>>>> 2519, in _connection_from_session >>>>> 2016-01-28 08:25:08.191 1121 ERROR 
nova.servicegroup.drivers.db **kw) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>>>> 882, in connection >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> execution_options=execution_options) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>>>> 887, in _connection_for_bind >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> engine, execution_options) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>>>> 334, in _connection_for_bind >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> conn = bind.contextual_connect() >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 2034, in contextual_connect >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> self._wrap_pool_connect(self.pool.connect, None), >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 2073, in _wrap_pool_connect >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db e, >>>>> dialect, self) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 1399, in _handle_dbapi_exception_noconnection >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> util.raise_from_cause(newraise, exc_info) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line >>>>> 199, in raise_from_cause >>>>> 
2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> reraise(type(exception), exception, tb=exc_tb) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 2069, in _wrap_pool_connect >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> return fn() >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 376, in >>>>> connect >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> return _ConnectionFairy._checkout(self) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 708, in >>>>> _checkout >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> fairy = _ConnectionRecord.checkout(pool) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 485, in >>>>> checkout >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> rec.checkin() >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", >>>>> line 60, in __exit__ >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> compat.reraise(exc_type, exc_value, exc_tb) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 482, in >>>>> checkout >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> dbapi_connection = rec.get_connection() >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 563, in >>>>> get_connection >>>>> 2016-01-28 08:25:08.191 1121 
ERROR nova.servicegroup.drivers.db >>>>> self.connection = self.__connect() >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 602, in >>>>> __connect >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> connection = self.__pool._invoke_creator(self) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", >>>>> line 97, in connect >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> return dialect.connect(*cargs, **cparams) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", >>>>> line 377, in connect >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> return self.dbapi.connect(*cargs, **cparams) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/pymysql/__init__.py", line 88, in >>>>> Connect >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> return Connection(*args, **kwargs) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 657, >>>>> in __init__ >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> self.connect() >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 850, >>>>> in connect >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> self._get_server_information() >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1061, >>>>> in _get_server_information >>>>> 2016-01-28 08:25:08.191 1121 ERROR 
nova.servicegroup.drivers.db >>>>> packet = self._read_packet() >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 895, >>>>> in _read_packet >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> packet_header = self._read_bytes(4) >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 922, >>>>> in _read_bytes >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> 2013, "Lost connection to MySQL server during query") >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost >>>>> connection to MySQL server during query') >>>>> 2016-01-28 08:25:08.191 1121 ERROR nova.servicegroup.drivers.db >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db [-] >>>>> Unexpected error while reporting service status >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> Traceback (most recent call last): >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", >>>>> line 88, in _report_state >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> service.service_ref.save() >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >>>>> 221, in wrapper >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> return fn(self, *args, **kwargs) >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 282, >>>>> in save >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> self._check_minimum_version() >>>>> 2016-01-28 
08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 258, >>>>> in _check_minimum_version >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> minver = self.get_minimum_version(self._context, self.binary) >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line >>>>> 179, in wrapper >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> result = fn(cls, context, *args, **kwargs) >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 330, >>>>> in get_minimum_version >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> use_slave=use_slave) >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/db/api.py", line 118, in >>>>> service_get_minimum_version >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> use_slave=use_slave) >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line >>>>> 468, in service_get_minimum_version >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> filter(models.Service.forced_down == false()).\ >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>>>> 2503, in scalar >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> ret = self.one() >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>>>> 2472, in one >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> ret = list(self) >>>>> 
2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>>>> 2515, in __iter__ >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> return self._execute_and_instances(context) >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>>>> 2528, in _execute_and_instances >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> close_with_result=True) >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line >>>>> 2519, in _connection_from_session >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db **kw) >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>>>> 882, in connection >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> execution_options=execution_options) >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>>>> 887, in _connection_for_bind >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> engine, execution_options) >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line >>>>> 334, in _connection_for_bind >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> conn = bind.contextual_connect() >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 2034, in contextual_connect >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> 
self._wrap_pool_connect(self.pool.connect, None), >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line >>>>> 2069, in _wrap_pool_connect >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> return fn() >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 376, in >>>>> connect >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> return _ConnectionFairy._checkout(self) >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 729, in >>>>> _checkout >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> fairy) >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/event/attr.py", line >>>>> 258, in __call__ >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> fn(*args, **kw) >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db File >>>>> "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line >>>>> 352, in checkout >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db if >>>>> connection_record.info['pid'] != pid: >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db KeyError: >>>>> 'pid' >>>>> 2016-01-28 08:25:16.966 1121 ERROR nova.servicegroup.drivers.db >>>>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool [-] >>>>> Exception during reset or similar >>>>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool Traceback >>>>> (most recent call last): >>>>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 631, in >>>>> _finalize_fairy >>>>> 2016-01-28 08:25:16.969 1121 
ERROR sqlalchemy.pool.QueuePool >>>>> fairy._reset(pool) >>>>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 765, in >>>>> _reset >>>>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >>>>> pool._dialect.do_rollback(self) >>>>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/dialects/mysql/base.py", >>>>> line 2519, in do_rollback >>>>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >>>>> dbapi_connection.rollback() >>>>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File >>>>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 723, >>>>> in rollback >>>>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >>>>> self._execute_command(COMMAND.COM_QUERY, "ROLLBACK") >>>>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool File >>>>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 956, >>>>> in _execute_command >>>>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool raise >>>>> err.InterfaceError("(0, '')") >>>>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >>>>> InterfaceError: (0, '') >>>>> 2016-01-28 08:25:16.969 1121 ERROR sqlalchemy.pool.QueuePool >>>>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool [-] >>>>> Exception closing connection >>>> 0x510f550> >>>>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool Traceback >>>>> (most recent call last): >>>>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 290, in >>>>> _close_connection >>>>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool >>>>> self._dialect.do_close(connection) >>>>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File >>>>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", 
>>>>> line 418, in do_close >>>>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool >>>>> dbapi_connection.close() >>>>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool File >>>>> "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 662, >>>>> in close >>>>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool raise >>>>> err.Error("Already closed") >>>>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool Error: >>>>> Already closed >>>>> 2016-01-28 08:25:16.971 1121 ERROR sqlalchemy.pool.QueuePool >>>>> 2016-01-28 08:25:27.009 1121 INFO nova.servicegroup.drivers.db [-] >>>>> Recovered from being unable to report status. >>>>> 2016-01-28 08:27:21.165 31979 WARNING >>>>> oslo_reports.guru_meditation_report [-] Guru mediation now registers >>>>> SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 >>>>> will no longer be registered in a future release, so please use >>>>> SIGUSR2 to generate reports. >>>>> 2016-01-28 08:27:21.544 31979 WARNING oslo_config.cfg >>>>> [req-d3d0b357-c23b-4e8b-b322-32f5b225dc19 - - - - -] Option >>>>> "use_local" from group "conductor" is deprecated for removal. Its >>>>> value may be silently ignored in the future. >>>>> 2016-01-28 08:27:21.606 31979 INFO oslo_service.periodic_task >>>>> [req-d3d0b357-c23b-4e8b-b322-32f5b225dc19 - - - - -] Skipping periodic >>>>> task _periodic_update_dns because its interval is negative >>>>> 2016-01-28 08:27:21.635 31979 INFO nova.service [-] Starting scheduler >>>>> node (version 13.0.0-dev80.el7.centos) >>>>> >>>>> Was anyone successful in launching instances in Mitaka? I installed >>>>> with the director, it's an HA deployment without network isolation in >>>>> a virtualized environment. >>>>> >>>> >>>> CI is passing on that configuration, and I have had success with it >>>> manually. Could you post your deploy command? >>>> >>>>> Thanks a lot, >>>>> Udi. 
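The repeated DBConnectionError in the log above comes from oslo_db's liveness probe: on checkout it issues `SELECT 1` (the `connection.scalar(select([1]))` frame) and discards connections the MySQL server has silently dropped, which is why nova later logs "Recovered from being unable to report status." Below is a minimal stdlib-only sketch of that ping-on-checkout pattern; the `Connection` and `Pool` classes are hypothetical stand-ins, not the real SQLAlchemy or oslo_db API.

```python
# Sketch of the "ping on checkout" pattern visible in the traceback above.
# Hypothetical classes for illustration only -- not SQLAlchemy/oslo_db code.

class DeadConnection(Exception):
    """Raised when the backend has silently dropped the connection."""


class Connection:
    def __init__(self):
        self.alive = True

    def ping(self):
        # Stands in for oslo_db's "SELECT 1" liveness probe.
        if not self.alive:
            raise DeadConnection()


class Pool:
    def __init__(self):
        self._idle = []

    def checkout(self):
        # Reuse an idle connection only if it still answers a ping;
        # dead ones are dropped instead of being handed to the caller.
        while self._idle:
            conn = self._idle.pop()
            try:
                conn.ping()
                return conn
            except DeadConnection:
                continue  # discard and try the next idle connection
        return Connection()  # pool empty: open a fresh connection

    def checkin(self, conn):
        self._idle.append(conn)
```

The point is that a dropped MySQL connection (e.g. a `wait_timeout` kill during a controller failover) surfaces as one failed checkout, after which the pool recovers on its own, matching the transient errors in the log.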
>>>>> >>>>> On Wed, Jan 27, 2016 at 11:17 PM, John Trowbridge >>>>> wrote: >>>>>> >>>>>> >>>>>> On 01/27/2016 01:25 PM, Udi Kalifon wrote: >>>>>>> Hello. >>>>>>> >>>>>>> The good news is that I succeeded to deploy :). I haven't yet tried to >>>>>>> test the overcloud for any sanity, but in past test days I was never >>>>>>> able to report any success - so maybe it's a sign that things are >>>>>>> stabilizing. >>>>>> >>>>>> That's awesome! >>>>>> >>>>>>> >>>>>>> I deployed with rdo-manager on a virtual setup according to the >>>>>>> instructions in https://www.rdoproject.org/rdo-manager/. I wasn't able >>>>>>> to deploy with network isolation, because I assume that my templates >>>>>>> from 7.x require changes, but I haven't seen any documentation on >>>>>>> what's changed. If you can point me in the right direction to get >>>>>>> network isolation working for this environment I will test it >>>>>>> tomorrow. >>>>>>> >>>>>>> Some of the problems I hit today: >>>>>>> >>>>>>> 1) The link to the quickstart guide from the testday page >>>>>>> https://www.rdoproject.org/testday/mitaka/milestone2/ points to a very >>>>>>> old github page. The correct link should be the one I already >>>>>>> mentioned: https://www.rdoproject.org/rdo-manager/ >>>>>> >>>>>> I fixed that link, thanks! >>>>>> >>>>>>> >>>>>>> 2) The prerequisites to installing ansible are not documented. On a >>>>>>> fresh CentOS 7 I had to install python-virtualenv, git, and gcc. I >>>>>>> then ran "easy_install pip" and "pip install >>>>>>> git+https://github.com/ansible/ansible.git at v2.0.0-0.6.rc1#egg=ansible" >>>>>>> to be able to run the playbook which installs the virtual environment. >>>>>> >>>>>> The quickstart.sh will do all of that for you, but if you wanted to >>>>>> submit a patch for more detailed instructions for manually setting up >>>>>> the virtualenv, that would be great. For tripleo-quickstart, gerrit is >>>>>> set up and follows the same gerrit workflow as everything else.
>>>>>> >>>>>>> >>>>>>> Thanks, >>>>>>> Udi. >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Rdo-list mailing list >>>>>>> Rdo-list at redhat.com >>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>>>> >>>>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>>>>> >> From trown at redhat.com Thu Jan 28 15:54:56 2016 From: trown at redhat.com (John Trowbridge) Date: Thu, 28 Jan 2016 10:54:56 -0500 Subject: [Rdo-list] Mitaka: Overcloud installation failed code 6 In-Reply-To: <1903491730.13584026.1453995065797.JavaMail.zimbra@redhat.com> References: <1903491730.13584026.1453995065797.JavaMail.zimbra@redhat.com> Message-ID: <56AA39D0.2030203@redhat.com> On 01/28/2016 10:31 AM, Ido Ovadia wrote: > Hello, > > I deployed Mitaka with rdo-manager on a virtual setup (undercloud, ceph, compute, 3*controller) according to the > instructions in https://www.rdoproject.org/rdo-manager/ > > Overcloud deployment failed with code: 6 > > > I need some guidance to solve this. > > > openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 --ceph-storage-scale 1 --ntp-server clock.redhat.com -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml --libvirt-type qemu > > .......
> > 2016-01-28 14:22:37 [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: CREATE_IN_PROGRESS Stack CREATE started > 2016-01-28 14:22:37 [1]: CREATE_IN_PROGRESS state changed > 2016-01-28 14:22:37 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:37 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:38 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:38 [ControllerDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:38 [0]: CREATE_IN_PROGRESS state changed > 2016-01-28 14:22:38 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:39 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:39 [ControllerDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:39 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:39 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:39 [2]: CREATE_IN_PROGRESS state changed > 2016-01-28 14:22:40 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:22:40 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:45 [2]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) > 2016-01-28 14:23:46 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:48 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:48 [ControllerDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:49 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:49 [2]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:58 [1]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) > 2016-01-28 14:23:58 [1]: CREATE_FAILED Error: resources[1]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6 > 2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:01 [ControllerDeployment]: 
SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:02 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:02 [1]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:15 [0]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) > 2016-01-28 14:24:15 [0]: CREATE_FAILED Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6 > 2016-01-28 14:24:16 [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: CREATE_FAILED Resource CREATE failed: Error: resources[2]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6 > 2016-01-28 14:24:16 [0]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:17 [ControllerServicesBaseDeployment_Step2]: CREATE_FAILED Error: resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 > 2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:18 [ControllerDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:18 [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 > 2016-01-28 14:24:18 [0]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:19 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:24:19 [0]: SIGNAL_COMPLETE Unknown > 2016-01-28 14:28:19 [ComputeNodesPostDeployment]: CREATE_FAILED CREATE aborted > 2016-01-28 14:28:19 [overcloud]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 > Stack overcloud CREATE_FAILED > Deployment failed: Heat 
Stack create failed. > > ------------------------------------------------------------------ > > more info > ========= > > heat deployment-show 91352911-6df8-4797-be2b-2789b3b5e066 output http://pastebin.test.redhat.com/344518 > > ----------------------------------------------------------------- First, thanks for trying out RDO Manager! I am working on a patch to the image building right now to include ceph. The quickstart images currently do not include ceph because I wanted to explicitly remove EPEL, but ceph is not yet in one of the CentOS repos. I think the plan is to put it in the storage SIG, but in the interim, I am patching the image build process to include EPEL only for the ceph packages.[1] Sorry this did not make it for the test day. Would you mind retrying the deploy without ceph? [1] https://review.gerrithub.io/261510 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From trown at redhat.com Thu Jan 28 16:18:49 2016 From: trown at redhat.com (John Trowbridge) Date: Thu, 28 Jan 2016 11:18:49 -0500 Subject: [Rdo-list] Mitaka: Overcloud installation failed code 6 In-Reply-To: <56AA39D0.2030203@redhat.com> References: <1903491730.13584026.1453995065797.JavaMail.zimbra@redhat.com> <56AA39D0.2030203@redhat.com> Message-ID: <56AA3F69.4030501@redhat.com> On 01/28/2016 10:54 AM, John Trowbridge wrote: > > > On 01/28/2016 10:31 AM, Ido Ovadia wrote: >> Hello, >> >> I deployed Mitaka with rdo-manager on a virtual setup (udercloud, ceph, compute, 3*controller) according to the >> instructions in https://www.rdoproject.org/rdo-manager/ >> >> Overcloud deployment failed with code: 6 >> >> >> Need some guide to solve this...... 
>> >> >> openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 --ceph-storage-scale 1 --ntp-server clock.redhat.com -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml --libvirt-type qemu >> >> ....... >> >> 2016-01-28 14:22:37 [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: CREATE_IN_PROGRESS Stack CREATE started >> 2016-01-28 14:22:37 [1]: CREATE_IN_PROGRESS state changed >> 2016-01-28 14:22:37 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:37 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:38 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:38 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:38 [0]: CREATE_IN_PROGRESS state changed >> 2016-01-28 14:22:38 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:39 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:39 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:39 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:39 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:39 [2]: CREATE_IN_PROGRESS state changed >> 2016-01-28 14:22:40 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:22:40 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:45 [2]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) >> 2016-01-28 14:23:46 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:48 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:48 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:49 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:49 [2]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:58 [1]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) >> 2016-01-28 14:23:58 [1]: CREATE_FAILED Error: 
resources[1]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6 >> 2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:01 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:02 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:02 [1]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:15 [0]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) >> 2016-01-28 14:24:15 [0]: CREATE_FAILED Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6 >> 2016-01-28 14:24:16 [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: CREATE_FAILED Resource CREATE failed: Error: resources[2]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6 >> 2016-01-28 14:24:16 [0]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:17 [ControllerServicesBaseDeployment_Step2]: CREATE_FAILED Error: resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 >> 2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:18 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:18 [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 >> 2016-01-28 14:24:18 [0]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:19 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:24:19 [0]: SIGNAL_COMPLETE Unknown >> 2016-01-28 14:28:19 
[ComputeNodesPostDeployment]: CREATE_FAILED CREATE aborted >> 2016-01-28 14:28:19 [overcloud]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 >> Stack overcloud CREATE_FAILED >> Deployment failed: Heat Stack create failed. >> >> ------------------------------------------------------------------ >> >> more info >> ========= >> >> heat deployment-show 91352911-6df8-4797-be2b-2789b3b5e066 output http://pastebin.test.redhat.com/344518 >> >> ----------------------------------------------------------------- > > First, thanks for trying out RDO Manager! > > I am working on a patch to the image building right now to include ceph. > The quickstart images currently do not include ceph because I wanted to > explicitly remove EPEL, but ceph is not yet in one of the CentOS repos. > I think the plan is to put it in the storage SIG, but in the interim, I > am patching the image build process to include EPEL only for the ceph > packages.[1] Sorry this did not make it for the test day. Would you mind > retrying the deploy without ceph? 
> > [1] https://review.gerrithub.io/261510 Alternatively, and this would help me verify the patch above could you do the following from the undercloud (as the stack user) to update the overcloud-full image and try again: http://ur1.ca/ogh11 >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From marius at remote-lab.net Thu Jan 28 16:28:03 2016 From: marius at remote-lab.net (Marius Cornea) Date: Thu, 28 Jan 2016 17:28:03 +0100 Subject: [Rdo-list] Troubleshooting services after reboot of the overcloud In-Reply-To: References: Message-ID: Hi Udi, I'd log in to the controller nodes and check the pacemaker resources with pcs status and /var/log/messages. If you were running 3 controllers my guess is that the galera cluster fails to start. On Thu, Jan 28, 2016 at 4:00 PM, Udi Kalifon wrote: > Hello. > > I rebooted all my overcloud nodes. This is a Mitaka installation with > rdo-manager on a virtual environment. The keystone service is not > answering any more, and I have no clue what to do about it now that > it's running under Apache. The httpd service itself is running. > > How do I troubleshoot this? > > Thanks, > Udi. 
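[Editor's sketch] Marius's checklist above can be run in one pass on each controller. This is a rough sketch, not the documented procedure: the guard around `pcs` is added so the script also runs (with a notice) on hosts without pacemaker, and the galera resource name and `/var/log/messages` path are typical TripleO/RHEL 7 defaults that may differ on your deployment:

```shell
# Summarize pacemaker/galera health on a controller node; guarded so it
# also runs (with a notice) on hosts where pacemaker is not installed.
if command -v pcs >/dev/null 2>&1; then
    status_line="pacemaker: $(pcs status 2>/dev/null | grep -ciE 'stopped|failed') resource lines flagged stopped/failed"
else
    status_line="pcs not installed - not a pacemaker-managed controller?"
fi
echo "$status_line"
# Recent galera/wsrep log lines often show why the cluster did not re-form.
grep -iE 'wsrep|galera' /var/log/messages 2>/dev/null | tail -n 20
```

If the galera resource turns out to be stopped on all three controllers after a simultaneous reboot, the cluster usually has to be bootstrapped again from the node with the most recent state before keystone (and everything else behind haproxy) comes back.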
> > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From iovadia at redhat.com Thu Jan 28 17:24:02 2016 From: iovadia at redhat.com (Ido Ovadia) Date: Thu, 28 Jan 2016 12:24:02 -0500 (EST) Subject: [Rdo-list] Mitaka: Overcloud installation failed code 6 In-Reply-To: <56AA3F69.4030501@redhat.com> References: <1903491730.13584026.1453995065797.JavaMail.zimbra@redhat.com> <56AA39D0.2030203@redhat.com> <56AA3F69.4030501@redhat.com> Message-ID: <434912660.13633481.1454001842094.JavaMail.zimbra@redhat.com> Thank you John, I have tried the patch for the ceph [1] Overcloud successfully deployed. I haven't yet tested the overcloud [1] https://review.gerrithub.io/261510 Thanks, Ido ----- Original Message ----- > > > On 01/28/2016 10:54 AM, John Trowbridge wrote: > > > > > > On 01/28/2016 10:31 AM, Ido Ovadia wrote: > >> Hello, > >> > >> I deployed Mitaka with rdo-manager on a virtual setup (udercloud, ceph, > >> compute, 3*controller) according to the > >> instructions in https://www.rdoproject.org/rdo-manager/ > >> > >> Overcloud deployment failed with code: 6 > >> > >> > >> Need some guide to solve this...... > >> > >> > >> openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 > >> --ceph-storage-scale 1 --ntp-server clock.redhat.com -e > >> /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > >> -e > >> /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml > >> --libvirt-type qemu > >> > >> ....... 
> >> > >> 2016-01-28 14:22:37 > >> [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: > >> CREATE_IN_PROGRESS Stack CREATE started > >> 2016-01-28 14:22:37 [1]: CREATE_IN_PROGRESS state changed > >> 2016-01-28 14:22:37 [1]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:22:37 [2]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:22:38 [2]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:22:38 [ControllerDeployment]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:22:38 [0]: CREATE_IN_PROGRESS state changed > >> 2016-01-28 14:22:38 [1]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:22:39 [1]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:22:39 [ControllerDeployment]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:22:39 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:22:39 [2]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:22:39 [2]: CREATE_IN_PROGRESS state changed > >> 2016-01-28 14:22:40 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:22:40 [1]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:23:45 [2]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) > >> 2016-01-28 14:23:46 [2]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:23:48 [2]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:23:48 [ControllerDeployment]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:23:49 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:23:49 [2]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:23:58 [1]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) > >> 2016-01-28 14:23:58 [1]: CREATE_FAILED Error: resources[1]: Deployment to > >> server failed: deploy_status_code : Deployment exited with non-zero > >> status code: 6 > >> 2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown 
> >> 2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:24:01 [ControllerDeployment]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:24:02 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:24:02 [1]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:24:15 [0]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) > >> 2016-01-28 14:24:15 [0]: CREATE_FAILED Error: resources[0]: Deployment to > >> server failed: deploy_status_code : Deployment exited with non-zero > >> status code: 6 > >> 2016-01-28 14:24:16 > >> [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: > >> CREATE_FAILED Resource CREATE failed: Error: resources[2]: Deployment to > >> server failed: deploy_status_code : Deployment exited with non-zero > >> status code: 6 > >> 2016-01-28 14:24:16 [0]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:24:17 [ControllerServicesBaseDeployment_Step2]: > >> CREATE_FAILED Error: > >> resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment > >> to server failed: deploy_status_code: Deployment exited with non-zero > >> status code: 6 > >> 2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:24:18 [ControllerDeployment]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:24:18 > >> [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y]: CREATE_FAILED > >> Resource CREATE failed: Error: > >> resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment > >> to server failed: deploy_status_code: Deployment exited with non-zero > >> status code: 6 > >> 2016-01-28 14:24:18 [0]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:24:19 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:24:19 [0]: SIGNAL_COMPLETE Unknown > >> 2016-01-28 14:28:19 [ComputeNodesPostDeployment]: CREATE_FAILED CREATE > >> aborted > >> 2016-01-28 14:28:19 [overcloud]: CREATE_FAILED Resource CREATE failed: > >> Error: > >> 
resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[2]: > >> Deployment to server failed: deploy_status_code: Deployment exited with > >> non-zero status code: 6 > >> Stack overcloud CREATE_FAILED > >> Deployment failed: Heat Stack create failed. > >> > >> ------------------------------------------------------------------ > >> > >> more info > >> ========= > >> > >> heat deployment-show 91352911-6df8-4797-be2b-2789b3b5e066 output > >> http://pastebin.test.redhat.com/344518 > >> > >> ----------------------------------------------------------------- > > > > First, thanks for trying out RDO Manager! > > > > I am working on a patch to the image building right now to include ceph. > > The quickstart images currently do not include ceph because I wanted to > > explicitly remove EPEL, but ceph is not yet in one of the CentOS repos. > > I think the plan is to put it in the storage SIG, but in the interim, I > > am patching the image build process to include EPEL only for the ceph > > packages.[1] Sorry this did not make it for the test day. Would you mind > > retrying the deploy without ceph? 
> > > > [1] https://review.gerrithub.io/261510 > > Alternatively, and this would help me verify the patch above could you > do the following from the undercloud (as the stack user) to update the > overcloud-full image and try again: > > http://ur1.ca/ogh11 > > > >> > >> _______________________________________________ > >> Rdo-list mailing list > >> Rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > From trown at redhat.com Thu Jan 28 17:33:25 2016 From: trown at redhat.com (John Trowbridge) Date: Thu, 28 Jan 2016 12:33:25 -0500 Subject: [Rdo-list] Mitaka: Overcloud installation failed code 6 In-Reply-To: <434912660.13633481.1454001842094.JavaMail.zimbra@redhat.com> References: <1903491730.13584026.1453995065797.JavaMail.zimbra@redhat.com> <56AA39D0.2030203@redhat.com> <56AA3F69.4030501@redhat.com> <434912660.13633481.1454001842094.JavaMail.zimbra@redhat.com> Message-ID: <56AA50E5.1090009@redhat.com> On 01/28/2016 12:24 PM, Ido Ovadia wrote: > Thank you John, > > I have tried the patch for the ceph [1] > > Overcloud successfully deployed. > > I haven't yet tested the overcloud > > [1] https://review.gerrithub.io/261510 > > Thanks, > Ido > Awesome! Thanks for helping me test it out. > ----- Original Message ----- >> >> >> On 01/28/2016 10:54 AM, John Trowbridge wrote: >>> >>> >>> On 01/28/2016 10:31 AM, Ido Ovadia wrote: >>>> Hello, >>>> >>>> I deployed Mitaka with rdo-manager on a virtual setup (udercloud, ceph, >>>> compute, 3*controller) according to the >>>> instructions in https://www.rdoproject.org/rdo-manager/ >>>> >>>> Overcloud deployment failed with code: 6 >>>> >>>> >>>> Need some guide to solve this...... 
>>>> >>>> >>>> openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 >>>> --ceph-storage-scale 1 --ntp-server clock.redhat.com -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml >>>> --libvirt-type qemu >>>> >>>> ....... >>>> >>>> 2016-01-28 14:22:37 >>>> [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: >>>> CREATE_IN_PROGRESS Stack CREATE started >>>> 2016-01-28 14:22:37 [1]: CREATE_IN_PROGRESS state changed >>>> 2016-01-28 14:22:37 [1]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:22:37 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:22:38 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:22:38 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:22:38 [0]: CREATE_IN_PROGRESS state changed >>>> 2016-01-28 14:22:38 [1]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:22:39 [1]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:22:39 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:22:39 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:22:39 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:22:39 [2]: CREATE_IN_PROGRESS state changed >>>> 2016-01-28 14:22:40 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:22:40 [1]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:23:45 [2]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) >>>> 2016-01-28 14:23:46 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:23:48 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:23:48 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:23:49 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:23:49 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:23:58 [1]: 
SIGNAL_IN_PROGRESS Signal: deployment failed (6) >>>> 2016-01-28 14:23:58 [1]: CREATE_FAILED Error: resources[1]: Deployment to >>>> server failed: deploy_status_code : Deployment exited with non-zero >>>> status code: 6 >>>> 2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:24:01 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:24:02 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:24:02 [1]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:24:15 [0]: SIGNAL_IN_PROGRESS Signal: deployment failed (6) >>>> 2016-01-28 14:24:15 [0]: CREATE_FAILED Error: resources[0]: Deployment to >>>> server failed: deploy_status_code : Deployment exited with non-zero >>>> status code: 6 >>>> 2016-01-28 14:24:16 >>>> [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: >>>> CREATE_FAILED Resource CREATE failed: Error: resources[2]: Deployment to >>>> server failed: deploy_status_code : Deployment exited with non-zero >>>> status code: 6 >>>> 2016-01-28 14:24:16 [0]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:24:17 [ControllerServicesBaseDeployment_Step2]: >>>> CREATE_FAILED Error: >>>> resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment >>>> to server failed: deploy_status_code: Deployment exited with non-zero >>>> status code: 6 >>>> 2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:24:18 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:24:18 >>>> [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y]: CREATE_FAILED >>>> Resource CREATE failed: Error: >>>> resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment >>>> to server failed: deploy_status_code: Deployment exited with non-zero >>>> status 
code: 6 >>>> 2016-01-28 14:24:18 [0]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:24:19 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:24:19 [0]: SIGNAL_COMPLETE Unknown >>>> 2016-01-28 14:28:19 [ComputeNodesPostDeployment]: CREATE_FAILED CREATE >>>> aborted >>>> 2016-01-28 14:28:19 [overcloud]: CREATE_FAILED Resource CREATE failed: >>>> Error: >>>> resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[2]: >>>> Deployment to server failed: deploy_status_code: Deployment exited with >>>> non-zero status code: 6 >>>> Stack overcloud CREATE_FAILED >>>> Deployment failed: Heat Stack create failed. >>>> >>>> ------------------------------------------------------------------ >>>> >>>> more info >>>> ========= >>>> >>>> heat deployment-show 91352911-6df8-4797-be2b-2789b3b5e066 output >>>> http://pastebin.test.redhat.com/344518 >>>> >>>> ----------------------------------------------------------------- >>> >>> First, thanks for trying out RDO Manager! >>> >>> I am working on a patch to the image building right now to include ceph. >>> The quickstart images currently do not include ceph because I wanted to >>> explicitly remove EPEL, but ceph is not yet in one of the CentOS repos. >>> I think the plan is to put it in the storage SIG, but in the interim, I >>> am patching the image build process to include EPEL only for the ceph >>> packages.[1] Sorry this did not make it for the test day. Would you mind >>> retrying the deploy without ceph? 
>>> >>> [1] https://review.gerrithub.io/261510 >> >> Alternatively, and this would help me verify the patch above could you >> do the following from the undercloud (as the stack user) to update the >> overcloud-full image and try again: >> >> http://ur1.ca/ogh11 >> >> >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> From chrclark at cisco.com Thu Jan 28 20:13:48 2016 From: chrclark at cisco.com (Chris Clark (chrclark)) Date: Thu, 28 Jan 2016 20:13:48 +0000 Subject: [Rdo-list] Slow network performance on Kilo? In-Reply-To: <1389580690.15405313.1453989035284.JavaMail.zimbra@redhat.com> References: <56A928FB.6000206@soe.ucsc.edu> <1389580690.15405313.1453989035284.JavaMail.zimbra@redhat.com> Message-ID: <79049f1858fa4933a596a892c8e43e64@XCH-RCD-011.cisco.com> You might also test by lowering the VM MTU to 1400 to avoid possible fragmentation issues somewhere? Just as a test to see what happens. -----Original Message----- From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Attila Fazekas Sent: Thursday, January 28, 2016 8:51 AM To: Boris Derzhavets Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] Slow network performance on Kilo? My first wild guess, it is something not OK around VLAN splinters. It can be both config or driver related issue. Just for the record, can you share these info - kernel version - ovs version (build) - nic (lspci -nn | grep Eth) Do you use some kind of bonding ? 
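[Editor's sketch] The details Attila asks for can be collected in one pass on the network node and one compute node. A minimal sketch, assuming a RHEL 7-style host (the fallback messages are illustrative, and grepping `lspci -nn` case-insensitively for `ethernet` catches the same "Ethernet controller" lines as his `grep Eth`):

```shell
# Collect kernel, Open vSwitch, NIC, and bonding details for the report.
echo "kernel: $(uname -r)"
ovs_ver=$(ovs-vsctl --version 2>/dev/null | head -n 1)
echo "ovs:    ${ovs_ver:-openvswitch not installed}"
echo "nics:"
lspci -nn 2>/dev/null | grep -i 'ethernet' || echo "  (lspci unavailable)"
echo "bonding:"
ls /proc/net/bonding 2>/dev/null || echo "  no kernel bonding interfaces"
```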
----- Original Message ----- > From: "Boris Derzhavets" > To: "Erich Weiler" , rdo-list at redhat.com > Sent: Wednesday, January 27, 2016 10:22:04 PM > Subject: Re: [Rdo-list] Slow network performance on Kilo? > > > > ________________________________________ > From: rdo-list-bounces at redhat.com on > behalf of Erich Weiler > Sent: Wednesday, January 27, 2016 3:30 PM > To: rdo-list at redhat.com > Subject: [Rdo-list] Slow network performance on Kilo? > > Hi Y'all, > > I've seen several folks on the net with this problem, but I'm still > flailing a bit as to what is really going on. > > We are running RHEL 7 with RDO OpenStack Kilo. > > We are setting this environment up still, not quite done yet. But in > our testing, we are experiencing very slow network performance when > downloading or uploading to and from VMs. We get like 300Kb/s or so. > > We are using Neutron, MTU 9000 everywhere. I've tried disabling GSO, > LRO, TSO, GRO on the neutron interfaces, as well as the VM server > interfaces, still no improvement. I've tried lowing the VM MTU to 1500, > still no improvement. It's really strange. We do get connectivity, I > can ssh to the instances, but the network performance is just really, > really slow. It appears the instances can talk to each other very > quickly however. They just get slow network to the internet (i.e. > when packets go through the network node). > > We are using VLAN tenant network isolation. > > 1. Switch to VXLAN tunneling > 2. Activate DVR . It's already stable on RDO Kilo. > It will result routing of North-South && East-West traffic avoiding > Network Node. > > Boris. > > Can anyone point me in the right direction? I've been beating my head > against a wall and googling without avail for a week... 
> > Many thanks, > erich > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com From brandon.james at sunayu.com Thu Jan 28 20:23:40 2016 From: brandon.james at sunayu.com (Brandon James) Date: Thu, 28 Jan 2016 15:23:40 -0500 Subject: [Rdo-list] Tempest Config Problem Mikata Message-ID: Hello, I have been able to successfully install my over and under cloud via the trippleo quick start method. I am having issues however when running the config tempest portion of the overcloud validation. I would like to complete this and run the required test after. I have listed the error I am seeing below. I made sure I ran the command source ~/overcloudrc prior to running this command so I am unsure what is causing this issue. I am also using the latest Mikata version. 
tools/config_tempest.py --out etc/tempest.conf --network-id $public_net_id --deployer-input ~/tempest-deployer-input.conf --debug --create identity.uri $OS_AUTH_URL identity.admin_password $OS_PASSWORD network.tenant_network_cidr 192.168.0.0/24 object-storage.operator_role swiftoperator orchestration.stack_owner_role heat_stack_owner
2016-01-28 20:14:55.286 1479 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2016-01-28 20:14:55.365 1479 INFO __main__ [-] Reading defaults from file '/home/stack/tempest/etc/default-overrides.conf'
2016-01-28 20:14:55.367 1479 INFO __main__ [-] Adding options from deployer-input file '/home/stack/tempest-deployer-input.conf'
2016-01-28 20:14:55.367 1479 DEBUG __main__ [-] Setting [compute-feature-enabled] console_output = false set tools/config_tempest.py:403
2016-01-28 20:14:55.367 1479 DEBUG __main__ [-] Setting [object-storage] operator_role = swiftoperator set tools/config_tempest.py:403
2016-01-28 20:14:55.368 1479 DEBUG __main__ [-] Setting [orchestration] stack_owner_role = heat_stack_user set tools/config_tempest.py:403
2016-01-28 20:14:55.368 1479 DEBUG __main__ [-] Setting [volume] backend1_name = tripleo_iscsi set tools/config_tempest.py:403
2016-01-28 20:14:55.368 1479 DEBUG __main__ [-] Setting [volume-feature-enabled] bootable = true set tools/config_tempest.py:403
2016-01-28 20:14:55.368 1479 DEBUG __main__ [-] Setting [identity] uri = http://192.0.2.6:5000/v2.0 set tools/config_tempest.py:403
2016-01-28 20:14:55.368 1479 DEBUG __main__ [-] Setting [identity] admin_password = UVbm3YJsqjWRGUsFzhjcrf498 set tools/config_tempest.py:403
2016-01-28 20:14:55.368 1479 DEBUG __main__ [-] Setting [network] tenant_network_cidr = 192.168.0.0/24 set tools/config_tempest.py:403
2016-01-28 20:14:55.369 1479 DEBUG __main__ [-] Setting [object-storage] operator_role = swiftoperator set tools/config_tempest.py:403
2016-01-28 20:14:55.369 1479 DEBUG __main__ [-] Setting [orchestration] stack_owner_role = heat_stack_owner set tools/config_tempest.py:403
2016-01-28 20:14:55.369 1479 DEBUG __main__ [-] Setting [identity] uri_v3 = http://192.0.2.6:5000/v3 set tools/config_tempest.py:403
2016-01-28 20:14:55.490 1479 INFO tempest_lib.common.rest_client [req-70031732-c6fa-4968-b163-a154bfee6881 ] Request (main): 200 POST http://192.0.2.6:5000/v2.0/tokens
2016-01-28 20:14:55.516 1479 INFO tempest_lib.common.rest_client [req-99efe4e1-e698-469d-9119-8dd25dc2f076 ] Request (main): 200 GET http://192.0.2.6:35357/v2.0/tenants 0.025s
2016-01-28 20:14:55.516 1479 DEBUG __main__ [-] Setting [identity] admin_tenant_id = 9eab7137a4cd4857b8419e608cf75639 set tools/config_tempest.py:403
2016-01-28 20:14:55.524 1479 CRITICAL tempest [-] ServiceError: Request on service 'compute' with url 'http://192.0.2.6:8774/v2/9eab7137a4cd4857b8419e608cf75639/extensions' failed with code 503
2016-01-28 20:14:55.524 1479 ERROR tempest Traceback (most recent call last):
2016-01-28 20:14:55.524 1479 ERROR tempest   File "tools/config_tempest.py", line 772, in <module>
2016-01-28 20:14:55.524 1479 ERROR tempest     main()
2016-01-28 20:14:55.524 1479 ERROR tempest   File "tools/config_tempest.py", line 149, in main
2016-01-28 20:14:55.524 1479 ERROR tempest     object_store_discovery=conf.get_bool_value(swift_discover))
2016-01-28 20:14:55.524 1479 ERROR tempest   File "/home/stack/tempest/tempest/common/api_discovery.py", line 157, in discover
2016-01-28 20:14:55.524 1479 ERROR tempest     services[name]['extensions'] = service.get_extensions()
2016-01-28 20:14:55.524 1479 ERROR tempest   File "/home/stack/tempest/tempest/common/api_discovery.py", line 75, in get_extensions
2016-01-28 20:14:55.524 1479 ERROR tempest     body = self.do_get(self.service_url + '/extensions')
2016-01-28 20:14:55.524 1479 ERROR tempest   File "/home/stack/tempest/tempest/common/api_discovery.py", line 53, in do_get
2016-01-28 20:14:55.524 1479 ERROR tempest     " with code %d" % (self.name, url, r.status))
2016-01-28 20:14:55.524 1479 ERROR tempest ServiceError: Request on service 'compute' with url 'http://192.0.2.6:8774/v2/9eab7137a4cd4857b8419e608cf75639/extensions' failed with code 503
2016-01-28 20:14:55.524 1479 ERROR tempest

--

Thanks,
Brandon J

From dradez at redhat.com Thu Jan 28 22:26:44 2016
From: dradez at redhat.com (Dan Radez)
Date: Thu, 28 Jan 2016 17:26:44 -0500
Subject: [Rdo-list] RDO Manager :: Ceph OSDs on the Compute Nodes
Message-ID: <56AA95A4.8070207@redhat.com>

I was asked to post this to the list when I started this.
Here's the first draft: https://review.openstack.org/#/c/273754/

It needs a bit of work still, but it's a start. The OSD will provision
correctly as long as the compute OSD configuration happens after the
controller ceph configurations, which rarely happens. A rerun of puppet
on the compute nodes after overcloud deployment will register the OSDs
on the compute nodes into the ceph cluster.

Working with OOO folks to sort out the right way to make a dependency on
the controller ceph configuration to complete before the compute ceph
configuration fires.

Radez

From afazekas at redhat.com Fri Jan 29 06:43:50 2016
From: afazekas at redhat.com (Attila Fazekas)
Date: Fri, 29 Jan 2016 01:43:50 -0500 (EST)
Subject: [Rdo-list] Tempest Config Problem Mikata
In-Reply-To:
References:
Message-ID: <1180229674.15714263.1454049830523.JavaMail.zimbra@redhat.com>

The service responded with 503 [1], so it is very likely there is an
issue on the server side. It can be:
- a haproxy issue (connecting to a backend),
- a network service issue,
- an issue with the actual nova-api service.

In this case /var/log/nova/* should have a stack trace with the related
error.

The below command likely produces a similar error.
$ nova list-extensions

[1] http://www.checkupdown.com/status/E503.html

----- Original Message -----
> From: "Brandon James"
> To: Rdo-list at redhat.com
> Sent: Thursday, January 28, 2016 9:23:40 PM
> Subject: [Rdo-list] Tempest Config Problem Mikata
>
> Hello,
>
> I have been able to successfully install my over and under cloud via the
> trippleo quick start method. I am having issues however when running the
> config tempest portion of the overcloud validation. I would like to complete
> this and run the required test after. I have listed the error I am seeing
> below. I made sure I ran the command source ~/overcloudrc prior to running
> this command so I am unsure what is causing this issue. I am also using the
> latest Mikata version.
>
> [config_tempest output snipped]
>
> --
>
> Thanks,
> Brandon J
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From weiler at soe.ucsc.edu Fri Jan 29 22:05:08 2016
From: weiler at soe.ucsc.edu (Erich Weiler)
Date: Fri, 29 Jan 2016 14:05:08 -0800
Subject: [Rdo-list] Slow network performance on Kilo?
In-Reply-To: <1389580690.15405313.1453989035284.JavaMail.zimbra@redhat.com>
References: <56A928FB.6000206@soe.ucsc.edu> <1389580690.15405313.1453989035284.JavaMail.zimbra@redhat.com>
Message-ID: <56ABE214.9050901@soe.ucsc.edu>

> My first wild guess,
> it is something not OK around VLAN splinters.

Ya, that's a good idea, that's helped me before IIRC...
I tried enabling vlan splintering on openvswitch just now, but it didn't
seem to help:

/usr/bin/ovs-vsctl set interface eth0 other-config:enable-vlan-splinters=true

I ran that command on the VLAN trunk interface on my test compute node
running my VM, and also on the network node's two ports, the trunk on the
inside and the external facing port. Didn't seem to do anything. But
maybe I'm doing it wrong?

> It can be both config or driver related issue.
>
> Just for the record, can you share these info
> - kernel version
> - ovs version (build)
> - nic (lspci -nn | grep Eth)

[root at node-170 ~]# uname -a
Linux node-170 3.10.0-229.20.1.el7.x86_64 #1 SMP Tue Nov 3 19:10:07 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root at node-170 ~]# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
[root at node-170 ~]# lspci -nn | grep Eth
02:00.0 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10c9] (rev 01)
02:00.1 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10c9] (rev 01)
03:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
03:00.1 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)

> Do you use some kind of bonding ?

No bonding, just a single wire.

Thanks for the help!!!

cheers,
erich

From dsneddon at redhat.com Fri Jan 29 22:16:48 2016
From: dsneddon at redhat.com (Dan Sneddon)
Date: Fri, 29 Jan 2016 14:16:48 -0800
Subject: [Rdo-list] Slow network performance on Kilo?
In-Reply-To: <56A928FB.6000206@soe.ucsc.edu>
References: <56A928FB.6000206@soe.ucsc.edu>
Message-ID: <56ABE4D0.6090407@redhat.com>

On 01/27/2016 12:30 PM, Erich Weiler wrote:
> Hi Y'all,
>
> I've seen several folks on the net with this problem, but I'm still
> flailing a bit as to what is really going on.
>
> We are running RHEL 7 with RDO OpenStack Kilo.
>
> We are setting this environment up still, not quite done yet. But in
> our testing, we are experiencing very slow network performance when
> downloading or uploading to and from VMs. We get like 300Kb/s or so.
>
> We are using Neutron, MTU 9000 everywhere. I've tried disabling GSO,
> LRO, TSO, GRO on the neutron interfaces, as well as the VM server
> interfaces, still no improvement. I've tried lowing the VM MTU to
> 1500, still no improvement. It's really strange. We do get
> connectivity, I can ssh to the instances, but the network performance
> is just really, really slow. It appears the instances can talk to each
> other very quickly however. They just get slow network to the internet
> (i.e. when packets go through the network node).
>
> We are using VLAN tenant network isolation.
>
> Can anyone point me in the right direction? I've been beating my head
> against a wall and googling without avail for a week...
>
> Many thanks,
> erich
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

To use jumbo frames with Neutron, you first need to make sure that all
layers have the same MTU: bridge, interface, VLAN, and bond (if
applicable). Check these settings on both Controllers and Computes:

In /etc/nova/nova.conf, edit or create this line:

network_device_mtu=9000

In /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

veth_mtu = 9000

That will increase the MTU on the virtual Ethernet adapters to 9000.
You can get away with an MTU here that matches the wire in Neutron VLAN
mode. In VXLAN you'd want this value to be 50 bytes less than the MTU
on the wire.
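(The arithmetic above can be stated compactly. A minimal sketch — the function name is illustrative, not a Neutron API, and the 50-byte figure is the approximate VXLAN encapsulation overhead Dan mentions:)

```python
VXLAN_OVERHEAD = 50  # approximate bytes added by VXLAN encapsulation


def veth_mtu(wire_mtu, tenant_network_type):
    """Suggested veth_mtu for a given wire MTU (illustrative helper)."""
    if tenant_network_type == "vxlan":
        # VXLAN payloads must leave room for the encapsulation headers.
        return wire_mtu - VXLAN_OVERHEAD
    # In VLAN mode the veth MTU can match the wire MTU.
    return wire_mtu


print(veth_mtu(9000, "vlan"))   # 9000
print(veth_mtu(9000, "vxlan"))  # 8950
```

(So with a 9000-byte wire, veth_mtu = 9000 for VLAN tenant networks, 8950 for VXLAN.)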
You also want to check the DHCP options in the file
/etc/neutron/dnsmasq-neutron.conf on all controllers:

dhcp-option-force=26,9000

You can also try setting that last option to 1400, that will limit the
VM MTU, but if things speed up then you know you've got a bottleneck
somewhere (like a less-than 1500-byte MTU somewhere in the path of VLAN
traffic).

--
Dan Sneddon | Principal OpenStack Engineer
dsneddon at redhat.com | redhat.com/openstack
650.254.4025 | dsneddon:irc @dxs:twitter

From Arkady_Kanevsky at dell.com Fri Jan 29 22:28:10 2016
From: Arkady_Kanevsky at dell.com (Arkady_Kanevsky at dell.com)
Date: Fri, 29 Jan 2016 22:28:10 +0000
Subject: [Rdo-list] Slow network performance on Kilo?
In-Reply-To: <56ABE4D0.6090407@redhat.com>
References: <56A928FB.6000206@soe.ucsc.edu> <56ABE4D0.6090407@redhat.com>
Message-ID: <53fa7cbff1a24da4b24e823f21291fee@AUSX13MPS308.AMER.DELL.COM>

And you will need switch settings to match that MTU.

-----Original Message-----
From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Dan Sneddon
Sent: Friday, January 29, 2016 4:17 PM
To: rdo-list at redhat.com
Subject: Re: [Rdo-list] Slow network performance on Kilo?

On 01/27/2016 12:30 PM, Erich Weiler wrote:
> Hi Y'all,
>
> I've seen several folks on the net with this problem, but I'm still
> flailing a bit as to what is really going on.
>
> We are running RHEL 7 with RDO OpenStack Kilo.
>
> We are setting this environment up still, not quite done yet. But in
> our testing, we are experiencing very slow network performance when
> downloading or uploading to and from VMs. We get like 300Kb/s or so.
>
> We are using Neutron, MTU 9000 everywhere. I've tried disabling GSO,
> LRO, TSO, GRO on the neutron interfaces, as well as the VM server
> interfaces, still no improvement. I've tried lowing the VM MTU to
> 1500, still no improvement. It's really strange.
> [remainder of quoted message snipped]

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com
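(Arkady's point about the switches can be verified end to end with a don't-fragment ping sized to exactly fill the MTU. The payload arithmetic, assuming plain IPv4 ICMP with no IP options — the target address is a placeholder:)

```python
IP_HEADER = 20    # IPv4 header without options
ICMP_HEADER = 8   # ICMP echo header


def ping_payload(mtu):
    """Largest ICMP payload that fits in one MTU-sized packet."""
    return mtu - IP_HEADER - ICMP_HEADER


# Don't-fragment ping across the switch; if this fails at 9000 while the
# 1500-byte equivalent succeeds, some hop is dropping jumbo frames.
print("ping -c 3 -M do -s %d GATEWAY_IP" % ping_payload(9000))
```

(For a 9000-byte MTU that is `-s 8972`; for a standard 1500-byte MTU it would be `-s 1472`.)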