From iwienand at redhat.com Mon Nov 2 04:59:51 2015 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 2 Nov 2015 15:59:51 +1100 Subject: [Rdo-list] rdoproject.org down? Message-ID: <5636EDC7.107@redhat.com> Hi, This seems to have been down for some time; as I write this I can't connect, but some of my build monitoring shows it not working as of 14:00 UTC, which would be 9am EST (I think) Might be planned, but I didn't see any notices -i === FAIL: http://nodepool.openstack.org/rax-dfw.devstack-centos7.log ---- 2015-11-01 14:18:55,175 INFO nodepool.image.build.rax-dfw.devstack-centos7: Loaded plugins: fastestmirror, langpacks 2015-11-01 14:18:56,200 INFO nodepool.image.build.rax-dfw.devstack-centos7: Cannot open: https://rdoproject.org/repos/rdo-release.rpm. Skipping. 2015-11-01 14:18:56,200 INFO nodepool.image.build.rax-dfw.devstack-centos7: Error: Nothing to do 2015-11-01 14:18:56,221 DEBUG nodepool.image.build.rax-dfw.devstack-centos7: *** FAILED to run setup script (1) From mrunge at redhat.com Mon Nov 2 08:31:08 2015 From: mrunge at redhat.com (Matthias Runge) Date: Mon, 2 Nov 2015 09:31:08 +0100 Subject: [Rdo-list] Kilo Horizon session timeout and cookie In-Reply-To: References: <56315C5B.9000605@redhat.com> <56333F7F.5020205@redhat.com> Message-ID: <56371F4C.2040306@redhat.com> On 30/10/15 14:18, Tom Buskey wrote: > The bug reports say you need to add AUTH_USER_MODE and SESSION_ENGINE to > /etc/openstack-dashboard/local_settings but neither rpm does. The only > way to know about it is to read the bug reports > > On Fri, Oct 30, 2015 at 5:59 AM, Martin Pavl?sek > wrote: > In my understanding, both reports differ, and esp. reasons differ. Matthias From ihrachys at redhat.com Mon Nov 2 11:34:33 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 2 Nov 2015 12:34:33 +0100 Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO In-Reply-To: References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com> <561FA38F.6050508@redhat.com> <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com> <256C1D83-5E0D-47B2-BD69-9D3CAAEB78E1@redhat.com> Message-ID: > On 21 Oct 2015, at 15:32, Matt Kassawara wrote: > > I think packages available for standalone installation (i.e., without a deployment tool) should include complete upstream configuration files in standard locations without modification. In the case of *-dist.conf files with RDO packages, they seldom receive updates leading to deprecation warnings and sometimes override useful upstream default values. For example, most if not all services default to keystone for authentication (auth_strategy), yet the RDO neutron packages revert authentication to "noauth" in the *-dist.conf file. In another example, the RDO keystone package only includes the keystone-paste.ini file as /usr/share/keystone/keystone-dist-paste.ini rather than using the standard location and name which leads to confusion, particularly for new users. The installation guide contains quite a few extra steps and option-value pairs that work around the existence and contents of *-dist.conf files... additions that unnecessarily increase complexity for our audience of new users. Can you provide links to the guide pages that are complicated by the existence of -dist.conf files? I agree that some values may not be optimal (f.e. 
auth_strategy indeed should not be overridden; I sent a patch [1] to remove it from -dist.conf); but in principle, there should be a way for distributions to change defaults, and it should not be expected that all distributions ship identical configuration files. [1]: https://review.gerrithub.io/#/c/251170/ Ihar -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From rbowen at redhat.com Mon Nov 2 11:35:54 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 2 Nov 2015 06:35:54 -0500 Subject: [Rdo-list] rdoproject.org down? In-Reply-To: <5636EDC7.107@redhat.com> References: <5636EDC7.107@redhat.com> Message-ID: <56374A9A.3010006@redhat.com> Thanks. I'm checking into it. On 11/01/2015 11:59 PM, Ian Wienand wrote: > Hi, > > This seems to have been down for some time; as I write this I can't > connect, but some of my build monitoring shows it not working as of > 14:00 UTC, which would be 9am EST (I think) > > Might be planned, but I didn't see any notices > > -i > > === > > FAIL: http://nodepool.openstack.org/rax-dfw.devstack-centos7.log > ---- > 2015-11-01 14:18:55,175 INFO nodepool.image.build.rax-dfw.devstack-centos7: Loaded plugins: fastestmirror, langpacks > 2015-11-01 14:18:56,200 INFO nodepool.image.build.rax-dfw.devstack-centos7: Cannot open: https://rdoproject.org/repos/rdo-release.rpm. Skipping. > 2015-11-01 14:18:56,200 INFO nodepool.image.build.rax-dfw.devstack-centos7: Error: Nothing to do > 2015-11-01 14:18:56,221 DEBUG nodepool.image.build.rax-dfw.devstack-centos7: *** FAILED to run setup script (1) > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From javier.pena at redhat.com Mon Nov 2 12:15:38 2015 From: javier.pena at redhat.com (Javier Pena) Date: Mon, 2 Nov 2015 07:15:38 -0500 (EST) Subject: [Rdo-list] [delorean] Planned Delorean upgrade on November 5 In-Reply-To: <1462586131.1057713.1446466256080.JavaMail.zimbra@redhat.com> Message-ID: <1480718606.1062116.1446466538471.JavaMail.zimbra@redhat.com> Dear rdo-list, We are planning to update the current Delorean instance next Thursday, November 5. The upgrade should bring a bigger spec VM and several improvements on the instance configuration. During the upgrade, the Delorean repos will still be available through the backup instance, but new packages will not be processed until the upgrade is completed. If you have any questions or concerns, please let us know. Regards, Javier From ihrachys at redhat.com Mon Nov 2 12:55:27 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 2 Nov 2015 13:55:27 +0100 Subject: [Rdo-list] Failed to deploy overcloud with network isolation on BM. In-Reply-To: <5627B604.4010706@redhat.com> References: <1421541564.61524022.1445391447568.JavaMail.zimbra@redhat.com> <1122769870.61527644.1445391866459.JavaMail.zimbra@redhat.com> <5627B604.4010706@redhat.com> Message-ID: <80D8209B-A358-4B6B-8584-E9AA81CC8528@redhat.com> > On 21 Oct 2015, at 17:57, Dan Sneddon wrote: > > What is a good general set of logs for a failed deployment where the > failure cause isn't clear? It should include /etc/neutron, /var/log/neutron/, /var/log/messages. 
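Something along these lines, run on the affected node, usually captures those in one archive (the archive name is arbitrary; adjust the paths if your logs live elsewhere):

  # tar czf neutron-debug-$(hostname).tar.gz /etc/neutron /var/log/neutron /var/log/messages
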
Also it would be great to see other relevant logs like libvirt or openvswitch. Ihar -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From hguemar at fedoraproject.org Mon Nov 2 15:00:02 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 2 Nov 2015 15:00:02 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20151102150002.E73EA60A3FD9@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2015-11-04 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting ([Agenda: https://etherpad.openstack.org/p/RDO-Packaging](https://etherpad.openstack.org/p/RDO-Packaging)) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From mkassawara at gmail.com Tue Nov 3 01:08:13 2015 From: mkassawara at gmail.com (Matt Kassawara) Date: Mon, 2 Nov 2015 18:08:13 -0700 Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO In-Reply-To: References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com> <561FA38F.6050508@redhat.com> <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com> <256C1D83-5E0D-47B2-BD69-9D3CAAEB78E1@redhat.com> Message-ID: Ihar, I think distribution packages should bundle upstream source without alteration to maximize flexibility for authors of deployment tools (or simple instructions) that choose to use packages. In other words, distribution packages should include few if any decisions on how to deploy services. Instead, leave those decisions to authors of deployment tools including organizations that produce distribution packages. For example, decisions on how to deploy OpenStack using RDO packages should reside in products like Packstack and RHEL-OSP. In the meantime, content in /usr/share/$service directories impacts the following portions in the installation guide: 1) http://docs.openstack.org/draft/install-guide-rdo/keystone-verify.html - The keystone-paste.ini file should reside in the /etc/keystone directory. 2) http://docs.openstack.org/draft/install-guide-rdo/glance.html - The glance-api-dist.conf and glance-registry-dist.conf files contain defunct options in the [keystone_authtoken] section. Also, the *-paste.ini files should reside in the /etc/glance directory. 3) http://docs.openstack.org/draft/install-guide-rdo/nova.html - The nova-dist.conf file contains defunct options in the [keystone_authtoken] section, assumes use of nova-network, and contains several opinions about libvirt configuration. 4) http://docs.openstack.org/draft/install-guide-rdo/neutron.html - The neutron-dist.conf file specifies a notification driver regardless of a consumer (e.g., ceilometer) and disables nova-neutron interaction. Also, the *-paste.ini file should reside in the /etc/neutron directory. 5) http://docs.openstack.org/draft/install-guide-rdo/cinder.html - The cinder-dist.conf file contains defunct options in the [keystone_authtoken] section. Interestingly, the *-paste.ini files correctly reside in the /etc/cinder directory. 6) http://docs.openstack.org/draft/install-guide-rdo/swift.html - Interestingly, no /usr/share/swift directory exists. However, the configuration files in /etc/swift are considerably out of date and easier to overwrite from upstream source than attempt to fix via procedure. 
7) http://docs.openstack.org/draft/install-guide-rdo/heat.html - The heat-dist.conf file contains defunct options in the [keystone_authtoken] section, contains a defunct database connection option (belongs in [database]), and enables a defunct message queue (Qpid). Also, the *-paste.ini file should reside in the /etc/heat directory. I haven't looked at the ceilometer packages recently, but I suspect they involve similar issues. Matt On Mon, Nov 2, 2015 at 4:34 AM, Ihar Hrachyshka wrote: > > > On 21 Oct 2015, at 15:32, Matt Kassawara wrote: > > > > I think packages available for standalone installation (i.e., without a > deployment tool) should include complete upstream configuration files in > standard locations without modification. In the case of *-dist.conf files > with RDO packages, they seldom receive updates leading to deprecation > warnings and sometimes override useful upstream default values. For > example, most if not all services default to keystone for authentication > (auth_strategy), yet the RDO neutron packages revert authentication to > "noauth" in the *-dist.conf file. In another example, the RDO keystone > package only includes the keystone-paste.ini file as > /usr/share/keystone/keystone-dist-paste.ini rather than using the standard > location and name which leads to confusion, particularly for new users. The > installation guide contains quite a few extra steps and option-value pairs > that work around the existence and contents of *-dist.conf files... > additions that unnecessarily increase complexity for our audience of new > users. > > Can you provide links to the guide pages that are complicated by the > existence of -dist.conf files? > > I agree that some values may not be optimal (f.e. auth_strategy indeed > should not be overridden; I sent a patch [1] to remove it from -dist.conf); > but in principle, there should be a way for distributions to change > defaults, and it should not be expected that all distributions ship > identical configuration files. > > [1]: https://review.gerrithub.io/#/c/251170/ > > Ihar > -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Tue Nov 3 08:53:22 2015 From: apevec at gmail.com (Alan Pevec) Date: Tue, 3 Nov 2015 09:53:22 +0100 Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO In-Reply-To: References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com> <561FA38F.6050508@redhat.com> <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com> <256C1D83-5E0D-47B2-BD69-9D3CAAEB78E1@redhat.com> Message-ID: > /usr/share/$service directories impacts the following portions in the > installation guide: > > 1) http://docs.openstack.org/draft/install-guide-rdo/keystone-verify.html - > The keystone-paste.ini file should reside in the /etc/keystone directory. We had discussions in the past about paste.inis: they shouldn't be treated as configuration files and it's upstream bug if user is forced to edit it. User configurable knobs should be all in conf! In particular for Keystone admin token I've started https://review.openstack.org/185464 unfortunately it is not merged yet. Out of date dist.conf options will be updated but if they're defunct, they'll be ignored, otherwise setting option in /etc/ overrides it. 
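To make that concrete: the RDO units start each service with the packaged dist file listed before the admin-owned file, and with oslo.config the last file parsed wins for any option set in both places. Roughly (the ExecStart line below is abbreviated from memory, so check your local unit file rather than copying it):

  # grep ExecStart /usr/lib/systemd/system/neutron-server.service
  ExecStart=/usr/bin/neutron-server \
      --config-file /usr/share/neutron/neutron-dist.conf \
      --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugin.ini

  # /usr/share/neutron/neutron-dist.conf (packaged default, the case Matt raised)
  [DEFAULT]
  auth_strategy = noauth

  # /etc/neutron/neutron.conf (deployer-owned, loaded last, so this value wins)
  [DEFAULT]
  auth_strategy = keystone

So whatever the install guide tells users to set under /etc/ does take effect, even where a -dist.conf ships a different default.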
Cheers, Alan From ihrachys at redhat.com Tue Nov 3 11:59:15 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Tue, 3 Nov 2015 12:59:15 +0100 Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO In-Reply-To: References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com> <561FA38F.6050508@redhat.com> <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com> <256C1D83-5E0D-47B2-BD69-9D3CAAEB78E1@redhat.com> Message-ID: Matt Kassawara wrote: > Ihar, > > I think distribution packages should bundle upstream source without > alteration to maximize flexibility for authors of deployment tools (or > simple instructions) that choose to use packages. In other words, > distribution packages should include few if any decisions on how to > deploy services. Instead, leave those decisions to authors of deployment > tools including organizations that produce distribution packages. For > example, decisions on how to deploy OpenStack using RDO packages should > reside in products like Packstack and RHEL-OSP. In the meantime, content > in /usr/share/$service directories impacts the following portions in the > installation guide: You mix things here. RDO *is* a product, and *is* successfully used by companies without paying for RHEL-OSP subscription. Manual installation is still a supported way to deploy RDO, so anything that makes deployer life easier (like reasonable defaults) is beneficial. Below, I will comment on neutron only and will leave other components to respective team members. > > 1) http://docs.openstack.org/draft/install-guide-rdo/keystone-verify.html > - The keystone-paste.ini file should reside in the /etc/keystone > directory. > > 2) http://docs.openstack.org/draft/install-guide-rdo/glance.html - The > glance-api-dist.conf and glance-registry-dist.conf files contain defunct > options in the [keystone_authtoken] section. Also, the *-paste.ini files > should reside in the /etc/glance directory. > > 3) http://docs.openstack.org/draft/install-guide-rdo/nova.html - The > nova-dist.conf file contains defunct options in the [keystone_authtoken] > section, assumes use of nova-network, and contains several opinions about > libvirt configuration. > > 4) http://docs.openstack.org/draft/install-guide-rdo/neutron.html - The > neutron-dist.conf file specifies a notification driver regardless of a > consumer (e.g., ceilometer) and disables nova-neutron interaction. Also, > the *-paste.ini file should reside in the /etc/neutron directory. > I agree nova-neutron notifications should not be disabled (I merged a patch for that yesterday: https://review.gerrithub.io/#/c/251171/) For notification driver, I am not sure I follow. The assumption is that DHCP agent is a common piece of neutron setup that is widely used, and since it relies on RPC notifications, we enable it by default. Do you believe it?s better to make everyone using refarch neutron to define it for themselves? For *-paste.ini file, I believe the RDO assumption is that there is no reason to modify it, hence it?s not available for user modifications. Can you show me the exact place where installation guide became more complex due to -paste.ini file located under /usr/share? > 5) http://docs.openstack.org/draft/install-guide-rdo/cinder.html - The > cinder-dist.conf file contains defunct options in the > [keystone_authtoken] section. Interestingly, the *-paste.ini files > correctly reside in the /etc/cinder directory. 
> > 6) http://docs.openstack.org/draft/install-guide-rdo/swift.html - > Interestingly, no /usr/share/swift directory exists. However, the > configuration files in /etc/swift are considerably out of date and easier > to overwrite from upstream source than attempt to fix via procedure. > > 7) http://docs.openstack.org/draft/install-guide-rdo/heat.html - The > heat-dist.conf file contains defunct options in the [keystone_authtoken] > section, contains a defunct database connection option (belongs in > [database]), and enables a defunct message queue (Qpid). Also, the > *-paste.ini file should reside in the /etc/heat directory. > > I haven't looked at the ceilometer packages recently, but I suspect they > involve similar issues. > > Matt > > > > On Mon, Nov 2, 2015 at 4:34 AM, Ihar Hrachyshka > wrote: > > > On 21 Oct 2015, at 15:32, Matt Kassawara wrote: > > > > I think packages available for standalone installation (i.e., without a > deployment tool) should include complete upstream configuration files in > standard locations without modification. In the case of *-dist.conf files > with RDO packages, they seldom receive updates leading to deprecation > warnings and sometimes override useful upstream default values. For > example, most if not all services default to keystone for authentication > (auth_strategy), yet the RDO neutron packages revert authentication to > "noauth" in the *-dist.conf file. In another example, the RDO keystone > package only includes the keystone-paste.ini file as > /usr/share/keystone/keystone-dist-paste.ini rather than using the > standard location and name which leads to confusion, particularly for new > users. The installation guide contains quite a few extra steps and > option-value pairs that work around the existence and contents of > *-dist.conf files... additions that unnecessarily increase complexity for > our audience of new users. > > Can you provide links to the guide pages that are complicated by the > existence of -dist.conf files? > > I agree that some values may not be optimal (f.e. auth_strategy indeed > should not be overridden; I sent a patch [1] to remove it from > -dist.conf); but in principle, there should be a way for distributions to > change defaults, and it should not be expected that all distributions > ship identical configuration files. > > [1]: https://review.gerrithub.io/#/c/251170/ > > Ihar -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From apevec at gmail.com Tue Nov 3 13:31:08 2015 From: apevec at gmail.com (Alan Pevec) Date: Tue, 3 Nov 2015 14:31:08 +0100 Subject: [Rdo-list] Fwd: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit In-Reply-To: References: Message-ID: Hi rdo, this upstream discussion seems to be important for Mitaka cycle and RDO should get ready to support it. I've opened https://trello.com/c/G5kyYZ5e/103-add-dlm-support with the initial list of tasks, please add/change if I got something wrong! Cheers, Alan From bderzhavets at hotmail.com Tue Nov 3 17:28:11 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Tue, 3 Nov 2015 17:28:11 +0000 Subject: [Rdo-list] Is it possible set up Liberty (or current) delorean repos for F23 ? In-Reply-To: References: , Message-ID: Actually, I've already asked first my question in message topic. 
Second one :- Does # yum -y install yum-plugin-priorities # cd /etc/yum.repos.d/ # wget http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo # wget http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo install properly current delorean builds for CentOS 7.1 ? Third ( if Second is correct ) :- Is it possible to get access to current delorean build openstack-neutron-???-.src.rpm Thanks Boris. From rbowen at redhat.com Tue Nov 3 18:07:14 2015 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 3 Nov 2015 13:07:14 -0500 Subject: [Rdo-list] [Rdo-newsletter] November 2015 RDO Community Newsletter Message-ID: <5638F7D2.6010406@redhat.com> (This newsletter is also available online at http://rdoproject.org/newsletter/2015_november ) Quick links: * Quick Start - http://rdoproject.org/quickstart * Mailing Lists - http://rdoproject.org/Mailing_lists * RDO packages - http://rdoproject.org/repos/ with the trunk packages in http://rdoproject.org/repos/openstack/openstack-trunk/ * RDO blog - http://rdoproject.org/blog * Q&A - http://ask.openstack.org/ * Open Tickets - http://tm3.org/rdobugs * Twitter - http://twitter.com/rdocommunity OpenStack Summit and Meetup =========================== Last week, many of us were in Tokyo for the OpenStack Summit, where thousands of OpenStack enthusiasts gathered for 4 days of presentations, planning sessions, and other gatherings. For those that missed the event - or for those that were there but couldn't attend everything - all of that content was recorded, and can be watched on the OpenStack YouTube channel, or at https://www.openstack.org/summit/tokyo-2015/videos/ On Wednesday, we had the RDO Community Meetup, and roughly 70 people were in attendance. A wide variety of topics were discussed - the agenda can be seen in the etherpad at https://etherpad.openstack.org/p/rdo-tokyo - and although we didn't get to everything, we covered most of it. I'll be posting a more detailed update later this week on the RDO blog at http://rdoproject.org/blog Mailing List Update =================== The mailing list was particularly busy in October, leading up to the Liberty release. (For the full archives see https://www.redhat.com/archives/rdo-list/2015-October/thread.html ) Highlights include: * RDO-Manager status update - trown updated us on the status of RDO Manager for Liberty at https://www.redhat.com/archives/rdo-list/2015-October/msg00296.html and there was some followup from various people. If you're interested in trying out RDO-Manager, that thread is a good place to start. * Test Day - a significant mount of traffic was generated by the final Liberty test day. Stay tuned to rdo-list for information about upcoming test days in the Mitaka cycle. * Bug Statistics - Chandan Kumar continues to provide his weekly bug statistics mailing. You can see the latest at https://www.redhat.com/archives/rdo-list/2015-October/msg00383.html This is one of the best ways to stay up to date with how RDO development is going. Website Update ============== Last month I reported that we were almost ready to push out the new website. I'm pleased to say that we did, in fact, push the new site live, and it's now running at http://rdoproject.org/ The new site is running on Middleman - https://middlemanapp.com/ - and it's now easier than ever to send changes for the site. Fork the website on Github at https://github.com/redhat-openstack/website and send us pull requests. 
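If you haven't sent a pull request before, the flow is roughly the following (YOURUSER stands in for your GitHub account, and the branch name is just an example):

  git clone https://github.com/YOURUSER/website.git   # your fork of redhat-openstack/website
  cd website
  git checkout -b fix-quickstart-typo
  # edit the page, then:
  git commit -a -m "Fix typo on the quickstart page"
  git push origin fix-quickstart-typo
  # finally, open a pull request against redhat-openstack/website in the GitHub web UI

Since the site is built with Middleman, running `bundle install` and then `bundle exec middleman server` from the checkout should let you preview your change locally; see the repository README for the exact steps.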
If you're looking for something that needs help, see the open issues list at https://github.com/redhat-openstack/website/issues Upcoming Events =============== OpenStack Summit is over, and we're entering a fairly quiet period so far as major events go. However, looking out a little further, there's some dates to save coming up. * FOSDEM FOSDEM 2016 will be held, as usual, on the last weekend in January. That's January 30 and 31, in Brussels, Belgium. More details about the event may be found on the FOSDEM website at https://fosdem.org/2016/ This year, there's two main places to get RDO content at FOSDEM. First, there's the Virtualization/IaaS devroom, where OpenStack and other IaaS projects will have talks. If you'd like to speak at FOSDEM, submit your talks for the virtualization/iaas devroom at http://goo.gl/ZOS8W3 On the day before FOSDEM - Friday, January 29th - we'll be holding an all-day RDO community meetup, in conjunction with the annual CentOS Dojo. The event will be held at the IBM office in Brussels, which is where we met last year. A call for presentations will be announced on the rdo-list mailing list later this week. Watch the RDO blog - http://rdoproject.org/blog/ - and Twitter - @rdoproject - for updates as we have more details. * Test Days One outcome of the RDO meetup at OpenStack Summit was a desire for more test days. We plan to have test days scheduled at least once a month during the Mitaka cycle. Exact dates are to be determined in the coming days, and will be discussed on the rdo-list mailing list. * Docs Days With the success of the recent doc sprint day, we're planning to make this a regular event, to encourage participation in improving the RDO website and documentation. Watch the RDO blog for discussion of when these will be held, and how you can participate. Keep in touch ============= There's lots of ways to stay in in touch with what's going on in the RDO community. The best ways are ... WWW * RDO - http://rdoproject.org/ * OpenStack Q&A - http://ask.openstack.org/ Mailing Lists: * rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list * This newsletter - http://www.redhat.com/mailman/listinfo/rdo-newsletter IRC * IRC - #rdo on Freenode.irc.net * Puppet module development - #rdo-puppet Social Media: * Follow us on Twitter - http://twitter.com/rdocommunity * Google+ - http://tm3.org/rdogplus * Facebook - http://facebook.com/rdocommunity Thanks again for being part of the RDO community! -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From mkassawara at gmail.com Tue Nov 3 19:21:24 2015 From: mkassawara at gmail.com (Matt Kassawara) Date: Tue, 3 Nov 2015 12:21:24 -0700 Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO In-Reply-To: References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com> <561FA38F.6050508@redhat.com> <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com> <256C1D83-5E0D-47B2-BD69-9D3CAAEB78E1@redhat.com> Message-ID: I agree that *-paste.ini files should remain static. Keystone contains the only one that we need to edit (for security reasons) and the patch to move this configuration out of keystone-paste.ini needs attention from the keystone project. As for the installation guide, I prefer to unify the documentation for editing keystone-paste.ini for all distributions. 
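For context, the edit in question is the step that disables the bootstrap admin token once keystone is properly configured: the guide has users remove admin_token_auth from the three pipelines in keystone-paste.ini, along the lines of

  [pipeline:public_api]
  # before
  pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension user_crud_extension public_service
  # after
  pipeline = sizelimit url_normalize request_id build_auth_context token_auth json_body ec2_extension user_crud_extension public_service

and the same for [pipeline:admin_api] and [pipeline:api_v3]. (The filter list above is indicative; the exact set varies between releases.)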
Furthermore, our audience (mostly new users) likely feels more confident about editing files that reside in a less "intimidating" location such as /etc/$service. Best I can tell, neutron (and all other services) separate "mandatory" message queue access (the 'rpc_backend' option) from notification access because the latter only pertains to deployments with a consumer for notifications such as ceilometer. Without a consumer, notification queues pile up and lead to stability problems. Hence, the 'notification_driver' option defaults to a blank value that essentially disables such notifications. The upstream configuration file comments this option out and installation guide doesn't explicitly configure it which means neutron uses the value of 'notification_driver' from the neutron-dist.conf file and sends notifications to a queue without a consumer. While I'm thinking about it, I'm trying to determine the source of a memory leak (or strange increase in consumption) in my RDO Liberty environment (and prior releases) and should try disabling the notification driver. In comparison, my Ubuntu Liberty environment containing the same services and virtual resources has stable memory usage. On Tue, Nov 3, 2015 at 4:59 AM, Ihar Hrachyshka wrote: > Matt Kassawara wrote: > > Ihar, >> >> I think distribution packages should bundle upstream source without >> alteration to maximize flexibility for authors of deployment tools (or >> simple instructions) that choose to use packages. In other words, >> distribution packages should include few if any decisions on how to deploy >> services. Instead, leave those decisions to authors of deployment tools >> including organizations that produce distribution packages. For example, >> decisions on how to deploy OpenStack using RDO packages should reside in >> products like Packstack and RHEL-OSP. In the meantime, content in >> /usr/share/$service directories impacts the following portions in the >> installation guide: >> > > You mix things here. RDO *is* a product, and *is* successfully used by > companies without paying for RHEL-OSP subscription. Manual installation is > still a supported way to deploy RDO, so anything that makes deployer life > easier (like reasonable defaults) is beneficial. > > Below, I will comment on neutron only and will leave other components to > respective team members. > > >> 1) http://docs.openstack.org/draft/install-guide-rdo/keystone-verify.html >> - The keystone-paste.ini file should reside in the /etc/keystone directory. >> >> 2) http://docs.openstack.org/draft/install-guide-rdo/glance.html - The >> glance-api-dist.conf and glance-registry-dist.conf files contain defunct >> options in the [keystone_authtoken] section. Also, the *-paste.ini files >> should reside in the /etc/glance directory. >> >> 3) http://docs.openstack.org/draft/install-guide-rdo/nova.html - The >> nova-dist.conf file contains defunct options in the [keystone_authtoken] >> section, assumes use of nova-network, and contains several opinions about >> libvirt configuration. >> >> 4) http://docs.openstack.org/draft/install-guide-rdo/neutron.html - The >> neutron-dist.conf file specifies a notification driver regardless of a >> consumer (e.g., ceilometer) and disables nova-neutron interaction. Also, >> the *-paste.ini file should reside in the /etc/neutron directory. >> >> > I agree nova-neutron notifications should not be disabled (I merged a > patch for that yesterday: https://review.gerrithub.io/#/c/251171/) > > For notification driver, I am not sure I follow. 
The assumption is that > DHCP agent is a common piece of neutron setup that is widely used, and > since it relies on RPC notifications, we enable it by default. Do you > believe it?s better to make everyone using refarch neutron to define it for > themselves? > > For *-paste.ini file, I believe the RDO assumption is that there is no > reason to modify it, hence it?s not available for user modifications. Can > you show me the exact place where installation guide became more complex > due to -paste.ini file located under /usr/share? > > > 5) http://docs.openstack.org/draft/install-guide-rdo/cinder.html - The >> cinder-dist.conf file contains defunct options in the [keystone_authtoken] >> section. Interestingly, the *-paste.ini files correctly reside in the >> /etc/cinder directory. >> >> 6) http://docs.openstack.org/draft/install-guide-rdo/swift.html - >> Interestingly, no /usr/share/swift directory exists. However, the >> configuration files in /etc/swift are considerably out of date and easier >> to overwrite from upstream source than attempt to fix via procedure. >> >> 7) http://docs.openstack.org/draft/install-guide-rdo/heat.html - The >> heat-dist.conf file contains defunct options in the [keystone_authtoken] >> section, contains a defunct database connection option (belongs in >> [database]), and enables a defunct message queue (Qpid). Also, the >> *-paste.ini file should reside in the /etc/heat directory. >> >> I haven't looked at the ceilometer packages recently, but I suspect they >> involve similar issues. >> >> Matt >> >> >> >> On Mon, Nov 2, 2015 at 4:34 AM, Ihar Hrachyshka >> wrote: >> >> > On 21 Oct 2015, at 15:32, Matt Kassawara wrote: >> > >> > I think packages available for standalone installation (i.e., without a >> deployment tool) should include complete upstream configuration files in >> standard locations without modification. In the case of *-dist.conf files >> with RDO packages, they seldom receive updates leading to deprecation >> warnings and sometimes override useful upstream default values. For >> example, most if not all services default to keystone for authentication >> (auth_strategy), yet the RDO neutron packages revert authentication to >> "noauth" in the *-dist.conf file. In another example, the RDO keystone >> package only includes the keystone-paste.ini file as >> /usr/share/keystone/keystone-dist-paste.ini rather than using the standard >> location and name which leads to confusion, particularly for new users. The >> installation guide contains quite a few extra steps and option-value pairs >> that work around the existence and contents of *-dist.conf files... >> additions that unnecessarily increase complexity for our audience of new >> users. >> >> Can you provide links to the guide pages that are complicated by the >> existence of -dist.conf files? >> >> I agree that some values may not be optimal (f.e. auth_strategy indeed >> should not be overridden; I sent a patch [1] to remove it from -dist.conf); >> but in principle, there should be a way for distributions to change >> defaults, and it should not be expected that all distributions ship >> identical configuration files. >> >> [1]: https://review.gerrithub.io/#/c/251170/ >> >> Ihar >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From rbowen at redhat.com Tue Nov 3 19:50:47 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Tue, 3 Nov 2015 14:50:47 -0500
Subject: [Rdo-list] RDO blog roundup, week of November 3
Message-ID: <56391017.10002@redhat.com>

Due to travels, it's been 2 weeks since the last RDO blog update. In conjunction with OpenStack Summit having been last week, this means that I have a huge list of blog posts to catch you up on this week. (The web version of this summary may be found at http://rdoproject.org/blog/2015/11/rdo-blog-roundup-week-of-november-3/ )

Ansible 2.0: The Docker connection driver, by Lars Kellogg-Stedman
As the release of Ansible 2.0 draws closer, I'd like to take a look at some of the new features that are coming down the pipe. In this post, we'll look at the docker connection driver.
... read more at http://tm3.org/35

RDO test day summary by Rich Bowen
A big thanks to everyone that participated in the RDO Test Day over the last 48 hours. Here's a few statistics from the day.
... read more at http://tm3.org/36

Bootstrapping Ansible on Fedora 23 by Lars Kellogg-Stedman
If you've tried running Ansible against a Fedora 23 system, you may have run into the following problem:
... read more at http://tm3.org/37

Stupid Ansible Tricks: Running a role from the command line by Lars Kellogg-Stedman
When writing Ansible roles I occasionally want a way to just run a role from the command line, without having to muck about with a playbook. I've seen similar requests on the mailing lists and on irc.
... read more at http://tm3.org/38

Admin, by Adam Young
While I tend to play up bug 968696 for dramatic effect, the reality is we have a logical contradiction on what we mean by "admin" when talking about RBAC.
... read more at http://tm3.org/39

Announcing the Release of Kolla Liberty by Steve Dake
Hello OpenStackers! The Kolla community is pleased to announce the release of the Kolla Liberty. This release fixes 432 bugs and implements 58 blueprints!
... read more at http://tm3.org/3a

DevOps in a Bi-Modal World (Part 3 of 4) by James Labocki
In Part 2 of this series, we discussed what IT needs to do in a Mode 1 world to make itself more relevant to the business and reduce complexity. In this part, we will turn our attention to Mode 2 and discuss how the organization can solve its challenges by improving agility and increasing scalability.
... read more at http://tm3.org/3b

Cinder HA Active-Active specs up for review by Gorka Eguileor
It's been some time since the last time I talked here about High Availability Active-Active configurations in Openstack's Cinder service, and now I am quite pleased -and a little bit embarrassed it took so long- to announce that all specs are now up for reviewing.
... read more at http://tm3.org/3c

RDO Liberty released in CentOS Cloud SIG by Alan Pevec
We are pleased to announce the general availability of the RDO build for OpenStack Liberty for CentOS Linux 7 x86_64, suitable for building private, public and hybrid clouds. OpenStack Liberty is the 12th release of the open source software collaboratively built by a large number of contributors around the OpenStack.org project space.
... read more at http://tm3.org/3d

DevOps in a Bi-Modal World (Part 4 of 4) by James Labocki
In this series we have seen the complexity of bridging the gap between existing infrastructure and processes (Mode 1) and new, agile processes and architectures (Mode 2). Each brings its own set of challenges and demands on the organization. In Mode-1 organizations are looking to increase relevance and reduce complexity, and in Mode-2 they are looking to improve agility and increase scalability. In this post we will discuss how Red Hat addresses and solves each of these challenges.
... read more at http://tm3.org/3e

OpenStack Security Groups using OVN ACLs by Russell Bryant
OpenStack Security Groups give you a way to define packet filtering policy that is implemented by the cloud infrastructure. OVN and its OpenStack Neutron integration now includes support for security groups and this post discusses how it works.
... read more at http://tm3.org/3f

RDO Liberty DVR Neutron workflow on CentOS 7.1 by Boris Derzhavets
Per http://specs.openstack.org/openstack/neutron-specs/specs/juno/neutron-ovs-dvr.html DVR is supposed to address the following problems of the traditional 3 Node deployment schema:
... read more at http://tm3.org/34

Why the Operating System Matters by Margaret Dawson
In IT today, we love to note that the infrastructure layer has become commoditized. This started with virtualization, as we could create many virtual machines within a single physical machine. Cloud has taken us further with a key value proposition of delivering cloud services on any standard server or virtualized environment, enabling easier scalability and faster service delivery, among other benefits.
... read more at http://tm3.org/3g

Ansible 2.0: New OpenStack modules by Lars Kellogg-Stedman
This is the second in a loose sequence of articles looking at new features in Ansible 2.0. In the previous article I looked at the Docker connection driver. In this article, I would like to provide an overview of the new-and-much-improved suite of modules for interacting with an OpenStack environment, and provide a few examples of their use.
... read more at http://tm3.org/3h

OpenStack Summit Tokyo - Day 0 (Pre-event) by Jeff Jameson
I've always enjoyed traveling to Tokyo, Japan, as the people are always so friendly and willing to help. Whether it's finding my way through the Narita airport or just trying to find a place to eat, they're always willing to help - even with the language barrier. And each time I visit, I see something new, learn another word (or two) in Japanese, and it all just seems new and exciting all over again. Add in the excitement and buzz of an OpenStack Summit and you've got a great week in Tokyo!
... read more at http://tm3.org/3i

Ducks by Rich Bowen
Why ducks?
... read more at http://tm3.org/3j

A Container Stack for OpenStack (Part 1 of 2) by Joe Fernandes
Open source continues to be a tremendous source of innovation and nowhere is that more evident than at the biannual OpenStack Summit. Over the past couple of years, as OpenStack interest and adoption has grown, we've seen another important innovation emerge from the open source community in the form of Linux containers, driven by Docker and associated open source projects. As the world gathers in Tokyo for another OpenStack Summit, we wanted to talk about how Red Hat is bringing these two innovations together, to make OpenStack a great platform for running containerized applications.
... read more at http://tm3.org/3k

OpenStack Summit Tokyo - Day 1 by Jeff Jameson
Kon'nichiwa from Tokyo, Japan where the 11th semi-annual OpenStack Summit is officially underway! This event has come a long way from its first gathering, more than five years ago, where 75 people gathered in Austin, Texas to learn about OpenStack in its infancy. That's a sharp contrast with the 5,000+ people in attendance here in what marks Asia's second OpenStack Summit.
... read more at http://tm3.org/3l

What does RDO stand for? by Rich Bowen
The RDO FAQ has long said that RDO doesn't stand for anything. This is a very unsatisfying answer to one of the most frequently asked questions about RDO. So, a little while ago, I started saying that RDO stands for "Rich's Distribution of OpenStack." Although tongue-in-cheek, this response emphasizes that RDO is a community-driven project. So it's also Radez's Distribution of OpenStack, and Radhesh's and Russel's and red_trela's. (As well as lots of people whose names don't start with R!)
... read more at http://tm3.org/3m

Proven OpenStack solutions. Simple OpenStack deployment. Powerful results. by Radhesh Balakrishnan
According to an IDC global survey sponsored by Cisco of 3,643 enterprise executives responsible for IT decisions, 69% of respondents indicated that their organizations have a cloud adoption strategy in place. Of these organizations, 65% say OpenStack is an important part of their cloud strategy and had higher expectations for business improvements associated with cloud adoption.
... read more at http://tm3.org/3n

OpenStack Summit Tokyo - Day 2 by Jeff Jameson
Hello again from Tokyo, Japan where the second day of OpenStack Summit has come to a close with plenty of news, interesting sessions, great discussion on the show floor, and more.
... read more at http://tm3.org/3o

A Container Stack for OpenStack (Part 2 of 2) by Joe Fernandes
In Part 1 of this blog series, I talked about how Red Hat has been working with the open source community to build a new container stack and our commitment to bring that to OpenStack. In Part 2 I will discuss additional capabilities Red Hat is working on to build an enterprise container infrastructure and how this forms the foundation of our containerized application platform in OpenShift.
... read more at http://tm3.org/3p

OpenStack Summit Tokyo - Day 3 by Jeff Jameson
Hello again from Tokyo, Japan where the third and final day of OpenStack Summit has come to a close. As with the previous days of the event, there was plenty of news, interesting sessions, great discussions on the show floor, and more. All would likely agree that the 11th OpenStack Summit was a rousing overall success!
... read more at http://tm3.org/3q

Puppet OpenStack plans for Mitaka by Emilien Macchi
Our Tokyo week just ended and I really enjoyed being at the Summit with the Puppet OpenStack folks. This blog post summarizes what we did this week and what we plan for the next release.
... read more at http://tm3.org/3r

From rbowen at redhat.com Tue Nov 3 21:20:18 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Tue, 3 Nov 2015 16:20:18 -0500
Subject: [Rdo-list] OpenStack Meetups, week of November 2
Message-ID: <56392512.1070605@redhat.com>

The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/Events

If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered.
--Rich

* Wednesday November 04 in Köln, DE: OpenStack Liberty & Mehr - http://www.meetup.com/OpenStack-X/events/225616146/

* Wednesday November 04 in Richardson, TX, US: OpenStack Cinder, Implementation Today and New Trends for Tomorrow - http://www.meetup.com/OpenStack-DFW/events/225205780/

* Wednesday November 04 in Orlando, FL, US: Nov Meetup - What's new in OpenStack Liberty? - http://www.meetup.com/Orlando-Central-Florida-OpenStack-Meetup/events/226233373/

* Wednesday November 04 in Toulouse, FR: Quoi de neuf du côté d'OpenStack / Découvrez Ansible - http://www.meetup.com/Toulouse-DevOps/events/226016373/

* Saturday November 07 in Bowie, MD, US: InfoTechMeetup02 -Intro to OpenStack & Linux Lab (Part 1 of 2) - http://www.meetup.com/BowieInfoTechNetwork/events/225875477/

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From hguemar at fedoraproject.org Tue Nov 3 23:33:20 2015
From: hguemar at fedoraproject.org (Haïkel)
Date: Wed, 4 Nov 2015 00:33:20 +0100
Subject: [Rdo-list] Is it possible set up Liberty (or current) delorean repos for F23 ?
In-Reply-To: 
References: 
Message-ID: 

2015-11-03 18:28 GMT+01:00 Boris Derzhavets :
> Actually, I've already asked first my question in message topic.
>
> Second one :-
>
> Does
>
> # yum -y install yum-plugin-priorities
> # cd /etc/yum.repos.d/
> # wget http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
> # wget http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo
>
> install properly current delorean builds for CentOS 7.1 ?
>
> Third ( if Second is correct ) :-
> Is it possible to get access to current delorean build openstack-neutron-???-.src.rpm
>
>
> Thanks
> Boris.
>

You can just use F22 repositories.

From mrunge at redhat.com Wed Nov 4 07:52:29 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Wed, 4 Nov 2015 08:52:29 +0100
Subject: [Rdo-list] [FOSDEM] Distributions Devroom CFP
Message-ID: <20151104075229.GA16777@sofja.berg.ol>

Hello,

not all of you may have seen this announcement. Since RDO is a *distribution*, it should fit to the scope of the Distribution Devroom.

Matthias

----- Forwarded message from Brian Stinson -----

Date: Mon, 2 Nov 2015 15:46:23 -0600
From: Brian Stinson
To: distributions-devroom at lists.fosdem.org
Cc: devel at lists.fedoraproject.org, centos-devel at centos.org, devroom-managers at lists.fosdem.org, fosdem at lists.fosdem.org
Subject: [FOSDEM] Distributions Devroom CFP
User-Agent: Mutt/1.5.23.1-rc1 (2014-03-12)

FOSDEM 2016 - Distributions Devroom Call for Participation

The Distributions devroom will take place 30 & 31 January, 2016 at FOSDEM, in room K.4.201 at Université Libre de Bruxelles, in Brussels, Belgium.

As Linux distributions converge on similar tools, the problem space overlapping different distributions is growing. This standardization across the distributions presents an opportunity to develop generic solutions to the problems of aggregating, building, and maintaining the pieces that go into a distribution.

We welcome submissions targeted at developers interested in issues unique to distributions, especially in the following topics:

- Cross-distribution collaboration on common issues, eg: content distribution and documentation
- Working with vendor relationships (eg. cloud providers, non-commodity hardware vendors etc )
- The future of distributions, emerging trends and evolving user demands from the idea of a platform
- User experience management ( onboarding new users, facilitating technical growth, user to contribution transitions etc )
- Building trust and code relationships with the upstream components of a distribution
- Solving traditional problems like package management, and content management (eg. rpm/dpkg/ostree/coreos )
- Contributor resource management, centralised trust management, key trust etc
- Integration technologies like installers, deployment facilitation ( eg. cloud contextualisation )

Submissions may be in the form of 30-55 minute talks, panel sessions, round-table discussions, Birds of a Feather (BoF) sessions or lightning talks.

Dates
------
Submission Deadline: 10th Dec 2015
Acceptance Notification: 15th Dec 2015
Final Schedule Posted: 17th Dec 2015

How to submit
--------------
Visit https://penta.fosdem.org/submission/FOSDEM16
1.) If you do not have an account, create one here
2.) Click 'Create Event'
3.) Enter your presentation details
4.) Be sure to select the Distributions Devroom track!
5.) Submit

What to include
---------------
- The title of your submission
- A 1-paragraph Abstract
- A longer description including the benefit of your talk to your target audience
- Approximate length / type of submission (talk, BoF, ...)
- Links to related websites/blogs/talk material (if any)

If you have any questions, feel free to contact the devroom organizers:
distributions-devroom at lists.fosdem.org (https://lists.fosdem.org/listinfo/distributions-devroom)

Cheers!
Karanbir Singh (twitter: @kbsingh) and Brian Stinson (twitter: @bstinsonmhk)
for and on behalf of The Distributions Devroom Program Committee

_______________________________________________
FOSDEM mailing list
FOSDEM at lists.fosdem.org
https://lists.fosdem.org/listinfo/fosdem

----- End forwarded message -----

--
Matthias Runge

From ihrachys at redhat.com Wed Nov 4 09:21:15 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Wed, 4 Nov 2015 10:21:15 +0100
Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO
In-Reply-To: 
References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com> <561FA38F.6050508@redhat.com> <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com> <256C1D83-5E0D-47B2-BD69-9D3CAAEB78E1@redhat.com>
Message-ID: <45B949C3-5820-4413-9A93-7A6DDE3C83F7@redhat.com>

Matt Kassawara wrote:

> I agree that *-paste.ini files should remain static. Keystone contains
> the only one that we need to edit (for security reasons) and the patch to
> move this configuration out of keystone-paste.ini needs attention from
> the keystone project. As for the installation guide, I prefer to unify
> the documentation for editing keystone-paste.ini for all distributions.
> Furthermore, our audience (mostly new users) likely feels more confident
> about editing files that reside in a less "intimidating" location such as
> /etc/$service.
>
> Best I can tell, neutron (and all other services) separate "mandatory"
> message queue access (the 'rpc_backend' option) from notification access
> because the latter only pertains to deployments with a consumer for
> notifications such as ceilometer. Without a consumer, notification queues
> pile up and lead to stability problems. Hence, the 'notification_driver'
> option defaults to a blank value that essentially disables such
> notifications.
The upstream configuration file comments this option out > and installation guide doesn't explicitly configure it which means > neutron uses the value of 'notification_driver' from the > neutron-dist.conf file and sends notifications to a queue without a > consumer. While I'm thinking about it, I'm trying to determine the source > of a memory leak (or strange increase in consumption) in my RDO Liberty > environment (and prior releases) and should try disabling the > notification driver. In comparison, my Ubuntu Liberty environment > containing the same services and virtual resources has stable memory > usage. Do you use DHCP agent from neutron? I think it requires notification driver to be enabled. Ihar -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From apevec at gmail.com Wed Nov 4 11:57:46 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 4 Nov 2015 12:57:46 +0100 Subject: [Rdo-list] [FOSDEM] Distributions Devroom CFP In-Reply-To: <20151104075229.GA16777@sofja.berg.ol> References: <20151104075229.GA16777@sofja.berg.ol> Message-ID: > Since RDO is a *distribution*, it should fit to the scope of > the Distribution Devroom. Probably not, this is about _Linux_ distributions, better fit is virtualization devroom. There will be also RDO day collocated with CentOS Dojo day before FOSDEM, details TBD. Cheers, Alan From rbowen at redhat.com Wed Nov 4 13:24:43 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 4 Nov 2015 08:24:43 -0500 Subject: [Rdo-list] [FOSDEM] Distributions Devroom CFP In-Reply-To: References: <20151104075229.GA16777@sofja.berg.ol> Message-ID: <563A071B.5090408@redhat.com> On 11/04/2015 06:57 AM, Alan Pevec wrote: >> Since RDO is a *distribution*, it should fit to the scope of >> the Distribution Devroom. > > Probably not, this is about _Linux_ distributions, better fit is > virtualization devroom. > There will be also RDO day collocated with CentOS Dojo day before > FOSDEM, details TBD. Hoping for some kind of CFP for that today or tomorrow, as soon as I figure out how much space/time we have available to us. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From kchamart at redhat.com Wed Nov 4 13:26:53 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 4 Nov 2015 14:26:53 +0100 Subject: [Rdo-list] [FOSDEM] Distributions Devroom CFP In-Reply-To: References: <20151104075229.GA16777@sofja.berg.ol> Message-ID: <20151104132653.GA11677@tesla.redhat.com> On Wed, Nov 04, 2015 at 12:57:46PM +0100, Alan Pevec wrote: > > Since RDO is a *distribution*, it should fit to the scope of > > the Distribution Devroom. > > Probably not, this is about _Linux_ distributions, better fit is > virtualization devroom. You're right, the implicit assumption is Linux distributions, in context of FOSDEM, when they say 'distributions'. > There will be also RDO day collocated with CentOS Dojo day before > FOSDEM, details TBD. -- /kashyap From mrunge at redhat.com Wed Nov 4 13:29:55 2015 From: mrunge at redhat.com (Matthias Runge) Date: Wed, 4 Nov 2015 14:29:55 +0100 Subject: [Rdo-list] [FOSDEM] Distributions Devroom CFP In-Reply-To: References: <20151104075229.GA16777@sofja.berg.ol> Message-ID: <20151104132955.GA10864@sofja.berg.ol> On Wed, Nov 04, 2015 at 12:57:46PM +0100, Alan Pevec wrote: > > Since RDO is a *distribution*, it should fit to the scope of > > the Distribution Devroom. 
> > Probably not, this is about _Linux_ distributions, better fit is > virtualization devroom. > There will be also RDO day collocated with CentOS Dojo day before > FOSDEM, details TBD. So, yes and no. Still this is experience with building a new distribution of *something*, including integration in other communities, creating a build tool, etc. I think it's pretty impressive, and will probably better fit to distribution dev room rather than to cloud devroom, where is building a community, integration between up- and downstream is not in focus. Matthias -- Matthias Runge From hguemar at fedoraproject.org Wed Nov 4 14:03:23 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 4 Nov 2015 15:03:23 +0100 Subject: [Rdo-list] [FOSDEM] Distributions Devroom CFP In-Reply-To: <20151104132955.GA10864@sofja.berg.ol> References: <20151104075229.GA16777@sofja.berg.ol> <20151104132955.GA10864@sofja.berg.ol> Message-ID: Just for the record, my RDO talk was accepted last year in the distro devroom. I think that the devroom committee has an open definition of what is a distro. IMHO, unless we're focusing on a specific technical feature, RDO fits better in the distro devroom. Regards, H. From astafeye at redhat.com Wed Nov 4 15:08:15 2015 From: astafeye at redhat.com (Alex Stafeyev) Date: Wed, 4 Nov 2015 10:08:15 -0500 (EST) Subject: [Rdo-list] Tried to delete stack -unsuccessful In-Reply-To: <586685322.3295816.1446649203323.JavaMail.zimbra@redhat.com> Message-ID: <111825564.3306068.1446649695450.JavaMail.zimbra@redhat.com> Hi all I tried to execute heat stack-delete overcloud. ( from instack) The deletion failed . one compute stuck in deleting mode ironic node-list +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ | 1de78028-60ca-4bfc-92e0-b10d02d03866 | None | None | power on | available | False | | cb304da8-e349-4e75-a425-1a3170c0f608 | None | None | power on | available | False | | 5b7fd2e7-a444-464c-b960-eea731eea9b3 | None | 5dfe9f1f-bb54-4167-8c75-5120cd77fce1 | power on | deleting | False | | c9b7db56-a6bb-4f96-a31b-6d05125a2bec | None | None | power on | available | False | | ebf8cd0b-844e-4797-a37d-a4f7465e948a | None | None | power on | available | False | +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ heat resource-list overcloud (I showed only the problematic one) +-------------------------------------------+--------------------------------------+---------------------------------------------------+--------------------+---------------------+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | +-------------------------------------------+--------------------------------------+---------------------------------------------------+--------------------+---------------------+ | Compute | 306a449a-4b31-4781-83d1-80269668d7ef | OS::Heat::ResourceGroup | DELETE_IN_PROGRESS | 2015-10-27T12:37:38 | <------------------------------ +-------------------------------------------+--------------------------------------+---------------------------------------------------+--------------------+---------------------+ nova list 
+--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ | 5dfe9f1f-bb54-4167-8c75-5120cd77fce1 | overcloud-novacompute-0 | ACTIVE | deleting | Running | ctlplane=192.0.2.9 | +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ ssh heat-admin at 192.0.2.9 sudo journalctl -u os-collect-config Nov 04 14:54:31 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:54:31.790 1043 WARNING os_collect_config.zaqar [-] No auth_url configured. Nov 04 14:55:02 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:55:02.217 1043 WARNING os_collect_config.cfn [-] 403 Client Error: AccessDenied Nov 04 14:55:02 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:55:02.217 1043 WARNING os-collect-config [-] Source [cfn] Unavailable. Nov 04 14:55:02 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:55:02.218 1043 WARNING os-collect-config [-] Source [request] Unavailable. Nov 04 14:55:02 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:55:02.218 1043 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping Nov 04 14:55:02 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:55:02.218 1043 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data']) Nov 04 14:55:02 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:55:02.219 1043 WARNING os_collect_config.zaqar [-] No auth_url configured. Nov 04 14:55:32 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:55:32.619 1043 WARNING os_collect_config.cfn [-] 403 Client Error: AccessDenied tnx From astafeye at redhat.com Wed Nov 4 15:11:04 2015 From: astafeye at redhat.com (Alex Stafeyev) Date: Wed, 4 Nov 2015 10:11:04 -0500 (EST) Subject: [Rdo-list] Tried to delete stack -unsuccessful In-Reply-To: <111825564.3306068.1446649695450.JavaMail.zimbra@redhat.com> References: <111825564.3306068.1446649695450.JavaMail.zimbra@redhat.com> Message-ID: <711020392.3311122.1446649864851.JavaMail.zimbra@redhat.com> Hi all - liberty+CentOs 7. used the following guide for installation- https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html Tried to install 3 controller+2compute overcloud. Failed. Tried to delete existing stuck for reinstallation with the "heat stack-delete overcloud" command. The deletion failed . 
one compute stuck in deleting mode ironic node-list +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ | 1de78028-60ca-4bfc-92e0-b10d02d03866 | None | None | power on | available | False | | cb304da8-e349-4e75-a425-1a3170c0f608 | None | None | power on | available | False | | 5b7fd2e7-a444-464c-b960-eea731eea9b3 | None | 5dfe9f1f-bb54-4167-8c75-5120cd77fce1 | power on | deleting | False | | c9b7db56-a6bb-4f96-a31b-6d05125a2bec | None | None | power on | available | False | | ebf8cd0b-844e-4797-a37d-a4f7465e948a | None | None | power on | available | False | +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ heat resource-list overcloud (I showed only the problematic one) +-------------------------------------------+--------------------------------------+---------------------------------------------------+--------------------+---------------------+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | +-------------------------------------------+--------------------------------------+---------------------------------------------------+--------------------+---------------------+ | Compute | 306a449a-4b31-4781-83d1-80269668d7ef | OS::Heat::ResourceGroup | DELETE_IN_PROGRESS | 2015-10-27T12:37:38 | <------------------------------ +-------------------------------------------+--------------------------------------+---------------------------------------------------+--------------------+---------------------+ nova list +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ | 5dfe9f1f-bb54-4167-8c75-5120cd77fce1 | overcloud-novacompute-0 | ACTIVE | deleting | Running | ctlplane=192.0.2.9 | +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ ssh heat-admin at 192.0.2.9 sudo journalctl -u os-collect-config Nov 04 14:54:31 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:54:31.790 1043 WARNING os_collect_config.zaqar [-] No auth_url configured. Nov 04 14:55:02 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:55:02.217 1043 WARNING os_collect_config.cfn [-] 403 Client Error: AccessDenied Nov 04 14:55:02 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:55:02.217 1043 WARNING os-collect-config [-] Source [cfn] Unavailable. Nov 04 14:55:02 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:55:02.218 1043 WARNING os-collect-config [-] Source [request] Unavailable. Nov 04 14:55:02 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:55:02.218 1043 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. 
Skipping Nov 04 14:55:02 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:55:02.218 1043 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data']) Nov 04 14:55:02 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:55:02.219 1043 WARNING os_collect_config.zaqar [-] No auth_url configured. Nov 04 14:55:32 overcloud-novacompute-0.localdomain os-collect-config[1043]: 2015-11-04 14:55:32.619 1043 WARNING os_collect_config.cfn [-] 403 Client Error: AccessDenied tnx From mkassawara at gmail.com Wed Nov 4 13:46:46 2015 From: mkassawara at gmail.com (Matt Kassawara) Date: Wed, 4 Nov 2015 06:46:46 -0700 Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO In-Reply-To: <45B949C3-5820-4413-9A93-7A6DDE3C83F7@redhat.com> References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com> <561FA38F.6050508@redhat.com> <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com> <256C1D83-5E0D-47B2-BD69-9D3CAAEB78E1@redhat.com> <45B949C3-5820-4413-9A93-7A6DDE3C83F7@redhat.com> Message-ID: Yes. On Wed, Nov 4, 2015 at 2:21 AM, Ihar Hrachyshka wrote: > Matt Kassawara wrote: > > I agree that *-paste.ini files should remain static. Keystone contains the >> only one that we need to edit (for security reasons) and the patch to move >> this configuration out of keystone-paste.ini needs attention from the >> keystone project. As for the installation guide, I prefer to unify the >> documentation for editing keystone-paste.ini for all distributions. >> Furthermore, our audience (mostly new users) likely feels more confident >> about editing files that reside in a less "intimidating" location such as >> /etc/$service. >> >> Best I can tell, neutron (and all other services) separate "mandatory" >> message queue access (the 'rpc_backend' option) from notification access >> because the latter only pertains to deployments with a consumer for >> notifications such as ceilometer. Without a consumer, notification queues >> pile up and lead to stability problems. Hence, the 'notification_driver' >> option defaults to a blank value that essentially disables such >> notifications. The upstream configuration file comments this option out and >> installation guide doesn't explicitly configure it which means neutron uses >> the value of 'notification_driver' from the neutron-dist.conf file and >> sends notifications to a queue without a consumer. While I'm thinking about >> it, I'm trying to determine the source of a memory leak (or strange >> increase in consumption) in my RDO Liberty environment (and prior releases) >> and should try disabling the notification driver. In comparison, my Ubuntu >> Liberty environment containing the same services and virtual resources has >> stable memory usage. >> > > Do you use DHCP agent from neutron? I think it requires notification > driver to be enabled. > > Ihar -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Wed Nov 4 16:02:04 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 4 Nov 2015 11:02:04 -0500 Subject: [Rdo-list] FOSDEM RDO Day, call for participation Message-ID: <563A2BFC.8060504@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 We will be holding an RDO Community Day at FOSDEM, in conjunction with the CentOS Dojo. 
We have secured space at the IBM building in Brussels, where we will
have a room for a day-long track of RDO and OpenStack content, as
requested at our meetups in Vancouver and Tokyo. This event will occur
on Friday, January 29th, prior to FOSDEM starting on the 30th. Exact
details on times will come soon.

In preparation for this event, please submit your proposals for
sessions via the Google Form at http://goo.gl/forms/oDjI2BpCtm

Note that if a session is to be a community open discussion (rather
than a presentation), you're not necessarily signing up to be the main
speaker. We'll have plenty of people there who can moderate/chair these
general sessions.

- --
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iEYEARECAAYFAlY6K/YACgkQXP03+sx4yJN8nQCePLxD36GtGUqn5irPTZxtTXvZ
8/cAoPYANFExMc3c0nbQvXVMcJhDw2g4
=168R
-----END PGP SIGNATURE-----

From apevec at gmail.com Wed Nov 4 17:19:12 2015
From: apevec at gmail.com (Alan Pevec)
Date: Wed, 4 Nov 2015 18:19:12 +0100
Subject: [Rdo-list] [meeting] RDO meeting (2015-11-04)
Message-ID:

=============================
#rdo: RDO meeting (2015-11-04)
=============================

Meeting started by apevec at 15:01:06 UTC. The full logs are available at
http://meetbot.fedoraproject.org/rdo/2015-11-04/rdo.2015-11-04-15.01.log.html .

Meeting summary
---------------
* rollcall (apevec, 15:01:43)
* agenda at https://etherpad.openstack.org/p/RDO-Packaging (apevec, 15:01:53)
* Mitaka Summit reports (apevec, 15:05:03)
* release management session highlights - desynchronized Mitaka milestones and stable/liberty point releases (apevec, 15:20:18)
* upstream packaging-rpm gets momentum - extend core, deliverables for Mitaka (apevec, 15:20:49)
* use Delorean tool for upstream infra builds, packaging-RPM builds will replace RDO delorean at some point (number80, 15:23:06)
* RDO meetup (apevec, 15:26:12)
* 70 people at RDO meetup, minutes coming, please blog your views if you were there! (apevec, 15:27:19)
* ACTION: number80 write blog post about summit (number80, 15:32:03)
* ACTION: trown blog about summit from rdo-manager perspective (trown, 15:32:19)
* ACTION: rbowen start earlier our quest for RDO meetup room (apevec, 15:32:42)
* RDO Mitaka themes (apevec, 15:33:39)
* python3 (apevec, 15:34:55)
* switch to PyMySQL https://trello.com/c/q0VoAYJq/89-migrate-mysql-python-to-pymysql (apevec, 15:36:27)
* ACTION: jpena to include fedora rawhide worker in delorean rebuild (jpena, 15:39:17)
* -tests subpackages and testdeps / enable %check (apevec, 15:42:19)
* ACTION: apevec add card for tracking -tests subpackages / testdeps / enable %check progress (apevec, 15:43:13)
* LINK: https://trello.com/c/pFBmc3rk/80-bump-rdo-liberty-ci-tests-from-minimal-to-smoke-tests (apevec, 15:49:13)
* DLM support (apevec, 15:51:54)
* more automation (apevec, 15:55:44)
* rdo-manager quickstart (apevec, 15:57:28)
* FOSDEM event (apevec, 16:00:07)
* Delorean instance rebuild on Nov 5 (apevec, 16:00:46)
* open floor (apevec, 16:01:54)

Meeting ended at 16:02:33 UTC.
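For anyone following the "switch to PyMySQL" item in the summary above: in
practice the migration is mostly a change of SQLAlchemy driver in each
service's database connection URL, from the C-based MySQL-python (mysqldb)
module to the pure-Python, Python-3-capable PyMySQL module. A minimal
sketch, assuming a nova database on a host 192.0.2.10 (host, user and
password here are placeholders, not values taken from this thread):

  [database]
  # old style, resolved by SQLAlchemy to the MySQL-python (mysqldb) driver
  #connection = mysql://nova:SECRET@192.0.2.10/nova
  # PyMySQL driver, same server and credentials
  connection = mysql+pymysql://nova:SECRET@192.0.2.10/nova

The same mysql+pymysql:// scheme applies to the other services' [database]
sections; on the packaging side the change mainly swaps the MySQL-python
requirement for a PyMySQL one.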
Action Items ------------ * number80 write blog post about summit * trown blog about summit from rdo-manager perspective * rbowen start earlier our quest for RDO meetup room * jpena to include fedora rawhide worker in delorean rebuild * apevec add card for tracking -tests subpackages / testdeps / enable %check progress Action Items, by person ----------------------- * apevec * apevec add card for tracking -tests subpackages / testdeps / enable %check progress * jpena * jpena to include fedora rawhide worker in delorean rebuild * number80 * number80 write blog post about summit * rbowen * rbowen start earlier our quest for RDO meetup room * trown * trown blog about summit from rdo-manager perspective * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * apevec (105) * number80 (47) * dmsimard (23) * rbowen (21) * EmilienM (18) * jpena (14) * trown (12) * zodbot (9) * jruzicka (5) * jschlueter (5) * olap (4) * Humbedooh (2) * chandankumar (1) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot From opavlenk at redhat.com Wed Nov 4 18:46:51 2015 From: opavlenk at redhat.com (Ola Pavlenko) Date: Wed, 4 Nov 2015 13:46:51 -0500 (EST) Subject: [Rdo-list] RDO liberty - failing to install undercloud on RHEL In-Reply-To: <373027741.3545286.1446662801575.JavaMail.zimbra@redhat.com> Message-ID: <1041831368.3545587.1446662811406.JavaMail.zimbra@redhat.com> Hi All, I was trying to install RDO on RHEL 7.1 and it failed on undercloud installation part. This guide was used: https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/installation/installing.html Due to the failure 2 bugs were submitted: https://bugzilla.redhat.com/show_bug.cgi?id=1277980 https://bugzilla.redhat.com/show_bug.cgi?id=1277990 Any help is much appreciated! -- Regards, Ola From ibravo at ltgfederal.com Wed Nov 4 19:05:00 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Wed, 4 Nov 2015 14:05:00 -0500 Subject: [Rdo-list] RDO liberty - failing to install undercloud on RHEL In-Reply-To: <1041831368.3545587.1446662811406.JavaMail.zimbra@redhat.com> References: <1041831368.3545587.1446662811406.JavaMail.zimbra@redhat.com> Message-ID: <54654AC1-21B0-4907-BB68-418E4E08D883@ltgfederal.com> Ola, Try installing epel before the proliant utils. - Ignacio Bravo LTG federal > On Nov 4, 2015, at 1:46 PM, Ola Pavlenko wrote: > > > Hi All, > > I was trying to install RDO on RHEL 7.1 and it failed on undercloud installation part. > > This guide was used: https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/installation/installing.html > > Due to the failure 2 bugs were submitted: > > https://bugzilla.redhat.com/show_bug.cgi?id=1277980 > > https://bugzilla.redhat.com/show_bug.cgi?id=1277990 > > > Any help is much appreciated! 
> > -- > Regards, > Ola > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From opavlenk at redhat.com Wed Nov 4 19:47:25 2015 From: opavlenk at redhat.com (Ola Pavlenko) Date: Wed, 4 Nov 2015 14:47:25 -0500 (EST) Subject: [Rdo-list] RDO liberty - failing to install undercloud on RHEL In-Reply-To: <54654AC1-21B0-4907-BB68-418E4E08D883@ltgfederal.com> References: <1041831368.3545587.1446662811406.JavaMail.zimbra@redhat.com> <54654AC1-21B0-4907-BB68-418E4E08D883@ltgfederal.com> Message-ID: <128130203.3610893.1446666445342.JavaMail.zimbra@redhat.com> I did it as part of the installation. >From the guide https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/installation/installing.html Step 2 : sudo yum -y install epel-release And then I got no package available ...repository priority protections. So I just moved on. ----- Original Message ----- From: "Ignacio Bravo" To: "Ola Pavlenko" Cc: rdo-list at redhat.com Sent: Wednesday, November 4, 2015 9:05:00 PM Subject: Re: [Rdo-list] RDO liberty - failing to install undercloud on RHEL Ola, Try installing epel before the proliant utils. - Ignacio Bravo LTG federal > On Nov 4, 2015, at 1:46 PM, Ola Pavlenko wrote: > > > Hi All, > > I was trying to install RDO on RHEL 7.1 and it failed on undercloud installation part. > > This guide was used: https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/installation/installing.html > > Due to the failure 2 bugs were submitted: > > https://bugzilla.redhat.com/show_bug.cgi?id=1277980 > > https://bugzilla.redhat.com/show_bug.cgi?id=1277990 > > > Any help is much appreciated! > > -- > Regards, > Ola > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Regards, Ola From rbowen at redhat.com Wed Nov 4 21:42:17 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 4 Nov 2015 16:42:17 -0500 Subject: [Rdo-list] RDO community meetup at OpenStack Summit Message-ID: <563A7BB9.9050801@redhat.com> Last week in Tokyo we had a community meetup at the OpenStack Summit. We had about 70 people in attendance, and discussed a variety of topics. Thank you to those who attended, and especially those who spoke up. I've written up my notes from the event at https://www.rdoproject.org/blog/2015/11/community-meetup-at-openstack-tokyo/ and also posted the complete recording of the event (which is of variable quality, depending on where the speaker was standing at any given time). If you would like to make any edits or updates to the posting, please do so via Github - https://github.com/redhat-openstack/website/blob/master/source/blog/2015-11-04-community-meetup-at-openstack-tokyo.html.md If you attended and/or spoke at the meetup, and blog about it, please let me know, so that we can link to your post from this one. Thanks! -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From chkumar246 at gmail.com Wed Nov 4 16:05:42 2015 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 4 Nov 2015 21:35:42 +0530 Subject: [Rdo-list] RDO Bug Statistics (2015-11-04) Message-ID: # RDO Bugs on 2015-11-04 This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at . 
To report a new bug against RDO, go to: ## Summary - Open (NEW, ASSIGNED, ON_DEV): 337 - Fixed (MODIFIED, POST, ON_QA): 189 ## Number of open bugs by component diskimage-builder [ 4] ++ distribution [ 14] ++++++++++ dnsmasq [ 1] Documentation [ 4] ++ instack [ 4] ++ instack-undercloud [ 28] ++++++++++++++++++++ iproute [ 1] openstack-ceilometer [ 11] +++++++ openstack-cinder [ 14] ++++++++++ openstack-foreman-inst... [ 2] + openstack-glance [ 2] + openstack-heat [ 3] ++ openstack-horizon [ 2] + openstack-ironic [ 1] openstack-ironic-disco... [ 2] + openstack-keystone [ 7] +++++ openstack-manila [ 10] +++++++ openstack-neutron [ 10] +++++++ openstack-nova [ 18] ++++++++++++ openstack-packstack [ 56] ++++++++++++++++++++++++++++++++++++++++ openstack-puppet-modules [ 11] +++++++ openstack-selinux [ 13] +++++++++ openstack-swift [ 3] ++ openstack-tripleo [ 26] ++++++++++++++++++ openstack-tripleo-heat... [ 5] +++ openstack-tripleo-imag... [ 2] + openstack-trove [ 1] openstack-tuskar [ 3] ++ openstack-utils [ 4] ++ openvswitch [ 1] Package Review [ 3] ++ python-glanceclient [ 2] + python-keystonemiddleware [ 1] python-neutronclient [ 2] + python-novaclient [ 1] python-openstackclient [ 5] +++ python-oslo-config [ 1] rdo-manager [ 48] ++++++++++++++++++++++++++++++++++ rdo-manager-cli [ 6] ++++ rdopkg [ 1] RFEs [ 3] ++ tempest [ 1] ## Open bugs This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed. (337 bugs) ### diskimage-builder (4 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version ### distribution (14 bugs) [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO: Packages needed to support AMQP1.0 [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-10-07 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages [1266923 ] http://bugzilla.redhat.com/1266923 (NEW) Component: distribution Last change: 2015-10-07 Summary: RDO's hdf5 rpm/yum dependencies conflicts [1271169 ] http://bugzilla.redhat.com/1271169 (NEW) Component: distribution Last change: 2015-10-13 Summary: [doc] virtual environment setup [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: 
distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1275608 ] http://bugzilla.redhat.com/1275608 (NEW) Component: distribution Last change: 2015-10-27 Summary: EOL'ed rpm file URL not up to date [1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution Last change: 2015-09-16 Summary: Memcached not built with SASL support [1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### Documentation (4 bugs) [1272111 ] http://bugzilla.redhat.com/1272111 (NEW) Component: Documentation Last change: 2015-10-15 Summary: RFE : document how to access horizon in RDO manager VIRT setup [1272108 ] http://bugzilla.redhat.com/1272108 (NEW) Component: Documentation Last change: 2015-10-15 Summary: [DOC] External network should be documents in RDO manager installation [1271793 ] http://bugzilla.redhat.com/1271793 (NEW) Component: Documentation Last change: 2015-10-14 Summary: rdo-manager doc has incomplete /etc/hosts configuration [1271888 ] http://bugzilla.redhat.com/1271888 (NEW) Component: Documentation Last change: 2015-10-15 Summary: step required to build images for overcloud ### instack (4 bugs) [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (28 bugs) [1266451 ] http://bugzilla.redhat.com/1266451 (NEW) Component: instack-undercloud Last change: 2015-09-30 Summary: instack-undercloud fails to setup seed vm, parse error while creating ssh key [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build- images script [1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout [1271200 ] http://bugzilla.redhat.com/1271200 (ASSIGNED) Component: instack-undercloud Last change: 2015-10-20 Summary: Overcloud images contain Kilo repos [1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not 
started [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2015-09-23 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six." [1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1270585 ] http://bugzilla.redhat.com/1270585 (NEW) Component: instack-undercloud Last change: 2015-10-19 Summary: instack isntallation fails with parse error: Invalid string liberty on CentOS [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. (4048492 < 4G)" [1266101 ] http://bugzilla.redhat.com/1266101 (NEW) Component: instack-undercloud Last change: 2015-09-29 Summary: instack-virt-setup fails on CentOS7 [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. See log for details.', 'os-refresh-config')" [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. 
[1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-08-20 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (11 bugs) [1265708 ] http://bugzilla.redhat.com/1265708 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: Ceilometer requires pymongo>=3.0.2 [1214928 ] http://bugzilla.redhat.com/1214928 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-10-26 Summary: package ceilometermiddleware missing [1219372 ] http://bugzilla.redhat.com/1219372 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Info about 'severity' field changes is not displayed via alarm-history call [1265721 ] http://bugzilla.redhat.com/1265721 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-25 Summary: FIle /etc/ceilometer/meters.yaml missing [1263839 ] http://bugzilla.redhat.com/1263839 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-25 Summary: openstack-ceilometer should requires python-oslo-policy in kilo [1265746 ] http://bugzilla.redhat.com/1265746 (NEW) Component: openstack-ceilometer Last change: 2015-09-23 Summary: Options 'disable_non_metric_meters' and 'meter_definitions_cfg_file' are missing from ceilometer.conf [1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions [1231326 ] http://bugzilla.redhat.com/1231326 (NEW) Component: openstack-ceilometer Last change: 2015-06-12 Summary: kafka publisher requires kafka-python library [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: python-redis is not installed with packstack allinone [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 
2015-05-07 Summary: Wrong alarms order on 'severity' field [1265818 ] http://bugzilla.redhat.com/1265818 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-28 Summary: ceilometer polling agent does not start ### openstack-cinder (14 bugs) [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1268182 ] http://bugzilla.redhat.com/1268182 (NEW) Component: openstack-cinder Last change: 2015-10-02 Summary: cinder spontaneously sets instance root device to 'available' [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1229551 ] http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type" [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage ### openstack-foreman-installer (2 bugs) [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing 
dependencies for openstack-glance- common: python-glance ### openstack-heat (3 bugs) [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null ### openstack-horizon (2 bugs) [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable [1275656 ] http://bugzilla.redhat.com/1275656 (NEW) Component: openstack-horizon Last change: 2015-10-28 Summary: FontAwesome lib bad path ### openstack-ironic (1 bug) [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. (HTTP 409) ### openstack-ironic-discoverd (2 bugs) [1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-ironic-discoverd Last change: 2015-04-09 Summary: Introspection times out after more than an hour [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (7 bugs) [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: wrong log directories in /usr/share/keystone/wsgi- keystone.conf [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-10-26 Summary: keystone-all process reaches 100% CPU consumption [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-06-01 Summary: keystone: add token flush cronjob script to keystone package [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non- existent class ### openstack-manila (10 bugs) [1272957 ] http://bugzilla.redhat.com/1272957 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver: same volumes are re-used with vol mapped layout after restarting manila services [1277787 ] http://bugzilla.redhat.com/1277787 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: Glusterfs_driver: Export location for Glusterfs NFS- Ganesha is incorrect [1271138 ] http://bugzilla.redhat.com/1271138 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: puppet module for manila should include service type - shareV2 [1272960 ] 
http://bugzilla.redhat.com/1272960 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Glusterfs NFS-Ganesha share's export location should be uniform for both nfsv3 & nfsv4 protocols [1277792 ] http://bugzilla.redhat.com/1277792 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: glusterfs_driver: Access-deny for glusterfs driver should be dynamic [1272962 ] http://bugzilla.redhat.com/1272962 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Attempt to create share fails ungracefully when backend gluster volumes aren't exported [1272970 ] http://bugzilla.redhat.com/1272970 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_native: cannot connect via SSH using password authentication to multiple gluster clusters with different passwords [1272968 ] http://bugzilla.redhat.com/1272968 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs vol based layout: Deleting a share created from snapshot should also delete its backend gluster volume [1272954 ] http://bugzilla.redhat.com/1272954 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterFS_native_driver: snapshot delete doesn't delete snapshot entries that are in error state [1272958 ] http://bugzilla.redhat.com/1272958 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver - vol based layout: share size may be misleading ### openstack-neutron (10 bugs) [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1269610 ] http://bugzilla.redhat.com/1269610 (ASSIGNED) Component: openstack-neutron Last change: 2015-10-20 Summary: Overcloud deployment fails - openvswitch agent is not running and nova instances end up in error state [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". 
[1272289 ] http://bugzilla.redhat.com/1272289 (ASSIGNED) Component: openstack-neutron Last change: 2015-10-19 Summary: rdo-manager tempest smoke test failing on "floating ip pool not found' [1266381 ] http://bugzilla.redhat.com/1266381 (NEW) Component: openstack-neutron Last change: 2015-10-23 Summary: OpenStack Liberty QoS feature is not working on EL7 as is need MySQL-python-1.2.5 [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks [1271838 ] http://bugzilla.redhat.com/1271838 (ASSIGNED) Component: openstack-neutron Last change: 2015-10-20 Summary: Baremetal basic non-HA deployment fails due to failing module import by neutron [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https ### openstack-nova (18 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Installation of openstack-nova-compute fails on PowerKVM [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: v4-fixed-ip= not working with juno nova networking [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-05-06 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Kilo assigning ipv6 address, even though its disabled. 
[1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: horizon console uses http when horizon is set to use ssl [1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2015-05-22 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: novnc init script doesnt write to log [1271033 ] http://bugzilla.redhat.com/1271033 (NEW) Component: openstack-nova Last change: 2015-10-19 Summary: nova.conf.sample is out of date [1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Nova - db connection string present on compute nodes [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: nova object store allow get object after date exires [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: nova: fail to edit project quota with DataError from nova [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-10-17 Summary: Ensure translations are installed correctly and picked up at runtime [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Nova AVC messages [1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2015-09-11 Summary: logrotate should copytruncate to avoid oepnstack logging to deleted files ### openstack-packstack (56 bugs) [1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server] [1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any tenant [1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail [1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel [1271246 ] http://bugzilla.redhat.com/1271246 (NEW) Component: openstack-packstack Last change: 2015-10-13 Summary: packstack failed to start nova.api [1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest [1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-06-25 Summary: Error: service-update is not currently supported by the keystone sql driver [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last 
change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network- scripts/ifcfg-br-ex [1275803 ] http://bugzilla.redhat.com/1275803 (NEW) Component: openstack-packstack Last change: 2015-10-27 Summary: packstack --allinone fails on Fedora 22-3 during _keystone.pp [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack [1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails [1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 Summary: swift: Admin user does not have permissions to see containers created by glance service [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1257352 ] http://bugzilla.redhat.com/1257352 (NEW) Component: openstack-packstack Last change: 2015-09-22 Summary: nss.load missing from packstack, httpd unable to start. 
[1232455 ] http://bugzilla.redhat.com/1232455 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Errors install kilo on fedora21 [1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config [1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands' [1259354 ] http://bugzilla.redhat.com/1259354 (NEW) Component: openstack-packstack Last change: 2015-09-02 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error [1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail [1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can [1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start [1269535 ] http://bugzilla.redhat.com/1269535 (NEW) Component: openstack-packstack Last change: 2015-10-07 Summary: packstack script does not test to see if the rc files *were* created. [1266028 ] http://bugzilla.redhat.com/1266028 (NEW) Component: openstack-packstack Last change: 2015-10-08 Summary: Packstack should use pymysql database driver since Liberty [1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance [1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1266196 ] http://bugzilla.redhat.com/1266196 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Packstack Fails on prescript.pp with "undefined method 'unsafe_load_file' for Psych:Module" [1270770 ] http://bugzilla.redhat.com/1270770 (NEW) Component: openstack-packstack Last change: 2015-10-12 Summary: Packstack generated CONFIG_MANILA_SERVICE_IMAGE_LOCATION points to a dropbox link [1269255 ] http://bugzilla.redhat.com/1269255 (NEW) Component: openstack-packstack Last change: 2015-10-06 Summary: Failed to start RabbitMQ broker. [1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest [1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred at during setup Ironic via packstack. 
Invalid parameter rabbit_user [1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd] [1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-08-18 Summary: Packstack --allinone fails while starting HTTPD service [1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp [1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tm p/packstack/b77f37620d9f4794b6f38730442962b6/manifests/ xxx.xxx.xxx.xxx_swift.pp:90 [1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found [1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image [1276277 ] http://bugzilla.redhat.com/1276277 (NEW) Component: openstack-packstack Last change: 2015-10-31 Summary: packstack --allinone fails on CentOS 7 x86_64 1503-01 [1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting [1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend. [1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced [1265816 ] http://bugzilla.redhat.com/1265816 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Manila Puppet Module Expects Glance Endpoint to Be Available for Upload of Service Image [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service [1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service" [1264843 ] http://bugzilla.redhat.com/1264843 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list iptables-ipv6' returned 1: Error: No matching Packages to list [1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack,when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br- eno50:eno50,encounters an error?ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp ?. 
[1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on [1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters [1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/17 2.16.32.71_mariadb.pp:28 on node [1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in " failed " status when CONFIG_KEYSTONE_SERVICE_NAME=httpd ### openstack-puppet-modules (11 bugs) [1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start [1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering [1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm [1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API. [1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled [1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication [1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing ### openstack-selinux (13 bugs) [1261465 ] http://bugzilla.redhat.com/1261465 (NEW) Component: openstack-selinux Last change: 2015-09-09 Summary: OpenStack Keystone is not functional [1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied [1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail [1219406 ] http://bugzilla.redhat.com/1219406 (NEW) Component: openstack-selinux Last change: 2015-10-20 Summary: Glance over nfs fails due to selinux [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotF ound(config_file=paste_config_value) [1252675 
] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux [1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages [1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000 [1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal; [1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC [1268124 ] http://bugzilla.redhat.com/1268124 (NEW) Component: openstack-selinux Last change: 2015-10-29 Summary: Nova rootwrap-daemon requires a selinux exception [1170238 ] http://bugzilla.redhat.com/1170238 (NEW) Component: openstack-selinux Last change: 2014-12-18 Summary: Keepalived fail to start for HA router because of SELinux issues [1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux ### openstack-swift (3 bugs) [1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files [1274308 ] http://bugzilla.redhat.com/1274308 (NEW) Component: openstack-swift Last change: 2015-10-22 Summary: Consistently occurring swift related failures in RDO with a HA deployment [1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations ### openstack-tripleo (26 bugs) [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf [1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos [1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic in to TripleO [1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever [1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix [1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that 
have been deleted [1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-10-09 Summary: Node registration fails silently if instackenv.json is badly formatted [1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful [1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead [1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse [1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect [1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes [1277980 ] http://bugzilla.redhat.com/1277980 (NEW) Component: openstack-tripleo Last change: 2015-11-04 Summary: missing python-proliantutils [1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created [1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API [1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar [1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials [1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI [1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack- build-images [1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter [1277990 ] http://bugzilla.redhat.com/1277990 (NEW) Component: openstack-tripleo Last change: 2015-11-04 Summary: openstack-ironic-inspector-dnsmasq.service: failed to start during undercloud installation [1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6 [1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints [1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack" ### openstack-tripleo-heat-templates (5 bugs) [1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template 
[1266027 ] http://bugzilla.redhat.com/1266027 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-08 Summary: TripleO should use pymysql database driver since Liberty [1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1 [1271411 ] http://bugzilla.redhat.com/1271411 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-13 Summary: Unable to deploy internal api endpoint for keystone on a different network to admin api [1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template ### openstack-tripleo-image-elements (2 bugs) [1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux [1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist ### openstack-trove (1 bug) [1219069 ] http://bugzilla.redhat.com/1219069 (ASSIGNED) Component: openstack-trove Last change: 2015-08-27 Summary: trove-guestagent systemd unit file uses incorrect path for guest_info ### openstack-tuskar (3 bugs) [1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails [1229493 ] http://bugzilla.redhat.com/1229493 (ASSIGNED) Component: openstack-tuskar Last change: 2015-07-27 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates [1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state ### openstack-utils (4 bugs) [1211989 ] http://bugzilla.redhat.com/1211989 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status shows 'disabled on boot' for the mysqld service [1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: Can't enable OpenStack service after openstack-service disable [1270615 ] http://bugzilla.redhat.com/1270615 (NEW) Component: openstack-utils Last change: 2015-10-11 Summary: openstack status still checking mysql not mariadb [1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-service tries to restart neutron-ovs- cleanup.service ### openvswitch (1 bug) [1209003 ] http://bugzilla.redhat.com/1209003 (ASSIGNED) Component: openvswitch Last change: 2015-08-18 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity ### Package Review (3 bugs) [1272524 ] http://bugzilla.redhat.com/1272524 (NEW) Component: Package Review Last change: 2015-10-16 Summary: Review Request: Mistral - workflow Service for OpenStack cloud [1268372 ] http://bugzilla.redhat.com/1268372 (ASSIGNED) Component: Package Review Last change: 2015-10-29 Summary: Review Request: openstack-app-catalog-ui - openstack horizon plugin for the openstack app-catalog [1272513 ] http://bugzilla.redhat.com/1272513 (NEW) Component: Package Review Last change: 2015-10-16 Summary: Review Request: Murano - is an application catalog for OpenStack ### 
python-glanceclient (2 bugs) [1244291 ] http://bugzilla.redhat.com/1244291 (ASSIGNED) Component: python-glanceclient Last change: 2015-10-21 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py [1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0 ### python-keystonemiddleware (1 bug) [1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-10-26 Summary: Rebase python-keystonemiddleware to version 1.3 ### python-neutronclient (2 bugs) [1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility [1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six ### python-openstackclient (5 bugs) [1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2015-04-16 Summary: Usage is not described accurately for 99% of openstack baremetal [1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2015-04-28 Summary: `openstack ip floating delete` fails if we specify IP address as input [1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2015-06-13 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp [1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2015-06-04 Summary: Add --user to project list command to filter projects by user [1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2015-07-10 Summary: appdirs requirement ### python-oslo-config (1 bug) [1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2015-08-28 Summary: oslo_config != oslo.config ### rdo-manager (48 bugs) [1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment [1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable [1269657 ] http://bugzilla.redhat.com/1269657 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Support configuration of default subnet pools [1264526 ] http://bugzilla.redhat.com/1264526 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Deployment of Undercloud [1273574 ] http://bugzilla.redhat.com/1273574 (ASSIGNED) Component: rdo-manager Last change: 2015-10-22 Summary: rdo-manager liberty, delete node is failing [1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built [1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS. 
[1274060 ] http://bugzilla.redhat.com/1274060 (NEW) Component: rdo-manager Last change: 2015-10-23 Summary: [SELinux][RHEL7] openstack-ironic-inspector- dnsmasq.service fails to start with SELinux enabled [1269655 ] http://bugzilla.redhat.com/1269655 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Support deploying VPNaaS [1271336 ] http://bugzilla.redhat.com/1271336 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: [RFE] Enable configuration of OVS ARP Responder [1269890 ] http://bugzilla.redhat.com/1269890 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Support IPv6 [1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles [1270818 ] http://bugzilla.redhat.com/1270818 (NEW) Component: rdo-manager Last change: 2015-10-20 Summary: Two ironic-inspector processes are running on the undercloud, breaking the introspection [1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP) [1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment [1270370 ] http://bugzilla.redhat.com/1270370 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: [RDO-Manager] bulk introspection moving the nodes from available to manageable too quickly [getting: NodeLocked:] [1269002 ] http://bugzilla.redhat.com/1269002 (ASSIGNED) Component: rdo-manager Last change: 2015-10-14 Summary: instack-undercloud: overcloud HA deployment fails - the rabbitmq doesn't run on the controllers. [1271232 ] http://bugzilla.redhat.com/1271232 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: tempest_lib.exceptions.Conflict: An object with that identifier already exists [1270805 ] http://bugzilla.redhat.com/1270805 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: Glance client returning 'Expected endpoint' [1271335 ] http://bugzilla.redhat.com/1271335 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: [RFE] Support explicit configuration of L2 population [1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start [1271317 ] http://bugzilla.redhat.com/1271317 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: instack-virt-setup fails: error Running install- packages install [1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone [1272376 ] http://bugzilla.redhat.com/1272376 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: Duplicate nova hypervisors after rebooting compute nodes [1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection [1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. 
There are not enough hosts available., Code: 500" [1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: rfe: support Keystone HTTPD [1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Read bit set for others for Openstack services directories in /etc [1273121 ] http://bugzilla.redhat.com/1273121 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: openstack help returns errors [1270910 ] http://bugzilla.redhat.com/1270910 (ASSIGNED) Component: rdo-manager Last change: 2015-10-15 Summary: IP address from external subnet gets assigned to br-ex when using default single-nic-vlans templates [1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree" [1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon [1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack [1272167 ] http://bugzilla.redhat.com/1272167 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: [RFE] Support enabling the port security extension [1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment. [1269622 ] http://bugzilla.redhat.com/1269622 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: [RFE] support override of API and RPC worker counts [1271289 ] http://bugzilla.redhat.com/1271289 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: overcloud-novacompute stuck in spawning state [1269894 ] http://bugzilla.redhat.com/1269894 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Add creation of demo tenant, network and installation of demo images [1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure [1269661 ] http://bugzilla.redhat.com/1269661 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Supporting SR-IOV enabled deployments [1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url" [1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images [1273541 ] http://bugzilla.redhat.com/1273541 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: RDO-Manager needs epel.repo enabled (otherwise undercloud deployment fails.) 
[1271726 ] http://bugzilla.redhat.com/1271726 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: 1 of the overcloud VMs (nova) is stack in spawning state [1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device- mapper* [1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci [1273680 ] http://bugzilla.redhat.com/1273680 (ASSIGNED) Component: rdo-manager Last change: 2015-10-21 Summary: HA overcloud with network isolation deployment fails [1276097 ] http://bugzilla.redhat.com/1276097 (NEW) Component: rdo-manager Last change: 2015-10-31 Summary: dnsmasq-dhcp: DHCPDISCOVER no address available ### rdo-manager-cli (6 bugs) [1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step [1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the ouptut of openstack management plan show --long command is not readable [1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value [1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL) [1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command [1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling ### rdopkg (1 bug) [1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download ### RFEs (3 bugs) [1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot [1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool [1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: RFEs Last change: 2015-04-30 Summary: IPMI driver for Ironic should support RAID for operating system/root parition ### tempest (1 bug) [1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo- manager ## Fixed bugs This is a list of "fixed" bugs by component. A "fixed" bug is fixed state MODIFIED, POST, ON_QA and has been fixed. You can help out by testing the fix to make sure it works as intended. 
(189 bugs) ### diskimage-builder (1 bug) [1228761 ] http://bugzilla.redhat.com/1228761 (MODIFIED) Component: diskimage-builder Last change: 2015-09-23 Summary: DIB_YUM_REPO_CONF points to two files and that breaks imagebuilding ### distribution (6 bugs) [1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack- neutron-*aas [1265690 ] http://bugzilla.redhat.com/1265690 (ON_QA) Component: distribution Last change: 2015-09-28 Summary: Update python-networkx to 1.10 [1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: update el6 icehouse kombu packages for improved performance [1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api [1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr [1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO ### instack-undercloud (2 bugs) [1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six" [1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined ### openstack-ceilometer (2 bugs) [1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency [1271002 ] http://bugzilla.redhat.com/1271002 (MODIFIED) Component: openstack-ceilometer Last change: 2015-10-23 Summary: Ceilometer dbsync failing during HA deployment ### openstack-cinder (5 bugs) [1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0] [1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder [1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO) [994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo] [1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume) ### openstack-glance (4 bugs) [1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests [1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue [1268146 ] http://bugzilla.redhat.com/1268146 (ON_QA) Component: openstack-glance Last change: 2015-10-02 
Summary: openstack-glance-registry will not start: missing systemd dependency [1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files ### openstack-heat (3 bugs) [1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: missing dependency in Heat delorean build [1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack- heat [1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE ### openstack-horizon (1 bug) [1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing ### openstack-ironic-discoverd (1 bug) [1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-keystone (1 bug) [1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-03-19 Summary: file templated catalogs do not work in protocol v3 ### openstack-neutron (14 bugs) [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2015-10-26 Summary: neutron should not specify signing_dir in neutron- dist.conf [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials [1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. 
neutron-db-manage trying to import systemd [1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev [1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack- neutron-openvswitch installed [1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service [1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id' [1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini [1270325 ] http://bugzilla.redhat.com/1270325 (MODIFIED) Component: openstack-neutron Last change: 2015-10-19 Summary: neutron-ovs-cleanup fails to start with bad path to ovs plugin configuration ### openstack-nova (5 bugs) [1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2014-06-03 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail [1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2015-05-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all [1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2015-05-05 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options [1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2015-04-14 Summary: openstack-nova-compute fails to start because python- psutil is missing after installing with packstack [958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence. ### openstack-packstack (60 bugs) [1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db. 
[1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error [1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/ manifests/192.168.122.82_api_nova.pp:41:3 [976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-10-07 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local) [1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed [1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled [964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user [1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Change packstack to use openstack-puppet-modules [1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: openstack-packstack Last change: 2015-07-29 Summary: Cannot start nova-network on juno - Centos7 [1269158 ] http://bugzilla.redhat.com/1269158 (POST) Component: openstack-packstack Last change: 2015-10-19 Summary: Sahara configuration should be affected by heat availability (broken by default right now) [1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher [1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email [1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils [1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes [1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error [958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails [1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. 
[1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack [1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in [1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning [979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules [1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack icehouse doesn't install anything because of repo [1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non- localhost connections to 'guest' user [1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5 [957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios [995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: support setting up apache to serve keystone requests [1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1 [990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rdo release RPM not installed on all fedora hosts [1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required [1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"? 
[1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre [1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed [1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5 [1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl [1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp [1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value [1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack requires 2 runs to install ceilometer [1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available [1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface" [1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive [1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2014-04-23 Summary: packstack configures br-ex to use gateway ip [1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/N ova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[ net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required [1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added [1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond [1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance [1265661 ] http://bugzilla.redhat.com/1265661 (POST) Component: openstack-packstack Last change: 2015-09-23 Summary: Packstack does not install Sahara services (RDO Liberty) [1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-10-23 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7 [974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) 
Component: openstack-packstack Last change: 2015-06-04 Summary: please give greater control over use of EPEL [1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl [1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default [1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent [1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface [1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL [1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32 [991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted [1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7) [1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels [1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution ### openstack-puppet-modules (19 bugs) [1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: cinder modules require glance installed [1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables- services [1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure" [1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support [1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: explicit check for pymongo is incorrect [1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors [1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation [1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD configuration breaks GSSAPI [1270957 ] http://bugzilla.redhat.com/1270957 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-10-13 Summary: Undercloud install fails on Error: Could not find class ::ironic::inspector 
for instack on node instack [1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by packstack for demo tenant is wrongly marked as external [1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: Could not find command 'restorecon' [1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator [1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno [1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6 [1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: netns.py syntax error [1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance [1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance [1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig /network-scripts/ifcfg-br-{int,tun} [1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp does not ensure iptables-services package installation ### openstack-sahara (1 bug) [1268235 ] http://bugzilla.redhat.com/1268235 (MODIFIED) Component: openstack-sahara Last change: 2015-10-02 Summary: rootwrap filter not included in Sahara RPM ### openstack-selinux (12 bugs) [1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7) [1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2015-06-23 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicO ps fails to launch instance w/ selinux enforcing [1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications [1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors [1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp [1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances, [1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service [1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?" 
[1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux policies set for neutron-dhcp-agent [1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later [1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access [1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors ### openstack-swift (1 bug) [997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages ### openstack-tripleo-heat-templates (2 bugs) [1235508 ] http://bugzilla.redhat.com/1235508 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-09-29 Summary: Package update does not take puppet managed packages into account [1272572 ] http://bugzilla.redhat.com/1272572 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-10-28 Summary: Error: Unable to retrieve volume limit information when accessing System Defaults in Horizon ### openstack-trove (1 bug) [1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies ### openstack-tuskar (1 bug) [1222718 ] http://bugzilla.redhat.com/1222718 (ON_QA) Component: openstack-tuskar Last change: 2015-07-06 Summary: MySQL Column is Too Small for Heat Template ### openstack-tuskar-ui (3 bugs) [1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails [1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py [1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path ### openstack-utils (2 bugs) [1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: update openstack-status for rdo-manager [1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status as admin falsely shows zero instances ### Package Review (1 bug) [1243550 ] http://bugzilla.redhat.com/1243550 (ON_QA) Component: Package Review Last change: 2015-10-09 Summary: Review Request: openstack-aodh - OpenStack Telemetry Alarming ### python-cinderclient (1 bug) [1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run ### python-django-horizon (3 bugs) [1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack- dashboard/static/dashboard/ [1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: 
python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in openstack-dashboard.conf to show CSS content [1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA) Component: python-django-horizon Last change: 2015-06-24 Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one ### python-glanceclient (2 bugs) [1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock [1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch ### python-heatclient (3 bugs) [1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr [1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO [1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed ### python-keystoneclient (3 bugs) [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs [1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion [971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO] ### python-neutronclient (3 bugs) [1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request [1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info() [1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient ### python-novaclient (1 bug) [947535 ] http://bugzilla.redhat.com/947535 (MODIFIED) Component: python-novaclient Last change: 2015-06-04 Summary: nova commands fail with gnomekeyring IOError ### python-openstackclient (1 bug) [1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2015-03-02 Summary: Rebase python-openstackclient to version 1.0.0 ### python-oslo-config (1 bug) [1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2015-06-04 Summary: oslo.config >=1.2.1 is required for trove-manage ### python-pecan (1 bug) [1265365 ] http://bugzilla.redhat.com/1265365 (MODIFIED) Component: python-pecan Last change: 2015-10-05 Summary: Neutron missing pecan dependency ### python-swiftclient (1 bug) [1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation ### python-tuskarclient (2 bugs) [1209395 ] http://bugzilla.redhat.com/1209395 (POST) Component: python-tuskarclient Last change: 2015-06-04 Summary: `tuskar help` is missing a description next to plan- templates [1209431 ] http://bugzilla.redhat.com/1209431 (POST) 
Component: python-tuskarclient Last change: 2015-06-18 Summary: creating a tuskar plan with the exact name gives the user a traceback ### rdo-manager (9 bugs) [1212351 ] http://bugzilla.redhat.com/1212351 (POST) Component: rdo-manager Last change: 2015-06-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command [1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-15 Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails [1270033 ] http://bugzilla.redhat.com/1270033 (POST) Component: rdo-manager Last change: 2015-10-14 Summary: [RDO-Manager] Node inspection fails when changing the default 'inspection_iprange' value in undecloud.conf. [1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED) Component: rdo-manager Last change: 2015-05-25 Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable [1271433 ] http://bugzilla.redhat.com/1271433 (MODIFIED) Component: rdo-manager Last change: 2015-10-20 Summary: Horizon fails to load [1272180 ] http://bugzilla.redhat.com/1272180 (MODIFIED) Component: rdo-manager Last change: 2015-10-19 Summary: Horizon doesn't load when deploying without pacemaker [1251267 ] http://bugzilla.redhat.com/1251267 (POST) Component: rdo-manager Last change: 2015-08-12 Summary: Overcloud deployment fails for unspecified reason [1268990 ] http://bugzilla.redhat.com/1268990 (POST) Component: rdo-manager Last change: 2015-10-07 Summary: missing from docs Build images fails without : export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo" [1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED) Component: rdo-manager Last change: 2015-11-04 Summary: rdo-manager: fail to discover nodes with "instack- ironic-deployment --discover-nodes": ERROR: Data pre- processing failed ### rdo-manager-cli (9 bugs) [1273197 ] http://bugzilla.redhat.com/1273197 (POST) Component: rdo-manager-cli Last change: 2015-10-20 Summary: VXLAN should be default neutron network type [1233429 ] http://bugzilla.redhat.com/1233429 (POST) Component: rdo-manager-cli Last change: 2015-06-20 Summary: Lack of consistency in specifying plan argument for openstack overcloud commands [1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED) Component: rdo-manager-cli Last change: 2015-08-03 Summary: Node show of unified CLI has bad formatting [1229912 ] http://bugzilla.redhat.com/1229912 (POST) Component: rdo-manager-cli Last change: 2015-06-10 Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once) . [1219053 ] http://bugzilla.redhat.com/1219053 (POST) Component: rdo-manager-cli Last change: 2015-06-18 Summary: "list" command doesn't display nodes in some cases [1211190 ] http://bugzilla.redhat.com/1211190 (POST) Component: rdo-manager-cli Last change: 2015-06-04 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI [1230265 ] http://bugzilla.redhat.com/1230265 (POST) Component: rdo-manager-cli Last change: 2015-06-26 Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated. 
[1232838 ] http://bugzilla.redhat.com/1232838 (POST) Component: rdo-manager-cli Last change: 2015-09-04 Summary: OSC plugin isn't saving plan configuration values [1212367 ] http://bugzilla.redhat.com/1212367 (POST) Component: rdo-manager-cli Last change: 2015-06-16 Summary: Ensure proper nodes states after enroll and before deployment ### rdopkg (1 bug) [1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA) Component: rdopkg Last change: 2015-08-06 Summary: python-manilaclient is missing from kilo RDO repository Thanks, Chandan Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Thu Nov 5 02:38:05 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Thu, 5 Nov 2015 04:38:05 +0200 Subject: [Rdo-list] [rdo-manager] where is horizon? Message-ID: i have .. finally .. installed rdo-manager / undercloud on baremetal on centos my undercloud.conf is at the bottom of this message. my question is what happened to horizon? why wasnt it installed with the rest of the undercloud? (i wont talk about how i had a semi heart attack when i found mysql disabled in openstack-status: false alert as it is now called mariadb = re bugzilla) i worked around this by installing openstack-dashboard and enabling the ip in /etc/openstack-dashboard/local_settings but it did throw me for a bit. [DEFAULT] image_path = ~/images local_ip = 10.200.3.2/24 undercloud_public_vip = 10.200.3.3 undercloud_admin_vip = 10.200.3.4 local_interface = eno1 masquerade_network = 10.200.3.0/24 dhcp_start = 10.200.3.10 dhcp_end = 10.200.3.99 network_cidr = 10.200.3.0/24 network_gateway = 10.200.3.1 discovery_interface = br-ctlplane discovery_iprange = 10.200.3.100,10.200.3.199 [auth] undercloud_admin_token = password undercloud_admin_password = password -- *805010942448935* * * *GR750055912MA* *Link to me on LinkedIn * From ibravo at ltgfederal.com Thu Nov 5 03:21:01 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Wed, 4 Nov 2015 22:21:01 -0500 Subject: [Rdo-list] [rdo-manager] where is horizon? In-Reply-To: References: Message-ID: Mohammed, The undercloud used to install with Tuskar UI. It no longer does, as folks are now working on another GUI based on the REST APIs. So, for the time being, there is no Horizon on the undercloud. Now, when you deploy the overcloud, that is a different history. After you do openstack deploy, and it goes through the installation, if you have a successful install, it will finalize with a message and an overcloudrc file where you can find the location of Horizon and the credentials for the overcloud environment. IB __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com > On Nov 4, 2015, at 9:38 PM, Mohammed Arafa wrote: > > i have .. finally .. installed rdo-manager / undercloud on baremetal on centos > > my undercloud.conf is at the bottom of this message. > > my question is what happened to horizon? why wasnt it installed with > the rest of the undercloud? (i wont talk about how i had a semi heart > attack when i found mysql disabled in openstack-status: false alert as > it is now called mariadb = re bugzilla) > > i worked around this by installing openstack-dashboard and enabling > the ip in /etc/openstack-dashboard/local_settings but it did throw me > for a bit. 
> > [DEFAULT] > image_path = ~/images > local_ip = 10.200.3.2/24 > undercloud_public_vip = 10.200.3.3 > undercloud_admin_vip = 10.200.3.4 > local_interface = eno1 > masquerade_network = 10.200.3.0/24 > dhcp_start = 10.200.3.10 > dhcp_end = 10.200.3.99 > network_cidr = 10.200.3.0/24 > network_gateway = 10.200.3.1 > discovery_interface = br-ctlplane > discovery_iprange = 10.200.3.100,10.200.3.199 > [auth] > undercloud_admin_token = password > undercloud_admin_password = password > > > -- > > > > > *805010942448935* > * * > > *GR750055912MA* > > *Link to me on LinkedIn * > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From celik.esra at tubitak.gov.tr Thu Nov 5 07:09:41 2015 From: celik.esra at tubitak.gov.tr (Esra Celik) Date: Thu, 5 Nov 2015 09:09:41 +0200 (EET) Subject: [Rdo-list] [rdo-manager] where is horizon? In-Reply-To: References: Message-ID: <371711984.3915447.1446707381660.JavaMail.zimbra@tubitak.gov.tr> Hi Ignacio, As Tuskar is not available anymore, will the new GUI cover the installation of Overcloud only? or will we able to install the Undercloud over the GUI as well? And could you send the git repository link of the RESTful GUI project please? Thanks in advance Esra ?EL?K T?B?TAK B?LGEM www.bilgem.tubitak.gov.tr ----- Orijinal Mesaj ----- > Kimden: "Ignacio Bravo" > Kime: "Mohammed Arafa" > Kk: rdo-list at redhat.com > G?nderilenler: 5 Kas?m Per?embe 2015 6:21:01 > Konu: Re: [Rdo-list] [rdo-manager] where is horizon? > Mohammed, > The undercloud used to install with Tuskar UI. It no longer does, as folks > are now working on another GUI based on the REST APIs. So, for the time > being, there is no Horizon on the undercloud. > Now, when you deploy the overcloud, that is a different history. After you do > openstack deploy, and it goes through the installation, if you have a > successful install, it will finalize with a message and an overcloudrc file > where you can find the location of Horizon and the credentials for the > overcloud environment. > IB > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > > On Nov 4, 2015, at 9:38 PM, Mohammed Arafa < mohammed.arafa at gmail.com > > > wrote: > > > i have .. finally .. installed rdo-manager / undercloud on baremetal on > > centos > > > my undercloud.conf is at the bottom of this message. > > > my question is what happened to horizon? why wasnt it installed with > > > the rest of the undercloud? (i wont talk about how i had a semi heart > > > attack when i found mysql disabled in openstack-status: false alert as > > > it is now called mariadb = re bugzilla) > > > i worked around this by installing openstack-dashboard and enabling > > > the ip in /etc/openstack-dashboard/local_settings but it did throw me > > > for a bit. 
> > > [DEFAULT] > > > image_path = ~/images > > > local_ip = 10.200.3.2/24 > > > undercloud_public_vip = 10.200.3.3 > > > undercloud_admin_vip = 10.200.3.4 > > > local_interface = eno1 > > > masquerade_network = 10.200.3.0/24 > > > dhcp_start = 10.200.3.10 > > > dhcp_end = 10.200.3.99 > > > network_cidr = 10.200.3.0/24 > > > network_gateway = 10.200.3.1 > > > discovery_interface = br-ctlplane > > > discovery_iprange = 10.200.3.100,10.200.3.199 > > > [auth] > > > undercloud_admin_token = password > > > undercloud_admin_password = password > > > -- > > > < > > https://candidate.peoplecert.org/ReportsLink.aspx?argType=1&id=13D642E995903C076FA394F816CC136539DBA6A32D7305539E4219F5A650358C02CA2ED9F1F26319&AspxAutoDetectCookieSupport=1 > > > > > > *805010942448935*< > > https://www.redhat.com/wapps/training/certification/verify.html?certNumber=805010942448935&verify=Verify > > > > > > * * > > > *GR750055912MA*< > > https://candidate.peoplecert.org/ReportsLink.aspx?argType=1&id=13D642E995903C076FA394F816CC136539DBA6A32D7305539E4219F5A650358C02CA2ED9F1F26319&AspxAutoDetectCookieSupport=1 > > > > > > *Link to me on LinkedIn < http://www.linkedin.com/in/mohammedarafa >* > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From amarkelov at yandex.ru Thu Nov 5 08:02:54 2015 From: amarkelov at yandex.ru (Markelov Andrey) Date: Thu, 05 Nov 2015 11:02:54 +0300 Subject: [Rdo-list] Neutron-openswitch-agent configuration workaround in RDO Juno/Kilo and now in Liberty Message-ID: <681331446710574@web8h.yandex.ru> Hi guys, If we will see at Juno and Kilo installation guide for CentOS at docs.openstack.org we can see documented workaround about /etc/neutron/plugin.ini (symbolic link to ml2_conf.ini) . We need to edit start script for openswitch-agent as documented. With Liberty that workaround does not works because /usr/lib/systemd/system/neutron-openvswitch-agent.service was changed. ?Any? plugin.ini now not in ?config-file options for Openswitch-agent. And it not documented in Liberty install guide. As solution you can rename /etc/neutron.plugins/ml2_conf.ini to /etc/neutron.plugins/openswitch_agent.ini and it will work. Time by time I lead OpenStack Training cources and I want to explain config files and procidures in ?right way?. My questions are What the idea behind deleting plugin.ini from ?config-file options? Is the ml2_conf.ini obsolete? Is the plugin.ini obsolete? -- Best regards, Andrey Markelov From ihrachys at redhat.com Thu Nov 5 10:56:03 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 5 Nov 2015 11:56:03 +0100 Subject: [Rdo-list] Neutron-openswitch-agent configuration workaround in RDO Juno/Kilo and now in Liberty In-Reply-To: <681331446710574@web8h.yandex.ru> References: <681331446710574@web8h.yandex.ru> Message-ID: <52216CC3-80BD-413A-AE4E-D680C806BF7B@redhat.com> Markelov Andrey wrote: > Hi guys, > > If we will see at Juno and Kilo installation guide for CentOS at > docs.openstack.org we can see documented workaround about > /etc/neutron/plugin.ini (symbolic link to ml2_conf.ini) . 
We need to edit
> start script for openswitch-agent as documented.

Please report the bug against upstream docs, OVS agent should NOT read from
plugin.ini or ml2_conf.ini. What it reads is merely openvswitch_agent.ini.

I thought it was made clear enough once we renamed ovs_neutron_plugin.ini
into openvswitch_agent.ini in Liberty, but apparently upstream docs team
still assumes plugin.ini is meant to be read by the agent.

The fact that devstack still (wrongfully) configures the agent with
ml2_conf.ini does not help to clarify the intended setup either:

https://github.com/openstack-dev/devstack/blob/master/lib/neutron_plugins/openvswitch_agent#L52

Can you btw provide a link to Liberty docs where it?s documented?

>
> With Liberty that workaround does not works because
> /usr/lib/systemd/system/neutron-openvswitch-agent.service was changed.
> ?Any? plugin.ini now not in ?config-file options for Openswitch-agent.
>
> And it not documented in Liberty install guide.
>
> As solution you can rename /etc/neutron.plugins/ml2_conf.ini to
> /etc/neutron.plugins/openswitch_agent.ini and it will work.

The proper solution is to stop configuring the agent using either
ml2_conf.ini or plugin.ini, and instead put all agent configuration into
openvswitch_agent.ini.

> Time by time I lead OpenStack Training cources and I want to explain
> config files and procidures in ?right way?.
>
> My questions are
> What the idea behind deleting plugin.ini from ?config-file options?

Because so called core plugins (ml2, ovn, ?) are neutron-server only
thingies. They have nothing to do on agent side.

> Is the ml2_conf.ini obsolete?

It is not obsolete. It is still used by neutron-server (thru plugin.ini
symlink) in case ml2 is the core plugin for the setup.

> Is the plugin.ini obsolete?
>
No, it?s not obsolete. It is still used by neutron-server to get access to
core plugin specific configuration (f.e. [ml2] section from ml2_conf.ini).

I hope it clarifies the matter.

Ihar

From amarkelov at yandex.ru Thu Nov 5 11:37:09 2015
From: amarkelov at yandex.ru (Markelov Andrey)
Date: Thu, 05 Nov 2015 14:37:09 +0300
Subject: [Rdo-list] Neutron-openswitch-agent configuration workaround in RDO Juno/Kilo and now in Liberty
In-Reply-To: <52216CC3-80BD-413A-AE4E-D680C806BF7B@redhat.com>
References: <681331446710574@web8h.yandex.ru> <52216CC3-80BD-413A-AE4E-D680C806BF7B@redhat.com>
Message-ID: <1175611446723429@web8h.yandex.ru>

Thanks for clarification!

No, as I wrote before it is not documented in Liberty,

but Kilo & Juno:

"Due to a packaging bug, the Open vSwitch agent initialization script
explicitly looks for the Open vSwitch plug-in configuration file rather
than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in
configuration file. Run the following commands to resolve this issue:"

http://docs.openstack.org/kilo/install-guide/install/yum/content/neutron-compute-node.html

Andrey

05.11.2015, 13:56, "Ihar Hrachyshka" :
> Markelov Andrey wrote:
>
>> ?Hi guys,
>>
>> ?If we will see at Juno and Kilo installation guide for CentOS at
>> ?docs.openstack.org we can see documented workaround about
>> ?/etc/neutron/plugin.ini (symbolic link to ml2_conf.ini) . We need to edit
>> ?start script for openswitch-agent as documented.
>
> Please report the bug against upstream docs, OVS agent should NOT read from
> plugin.ini or ml2_conf.ini. What it reads is merely openvswitch_agent.ini.
> I thought it was made clear enough once we renamed ovs_neutron_plugin.ini > into openvswitch_agent.ini in Liberty, but apparently upstream docs team > still assumes plugin.ini is meant to be read by the agent. > > The fact that devstack still (wrongfully) configures the agent with > ml2_conf.ini does not help to clarify the intended setup either: > > https://github.com/openstack-dev/devstack/blob/master/lib/neutron_plugins/openvswitch_agent#L52 > > Can you btw provide a link to Liberty docs where it?s documented? > >> ?With Liberty that workaround does not works because >> ?/usr/lib/systemd/system/neutron-openvswitch-agent.service was changed. >> ??Any? plugin.ini now not in ?config-file options for Openswitch-agent. >> >> ?And it not documented in Liberty install guide. >> >> ?As solution you can rename /etc/neutron.plugins/ml2_conf.ini to >> ?/etc/neutron.plugins/openswitch_agent.ini and it will work. > > The proper solution is to stop configuring the agent using either > ml2_conf.ini or plugin.ini, and instead put all agent configuration into > openvswitch_agent.ini. > >> ?Time by time I lead OpenStack Training cources and I want to explain >> ?config files and procidures in ?right way?. >> >> ?My questions are >> ?What the idea behind deleting plugin.ini from ?config-file options? > > Because so called core plugins (ml2, ovn, ?) are neutron-server only > thingies. They have nothing to do on agent side. > >> ?Is the ml2_conf.ini obsolete? > > It is not obsolete. It is still used by neutron-server (thru plugin.ini > symlink) in case ml2 is the core plugin for the setup. > >> ?Is the plugin.ini obsolete? > > No, it?s not obsolete. It is still used by neutron-server to get access to > core plugin specific configuration (f.e. [ml2] section from ml2_conf.ini). > > I hope it clarifies the matter. > > Ihar -- Best regards, Andrey Markelov From ihrachys at redhat.com Thu Nov 5 12:47:46 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 5 Nov 2015 13:47:46 +0100 Subject: [Rdo-list] Neutron-openswitch-agent configuration workaround in RDO Juno/Kilo and now in Liberty In-Reply-To: <1175611446723429@web8h.yandex.ru> References: <681331446710574@web8h.yandex.ru> <52216CC3-80BD-413A-AE4E-D680C806BF7B@redhat.com> <1175611446723429@web8h.yandex.ru> Message-ID: <11196315-3487-4564-A277-ABA8391FB587@redhat.com> Markelov Andrey wrote: > Thanks for clarification! > > No, as I wrote before it is not documented in Liberty, > > but Kilo & Juno: > > "Due to a packaging bug, the Open vSwitch agent initialization script > explicitly looks for the Open vSwitch plug-in configuration file rather > than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in > configuration file. Run the following commands to resolve this issue:" > > http://docs.openstack.org/kilo/install-guide/install/yum/content/neutron-compute-node.html Yeah. So long story short, the text you quoted was always wrong, and they finally fixed it by removing the notion of ?a packaging bug? which was never a packaging bug in RDO but a documentation bug in upstream docs. I am glad to hear it?s not there anymore. That said, I would make sure that they actually have proper instructions on how to configure the agent (using openvswitch_agent.ini). 
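As a concrete illustration of that split on a Liberty RDO node, a minimal sketch (the paths, section names and sample values below are assumptions for the example, not quoted from the packages or the guide):

ls -l /etc/neutron/plugin.ini   # symlink kept for neutron-server, normally pointing at plugins/ml2/ml2_conf.ini
crudini --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip 203.0.113.10
crudini --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan
systemctl restart neutron-openvswitch-agent

crudini is only used here as a convenient way to set ini values from the shell; editing openvswitch_agent.ini by hand does the same job. The point is that agent options go into openvswitch_agent.ini, while ml2_conf.ini (reached through plugin.ini) stays a neutron-server concern.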
[For linuxbridge, they are correct: http://docs.openstack.org/liberty/install-guide-rdo/neutron-compute-install-option1.html] Ihar From ibravo at ltgfederal.com Thu Nov 5 14:41:25 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Thu, 5 Nov 2015 09:41:25 -0500 Subject: [Rdo-list] [rdo-manager] where is horizon? In-Reply-To: <371711984.3915447.1446707381660.JavaMail.zimbra@tubitak.gov.tr> References: <371711984.3915447.1446707381660.JavaMail.zimbra@tubitak.gov.tr> Message-ID: <3500ECF2-5695-4F77-A6EB-6F502E66819E@ltgfederal.com> Check these mails from the mailing list for clues. https://www.redhat.com/archives/rdo-list/2015-October/msg00287.html __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com > On Nov 5, 2015, at 2:09 AM, Esra Celik wrote: > > > > Hi Ignacio, > > As Tuskar is not available anymore, will the new GUI cover the installation of Overcloud only? or will we able to install the Undercloud over the GUI as well? > And could you send the git repository link of the RESTful GUI project please? > > Thanks in advance > > Esra ?EL?K > T?B?TAK B?LGEM > www.bilgem.tubitak.gov.tr > > > Kimden: "Ignacio Bravo" > Kime: "Mohammed Arafa" > Kk: rdo-list at redhat.com > G?nderilenler: 5 Kas?m Per?embe 2015 6:21:01 > Konu: Re: [Rdo-list] [rdo-manager] where is horizon? > > Mohammed, > > The undercloud used to install with Tuskar UI. It no longer does, as folks are now working on another GUI based on the REST APIs. So, for the time being, there is no Horizon on the undercloud. > > Now, when you deploy the overcloud, that is a different history. After you do openstack deploy, and it goes through the installation, if you have a successful install, it will finalize with a message and an overcloudrc file where you can find the location of Horizon and the credentials for the overcloud environment. > > IB > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > > > On Nov 4, 2015, at 9:38 PM, Mohammed Arafa > wrote: > > i have .. finally .. installed rdo-manager / undercloud on baremetal on centos > > my undercloud.conf is at the bottom of this message. > > my question is what happened to horizon? why wasnt it installed with > the rest of the undercloud? (i wont talk about how i had a semi heart > attack when i found mysql disabled in openstack-status: false alert as > it is now called mariadb = re bugzilla) > > i worked around this by installing openstack-dashboard and enabling > the ip in /etc/openstack-dashboard/local_settings but it did throw me > for a bit. 
> > [DEFAULT] > image_path = ~/images > local_ip = 10.200.3.2/24 > undercloud_public_vip = 10.200.3.3 > undercloud_admin_vip = 10.200.3.4 > local_interface = eno1 > masquerade_network = 10.200.3.0/24 > dhcp_start = 10.200.3.10 > dhcp_end = 10.200.3.99 > network_cidr = 10.200.3.0/24 > network_gateway = 10.200.3.1 > discovery_interface = br-ctlplane > discovery_iprange = 10.200.3.100,10.200.3.199 > [auth] > undercloud_admin_token = password > undercloud_admin_password = password > > > -- > > > > > > *805010942448935*> > * * > > *GR750055912MA*> > > *Link to me on LinkedIn >* > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ibravo at ltgfederal.com Thu Nov 5 15:04:15 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Thu, 5 Nov 2015 10:04:15 -0500 Subject: [Rdo-list] [RDO-Manager] Working with the overcloud Message-ID: After jumping some hoops and with the help of the usual suspects on IRC, I was able to deploy an overcloud with HA, Network isolation and Ceph. Great! Now I want to focus on what?s next, or how to manage this environment going forward. Let me give you a couple of examples: After the installation of the overcloud, I was hit with the cinder bug described here: https://bugzilla.redhat.com/show_bug.cgi?id=1272572 The issue is that the cinder.conf file needs to replace ?localhost? with the ip of the public keystone. What I did, based on the bugzilla, was to log in to each controller node and then update the value inside the cinder.conf file. Is this one off the proper way to patch and keep the environment updated? I mean, one week from now we will find that a particular RPM needs to be updated, how do you handle this? I thought that the proper way was to recreate the TripleO image and redeploy. Or another example is Ceph. Currently Tripleo installs version 0.8 and I want to install version 9 Inferno. What is the correct path to achieve this? Additionally, let?s say that I want to install, say: CloudKitty (choose your alternate, non mainstream openstack project here) Do we recreate the images and redeploy, or do a puppet run after they have been installed with a tool like Foreman/Katello? Regards, IB __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From amarkelov at yandex.ru Thu Nov 5 14:47:24 2015 From: amarkelov at yandex.ru (Markelov Andrey) Date: Thu, 05 Nov 2015 17:47:24 +0300 Subject: [Rdo-list] Neutron-openswitch-agent configuration workaround in RDO Juno/Kilo and now in Liberty In-Reply-To: <11196315-3487-4564-A277-ABA8391FB587@redhat.com> References: <681331446710574@web8h.yandex.ru> <52216CC3-80BD-413A-AE4E-D680C806BF7B@redhat.com> <1175611446723429@web8h.yandex.ru> <11196315-3487-4564-A277-ABA8391FB587@redhat.com> Message-ID: <443391446734844@web21h.yandex.ru> Ok. Thanks! And if we are talking about plugin.ini/ml2_conf.ini according to network node in Kilo doc - http://docs.openstack.org/kilo/install-guide/install/yum/content/neutron-network-node.html For whom plugin.ini/ml2_conf.ini is needed? 
Neutron-server is started only on control node. So it seems the only who used plugin.ini/ml2_conf.ini is neutron-openvswitch-agent (but in wrong manner after "bug fixing")? Andrey 05.11.2015, 15:47, "Ihar Hrachyshka" : > Markelov Andrey wrote: > >> ?Thanks for clarification! >> >> ?No, as I wrote before it is not documented in Liberty, >> >> ?but Kilo & Juno: >> >> ?"Due to a packaging bug, the Open vSwitch agent initialization script >> ?explicitly looks for the Open vSwitch plug-in configuration file rather >> ?than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in >> ?configuration file. Run the following commands to resolve this issue:" >> >> ?http://docs.openstack.org/kilo/install-guide/install/yum/content/neutron-compute-node.html > > Yeah. So long story short, the text you quoted was always wrong, and they > finally fixed it by removing the notion of ?a packaging bug? which was > never a packaging bug in RDO but a documentation bug in upstream docs. > > I am glad to hear it?s not there anymore. That said, I would make sure that > they actually have proper instructions on how to configure the agent (using > openvswitch_agent.ini). [For linuxbridge, they are correct: > http://docs.openstack.org/liberty/install-guide-rdo/neutron-compute-install-option1.html] > > Ihar -- Best regards, Andrey Markelov From ihrachys at redhat.com Thu Nov 5 15:19:49 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 5 Nov 2015 16:19:49 +0100 Subject: [Rdo-list] Neutron-openswitch-agent configuration workaround in RDO Juno/Kilo and now in Liberty In-Reply-To: <443391446734844@web21h.yandex.ru> References: <681331446710574@web8h.yandex.ru> <52216CC3-80BD-413A-AE4E-D680C806BF7B@redhat.com> <1175611446723429@web8h.yandex.ru> <11196315-3487-4564-A277-ABA8391FB587@redhat.com> <443391446734844@web21h.yandex.ru> Message-ID: <891512FE-ADE6-4741-8E72-43203098FCBD@redhat.com> Markelov Andrey wrote: > Ok. Thanks! > > And if we are talking about plugin.ini/ml2_conf.ini according to network > node in Kilo doc - > > http://docs.openstack.org/kilo/install-guide/install/yum/content/neutron-network-node.html > > For whom plugin.ini/ml2_conf.ini is needed? Neutron-server is started > only on control node. > So it seems the only who used plugin.ini/ml2_conf.ini is > neutron-openvswitch-agent (but in wrong manner after "bug fixing")? > > Andrey I believe it?s the same as for compute nodes. Since network nodes also run L2 agent, that?s how the docs suggested to configure the L2 agent (which was, as I wrote before, always wrong). Now that in Liberty we renamed the config file of the agent in a way that did not leave any space for misinterpretation, they fixed the docs. Ihar From mkassawara at gmail.com Thu Nov 5 15:22:38 2015 From: mkassawara at gmail.com (Matt Kassawara) Date: Thu, 5 Nov 2015 08:22:38 -0700 Subject: [Rdo-list] Neutron-openswitch-agent configuration workaround in RDO Juno/Kilo and now in Liberty In-Reply-To: <11196315-3487-4564-A277-ABA8391FB587@redhat.com> References: <681331446710574@web8h.yandex.ru> <52216CC3-80BD-413A-AE4E-D680C806BF7B@redhat.com> <1175611446723429@web8h.yandex.ru> <11196315-3487-4564-A277-ABA8391FB587@redhat.com> Message-ID: Several releases ago, neutron deprecated and removed the monolithic OVS and Linux bridge plug-ins. During the deprecation cycle, the installation guide changed the neutron instructions to use ML2 instead of the monolithic OVS plug-in. 
However, the defunct configuration files (plugins/openvswitch/ovs_neutron_plugin.ini and plugins/linuxbridge/linuxbridge_conf.ini) lingered until the Liberty release. The combination of migrating to ML2 and referencing configuration files from the monolithic plug-ins (especially OVS with "plugin" in the file name) caused significant confusion with our audience that already struggles with neutron and installations in general. Some distributions and deployment tools moved configuration for the OVS and Linux bridge agents into the ml2_conf.ini file and changed the agent init scripts to read it. At the time, implementing this approach in the installation guide seemed like the best solution until neutron resolved the file name/location problem in Liberty. On Thu, Nov 5, 2015 at 5:47 AM, Ihar Hrachyshka wrote: > Markelov Andrey wrote: > > Thanks for clarification! >> >> No, as I wrote before it is not documented in Liberty, >> >> but Kilo & Juno: >> >> "Due to a packaging bug, the Open vSwitch agent initialization script >> explicitly looks for the Open vSwitch plug-in configuration file rather >> than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in >> configuration file. Run the following commands to resolve this issue:" >> >> >> http://docs.openstack.org/kilo/install-guide/install/yum/content/neutron-compute-node.html >> > > Yeah. So long story short, the text you quoted was always wrong, and they > finally fixed it by removing the notion of ?a packaging bug? which was > never a packaging bug in RDO but a documentation bug in upstream docs. > > I am glad to hear it?s not there anymore. That said, I would make sure > that they actually have proper instructions on how to configure the agent > (using openvswitch_agent.ini). [For linuxbridge, they are correct: > http://docs.openstack.org/liberty/install-guide-rdo/neutron-compute-install-option1.html > ] > > Ihar > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Thu Nov 5 15:27:30 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 5 Nov 2015 16:27:30 +0100 Subject: [Rdo-list] Neutron-openswitch-agent configuration workaround in RDO Juno/Kilo and now in Liberty In-Reply-To: References: <681331446710574@web8h.yandex.ru> <52216CC3-80BD-413A-AE4E-D680C806BF7B@redhat.com> <1175611446723429@web8h.yandex.ru> <11196315-3487-4564-A277-ABA8391FB587@redhat.com> Message-ID: <6F139055-625F-4781-B862-A6E261407355@redhat.com> Matt Kassawara wrote: > Several releases ago, neutron deprecated and removed the monolithic OVS > and Linux bridge plug-ins. During the deprecation cycle, the installation > guide changed the neutron instructions to use ML2 instead of the > monolithic OVS plug-in. However, the defunct configuration files > (plugins/openvswitch/ovs_neutron_plugin.ini and > plugins/linuxbridge/linuxbridge_conf.ini) lingered until the Liberty > release. The combination of migrating to ML2 and referencing > configuration files from the monolithic plug-ins (especially OVS with > "plugin" in the file name) caused significant confusion with our audience > that already struggles with neutron and installations in general. 
Some > distributions and deployment tools moved configuration for the OVS and > Linux bridge agents into the ml2_conf.ini file and changed the agent init > scripts to read it. At the time, implementing this approach in the > installation guide seemed like the best solution until neutron resolved > the file name/location problem in Liberty. Agreed the naming in neutron was unfortunate. I believe it stayed the way it was for a while because it was not clear on first sight how to handle it in a backwards compatible way [in the end, we just renamed and left compatibility considerations to distributions.] Ihar From ayoung at redhat.com Thu Nov 5 15:54:46 2015 From: ayoung at redhat.com (Adam Young) Date: Thu, 5 Nov 2015 10:54:46 -0500 Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO In-Reply-To: References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com> <561FA38F.6050508@redhat.com> <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com> <256C1D83-5E0D-47B2-BD69-9D3CAAEB78E1@redhat.com> Message-ID: <563B7BC6.7020809@redhat.com> On 11/03/2015 02:21 PM, Matt Kassawara wrote: > I agree that *-paste.ini files should remain static. Keystone contains > the only one that we need to edit (for security reasons) and the patch > to move this configuration out of keystone-paste.ini needs attention > from the keystone project. As for the installation guide, I prefer to > unify the documentation for editing keystone-paste.ini for all > distributions. Furthermore, our audience (mostly new users) likely > feels more confident about editing files that reside in a less > "intimidating" location such as /etc/$service. Upstream is aware of the issue. We want to replace the SERVICE_TOKEN approach to initializing Keystone to one that uses local CLI calls direct to keystone_manage. Has anyone investigated whether paste files can be latyered or included? Might be the better approach. From mkassawara at gmail.com Thu Nov 5 15:46:40 2015 From: mkassawara at gmail.com (Matt Kassawara) Date: Thu, 5 Nov 2015 08:46:40 -0700 Subject: [Rdo-list] Neutron-openswitch-agent configuration workaround in RDO Juno/Kilo and now in Liberty In-Reply-To: <6F139055-625F-4781-B862-A6E261407355@redhat.com> References: <681331446710574@web8h.yandex.ru> <52216CC3-80BD-413A-AE4E-D680C806BF7B@redhat.com> <1175611446723429@web8h.yandex.ru> <11196315-3487-4564-A277-ABA8391FB587@redhat.com> <6F139055-625F-4781-B862-A6E261407355@redhat.com> Message-ID: The installation guide has a small number of contributors, none of whom belong to the packaging projects it supports. Furthermore, at least until recently, none of the packaging projects respond in a timely fashion to potential issues that we find... and we usually find them while updating the installation guide at least a month prior to release which would also help packagers fix issues prior to release. Given the popularity of the installation guide, we would really appreciate each packaging project contributing some resources that can help us address potential bugs and avoid implementation of workarounds. On Thu, Nov 5, 2015 at 8:27 AM, Ihar Hrachyshka wrote: > Matt Kassawara wrote: > > Several releases ago, neutron deprecated and removed the monolithic OVS >> and Linux bridge plug-ins. During the deprecation cycle, the installation >> guide changed the neutron instructions to use ML2 instead of the monolithic >> OVS plug-in. 
However, the defunct configuration files >> (plugins/openvswitch/ovs_neutron_plugin.ini and >> plugins/linuxbridge/linuxbridge_conf.ini) lingered until the Liberty >> release. The combination of migrating to ML2 and referencing configuration >> files from the monolithic plug-ins (especially OVS with "plugin" in the >> file name) caused significant confusion with our audience that already >> struggles with neutron and installations in general. Some distributions and >> deployment tools moved configuration for the OVS and Linux bridge agents >> into the ml2_conf.ini file and changed the agent init scripts to read it. >> At the time, implementing this approach in the installation guide seemed >> like the best solution until neutron resolved the file name/location >> problem in Liberty. >> > > Agreed the naming in neutron was unfortunate. I believe it stayed the way > it was for a while because it was not clear on first sight how to handle it > in a backwards compatible way [in the end, we just renamed and left > compatibility considerations to distributions.] > > Ihar > -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier.pena at redhat.com Thu Nov 5 18:20:45 2015 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 5 Nov 2015 13:20:45 -0500 (EST) Subject: [Rdo-list] [delorean] Planned Delorean upgrade on November 5 In-Reply-To: <1480718606.1062116.1446466538471.JavaMail.zimbra@redhat.com> References: <1480718606.1062116.1446466538471.JavaMail.zimbra@redhat.com> Message-ID: <1556696364.7563811.1446747645623.JavaMail.zimbra@redhat.com> > Dear rdo-list, > > We are planning to update the current Delorean instance next Thursday, > November 5. The upgrade should bring a bigger spec VM and several > improvements on the instance configuration. > > During the upgrade, the Delorean repos will still be available through the > backup instance, but new packages will not be processed until the upgrade is > completed. > And the new instance is now running and processing the pending packages. The new Fedora Rawhide worker (http://trunk.rdoproject.org/f24/status_report.html) is currently bootstraping, it should be fully available tomorrow. If you find any issue with the new instance, please let us know. Regards, Javier From morazi at redhat.com Thu Nov 5 22:31:52 2015 From: morazi at redhat.com (Mike Orazi) Date: Thu, 5 Nov 2015 17:31:52 -0500 Subject: [Rdo-list] How to access the 192.0.2.1:8004 URL to get the deployment failure logs In-Reply-To: References: Message-ID: <563BD8D8.1040808@redhat.com> On 10/29/2015 12:27 PM, Ramkumar GOWRISHANKAR wrote: > Hi, > > My virtual test bed deployment with just one controller and no computes > is failing at ControllerNodesPostDeployment. The debug steps when a > deployment fails tells to run the following command: "heat resource-show > overcloud ControllerNodesPostDeployment". When I run the command, I see > 3 URL starting with http://192.0.2.1:8004. > How do I access these URLs? When I try a wget on these URLs or when I > create a ssh tunnel from the base machine and try to access the URLs I > get permission denied message. 
When I try to access just the base URL > (http://192.0.2.1:8004 mapped to http://localhost:8005) via a tunnel, I > get the following message: > {"versions": [{"status":"CURRENT", "id": "v1.0", "links": > [{"href":"http://localhost:8005/v1/","rel":"self"}]}]} > > I have looked through the /var/log/heat/ folder for any error messages > but I cannot find any more detailed error message other than deployment > failed at step 1 LoadBalancer. > > Any pointers on how to debug a deployment? > > Thanks, > > Ramkumar > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > Ram, Sorry for the slow response. You might want to try determining the the IP of the controller node that seems to be in error and ssh-ing into that node via the heat-user. If you are able to access the failed node, you can typically look at the os-collect-config log via: sudo journalctl -u os-collect-config A lot of times this will provide at least a good start on what the root cause is. It sounds like you already have some of the basics[1] down for debugging heat issues, but I wanted to re-paste a very handy link in case you had not seen it before. Thanks, - Mike [1] http://hardysteven.blogspot.com/2015/04/debugging-tripleo-heat-templates.html From pgsousa at gmail.com Fri Nov 6 11:24:05 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Fri, 6 Nov 2015 11:24:05 +0000 Subject: [Rdo-list] issue with numa and cpu pinning using SRIOV ports Message-ID: Hi all, I have a rdo kilo deployment, using sr-iov ports to my instances. I'm trying to configure NUMA topology and CPU pinning for some telco based workloads based on this doc: http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/ I have 3 compute nodes, I'm trying to use one of them to use cpu pinning. I've configured it like this: *Compute Node (total 24 cpus)* */etc/nova/nova.conf* vcpu_pin_set=2,3,4,5,6,7,8,9,10,11,12,13,14,15,18,19,22,23 Changed grub to isolate my cpus: #grubby --update-kernel=ALL --args="isolcpus=2,3,4,5,6,7,8,9,10,11,12,13,14,15,18,19,22,23" #grub2-install /dev/sda *Controller Nodes:* */etc/nova/nova.conf* scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter scheduler_available_filters = nova.scheduler.filters.all_filters scheduler_available_filters = nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter *Created host aggregate performance * #nova aggregate-create performance #nova aggregate-set-metadata 1 pinned=true #nova aggregate-add-host 1 compute03 *Created host aggregate normal* #nova aggregate-create normal #nova aggregate-set-metadata 2 pinned=false #nova aggregate-add-host 2 compute01 #nova aggregate-add-host 2 compute02 *Created the flavor with cpu pinning* #nova flavor-create m1.performance 6 2048 20 4 #nova flavor-key 6 set hw:cpu_policy=dedicated #nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true *The issue is:* With SR-IOV ports it only let's me create instances with 6 vcpus in total with the conf described above. Without SR-IOV, using OVS, I don't have that limitation. Is this a bug or something? 
I've seen this: https://bugs.launchpad.net/nova/+bug/1441169, however I have the patch, and as I said it works for the first 6 vcpus with my configuration. *Some relevant logs:* */var/log/nova/nova-scheduler.log* 2015-11-06 11:18:17.955 59494 DEBUG nova.filters [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - -] Starting with 3 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:70 2015-11-06 11:18:17.955 59494 DEBUG nova.filters [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - -] Filter RetryFilter returned 3 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:84 2015-11-06 11:18:17.955 59494 DEBUG nova.filters [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - -] Filter AvailabilityZoneFilter returned 3 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:84 2015-11-06 11:18:17.955 59494 DEBUG nova.filters [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - -] Filter RamFilter returned 3 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:84 2015-11-06 11:18:17.956 59494 DEBUG nova.filters [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - -] Filter ComputeFilter returned 3 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:84 2015-11-06 11:18:17.956 59494 DEBUG nova.filters [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - -] Filter ComputeCapabilitiesFilter returned 3 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:84 2015-11-06 11:18:17.956 59494 DEBUG nova.filters [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - -] Filter ImagePropertiesFilter returned 3 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:84 2015-11-06 11:18:17.956 59494 DEBUG nova.filters [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - -] Filter ServerGroupAntiAffinityFilter returned 3 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:84 2015-11-06 11:18:17.956 59494 DEBUG nova.filters [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - -] Filter ServerGroupAffinityFilter returned 3 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:84 2015-11-06 11:18:17.957 59494 DEBUG nova.filters [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - -] Filter PciPassthroughFilter returned 3 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:84*2015-11-06 11:18:17.959 59494 DEBUG nova.filters [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - -] Filter NUMATopologyFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:84* Any help would be appreciated. Thanks, Pedro Sousa -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marius at remote-lab.net Fri Nov 6 16:12:49 2015 From: marius at remote-lab.net (Marius Cornea) Date: Fri, 6 Nov 2015 17:12:49 +0100 Subject: [Rdo-list] [RDO-Manager] Working with the overcloud In-Reply-To: References: Message-ID: On Thu, Nov 5, 2015 at 4:04 PM, Ignacio Bravo wrote: > After jumping some hoops and with the help of the usual suspects on IRC, I > was able to deploy an overcloud with HA, Network isolation and Ceph. Great! > > Now I want to focus on what?s next, or how to manage this environment going > forward. Let me give you a couple of examples: > > After the installation of the overcloud, I was hit with the cinder bug > described here: https://bugzilla.redhat.com/show_bug.cgi?id=1272572 The > issue is that the cinder.conf file needs to replace ?localhost? with the ip > of the public keystone. What I did, based on the bugzilla, was to log in to > each controller node and then update the value inside the cinder.conf file. > Is this one off the proper way to patch and keep the environment updated? I > mean, one week from now we will find that a particular RPM needs to be > updated, how do you handle this? I thought that the proper way was to > recreate the TripleO image and redeploy. Updating the overcloud nodes packages should be done via update stack: https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/post_deployment/package_update.html > Or another example is Ceph. Currently Tripleo installs version 0.8 and I > want to install version 9 Inferno. What is the correct path to achieve this? > > Additionally, let?s say that I want to install, say: CloudKitty (choose your > alternate, non mainstream openstack project here) > Do we recreate the images and redeploy, or do a puppet run after they have > been installed with a tool like Foreman/Katello? You could try doing it via a post deploy script: https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/extra_config.html#post-deploy-extra-configuration > Regards, > IB > > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From sgordon at redhat.com Fri Nov 6 20:52:14 2015 From: sgordon at redhat.com (Steve Gordon) Date: Fri, 6 Nov 2015 15:52:14 -0500 (EST) Subject: [Rdo-list] issue with numa and cpu pinning using SRIOV ports In-Reply-To: References: Message-ID: <844571690.9579776.1446843134482.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Pedro Sousa" > To: "rdo-list" > > Hi all, > > I have a rdo kilo deployment, using sr-iov ports to my instances. I'm > trying to configure NUMA topology and CPU pinning for some telco based > workloads based on this doc: > http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/ > > I have 3 compute nodes, I'm trying to use one of them to use cpu pinning. 
> > I've configured it like this: > > *Compute Node (total 24 cpus)* > */etc/nova/nova.conf* > vcpu_pin_set=2,3,4,5,6,7,8,9,10,11,12,13,14,15,18,19,22,23 > > Changed grub to isolate my cpus: > #grubby --update-kernel=ALL > --args="isolcpus=2,3,4,5,6,7,8,9,10,11,12,13,14,15,18,19,22,23" > > #grub2-install /dev/sda > > *Controller Nodes:* */etc/nova/nova.conf* > scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter > scheduler_available_filters = nova.scheduler.filters.all_filters > scheduler_available_filters = > nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter *Created > host aggregate performance * #nova aggregate-create performance #nova > aggregate-set-metadata 1 pinned=true > > #nova aggregate-add-host 1 compute03 > > *Created host aggregate normal* > #nova aggregate-create normal > #nova aggregate-set-metadata 2 pinned=false > > #nova aggregate-add-host 2 compute01 > > #nova aggregate-add-host 2 compute02 > > *Created the flavor with cpu pinning* #nova flavor-create m1.performance 6 > 2048 20 4 #nova flavor-key 6 set hw:cpu_policy=dedicated #nova flavor-key 6 > set aggregate_instance_extra_specs:pinned=true *The issue is:* With SR-IOV > ports it only let's me create instances with 6 vcpus in total with the conf > described above. Without SR-IOV, using OVS, I don't have that limitation. > Is this a bug or something? I've seen this: > https://bugs.launchpad.net/nova/+bug/1441169, however I have the patch, and > as I said it works for the first 6 vcpus with my configuration. Adding Nikola and Brent. Do you happen to know if your motherboard chipset supports NUMA locality of the PCIe devices and if so which NUMA nodes the SR-IOV cards are associated with? I *believe* numactl --hardware will tell you if this is the case (I don't presently have a machine in front of me with support for this). I'm wondering if or how the device locality code copes at the moment if the instance spans two nodes (obviously the device is only local to one of them). 
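A quick way to check that locality on the compute node itself; the interface name and PCI address below are placeholders, not values taken from this thread:

numactl --hardware
cat /sys/class/net/<sriov-interface>/device/numa_node
cat /sys/bus/pci/devices/0000:<bus>:<slot>.<fn>/numa_node

A value of -1 means the platform does not report locality for that device; 0 or 1 ties the PF (and its VFs) to a single NUMA node, which would be consistent with only that node's share of the pinned CPUs being usable for SR-IOV instances.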
> *Some relevant logs:* > > */var/log/nova/nova-scheduler.log* > > 2015-11-06 11:18:17.955 59494 DEBUG nova.filters > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > -] Starting with 3 host(s) get_filtered_objects > /usr/lib/python2.7/site-packages/nova/filters.py:70 > > 2015-11-06 11:18:17.955 59494 DEBUG nova.filters > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > -] Filter RetryFilter returned 3 host(s) get_filtered_objects > /usr/lib/python2.7/site-packages/nova/filters.py:84 > 2015-11-06 11:18:17.955 59494 DEBUG nova.filters > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > -] Filter AvailabilityZoneFilter returned 3 host(s) > get_filtered_objects > /usr/lib/python2.7/site-packages/nova/filters.py:84 > 2015-11-06 11:18:17.955 59494 DEBUG nova.filters > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > -] Filter RamFilter returned 3 host(s) get_filtered_objects > /usr/lib/python2.7/site-packages/nova/filters.py:84 > 2015-11-06 11:18:17.956 59494 DEBUG nova.filters > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > -] Filter ComputeFilter returned 3 host(s) get_filtered_objects > /usr/lib/python2.7/site-packages/nova/filters.py:84 > 2015-11-06 11:18:17.956 59494 DEBUG nova.filters > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > -] Filter ComputeCapabilitiesFilter returned 3 host(s) > get_filtered_objects > /usr/lib/python2.7/site-packages/nova/filters.py:84 > 2015-11-06 11:18:17.956 59494 DEBUG nova.filters > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > -] Filter ImagePropertiesFilter returned 3 host(s) > get_filtered_objects > /usr/lib/python2.7/site-packages/nova/filters.py:84 > 2015-11-06 11:18:17.956 59494 DEBUG nova.filters > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > -] Filter ServerGroupAntiAffinityFilter returned 3 host(s) > get_filtered_objects > /usr/lib/python2.7/site-packages/nova/filters.py:84 > 2015-11-06 11:18:17.956 59494 DEBUG nova.filters > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > -] Filter ServerGroupAffinityFilter returned 3 host(s) > get_filtered_objects > /usr/lib/python2.7/site-packages/nova/filters.py:84 > 2015-11-06 11:18:17.957 59494 DEBUG nova.filters > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > -] Filter PciPassthroughFilter returned 3 host(s) get_filtered_objects > /usr/lib/python2.7/site-packages/nova/filters.py:84*2015-11-06 > 11:18:17.959 59494 DEBUG nova.filters > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > -] Filter NUMATopologyFilter returned 2 host(s) get_filtered_objects > /usr/lib/python2.7/site-packages/nova/filters.py:84* > > Any help would be appreciated. This looks like a successful run (still 2 hosts returned after NUMATopologyFilter)? Or did were you expecting the host filtered out by PciPassthroughFilter to still be in scope? Thanks, -- Steve Gordon, Sr. 
Technical Product Manager, Red Hat Enterprise Linux OpenStack Platform From jweber at cofront.net Fri Nov 6 22:29:23 2015 From: jweber at cofront.net (Jeff Weber) Date: Fri, 6 Nov 2015 17:29:23 -0500 Subject: [Rdo-list] shade client library Message-ID: Are there any plans to package the shade client library as part of RDO? We are heavy users of Ansible and RDO and the Ansible OpenStack modules for the upcoming 2.0 release have been being rewritten to depend on shade. Having a version of shade packaged along with the other RDO packages in the release would be quite useful. If this is something where community participation would be helpful I'd be happy to try to help out if someone had details on what would need to be done. -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Sat Nov 7 07:48:02 2015 From: apevec at gmail.com (Alan Pevec) Date: Sat, 7 Nov 2015 08:48:02 +0100 Subject: [Rdo-list] shade client library In-Reply-To: References: Message-ID: > If this is something where community participation would be helpful I'd be > happy to try to help out if someone had details on what would need to be > done. For clients and libraries starting point is Fedora package review: https://fedoraproject.org/wiki/Package_Review_Process For python-shade review is in progress https://bugzilla.redhat.com/show_bug.cgi?id=1271768 Cheers, Alan From hguemar at fedoraproject.org Sat Nov 7 07:07:42 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Sat, 7 Nov 2015 08:07:42 +0100 Subject: [Rdo-list] shade client library In-Reply-To: References: Message-ID: 2015-11-06 23:29 GMT+01:00 Jeff Weber : > Are there any plans to package the shade client library as part of RDO? We > are heavy users of Ansible and RDO and the Ansible OpenStack modules for the > upcoming 2.0 release have been being rewritten to depend on shade. Having a > version of shade packaged along with the other RDO packages in the release > would be quite useful. > > If this is something where community participation would be helpful I'd be > happy to try to help out if someone had details on what would need to be > done. > Hi Jeff, it's currently under review: https://bugzilla.redhat.com/show_bug.cgi?id=1271768 I've been catching up with package reviews after I returned from summit. I think Lars would be fine with having comaintainers, you should try to reach him. Regards, H. From yeylon at redhat.com Sat Nov 7 17:09:09 2015 From: yeylon at redhat.com (Yaniv Eylon) Date: Sat, 7 Nov 2015 19:09:09 +0200 Subject: [Rdo-list] =?utf-8?q?Hi=EF=BC=8CI_need_your_help-about_packstack_?= =?utf-8?q?--allinone?= In-Reply-To: <2015110713095448487433@netbric.com> References: <2015110713095448487433@netbric.com> Message-ID: adding rdo-list xiaoguang, it is better to share your findings on the mailing list. On Sat, Nov 7, 2015 at 7:09 AM, xiaoguang.fan at netbric.com wrote: > Hi, > I want study RDO, when I meet this bug? > https://bugzilla.redhat.com/show_bug.cgi?id=1254447 > > cannot start httpd when do packstack --allinone > > howto fix this bug?now I cannot deploy rdo in my centos 7 (vm machine)? > thanks > > ________________________________ > /******************************************** > * Name? fanxiaoguang > * Add: > * E-Mail: solar_ambitious at 126.com; > * fanxiaoguang008 at gmail.com > * Cel: 13716563304 > * > ********************************************/ > -- Yaniv. 
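For anyone else hitting that packstack --allinone failure, the most useful thing to post to the list is the actual httpd error rather than only the bugzilla link. A rough triage sketch, assuming a stock CentOS 7 host and the usual packstack log locations (adjust if your run directory differs):

systemctl status httpd.service -l
journalctl -u httpd.service --no-pager | tail -n 50
tail -n 50 /var/log/httpd/error_log
ls -dt /var/tmp/packstack/*/manifests | head -1   # the most recent packstack run keeps its per-manifest puppet logs here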
From abregman at redhat.com Sat Nov 7 17:35:52 2015 From: abregman at redhat.com (Arie Bregman) Date: Sat, 7 Nov 2015 19:35:52 +0200 Subject: [Rdo-list] Blueprint: Delorean & Khaleesi In-Reply-To: References: <20151106162119.GB2555@redhat.com> Message-ID: On Sat, Nov 7, 2015 at 12:07 AM, David Moreau Simard wrote: > Can we extend this to rdo-list ? Sounds relevant to the community. > Sure, good idea. Adding rdo-list. > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Fri, Nov 6, 2015 at 4:36 PM, Wesley Hayutin > wrote: > > Arie, > > This looks good. > > Who is going to maintain the delorean ci job? > Who maintains it today? we'll have to discuss it. It might be good idea to add 'job owner/maintainer' info as we have in rhos CI. > Have you reached out to Derek Higgins about writing a replacement for the > > current delorean ci? > No. The work on this has just begun. Adding Derek to this mail. > > > Thanks > > > > On Fri, Nov 6, 2015 at 11:21 AM, Steve Linabery > wrote: > >> > >> On Fri, Nov 06, 2015 at 05:49:05PM +0200, Arie Bregman wrote: > >> > Hi everyone, > >> > > >> > Not sure if you all familiar with Delorean project[1]. Quick > >> > introduction > >> > (CI related) for those who are not: > >> > > >> > Delorean is an upstream project that builds and maintains yum > >> > repositories. > >> > It builds repository every time patch submitted to one of the upstream > >> > openstack-packages projects. > >> > The delorean job is located here: > >> > https://prod-rdojenkins.rhcloud.com/job/delorean-ci > >> > > >> > How the job works at the moment: > >> > It runs delorean directly on the slave and if the build process > >> > succeeded, > >> > it votes with +1 > >> > > >> > What I suggest: > >> > - Move delorean installation and run to khaleesi by creating > 'delorean' > >> > role > >> > - Extend the job to run tests using the rpms delorean built > >> > > >> > Why: > >> > - Main reason: It's important for developers to get immediate feedback > >> > on > >> > whether the new packages are good or not. simply run delorean and see > if > >> > build is ok, is not enough. We need to extend the current job. > >> > > >> > - Users can use khaleesi to test specs they wrote. This is actually > >> > pretty > >> > amazing. users write specs and run khaleesi. khaleesi then handles > >> > everything - it building the rpms using delorean and run the tests. > >> > > >> > - We can use delorean to replace our current way to build rpms and > >> > creating > >> > repos. delorean doing it in a smart way, using docker and by that it > >> > creates rpms for several distributions in isolated environment. > >> > >> Delorean no longer uses docker. > >> > >> > >> > https://github.com/openstack-packages/delorean/commit/66571fce45a007bcf49fd54ad7db622fd737874f > Interesting. any idea why this change? adding Alan. > > >> > >> > > >> > - Khaleesi awesomeness will increase > >> > > >> > There is also no need to add/maintain settings in khaleesi for that. > >> > delorean properties (version, url, etc) will be provided by extra-vars > >> > (unless you are in favor of maintaining general settings for delorean > in > >> > khaleesi) > >> > > >> > The new job work flow: > >> > 1. Run delorean on slave and save rpms from delorean build process. > >> > 2. Run provision playbook > >> > 3. Copy delorean rpms to provisioned nodes and create repo for them on > >> > each > >> > node > >> > 4. run installer playbooks (installer will use delorean rpms) > >> > 5. 
From ashraf.hassan at t-mobile.nl  Sat Nov 7 23:04:31 2015
From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf)
Date: Sun, 8 Nov 2015 00:04:31 +0100
Subject: [Rdo-list] Trying to install the RDO
Message-ID:

Hi Experts,

I am a system analyst. I have worked a lot with VMware virtualization, and
now I am trying to set up a small private cloud installation as a
demonstration for my manager. I have 6 nodes, 3 of which will be domain
controllers; all the nodes are HP BL460c G6 blades with Virtual Connect
attached to the blade enclosure. I have configured the first LOM (first NIC)
for PXE boot. All nodes will run CentOS 7.

For the first RDO node, I installed CentOS 7 and configured bonding and
VLANs for the other 2 LOMs. I followed the steps in this link () literally -
I ran "openstack undercloud install" as the stack user, without sudo.

My undercloud.conf is as follows:

local_ip = 192.168.1.1/24
local_interface = enp2s0f0
masquerade_network = 192.168.1.0/24
dhcp_start = 192.168.1.50
dhcp_end = 192.168.1.151
network_cidr = 192.168.1.0/24
network_gateway = 192.168.1.1
inspection_iprange = 192.168.1.152,192.168.1.252

First, I need to know whether I have to configure a DNS server on the PXE
boot network; honestly, I do not see why it would be needed.

Secondly, the installation failed. During the installation I am getting
these errors:

Notice: /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/Staging::File[rabbitmqadmin]/Exec[/var/lib/rabbitmq/rabbitmqadmin]/returns: Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed connect to localhost:15672; Connection refused
Error: curl -k --noproxy localhost --retry 30 --retry-delay 6 -f -L -o /var/lib/rabbitmq/rabbitmqadmin http://guest:guest at localhost:15672/cli/rabbitmqadmin returned 7 instead of one of [0]
Error: /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/Staging::File[rabbitmqadmin]/Exec[/var/lib/rabbitmq/rabbitmqadmin]/returns: change from notrun to 0 failed: curl -k --noproxy localhost --retry 30 --retry-delay 6 -f -L -o /var/lib/rabbitmq/rabbitmqadmin http://guest:guest at localhost:15672/cli/rabbitmqadmin returned 7 instead of one of [0]
Notice: /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/File[/usr/local/bin/rabbitmqadmin]: Dependency Exec[/var/lib/rabbitmq/rabbitmqadmin] has failures: true
Warning: /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/File[/usr/local/bin/rabbitmqadmin]: Skipping because of failed dependencies

And at the end I am getting this error:
1. Notice: Finished catalog run in 606.34 seconds
2. + rc=6
3. + set -e
4. + echo 'puppet apply exited with exit code 6'
5. puppet apply exited with exit code 6
6. + '[' 6 '!=' 2 -a 6 '!=' 0 ']'
7. + exit 6
8. [2015-11-07 12:56:55,814] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 6]
9.
10. [2015-11-07 12:56:55,815] (os-refresh-config) [ERROR] Aborting...
11. Traceback (most recent call last):
12.   File "", line 1, in
13.   File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 562, in install
14.     _run_orc(instack_env)
15.   File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 494, in _run_orc
16.     _run_live_command(args, instack_env, 'os-refresh-config')
17.   File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 325, in _run_live_command
18.     raise RuntimeError('%s failed. See log for details.' % name)
19. RuntimeError: os-refresh-config failed. See log for details.
20. Command 'instack-install-undercloud' returned non-zero exit status 1

I checked and I cannot find any error log for puppet, which suggests it
failed at a very early stage, so I do not know where to look, what to check,
or how to solve it. Can anyone please guide me?

Remark: the errors I found in the logs all come from ceilometer's
central.log, and they are all variations of the same keystone failure. A
representative excerpt is below; the same set of errors appears every ten
minutes (12:56 through 13:46), each time with a new request ID:

central.log:2015-11-07 13:06:45.775 22965 ERROR ceilometer.agent.manager [-] Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401)
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client [-] Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401)
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client Traceback (most recent call last):
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, in with_logging
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client     return func(*args, **kwargs)
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, in instance_get_all
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client     search_opts=search_opts)
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, in list
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client     "servers")
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in _list
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client     _resp, body = self.api.client.get(url)
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in get
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client     return self._cs_request(url, 'GET', **kwargs)
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in _cs_request
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client     self.authenticate()
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in authenticate
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client     auth_url = self._v2_auth(auth_url)
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in _v2_auth
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client     return self._authenticate(url, body)
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in _authenticate
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client     **kwargs)
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in _time_request
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client     resp, body = self.request(url, method, **kwargs)
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in request
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client     raise exceptions.from_response(resp, body, url, method)
central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client Unauthorized: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401)
central.log:2015-11-07 13:06:45.797 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401)
central.log:2015-11-07 13:06:45.800 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401)
central.log:2015-11-07 13:06:45.802 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401)

[The same "Skipping <source>, keystone issue: Could not find user: ceilometer ... (HTTP 401)" line then repeats within each cycle for the remaining polling sources: lb_members, lb_vips, lb_health_probes, vpn_services, ipsec_connections, fw_services, fw_policy, plus further endpoint and tenant entries, and the nova_client traceback above recurs unchanged in every cycle.]
[... the same "Skipping <pollster>, keystone issue: Could not find user: ceilometer ... (HTTP 401)" error is logged for every remaining pollster in this cycle (lb_health_probes, lb_members, lb_vips, vpn_services, fw_services, fw_policy, ipsec_connections, plus further tenant, endpoint and lb_pools sources), and the whole cycle, including the Unauthorized traceback from ceilometer.nova_client, repeats every ten minutes at 13:56:45 (Request-ID req-8a9b09fd-47cf-425c-8599-d51faf7976b6), 14:06:45 (req-b20ecfdf-340b-4367-b4de-6c659074a5ca), 14:16:45 (req-204c29d8-133f-4f26-a080-9139b9d9edad), 14:26:45 (req-56b9f4d7-1056-4f20-afdb-e7a661f46c71) and 14:36:45 (req-e891a57b-998f-4953-9217-2e0c2ad5f04a) ...]

central.log:2015-11-07 14:46:45.705 22965 ERROR ceilometer.agent.manager [-] Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401)
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client [-] Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401)
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client Traceback (most recent call last):
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, in with_logging
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client     return func(*args, **kwargs)
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, in instance_get_all
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client     search_opts=search_opts)
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, in list
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client     "servers")
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in _list
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client     _resp, body = self.api.client.get(url)
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in get
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client     return self._cs_request(url, 'GET', **kwargs)
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in _cs_request
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client     self.authenticate()
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in authenticate
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client     auth_url = self._v2_auth(auth_url)
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in _v2_auth
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client     return self._authenticate(url, body)
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in _authenticate
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client     **kwargs)
central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client   File
"/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in _time_request central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client resp, body = self.request(url, method, **kwargs) central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in request central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client raise exceptions.from_response(resp, body, url, method) central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client Unauthorized: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client central.log:2015-11-07 14:46:45.726 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.728 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.731 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.733 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.735 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.738 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.740 22965 ERROR ceilometer.agent.manager [-] Skipping lb_health_probes, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.742 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.744 22965 ERROR ceilometer.agent.manager [-] Skipping lb_members, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.747 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) 
(HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.749 22965 ERROR ceilometer.agent.manager [-] Skipping vpn_services, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.752 22965 ERROR ceilometer.agent.manager [-] Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.755 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.757 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.759 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.762 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.764 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.766 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.768 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.771 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.773 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.776 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.778 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) 
(HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.780 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.782 22965 ERROR ceilometer.agent.manager [-] Skipping fw_services, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.785 22965 ERROR ceilometer.agent.manager [-] Skipping ipsec_connections, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:46:45.787 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) central.log:2015-11-07 14:56:45.705 22965 ERROR ceilometer.agent.manager [-] Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client [-] Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client Traceback (most recent call last): central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, in with_logging central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client return func(*args, **kwargs) central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, in instance_get_all central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client search_opts=search_opts) central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, in list central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client "servers") central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in _list central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client _resp, body = self.api.client.get(url) central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in get central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client return self._cs_request(url, 'GET', **kwargs) central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in _cs_request central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client self.authenticate() central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in authenticate central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client auth_url = self._v2_auth(auth_url) central.log:2015-11-07 14:56:45.722 22965 
ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in _v2_auth central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client return self._authenticate(url, body) central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in _authenticate central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client **kwargs) central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in _time_request central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client resp, body = self.request(url, method, **kwargs) central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in request central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client raise exceptions.from_response(resp, body, url, method) central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client Unauthorized: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client central.log:2015-11-07 14:56:45.724 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.727 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.729 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.731 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.733 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.736 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.739 22965 ERROR ceilometer.agent.manager [-] Skipping lb_health_probes, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.741 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) 
(HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.743 22965 ERROR ceilometer.agent.manager [-] Skipping lb_members, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.745 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.748 22965 ERROR ceilometer.agent.manager [-] Skipping vpn_services, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.750 22965 ERROR ceilometer.agent.manager [-] Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.753 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.755 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.758 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.760 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.763 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.765 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.767 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.769 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.771 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) 
(HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.774 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.777 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.779 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.781 22965 ERROR ceilometer.agent.manager [-] Skipping fw_services, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.784 22965 ERROR ceilometer.agent.manager [-] Skipping ipsec_connections, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 14:56:45.786 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) central.log:2015-11-07 15:06:45.705 22965 ERROR ceilometer.agent.manager [-] Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client [-] Could not find user: ceilometer (Disable debug mode to suppress these details.) 
(HTTP 401) central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client Traceback (most recent call last): central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, in with_logging central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client return func(*args, **kwargs) central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, in instance_get_all central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client search_opts=search_opts) central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, in list central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client "servers") central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in _list central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client _resp, body = self.api.client.get(url) central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in get central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client return self._cs_request(url, 'GET', **kwargs) central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in _cs_request central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client self.authenticate() central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in authenticate central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client auth_url = self._v2_auth(auth_url) central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in _v2_auth central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client return self._authenticate(url, body) central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in _authenticate central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client **kwargs) central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in _time_request central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client resp, body = self.request(url, method, **kwargs) central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in request central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client raise exceptions.from_response(resp, body, url, method) central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client Unauthorized: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client central.log:2015-11-07 15:06:45.727 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) 
(HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.730 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.732 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.734 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.737 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.740 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.742 22965 ERROR ceilometer.agent.manager [-] Skipping lb_health_probes, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.744 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.746 22965 ERROR ceilometer.agent.manager [-] Skipping lb_members, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.749 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.751 22965 ERROR ceilometer.agent.manager [-] Skipping vpn_services, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.753 22965 ERROR ceilometer.agent.manager [-] Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.756 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.758 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) 
(HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.761 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.763 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.766 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.768 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.770 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.772 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.775 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.778 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.780 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.782 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.784 22965 ERROR ceilometer.agent.manager [-] Skipping fw_services, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.787 22965 ERROR ceilometer.agent.manager [-] Skipping ipsec_connections, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:06:45.789 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) 
(HTTP 401) (Request-ID: req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) central.log:2015-11-07 15:16:45.707 22965 ERROR ceilometer.agent.manager [-] Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client [-] Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client Traceback (most recent call last): central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, in with_logging central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client return func(*args, **kwargs) central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, in instance_get_all central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client search_opts=search_opts) central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, in list central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client "servers") central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in _list central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client _resp, body = self.api.client.get(url) central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in get central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client return self._cs_request(url, 'GET', **kwargs) central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in _cs_request central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client self.authenticate() central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in authenticate central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client auth_url = self._v2_auth(auth_url) central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in _v2_auth central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client return self._authenticate(url, body) central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in _authenticate central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client **kwargs) central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in _time_request central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client resp, body = self.request(url, method, **kwargs) central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in request central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client raise exceptions.from_response(resp, body, url, method) central.log:2015-11-07 15:16:45.725 22965 ERROR 
ceilometer.nova_client Unauthorized: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client central.log:2015-11-07 15:16:45.727 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.730 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.732 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.734 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.737 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.739 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.742 22965 ERROR ceilometer.agent.manager [-] Skipping lb_health_probes, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.744 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.746 22965 ERROR ceilometer.agent.manager [-] Skipping lb_members, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.748 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.751 22965 ERROR ceilometer.agent.manager [-] Skipping vpn_services, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.753 22965 ERROR ceilometer.agent.manager [-] Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) 
(HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.756 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.758 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.761 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.763 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.766 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.768 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.770 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.772 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.774 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.777 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.779 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.782 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.784 22965 ERROR ceilometer.agent.manager [-] Skipping fw_services, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) 
(HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.786 22965 ERROR ceilometer.agent.manager [-] Skipping ipsec_connections, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:16:45.788 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) central.log:2015-11-07 15:26:45.709 22965 ERROR ceilometer.agent.manager [-] Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client [-] Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client Traceback (most recent call last): central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, in with_logging central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client return func(*args, **kwargs) central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, in instance_get_all central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client search_opts=search_opts) central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, in list central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client "servers") central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in _list central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client _resp, body = self.api.client.get(url) central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in get central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client return self._cs_request(url, 'GET', **kwargs) central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in _cs_request central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client self.authenticate() central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in authenticate central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client auth_url = self._v2_auth(auth_url) central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in _v2_auth central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client return self._authenticate(url, body) central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in _authenticate central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client **kwargs) central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client File 
"/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in _time_request central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client resp, body = self.request(url, method, **kwargs) central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in request central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client raise exceptions.from_response(resp, body, url, method) central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client Unauthorized: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client central.log:2015-11-07 15:26:45.728 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.730 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.733 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.735 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.737 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.740 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.742 22965 ERROR ceilometer.agent.manager [-] Skipping lb_health_probes, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.745 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.747 22965 ERROR ceilometer.agent.manager [-] Skipping lb_members, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.749 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) 
(HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.751 22965 ERROR ceilometer.agent.manager [-] Skipping vpn_services, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.755 22965 ERROR ceilometer.agent.manager [-] Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.758 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.760 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.763 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.765 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.768 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.770 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.772 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.774 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.777 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.780 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.782 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) 
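All of these 401s boil down to keystone not recognizing a "ceilometer" user, so neither the central agent nor the alarm evaluator can authenticate. For reference, a quick check from the controller is sketched below; this assumes an admin credentials file is sourced and that the service tenant is named "services" as packstack/RDO usually creates it, and CEILOMETER_PASS is just a placeholder, so adjust everything to your deployment:

    # does keystone know about the ceilometer service user at all?
    openstack user list | grep ceilometer

    # if it is missing, recreate it and grant it the admin role in the services tenant;
    # CEILOMETER_PASS must match what /etc/ceilometer/ceilometer.conf is configured with
    openstack user create --password CEILOMETER_PASS ceilometer
    openstack role add --project services --user ceilometer admin

    # also compare the credentials ceilometer itself uses against what keystone expects
    grep -A5 '^\[service_credentials\]' /etc/ceilometer/ceilometer.conf

If the user does exist, then the password or tenant in ceilometer.conf most likely does not match the keystone record, which gives the same 401. The alarm-evaluator traceback below appears to be the same credentials problem surfacing through ceilometerclient.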
(HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.784 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.787 22965 ERROR ceilometer.agent.manager [-] Skipping fw_services, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.789 22965 ERROR ceilometer.agent.manager [-] Skipping ipsec_connections, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) central.log:2015-11-07 15:26:45.791 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) [root at rdo01 ceilometer]# grep -i error *|more alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service [-] alarm evaluation cycle failed alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service Traceback (most recent call last): alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service File "/usr/lib/python2.7/site-packages/ceilometer/alarm/service.py", line 93, in _eva luate_assigned_alarms alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service alarms = self._assigned_alarms() alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service File "/usr/lib/python2.7/site-packages/ceilometer/alarm/service.py", line 149, in _as signed_alarms alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service all_alarms = self._client.alarms.list(q=[{'field': 'enabled', alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service File "/usr/lib/python2.7/site-packages/ceilometer/alarm/service.py", line 88, in _cli ent alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service self.api_client = ceiloclient.get_client(2, **creds) alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service File "/usr/lib/python2.7/site-packages/ceilometerclient/client.py", line 395, in get_ client alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service return Client(version, endpoint, **kwargs) alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service File "/usr/lib/python2.7/site-packages/ceilometerclient/client.py", line 359, in Clie nt alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service return client_class(*args, **client_kwargs) alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service File "/usr/lib/python2.7/site-packages/ceilometerclient/v2/client.py", line 68, in __ init__ alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service self.alarm_client, aodh_enabled = self._get_alarm_client(**kwargs) alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service File "/usr/lib/python2.7/site-packages/ceilometerclient/v2/client.py", line 106, in _ get_alarm_client alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service 
kwargs.get('timeout'))
alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service File "/usr/lib/python2.7/site-packages/ceilometerclient/client.py", line 271, in redirect_to_aodh_endpoint
alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service self.opts['endpoint'] = _get_endpoint(ks_session, **ks_kwargs)
alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service File "/usr/lib/python2.7/site-packages/ceilometerclient/client.py", line 201, in _get_endpoint
alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service region_name=kwargs.get('region_name'))
alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 660, in get_endpoint
alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service return auth.get_endpoint(self, **kwargs)
alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service File "/usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py", line 315, in get_endpoint
alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service service_catalog = self.get_access(session).service_catalog
alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service File "/usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py", line 240, in get_access
alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service self.auth_ref = self.get_auth_ref(session)
alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service File "/usr/lib/python2.7/site-packages/keystoneclient/auth/identity/v2.py", line 88, in get_auth_ref
alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service authenticated=False, log=False)
alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 501, in post
alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR ceilometer.alarm.service return self.request(url, 'POST', **kwargs)

Thanks,
Ashraf Hassan

********************************************************************************
This e-mail and its contents are subject to a DISCLAIMER with important
RESERVATIONS: see http://www.t-mobile.nl/disclaimer
********************************************************************************
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marius at remote-lab.net Sun Nov 8 06:55:37 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Sun, 8 Nov 2015 07:55:37 +0100
Subject: [Rdo-list] Trying to install the RDO
In-Reply-To: 
References: 
Message-ID: 

Hi Ashraf,

Please make sure that you follow the Liberty docs here[1] and use the
CentOS RDO release RPM. Logs of the undercloud installation should be in
/home/stack/.instack/install-undercloud.log. In a later step[2] you should
configure a DNS server that will be used by the overcloud nodes, but I
believe your issue doesn't relate to that. Please check the log to see if
you can find anything relevant and also paste it on some paste service so
we can have a look.
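
For example, a quick first pass over that log and over keystone could look
like the following (a rough sketch only, assuming a stock Liberty undercloud
layout, i.e. stackrc under /home/stack and the central agent shipped as the
openstack-ceilometer-central unit; adjust paths and unit names to your setup):

  $ grep -niE 'error|fail' /home/stack/.instack/install-undercloud.log | head -n 40
  $ source /home/stack/stackrc        # only exists if the install got far enough to write it
  $ openstack user list | grep -i ceilometer
  $ sudo systemctl status openstack-ceilometer-central

If no ceilometer user shows up, the repeated HTTP 401 "Could not find user:
ceilometer" entries are just a symptom, and the interesting failure is the
earlier keystone/puppet step recorded in install-undercloud.log rather than
ceilometer itself.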
[1] https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/installation/installing.html [2] https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html#configure-a-nameserver-for-the-overcloud On Sun, Nov 8, 2015 at 12:04 AM, Hassan, Ashraf wrote: > Hi Experts, > > I am a system analyst, I worked a lot with VMware virtualization, and > now I am trying to have a small installation of a private cloud for the sake > of demonstration for my manager, I have 6 nodes, 3 will be domain > controller, all the nodes HP BL460c G6 with Virtuall connect attached to > the blade center. > > I have configured the 1 LOM in first NIC to be for the PXE boot. > > The installation for all OS will be Centos7. > > For the first RDO, I have installed Centos 7 on the first node, and I > have configured bonding and VLANs for the other 2 LOMs. > > I followed the steps in this link () literally ? I ran the ?openstack > undercloud install? using the stack user with nod sudo. > > My undercloud.conf is as follow: > > local_ip = 192.168.1.1/24 > > local_interface = enp2s0f0 > > masquerade_network = 192.168.1.0/24 > > dhcp_start = 192.168.1.50 > > dhcp_end = 192.168.1.151 > > network_cidr = 192.168.1.0/24 > > network_gateway = 192.168.1.1 > > inspection_iprange = 192.168.1.152,192.168.1.252 > > > > First I need to know if I need to configure any DNS server on the PXE > boot network, honestly I do not see it is needed. > > Secondly, the installation has failed, where during the installation > I am getting these errors: > > > > Notice: > /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/Staging::File[rabbitmqadmin]/Exec[/var/lib/rabbitmq/rabbitmqadmin]/returns: > Dload Upload Total Spent Left Speed > > 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- > 0curl: (7) Failed connect to localhost:15672; Connection refused > > Error: curl -k --noproxy localhost --retry 30 --retry-delay 6 -f -L -o > /var/lib/rabbitmq/rabbitmqadmin > http://guest:guest at localhost:15672/cli/rabbitmqadmin returned 7 instead of > one of [0] > > Error: > /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/Staging::File[rabbitmqadmin]/Exec[/var/lib/rabbitmq/rabbitmqadmin]/returns: > change from notrun to 0 failed: curl -k --noproxy localhost --retry 30 > --retry-delay 6 -f -L -o /var/lib/rabbitmq/rabbitmqadmin > http://guest:guest at localhost:15672/cli/rabbitmqadmin returned 7 instead of > one of [0] > > Notice: > /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/File[/usr/local/bin/rabbitmqadmin]: > Dependency Exec[/var/lib/rabbitmq/rabbitmqadmin] has failures: true > > Warning: > /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/File[/usr/local/bin/rabbitmqadmin]: > Skipping because of failed dependencies > > > > And at the end I am getting this error: > > > > 1. Notice: Finished catalog run in 606.34 seconds > > 2. + rc=6 > > 3. + set -e > > 4. + echo 'puppet apply exited with exit code 6' > > 5. puppet apply exited with exit code 6 > > 6. + '[' 6 '!=' 2 -a 6 '!=' 0 ']' > > 7. + exit 6 > > 8. [2015-11-07 12:56:55,814] (os-refresh-config) [ERROR] during configure > phase. [Command '['dib-run-parts', > '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status > 6] > > 9. > > 10.[2015-11-07 12:56:55,815] (os-refresh-config) [ERROR] Aborting... > > 11.Traceback (most recent call last): > > 12. File "", line 1, in > > 13. File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line > 562, in install > > 14. _run_orc(instack_env) > > 15. 
File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line > 494, in _run_orc > > 16. _run_live_command(args, instack_env, 'os-refresh-config') > > 17. File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line > 325, in _run_live_command > > 18. raise RuntimeError('%s failed. See log for details.' % name) > > 19.RuntimeError: os-refresh-config failed. See log for details. > > 20.Command 'instack-install-undercloud' returned non-zero exit status 1 > > > > > > I checked and I cannot find any error log for the puppet which means > it has failed in very early stage, so I do not know where to look and what > to check, and of course how to solve it, can anyone please guide me? > > Remark, below are the errors I found in the logs were in the > following errors: > > > > > > central.log:2015-11-07 12:56:45.816 22965 ERROR ceilometer.nova_client > > central.log:2015-11-07 12:56:45.825 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.828 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.830 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.832 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.835 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.837 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.840 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_health_probes, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.842 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.844 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_members, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.847 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.849 22965 ERROR ceilometer.agent.manager [-] > Skipping vpn_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.852 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.855 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.857 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.860 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.863 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.865 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.867 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.870 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.872 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.874 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.877 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.880 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.882 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.884 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.887 22965 ERROR ceilometer.agent.manager [-] > Skipping ipsec_connections, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 12:56:45.889 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-246a49b9-db5a-4d7d-a28d-b8b63183324d) (HTTP 401) > > central.log:2015-11-07 13:06:45.775 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client [-] > Could not find user: ceilometer (Disable debug mode to suppress these > details.) 
(HTTP 401) > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > Traceback (most recent call last): > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, > in with_logging > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > return func(*args, **kwargs) > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, > in instance_get_all > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > search_opts=search_opts) > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, > in list > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > "servers") > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in > _list > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > _resp, body = self.api.client.get(url) > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in > get > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > return self._cs_request(url, 'GET', **kwargs) > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in > _cs_request > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > self.authenticate() > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in > authenticate > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > auth_url = self._v2_auth(auth_url) > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in > _v2_auth > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > return self._authenticate(url, body) > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in > _authenticate > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > **kwargs) > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in > _time_request > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > resp, body = self.request(url, method, **kwargs) > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in > request > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > raise exceptions.from_response(resp, body, url, method) > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > Unauthorized: Could not find user: ceilometer (Disable debug mode to > suppress these details.) 
(HTTP 401) > > central.log:2015-11-07 13:06:45.794 22965 ERROR ceilometer.nova_client > > central.log:2015-11-07 13:06:45.797 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.800 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.802 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.804 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.806 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.809 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.812 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_health_probes, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.814 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.816 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_members, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.819 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.821 22965 ERROR ceilometer.agent.manager [-] > Skipping vpn_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.823 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.826 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.828 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.831 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.833 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.836 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.838 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.841 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.843 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.845 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.848 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.850 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.853 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.855 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.857 22965 ERROR ceilometer.agent.manager [-] > Skipping ipsec_connections, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:06:45.860 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-82dc33e0-6c98-40b1-b8c8-b5a051737f10) (HTTP 401) > > central.log:2015-11-07 13:16:45.700 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client [-] > Could not find user: ceilometer (Disable debug mode to suppress these > details.) (HTTP 401) > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > Traceback (most recent call last): > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, > in with_logging > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > return func(*args, **kwargs) > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, > in instance_get_all > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > search_opts=search_opts) > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, > in list > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > "servers") > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in > _list > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > _resp, body = self.api.client.get(url) > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in > get > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > return self._cs_request(url, 'GET', **kwargs) > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in > _cs_request > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > self.authenticate() > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in > authenticate > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > auth_url = self._v2_auth(auth_url) > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in > 
_v2_auth > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > return self._authenticate(url, body) > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in > _authenticate > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > **kwargs) > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in > _time_request > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > resp, body = self.request(url, method, **kwargs) > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in > request > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > raise exceptions.from_response(resp, body, url, method) > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > Unauthorized: Could not find user: ceilometer (Disable debug mode to > suppress these details.) (HTTP 401) > > central.log:2015-11-07 13:16:45.718 22965 ERROR ceilometer.nova_client > > central.log:2015-11-07 13:16:45.720 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.723 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.725 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.727 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.729 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.732 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.734 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_health_probes, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.737 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.739 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_members, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.741 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.743 22965 ERROR ceilometer.agent.manager [-] > Skipping vpn_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.746 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.748 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.751 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.753 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.756 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.758 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.760 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.762 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.765 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.767 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.769 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.772 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.774 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.776 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.779 22965 ERROR ceilometer.agent.manager [-] > Skipping ipsec_connections, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:16:45.781 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-2ead0188-bca6-440c-9b01-f5e45851f281) (HTTP 401) > > central.log:2015-11-07 13:26:45.698 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client [-] > Could not find user: ceilometer (Disable debug mode to suppress these > details.) 
(HTTP 401) > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > Traceback (most recent call last): > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, > in with_logging > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > return func(*args, **kwargs) > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, > in instance_get_all > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > search_opts=search_opts) > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, > in list > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > "servers") > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in > _list > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > _resp, body = self.api.client.get(url) > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in > get > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > return self._cs_request(url, 'GET', **kwargs) > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in > _cs_request > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > self.authenticate() > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in > authenticate > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > auth_url = self._v2_auth(auth_url) > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in > _v2_auth > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > return self._authenticate(url, body) > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in > _authenticate > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > **kwargs) > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in > _time_request > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > resp, body = self.request(url, method, **kwargs) > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in > request > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > raise exceptions.from_response(resp, body, url, method) > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > Unauthorized: Could not find user: ceilometer (Disable debug mode to > suppress these details.) 
(HTTP 401) > > central.log:2015-11-07 13:26:45.714 22965 ERROR ceilometer.nova_client > > central.log:2015-11-07 13:26:45.717 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.719 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.721 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.723 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.726 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.728 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.731 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_health_probes, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.733 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.735 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_members, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.738 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.740 22965 ERROR ceilometer.agent.manager [-] > Skipping vpn_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.742 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.745 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.747 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.750 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.752 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.755 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.757 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.759 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.762 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.764 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.766 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.769 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401) > > central.log:2015-11-07 13:26:45.771 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401)
> central.log:2015-11-07 13:26:45.773 22965 ERROR ceilometer.agent.manager [-] Skipping fw_services, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401)
> central.log:2015-11-07 13:26:45.776 22965 ERROR ceilometer.agent.manager [-] Skipping ipsec_connections, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401)
> central.log:2015-11-07 13:26:45.778 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-81f660f4-b8b1-47b9-9dd8-1f01cded3c91) (HTTP 401)
> central.log:2015-11-07 13:36:45.698 22965 ERROR ceilometer.agent.manager [-] Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client [-] Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401)
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client Traceback (most recent call last):
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, in with_logging
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client     return func(*args, **kwargs)
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, in instance_get_all
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client     search_opts=search_opts)
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, in list
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client     "servers")
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in _list
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client     _resp, body = self.api.client.get(url)
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in get
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client     return self._cs_request(url, 'GET', **kwargs)
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in _cs_request
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client     self.authenticate()
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in authenticate
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client     auth_url = self._v2_auth(auth_url)
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in _v2_auth
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client     return self._authenticate(url, body)
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in _authenticate
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client     **kwargs)
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in _time_request
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client     resp, body = self.request(url, method, **kwargs)
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in request
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client     raise exceptions.from_response(resp, body, url, method)
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client Unauthorized: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401)
> central.log:2015-11-07 13:36:45.715 22965 ERROR ceilometer.nova_client
> central.log:2015-11-07 13:36:45.718 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.721 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.723 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.725 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.728 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.731 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.733 22965 ERROR ceilometer.agent.manager [-] Skipping lb_health_probes, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.735 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.738 22965 ERROR ceilometer.agent.manager [-] Skipping lb_members, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.740 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.742 22965 ERROR ceilometer.agent.manager [-] Skipping vpn_services, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.745 22965 ERROR ceilometer.agent.manager [-] Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.748 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.751 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.753 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.756 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.759 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.761 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.763 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.766 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.768 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.771 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.773 22965 ERROR ceilometer.agent.manager [-] Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.776 22965 ERROR ceilometer.agent.manager [-] Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.778 22965 ERROR ceilometer.agent.manager [-] Skipping fw_services, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.780 22965 ERROR ceilometer.agent.manager [-] Skipping ipsec_connections, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
> central.log:2015-11-07 13:36:45.783 22965 ERROR ceilometer.agent.manager [-] Skipping tenant, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0ea03219-9625-44c3-825d-cb721205ccff) (HTTP 401)
>
> [the identical Unauthorized traceback and per-pollster "Could not find user: ceilometer" (HTTP 401) errors repeat for the 13:46, 13:56, 14:06, 14:16 and 14:26 polling runs, each with its own request ID]
>
> central.log:2015-11-07 14:36:45.701 22965 ERROR ceilometer.agent.manager [-] Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401)
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client [-] Could not find user: ceilometer (Disable debug mode to suppress these details.) (HTTP 401)
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client Traceback (most recent call last):
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, in with_logging
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client     return func(*args, **kwargs)
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, in instance_get_all
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client     search_opts=search_opts)
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, in list
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client     "servers")
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in _list
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client     _resp, body = self.api.client.get(url)
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in get
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client     return self._cs_request(url, 'GET', **kwargs)
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in _cs_request
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client     self.authenticate()
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in authenticate
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client     auth_url = self._v2_auth(auth_url)
> central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in
_v2_auth > > central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client > return self._authenticate(url, body) > > central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in > _authenticate > > central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client > **kwargs) > > central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in > _time_request > > central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client > resp, body = self.request(url, method, **kwargs) > > central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in > request > > central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client > raise exceptions.from_response(resp, body, url, method) > > central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client > Unauthorized: Could not find user: ceilometer (Disable debug mode to > suppress these details.) (HTTP 401) > > central.log:2015-11-07 14:36:45.719 22965 ERROR ceilometer.nova_client > > central.log:2015-11-07 14:36:45.722 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.724 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.726 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.729 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.732 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.735 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.737 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_health_probes, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.740 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.742 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_members, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.744 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.746 22965 ERROR ceilometer.agent.manager [-] > Skipping vpn_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.749 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.752 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.754 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.757 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.759 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.761 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.763 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.766 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.768 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.770 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.773 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.775 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.778 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.780 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.782 22965 ERROR ceilometer.agent.manager [-] > Skipping ipsec_connections, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:36:45.784 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e891a57b-998f-4953-9217-2e0c2ad5f04a) (HTTP 401) > > central.log:2015-11-07 14:46:45.705 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client [-] > Could not find user: ceilometer (Disable debug mode to suppress these > details.) 
(HTTP 401) > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > Traceback (most recent call last): > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, > in with_logging > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > return func(*args, **kwargs) > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, > in instance_get_all > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > search_opts=search_opts) > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, > in list > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > "servers") > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in > _list > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > _resp, body = self.api.client.get(url) > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in > get > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > return self._cs_request(url, 'GET', **kwargs) > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in > _cs_request > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > self.authenticate() > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in > authenticate > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > auth_url = self._v2_auth(auth_url) > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in > _v2_auth > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > return self._authenticate(url, body) > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in > _authenticate > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > **kwargs) > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in > _time_request > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > resp, body = self.request(url, method, **kwargs) > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in > request > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > raise exceptions.from_response(resp, body, url, method) > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > Unauthorized: Could not find user: ceilometer (Disable debug mode to > suppress these details.) 
(HTTP 401) > > central.log:2015-11-07 14:46:45.723 22965 ERROR ceilometer.nova_client > > central.log:2015-11-07 14:46:45.726 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.728 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.731 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.733 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.735 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.738 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.740 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_health_probes, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.742 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.744 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_members, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.747 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.749 22965 ERROR ceilometer.agent.manager [-] > Skipping vpn_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.752 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.755 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.757 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.759 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.762 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.764 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.766 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.768 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.771 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.773 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.776 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.778 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.780 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.782 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.785 22965 ERROR ceilometer.agent.manager [-] > Skipping ipsec_connections, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:46:45.787 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-e0c7f342-e61d-4751-b593-b1fc5b4bf02a) (HTTP 401) > > central.log:2015-11-07 14:56:45.705 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client [-] > Could not find user: ceilometer (Disable debug mode to suppress these > details.) (HTTP 401) > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > Traceback (most recent call last): > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, > in with_logging > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > return func(*args, **kwargs) > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, > in instance_get_all > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > search_opts=search_opts) > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, > in list > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > "servers") > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in > _list > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > _resp, body = self.api.client.get(url) > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in > get > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > return self._cs_request(url, 'GET', **kwargs) > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in > _cs_request > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > self.authenticate() > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in > authenticate > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > auth_url = self._v2_auth(auth_url) > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in > 
_v2_auth > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > return self._authenticate(url, body) > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in > _authenticate > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > **kwargs) > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in > _time_request > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > resp, body = self.request(url, method, **kwargs) > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in > request > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > raise exceptions.from_response(resp, body, url, method) > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > Unauthorized: Could not find user: ceilometer (Disable debug mode to > suppress these details.) (HTTP 401) > > central.log:2015-11-07 14:56:45.722 22965 ERROR ceilometer.nova_client > > central.log:2015-11-07 14:56:45.724 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.727 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.729 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.731 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.733 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.736 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.739 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_health_probes, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.741 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.743 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_members, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.745 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.748 22965 ERROR ceilometer.agent.manager [-] > Skipping vpn_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.750 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.753 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.755 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.758 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.760 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.763 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.765 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.767 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.769 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.771 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.774 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.777 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.779 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.781 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.784 22965 ERROR ceilometer.agent.manager [-] > Skipping ipsec_connections, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 14:56:45.786 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-61eebec7-00b1-4a40-87dd-8e7cc7122c27) (HTTP 401) > > central.log:2015-11-07 15:06:45.705 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client [-] > Could not find user: ceilometer (Disable debug mode to suppress these > details.) 
(HTTP 401) > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > Traceback (most recent call last): > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, > in with_logging > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > return func(*args, **kwargs) > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, > in instance_get_all > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > search_opts=search_opts) > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, > in list > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > "servers") > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in > _list > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > _resp, body = self.api.client.get(url) > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in > get > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > return self._cs_request(url, 'GET', **kwargs) > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in > _cs_request > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > self.authenticate() > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in > authenticate > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > auth_url = self._v2_auth(auth_url) > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in > _v2_auth > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > return self._authenticate(url, body) > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in > _authenticate > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > **kwargs) > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in > _time_request > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > resp, body = self.request(url, method, **kwargs) > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in > request > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > raise exceptions.from_response(resp, body, url, method) > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > Unauthorized: Could not find user: ceilometer (Disable debug mode to > suppress these details.) 
(HTTP 401) > > central.log:2015-11-07 15:06:45.725 22965 ERROR ceilometer.nova_client > > central.log:2015-11-07 15:06:45.727 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.730 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.732 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.734 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.737 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.740 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.742 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_health_probes, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.744 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.746 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_members, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.749 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.751 22965 ERROR ceilometer.agent.manager [-] > Skipping vpn_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.753 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.756 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.758 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.761 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.763 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.766 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.768 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.770 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.772 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.775 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.778 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.780 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.782 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.784 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.787 22965 ERROR ceilometer.agent.manager [-] > Skipping ipsec_connections, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:06:45.789 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-8bb50b4d-3714-4b19-add4-d327d37ff561) (HTTP 401) > > central.log:2015-11-07 15:16:45.707 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client [-] > Could not find user: ceilometer (Disable debug mode to suppress these > details.) (HTTP 401) > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > Traceback (most recent call last): > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, > in with_logging > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > return func(*args, **kwargs) > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, > in instance_get_all > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > search_opts=search_opts) > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, > in list > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > "servers") > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in > _list > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > _resp, body = self.api.client.get(url) > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in > get > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > return self._cs_request(url, 'GET', **kwargs) > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in > _cs_request > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > self.authenticate() > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in > authenticate > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > auth_url = self._v2_auth(auth_url) > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in > 
_v2_auth > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > return self._authenticate(url, body) > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in > _authenticate > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > **kwargs) > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in > _time_request > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > resp, body = self.request(url, method, **kwargs) > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in > request > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > raise exceptions.from_response(resp, body, url, method) > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > Unauthorized: Could not find user: ceilometer (Disable debug mode to > suppress these details.) (HTTP 401) > > central.log:2015-11-07 15:16:45.725 22965 ERROR ceilometer.nova_client > > central.log:2015-11-07 15:16:45.727 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.730 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.732 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.734 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.737 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.739 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.742 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_health_probes, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.744 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.746 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_members, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.748 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.751 22965 ERROR ceilometer.agent.manager [-] > Skipping vpn_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.753 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.756 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.758 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.761 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.763 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.766 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.768 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.770 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.772 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.774 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.777 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.779 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.782 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.784 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.786 22965 ERROR ceilometer.agent.manager [-] > Skipping ipsec_connections, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:16:45.788 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-48f96628-ea5b-4410-b125-1de6c9658bae) (HTTP 401) > > central.log:2015-11-07 15:26:45.709 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_policy, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client [-] > Could not find user: ceilometer (Disable debug mode to suppress these > details.) 
(HTTP 401) > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > Traceback (most recent call last): > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 47, > in with_logging > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > return func(*args, **kwargs) > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/ceilometer/nova_client.py", line 164, > in instance_get_all > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > search_opts=search_opts) > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 608, > in list > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > "servers") > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 72, in > _list > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > _resp, body = self.api.client.get(url) > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 446, in > get > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > return self._cs_request(url, 'GET', **kwargs) > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 402, in > _cs_request > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > self.authenticate() > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 555, in > authenticate > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > auth_url = self._v2_auth(auth_url) > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 646, in > _v2_auth > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > return self._authenticate(url, body) > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 659, in > _authenticate > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > **kwargs) > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in > _time_request > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > resp, body = self.request(url, method, **kwargs) > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in > request > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > raise exceptions.from_response(resp, body, url, method) > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > Unauthorized: Could not find user: ceilometer (Disable debug mode to > suppress these details.) 
(HTTP 401) > > central.log:2015-11-07 15:26:45.725 22965 ERROR ceilometer.nova_client > > central.log:2015-11-07 15:26:45.728 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.730 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.733 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.735 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.737 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.740 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.742 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_health_probes, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.745 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.747 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_members, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.749 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.751 22965 ERROR ceilometer.agent.manager [-] > Skipping vpn_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.755 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_vips, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.758 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.760 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.763 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.765 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.768 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.770 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.772 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.774 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.777 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.780 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.782 22965 ERROR ceilometer.agent.manager [-] > Skipping lb_pools, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.784 22965 ERROR ceilometer.agent.manager [-] > Skipping endpoint, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) 
(HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.787 22965 ERROR ceilometer.agent.manager [-] > Skipping fw_services, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.789 22965 ERROR ceilometer.agent.manager [-] > Skipping ipsec_connections, keystone issue: Could not find user: ceilometer > (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > central.log:2015-11-07 15:26:45.791 22965 ERROR ceilometer.agent.manager [-] > Skipping tenant, keystone issue: Could not find user: ceilometer (Disable > debug mode to suppress these details.) (HTTP 401) (Request-ID: > req-df87af8a-c8e7-41b5-9d52-8d5776dde602) (HTTP 401) > > [root at rdo01 ceilometer]# grep -i error *|more > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service [-] alarm evaluation cycle failed > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service Traceback (most recent call last): > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service File > "/usr/lib/python2.7/site-packages/ceilometer/alarm/service.py", line 93, in > _eva > > luate_assigned_alarms > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service alarms = self._assigned_alarms() > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service File > "/usr/lib/python2.7/site-packages/ceilometer/alarm/service.py", line 149, in > _as > > signed_alarms > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service all_alarms = > self._client.alarms.list(q=[{'field': 'enabled', > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service File > "/usr/lib/python2.7/site-packages/ceilometer/alarm/service.py", line 88, in > _cli > > ent > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service self.api_client = ceiloclient.get_client(2, > **creds) > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service File > "/usr/lib/python2.7/site-packages/ceilometerclient/client.py", line 395, in > get_ > > client > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service return Client(version, endpoint, **kwargs) > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service File > "/usr/lib/python2.7/site-packages/ceilometerclient/client.py", line 359, in > Clie > > nt > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service return client_class(*args, **client_kwargs) > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service File > "/usr/lib/python2.7/site-packages/ceilometerclient/v2/client.py", line 68, > in __ > > init__ > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service self.alarm_client, aodh_enabled = > self._get_alarm_client(**kwargs) > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service File > "/usr/lib/python2.7/site-packages/ceilometerclient/v2/client.py", line 106, > in _ > > get_alarm_client > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service kwargs.get('timeout')) > > alarm-evaluator.log:2015-11-07 12:56:46.541 
23068 ERROR > ceilometer.alarm.service File > "/usr/lib/python2.7/site-packages/ceilometerclient/client.py", line 271, in > redi > > rect_to_aodh_endpoint > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service self.opts['endpoint'] = > _get_endpoint(ks_session, **ks_kwargs) > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service File > "/usr/lib/python2.7/site-packages/ceilometerclient/client.py", line 201, in > _get > > _endpoint > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service region_name=kwargs.get('region_name')) > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service File > "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 660, in > get_e > > ndpoint > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service return auth.get_endpoint(self, **kwargs) > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service File > "/usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py", > line 31 > > 5, in get_endpoint > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service service_catalog = > self.get_access(session).service_catalog > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service File > "/usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py", > line 24 > > 0, in get_access > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service self.auth_ref = self.get_auth_ref(session) > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service File > "/usr/lib/python2.7/site-packages/keystoneclient/auth/identity/v2.py", line > 88, > > in get_auth_ref > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service authenticated=False, log=False) > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service File > "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 501, in > post > > alarm-evaluator.log:2015-11-07 12:56:46.541 23068 ERROR > ceilometer.alarm.service return self.request(url, 'POST', **kwargs) > > > > Thanks, > > Ashraf Hassan > > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke > VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important > RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From ashraf.hassan at t-mobile.nl Sun Nov 8 07:57:15 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Sun, 8 Nov 2015 08:57:15 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: Message-ID: Hi Marius, Thanks for the DNS info, that is inline with my expectation that you need it in the overcloud. 
For the installation of the undercloud, actually I followed literally the procedure in the link you sent earlier, but I got stuck at the end in the installation puppet, when I check the log I find only error in the rabbitmq, when I google this error , I do not find much information about it, however, in one of the public mailing list it says you should not bother yourself much with this error because it is only the script!!!, however regarding the puppet exit error, I do not see any information at all. For my undercloud.conf: http://pastebin.com/wNb55tSC For my underlcoud installation log: http://pastebin.com/ZPwvX4EB For the extracted errors in the undercloud installation log: http://pastebin.com/ZPwvX4EB System Enviroment variables: http://pastebin.com/rndYa1wi Hosts File: http://pastebin.com/kaXeRsRr Enabled Services: http://pastebin.com/WYeZstsv Network Interfaces: http://pastebin.com/BUBuJmA5 How can fix the error? Do I need a fresh installation for the Centos? Thanks, Ashraf Hassan ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** From marius at remote-lab.net Sun Nov 8 08:17:12 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sun, 8 Nov 2015 09:17:12 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: Message-ID: Can you append the following line to your /etc/hosts? I think your system cannot resolve 'localhost': 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 Also try to see if you get any output for: curl -k --noproxy localhost --retry 30 --retry-delay 6 -f -L http://guest:guest at localhost:15672/cli/rabbitmqadmin before and after On Sun, Nov 8, 2015 at 8:57 AM, Hassan, Ashraf wrote: > Hi Marius, > Thanks for the DNS info, that is inline with my expectation that you need it in the overcloud. > For the installation of the undercloud, actually I followed literally the procedure in the link you sent earlier, but I got stuck at the end in the installation puppet, when I check the log I find only error in the rabbitmq, when I google this error , I do not find much information about it, however, in one of the public mailing list it says you should not bother yourself much with this error because it is only the script!!!, however regarding the puppet exit error, I do not see any information at all. > For my undercloud.conf: http://pastebin.com/wNb55tSC > For my underlcoud installation log: http://pastebin.com/ZPwvX4EB > For the extracted errors in the undercloud installation log: http://pastebin.com/ZPwvX4EB > System Enviroment variables: http://pastebin.com/rndYa1wi > Hosts File: http://pastebin.com/kaXeRsRr > Enabled Services: http://pastebin.com/WYeZstsv > Network Interfaces: http://pastebin.com/BUBuJmA5 > > How can fix the error? Do I need a fresh installation for the Centos? 
> > > Thanks, > > Ashraf Hassan > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** From ashraf.hassan at t-mobile.nl Sun Nov 8 08:57:32 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Sun, 8 Nov 2015 09:57:32 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: Message-ID: Ok no it appears it is working, shall I rerun the openstack installation script or I need a fresh Centos installation? Curl Before: http://pastebin.com/3LrrQX8X Curl After: http://pastebin.com/ur7CbWzj New Hosts file: http://pastebin.com/aVmvGFeF Thanks, Ashraf Hassan ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** From marius at remote-lab.net Sun Nov 8 09:03:21 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sun, 8 Nov 2015 10:03:21 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: Message-ID: OK, try to rerun the openstack undercloud install and see how it goes. On Sun, Nov 8, 2015 at 9:57 AM, Hassan, Ashraf wrote: > Ok no it appears it is working, shall I rerun the openstack installation script or I need a fresh Centos installation? > Curl Before: http://pastebin.com/3LrrQX8X > Curl After: http://pastebin.com/ur7CbWzj > New Hosts file: http://pastebin.com/aVmvGFeF > > > Thanks, > > Ashraf Hassan > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** From ashraf.hassan at t-mobile.nl Sun Nov 8 09:38:43 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Sun, 8 Nov 2015 10:38:43 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: Message-ID: No wit went further, but exit with a different error, I have reran it twice, and the exit error. Exit Error: http://pastebin.com/FExNAZcL First Rerun: http://pastebin.com/Zsh5wvpE Second Rerun: http://pastebin.com/YQdtcAZc Met vriendelijke groet, Ashraf Hassan Sr. VAS System Analyst Telefoon mobiel: +316 2409 5907 E-mail: ashraf.hassan at t-mobile.nl T-Mobile Netherlands BV ??????????????????????????????????????????????????????????????????????????????? Waldorpstraat 60 2521 CC Den Haag http://www.t-mobile.nl? http://www.facebook.com/tmobilenl https://twitter.com/TMobile_NL ? Life is for sharing. ? 
PLEASE CONSIDER THE ENVIRONMENT?BEFORE PRINTING THIS EMAIL -----Original Message----- From: Marius Cornea [mailto:marius at remote-lab.net] Sent: Sunday, November 08, 2015 10:03 AM To: Hassan, Ashraf Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] Trying to install the RDO OK, try to rerun the openstack undercloud install and see how it goes. On Sun, Nov 8, 2015 at 9:57 AM, Hassan, Ashraf wrote: > Ok no it appears it is working, shall I rerun the openstack installation script or I need a fresh Centos installation? > Curl Before: http://pastebin.com/3LrrQX8X Curl After: > http://pastebin.com/ur7CbWzj New Hosts file: > http://pastebin.com/aVmvGFeF > > > Thanks, > > Ashraf Hassan > > ********************************************************************** > ********** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke > VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > > This e-mail and its contents are subject to a DISCLAIMER with > important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > > ********************************************************************** > ********** ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** From marius at remote-lab.net Sun Nov 8 10:04:11 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sun, 8 Nov 2015 11:04:11 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: Message-ID: I'm not sure what's going on here. Are you by any chance using any proxy (192.168.3.153) for the http requests? [01;31m [K2015-11-08 10 [m [K:10:57,849 INFO: Starting new HTTP connection (1): 192.168.3.153 [01;31m [K2015-11-08 10 [m [K:12:00,958 DEBUG: "POST http://192.168.1.1:5000/v2.0/tokens HTTP/1.1" 504 1209 -> this looks like a timeout. On Sun, Nov 8, 2015 at 10:38 AM, Hassan, Ashraf wrote: > No wit went further, but exit with a different error, I have reran it twice, and the exit error. > Exit Error: http://pastebin.com/FExNAZcL > First Rerun: http://pastebin.com/Zsh5wvpE > Second Rerun: http://pastebin.com/YQdtcAZc > > > > > Met vriendelijke groet, > > Ashraf Hassan > Sr. VAS System Analyst > Telefoon mobiel: +316 2409 5907 > E-mail: ashraf.hassan at t-mobile.nl > > T-Mobile Netherlands BV > > Waldorpstraat 60 > 2521 CC Den Haag > http://www.t-mobile.nl > http://www.facebook.com/tmobilenl > https://twitter.com/TMobile_NL > > Life is for sharing. > > ? PLEASE CONSIDER THE ENVIRONMENT BEFORE PRINTING THIS EMAIL > > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Sunday, November 08, 2015 10:03 AM > To: Hassan, Ashraf > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] Trying to install the RDO > > OK, try to rerun the openstack undercloud install and see how it goes. > > On Sun, Nov 8, 2015 at 9:57 AM, Hassan, Ashraf wrote: >> Ok no it appears it is working, shall I rerun the openstack installation script or I need a fresh Centos installation? 
>> Curl Before: http://pastebin.com/3LrrQX8X Curl After: >> http://pastebin.com/ur7CbWzj New Hosts file: >> http://pastebin.com/aVmvGFeF >> >> >> Thanks, >> >> Ashraf Hassan >> >> ********************************************************************** >> ********** >> >> N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke >> VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer >> >> >> This e-mail and its contents are subject to a DISCLAIMER with >> important RESERVATIONS: see http://www.t-mobile.nl/disclaimer >> >> >> >> ********************************************************************** >> ********** > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** From ashraf.hassan at t-mobile.nl Sun Nov 8 10:08:58 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Sun, 8 Nov 2015 11:08:58 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: Message-ID: Yes, my server is connected the internet via a proxy, so I have exported http://192.168.3.153:3128, do I need to remove it and rerun the installation script? Met vriendelijke groet, Ashraf Hassan Sr. VAS System Analyst Telefoon mobiel: +316 2409 5907 E-mail: ashraf.hassan at t-mobile.nl T-Mobile Netherlands BV ??????????????????????????????????????????????????????????????????????????????? Waldorpstraat 60 2521 CC Den Haag http://www.t-mobile.nl? http://www.facebook.com/tmobilenl https://twitter.com/TMobile_NL ? Life is for sharing. ? PLEASE CONSIDER THE ENVIRONMENT?BEFORE PRINTING THIS EMAIL -----Original Message----- From: Marius Cornea [mailto:marius at remote-lab.net] Sent: Sunday, November 08, 2015 11:04 AM To: Hassan, Ashraf Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] Trying to install the RDO I'm not sure what's going on here. Are you by any chance using any proxy (192.168.3.153) for the http requests? [01;31m [K2015-11-08 10 [m [K:10:57,849 INFO: Starting new HTTP connection (1): 192.168.3.153 [01;31m [K2015-11-08 10 [m [K:12:00,958 DEBUG: "POST http://192.168.1.1:5000/v2.0/tokens HTTP/1.1" 504 1209 -> this looks like a timeout. On Sun, Nov 8, 2015 at 10:38 AM, Hassan, Ashraf wrote: > No wit went further, but exit with a different error, I have reran it twice, and the exit error. > Exit Error: http://pastebin.com/FExNAZcL > First Rerun: http://pastebin.com/Zsh5wvpE > Second Rerun: http://pastebin.com/YQdtcAZc > > > > > Met vriendelijke groet, > > Ashraf Hassan > Sr. VAS System Analyst > Telefoon mobiel: +316 2409 5907 > E-mail: ashraf.hassan at t-mobile.nl > > T-Mobile Netherlands BV > > Waldorpstraat 60 > 2521 CC Den Haag > http://www.t-mobile.nl > http://www.facebook.com/tmobilenl > https://twitter.com/TMobile_NL > > Life is for sharing. > > ? PLEASE CONSIDER THE ENVIRONMENT BEFORE PRINTING THIS EMAIL > > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Sunday, November 08, 2015 10:03 AM > To: Hassan, Ashraf > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] Trying to install the RDO > > OK, try to rerun the openstack undercloud install and see how it goes. 
> > On Sun, Nov 8, 2015 at 9:57 AM, Hassan, Ashraf wrote: >> Ok no it appears it is working, shall I rerun the openstack installation script or I need a fresh Centos installation? >> Curl Before: http://pastebin.com/3LrrQX8X Curl After: >> http://pastebin.com/ur7CbWzj New Hosts file: >> http://pastebin.com/aVmvGFeF >> >> >> Thanks, >> >> Ashraf Hassan >> >> ********************************************************************** >> ********** >> >> N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke >> VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer >> >> >> This e-mail and its contents are subject to a DISCLAIMER with >> important RESERVATIONS: see http://www.t-mobile.nl/disclaimer >> >> >> >> ********************************************************************** >> ********** > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** From marius at remote-lab.net Sun Nov 8 10:21:51 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sun, 8 Nov 2015 11:21:51 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: Message-ID: Yes, try removing that and use the yum config to set the proxy server: https://docs.fedoraproject.org/en-US/Fedora_Core/5/html/Software_Management_Guide/sn-yum-proxy-server.html On Sun, Nov 8, 2015 at 11:08 AM, Hassan, Ashraf wrote: > Yes, my server is connected the internet via a proxy, so I have exported http://192.168.3.153:3128, do I need to remove it and rerun the installation script? > > > Met vriendelijke groet, > > Ashraf Hassan > Sr. VAS System Analyst > Telefoon mobiel: +316 2409 5907 > E-mail: ashraf.hassan at t-mobile.nl > > T-Mobile Netherlands BV > > Waldorpstraat 60 > 2521 CC Den Haag > http://www.t-mobile.nl > http://www.facebook.com/tmobilenl > https://twitter.com/TMobile_NL > > Life is for sharing. > > ? PLEASE CONSIDER THE ENVIRONMENT BEFORE PRINTING THIS EMAIL > > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Sunday, November 08, 2015 11:04 AM > To: Hassan, Ashraf > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] Trying to install the RDO > > I'm not sure what's going on here. Are you by any chance using any > proxy (192.168.3.153) for the http requests? > > [01;31m [K2015-11-08 10 [m [K:10:57,849 INFO: Starting new HTTP > connection (1): 192.168.3.153 > [01;31m [K2015-11-08 10 [m [K:12:00,958 DEBUG: "POST > http://192.168.1.1:5000/v2.0/tokens HTTP/1.1" 504 1209 -> this looks > like a timeout. > > > On Sun, Nov 8, 2015 at 10:38 AM, Hassan, Ashraf > wrote: >> No wit went further, but exit with a different error, I have reran it twice, and the exit error. 
>> Exit Error: http://pastebin.com/FExNAZcL >> First Rerun: http://pastebin.com/Zsh5wvpE >> Second Rerun: http://pastebin.com/YQdtcAZc >> >> >> >> >> Met vriendelijke groet, >> >> Ashraf Hassan >> Sr. VAS System Analyst >> Telefoon mobiel: +316 2409 5907 >> E-mail: ashraf.hassan at t-mobile.nl >> >> T-Mobile Netherlands BV >> >> Waldorpstraat 60 >> 2521 CC Den Haag >> http://www.t-mobile.nl >> http://www.facebook.com/tmobilenl >> https://twitter.com/TMobile_NL >> >> Life is for sharing. >> >> ? PLEASE CONSIDER THE ENVIRONMENT BEFORE PRINTING THIS EMAIL >> >> -----Original Message----- >> From: Marius Cornea [mailto:marius at remote-lab.net] >> Sent: Sunday, November 08, 2015 10:03 AM >> To: Hassan, Ashraf >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] Trying to install the RDO >> >> OK, try to rerun the openstack undercloud install and see how it goes. >> >> On Sun, Nov 8, 2015 at 9:57 AM, Hassan, Ashraf wrote: >>> Ok no it appears it is working, shall I rerun the openstack installation script or I need a fresh Centos installation? >>> Curl Before: http://pastebin.com/3LrrQX8X Curl After: >>> http://pastebin.com/ur7CbWzj New Hosts file: >>> http://pastebin.com/aVmvGFeF >>> >>> >>> Thanks, >>> >>> Ashraf Hassan >>> >>> ********************************************************************** >>> ********** >>> >>> N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke >>> VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer >>> >>> >>> This e-mail and its contents are subject to a DISCLAIMER with >>> important RESERVATIONS: see http://www.t-mobile.nl/disclaimer >>> >>> >>> >>> ********************************************************************** >>> ********** >> >> ******************************************************************************** >> >> N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer >> >> This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer >> >> >> ******************************************************************************** > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** From ashraf.hassan at t-mobile.nl Sun Nov 8 10:50:26 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Sun, 8 Nov 2015 11:50:26 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: Message-ID: Thank you so so much, now it appaears it worked, below is the log for the last rerun, what do you think? Also I understood that for the RDO I need 3 managers, so do you think which option is better: 1- Deploy the over cloud , and use it to deploy the rest of the baremetal and then install deploy on 2 of the baremetals the other 2 managers? 2- Install 2 additional 2 baremetals as the procedure I used and start then deploy the rdo managers on the 3 underclouds together? Last undecloud.log: http://pastebin.com/aJtQP17J Met vriendelijke groet, Ashraf Hassan Sr. 
VAS System Analyst Telefoon mobiel: +316 2409 5907 E-mail: ashraf.hassan at t-mobile.nl T-Mobile Netherlands BV ??????????????????????????????????????????????????????????????????????????????? Waldorpstraat 60 2521 CC Den Haag http://www.t-mobile.nl? http://www.facebook.com/tmobilenl https://twitter.com/TMobile_NL ? Life is for sharing. ? PLEASE CONSIDER THE ENVIRONMENT?BEFORE PRINTING THIS EMAIL -----Original Message----- From: Marius Cornea [mailto:marius at remote-lab.net] Sent: Sunday, November 08, 2015 11:22 AM To: Hassan, Ashraf Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] Trying to install the RDO Yes, try removing that and use the yum config to set the proxy server: https://docs.fedoraproject.org/en-US/Fedora_Core/5/html/Software_Management_Guide/sn-yum-proxy-server.html On Sun, Nov 8, 2015 at 11:08 AM, Hassan, Ashraf wrote: > Yes, my server is connected the internet via a proxy, so I have exported http://192.168.3.153:3128, do I need to remove it and rerun the installation script? > > > Met vriendelijke groet, > > Ashraf Hassan > Sr. VAS System Analyst > Telefoon mobiel: +316 2409 5907 > E-mail: ashraf.hassan at t-mobile.nl > > T-Mobile Netherlands BV > > Waldorpstraat 60 > 2521 CC Den Haag > http://www.t-mobile.nl > http://www.facebook.com/tmobilenl > https://twitter.com/TMobile_NL > > Life is for sharing. > > ? PLEASE CONSIDER THE ENVIRONMENT BEFORE PRINTING THIS EMAIL > > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Sunday, November 08, 2015 11:04 AM > To: Hassan, Ashraf > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] Trying to install the RDO > > I'm not sure what's going on here. Are you by any chance using any > proxy (192.168.3.153) for the http requests? > > [01;31m [K2015-11-08 10 [m [K:10:57,849 INFO: Starting new HTTP > connection (1): 192.168.3.153 [01;31m [K2015-11-08 10 [m [K:12:00,958 > DEBUG: "POST http://192.168.1.1:5000/v2.0/tokens HTTP/1.1" 504 1209 -> > this looks like a timeout. > > > On Sun, Nov 8, 2015 at 10:38 AM, Hassan, Ashraf > wrote: >> No wit went further, but exit with a different error, I have reran it twice, and the exit error. >> Exit Error: http://pastebin.com/FExNAZcL First Rerun: >> http://pastebin.com/Zsh5wvpE Second Rerun: >> http://pastebin.com/YQdtcAZc >> >> >> >> >> Met vriendelijke groet, >> >> Ashraf Hassan >> Sr. VAS System Analyst >> Telefoon mobiel: +316 2409 5907 >> E-mail: ashraf.hassan at t-mobile.nl >> >> T-Mobile Netherlands BV >> >> Waldorpstraat 60 >> 2521 CC Den Haag >> http://www.t-mobile.nl >> http://www.facebook.com/tmobilenl >> https://twitter.com/TMobile_NL >> >> Life is for sharing. >> >> ? PLEASE CONSIDER THE ENVIRONMENT BEFORE PRINTING THIS EMAIL >> >> -----Original Message----- >> From: Marius Cornea [mailto:marius at remote-lab.net] >> Sent: Sunday, November 08, 2015 10:03 AM >> To: Hassan, Ashraf >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] Trying to install the RDO >> >> OK, try to rerun the openstack undercloud install and see how it goes. >> >> On Sun, Nov 8, 2015 at 9:57 AM, Hassan, Ashraf wrote: >>> Ok no it appears it is working, shall I rerun the openstack installation script or I need a fresh Centos installation? 
>>> Curl Before: http://pastebin.com/3LrrQX8X Curl After: >>> http://pastebin.com/ur7CbWzj New Hosts file: >>> http://pastebin.com/aVmvGFeF >>> >>> >>> Thanks, >>> >>> Ashraf Hassan >>> >>> ******************************************************************** >>> ** >>> ********** >>> >>> N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met >>> belangrijke VOORBEHOUDEN van toepassing: zie >>> http://www.t-mobile.nl/disclaimer >>> >>> >>> This e-mail and its contents are subject to a DISCLAIMER with >>> important RESERVATIONS: see http://www.t-mobile.nl/disclaimer >>> >>> >>> >>> ******************************************************************** >>> ** >>> ********** >> >> ********************************************************************* >> *********** >> >> N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met >> belangrijke VOORBEHOUDEN van toepassing: zie >> http://www.t-mobile.nl/disclaimer >> >> This e-mail and its contents are subject to a DISCLAIMER with >> important RESERVATIONS: see http://www.t-mobile.nl/disclaimer >> >> >> >> ********************************************************************* >> *********** > > ********************************************************************** > ********** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke > VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > > This e-mail and its contents are subject to a DISCLAIMER with > important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > > ********************************************************************** > ********** ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** From marius at remote-lab.net Sun Nov 8 11:16:59 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sun, 8 Nov 2015 12:16:59 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: Message-ID: At the end of a successful installation you should end up with a stackrc file containing the credentials for the undercloud Openstack. You only need 1 undercloud node to proceed with the overcloud deployment. Steps on what to do after undercloud installation are described in this link: https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html On Sun, Nov 8, 2015 at 11:50 AM, Hassan, Ashraf wrote: > Thank you so so much, now it appaears it worked, below is the log for the last rerun, what do you think? > Also I understood that for the RDO I need 3 managers, so do you think which option is better: > 1- Deploy the over cloud , and use it to deploy the rest of the baremetal and then install deploy on 2 of the baremetals the other 2 managers? > 2- Install 2 additional 2 baremetals as the procedure I used and start then deploy the rdo managers on the 3 underclouds together? > Last undecloud.log: http://pastebin.com/aJtQP17J > > > Met vriendelijke groet, > > Ashraf Hassan > Sr. 
VAS System Analyst > Telefoon mobiel: +316 2409 5907 > E-mail: ashraf.hassan at t-mobile.nl > > T-Mobile Netherlands BV > > Waldorpstraat 60 > 2521 CC Den Haag > http://www.t-mobile.nl > http://www.facebook.com/tmobilenl > https://twitter.com/TMobile_NL > > Life is for sharing. > > ? PLEASE CONSIDER THE ENVIRONMENT BEFORE PRINTING THIS EMAIL > > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Sunday, November 08, 2015 11:22 AM > To: Hassan, Ashraf > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] Trying to install the RDO > > Yes, try removing that and use the yum config to set the proxy server: > https://docs.fedoraproject.org/en-US/Fedora_Core/5/html/Software_Management_Guide/sn-yum-proxy-server.html > > On Sun, Nov 8, 2015 at 11:08 AM, Hassan, Ashraf wrote: >> Yes, my server is connected the internet via a proxy, so I have exported http://192.168.3.153:3128, do I need to remove it and rerun the installation script? >> >> >> Met vriendelijke groet, >> >> Ashraf Hassan >> Sr. VAS System Analyst >> Telefoon mobiel: +316 2409 5907 >> E-mail: ashraf.hassan at t-mobile.nl >> >> T-Mobile Netherlands BV >> >> Waldorpstraat 60 >> 2521 CC Den Haag >> http://www.t-mobile.nl >> http://www.facebook.com/tmobilenl >> https://twitter.com/TMobile_NL >> >> Life is for sharing. >> >> ? PLEASE CONSIDER THE ENVIRONMENT BEFORE PRINTING THIS EMAIL >> >> -----Original Message----- >> From: Marius Cornea [mailto:marius at remote-lab.net] >> Sent: Sunday, November 08, 2015 11:04 AM >> To: Hassan, Ashraf >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] Trying to install the RDO >> >> I'm not sure what's going on here. Are you by any chance using any >> proxy (192.168.3.153) for the http requests? >> >> [01;31m [K2015-11-08 10 [m [K:10:57,849 INFO: Starting new HTTP >> connection (1): 192.168.3.153 [01;31m [K2015-11-08 10 [m [K:12:00,958 >> DEBUG: "POST http://192.168.1.1:5000/v2.0/tokens HTTP/1.1" 504 1209 -> >> this looks like a timeout. >> >> >> On Sun, Nov 8, 2015 at 10:38 AM, Hassan, Ashraf >> wrote: >>> No wit went further, but exit with a different error, I have reran it twice, and the exit error. >>> Exit Error: http://pastebin.com/FExNAZcL First Rerun: >>> http://pastebin.com/Zsh5wvpE Second Rerun: >>> http://pastebin.com/YQdtcAZc >>> >>> >>> >>> >>> Met vriendelijke groet, >>> >>> Ashraf Hassan >>> Sr. VAS System Analyst >>> Telefoon mobiel: +316 2409 5907 >>> E-mail: ashraf.hassan at t-mobile.nl >>> >>> T-Mobile Netherlands BV >>> >>> Waldorpstraat 60 >>> 2521 CC Den Haag >>> http://www.t-mobile.nl >>> http://www.facebook.com/tmobilenl >>> https://twitter.com/TMobile_NL >>> >>> Life is for sharing. >>> >>> ? PLEASE CONSIDER THE ENVIRONMENT BEFORE PRINTING THIS EMAIL >>> >>> -----Original Message----- >>> From: Marius Cornea [mailto:marius at remote-lab.net] >>> Sent: Sunday, November 08, 2015 10:03 AM >>> To: Hassan, Ashraf >>> Cc: rdo-list at redhat.com >>> Subject: Re: [Rdo-list] Trying to install the RDO >>> >>> OK, try to rerun the openstack undercloud install and see how it goes. >>> >>> On Sun, Nov 8, 2015 at 9:57 AM, Hassan, Ashraf wrote: >>>> Ok no it appears it is working, shall I rerun the openstack installation script or I need a fresh Centos installation? 
>>>> Curl Before: http://pastebin.com/3LrrQX8X Curl After: >>>> http://pastebin.com/ur7CbWzj New Hosts file: >>>> http://pastebin.com/aVmvGFeF >>>> >>>> >>>> Thanks, >>>> >>>> Ashraf Hassan >>>> >>>> ******************************************************************** >>>> ** >>>> ********** >>>> >>>> N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met >>>> belangrijke VOORBEHOUDEN van toepassing: zie >>>> http://www.t-mobile.nl/disclaimer >>>> >>>> >>>> This e-mail and its contents are subject to a DISCLAIMER with >>>> important RESERVATIONS: see http://www.t-mobile.nl/disclaimer >>>> >>>> >>>> >>>> ******************************************************************** >>>> ** >>>> ********** >>> >>> ********************************************************************* >>> *********** >>> >>> N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met >>> belangrijke VOORBEHOUDEN van toepassing: zie >>> http://www.t-mobile.nl/disclaimer >>> >>> This e-mail and its contents are subject to a DISCLAIMER with >>> important RESERVATIONS: see http://www.t-mobile.nl/disclaimer >>> >>> >>> >>> ********************************************************************* >>> *********** >> >> ********************************************************************** >> ********** >> >> N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke >> VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer >> >> >> This e-mail and its contents are subject to a DISCLAIMER with >> important RESERVATIONS: see http://www.t-mobile.nl/disclaimer >> >> >> >> ********************************************************************** >> ********** > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** From ashraf.hassan at t-mobile.nl Sun Nov 8 14:28:59 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Sun, 8 Nov 2015 15:28:59 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: Message-ID: Thank you so much for your help, I am trying now to deploy the overcloud, I am not sure if can be in the same mailing thread or I need to create another. I have followed the command in the link "https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html" and in the first trial it failed because if failed to resolve "cloud.centos.org" so I configured the DNS1 in ifcfg-external.3 and rerun it again, now I can see the resolution is correct but it is timing out, what do you think? 
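For completeness, the DNS change was nothing more than adding a nameserver line to that interface file, along these lines (the address is just a placeholder here, not the actual DNS server I used):

# /etc/sysconfig/network-scripts/ifcfg-external.3 (excerpt; everything else left as it was)
DNS1=<nameserver IP>

The logs of both trials are here: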
Overcloud First Trial: http://pastebin.com/BkrBvfdS
Overcloud Second Trial: http://pastebin.com/uT4y3EgH
First Dib-agent-ramdisk Log: http://pastebin.com/J6D4PWMA
Second Dib-agent-ramdisk Log: http://pastebin.com/PQTGMXtb
First Deploy Log: http://pastebin.com/tLHBwYBD
Second Deploy Log: http://pastebin.com/e5ZguFtK
First Deploy Full Log: http://pastebin.com/PktYcs3g
Second Deploy Full Log: http://pastebin.com/11V9pR0p

Thanks

Ashraf Hassan

********************************************************************************

N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer

This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer

********************************************************************************

From marius at remote-lab.net Sun Nov 8 14:46:55 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Sun, 8 Nov 2015 15:46:55 +0100
Subject: [Rdo-list] Trying to install the RDO
In-Reply-To: References: Message-ID:

I think the timeout is caused by the lack of proxy this time. Can you export the http_proxy variable with your proxy details and rerun the build step?

On Sun, Nov 8, 2015 at 3:28 PM, Hassan, Ashraf wrote:
> Thank you so much for your help, I am trying now to deploy the overcloud, I am not sure if can be in the same mailing thread or I need to create another.
> I have followed the command in the link "https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html" and in the first trial it failed because if failed to resolve "cloud.centos.org" so I configured the DNS1 in ifcfg-external.3 and rerun it again, now I can see the resolution is correct but it is timing out, what do you think?
> Overcloud First Trial: http://pastebin.com/BkrBvfdS
> Overcloud Second Trial: http://pastebin.com/uT4y3EgH
> First Dib-agent-ramdisk Log: http://pastebin.com/J6D4PWMA
> Second Dib-agent-ramdisk Log: http://pastebin.com/PQTGMXtb
> First Deploy Log: http://pastebin.com/tLHBwYBD
> Second Deploy Log: http://pastebin.com/e5ZguFtK
> First Deploy Full Log: http://pastebin.com/PktYcs3g
> Second Deploy Full Log: http://pastebin.com/11V9pR0p
>
> Thanks
>
> Ashraf Hassan
>
> ********************************************************************************
>
> N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer
>
> This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer
>
> ********************************************************************************

From ashraf.hassan at t-mobile.nl Sun Nov 8 21:15:24 2015
From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf)
Date: Sun, 8 Nov 2015 22:15:24 +0100
Subject: [Rdo-list] Trying to install the RDO
In-Reply-To: References: Message-ID:

Now I have managed to build the images, but I got 2 errors: one is the "Grubby fatal error" and the other is the failure to download the delta packages. These errors occurred many times during the building process, and I am not sure how serious they are, what do you think? Is it safe to upload the images? As far as I can see from the next steps, it will be difficult to rerun the build later.
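In case it helps someone else hitting the same proxy problem: what I exported before rerunning the build was roughly the following (reconstructed from memory, so the no_proxy list may not be exact, and the build command is the one from the docs linked earlier):

export http_proxy=http://192.168.3.153:3128
export https_proxy=http://192.168.3.153:3128
export no_proxy=localhost,127.0.0.1,192.168.1.1
openstack overcloud image build --all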
Ending of the build process: http://pastebin.com/ZLmWhzxb
Grubby error: http://pastebin.com/gDckQW9D
Delta packages errors: http://pastebin.com/ViTYbibr
Deploy overcloud log: http://pastebin.com/ujFakbvW

Thanks,

Ashraf Hassan

********************************************************************************

N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer

This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer

********************************************************************************

From marius at remote-lab.net Sun Nov 8 21:22:57 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Sun, 8 Nov 2015 22:22:57 +0100
Subject: [Rdo-list] Trying to install the RDO
In-Reply-To: References: Message-ID: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net>

If the overcloud-full, deploy and ironic python agent images were created I'd say you're ready for the next steps.

> On 08 Nov 2015, at 22:15, Hassan, Ashraf wrote:
>
> Now I have managed to build the images, but I got 2 errors: one is the "Grubby fatal error" and the other is the failure to download the delta packages. These errors occurred many times during the building process, and I am not sure how serious they are, what do you think? Is it safe to upload the images? As far as I can see from the next steps, it will be difficult to rerun the build later.
> Ending of the build process: http://pastebin.com/ZLmWhzxb
> Grubby error: http://pastebin.com/gDckQW9D
> Delta packages errors: http://pastebin.com/ViTYbibr
> Deploy overcloud log: http://pastebin.com/ujFakbvW
>
> Thanks,
>
> Ashraf Hassan
>
> ********************************************************************************
>
> N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer
>
> This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer
>
> ********************************************************************************

From pgsousa at gmail.com Mon Nov 9 10:38:13 2015
From: pgsousa at gmail.com (Pedro Sousa)
Date: Mon, 9 Nov 2015 10:38:13 +0000
Subject: [Rdo-list] issue with numa and cpu pinning using SRIOV ports
In-Reply-To: <844571690.9579776.1446843134482.JavaMail.zimbra@redhat.com>
References: <844571690.9579776.1446843134482.JavaMail.zimbra@redhat.com>
Message-ID:

Hi Steve,

thank you for your reply. Concerning your first question, I really don't know if it supports it; I have a Dell PowerEdge R430. Running the command I see this:

[root at compute03 ~]# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22
node 0 size: 32543 MB
node 0 free: 193 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23
node 1 size: 32768 MB
node 1 free: 238 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10

Concerning the second question, those 2 nodes shouldn't be used, as they are configured for the "normal" flavor in nova and don't have vcpu_pin_set configured. I would expect the other node to appear there, but as I said, after I launch a VM with 6 vCPUs it doesn't allow me to launch more VMs, so it could be something related to my topology/configuration.
Thanks, Pedro Sousa Regards, Pedro Sousa On Fri, Nov 6, 2015 at 8:52 PM, Steve Gordon wrote: > ----- Original Message ----- > > From: "Pedro Sousa" > > To: "rdo-list" > > > > Hi all, > > > > I have a rdo kilo deployment, using sr-iov ports to my instances. I'm > > trying to configure NUMA topology and CPU pinning for some telco based > > workloads based on this doc: > > > http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/ > > > > I have 3 compute nodes, I'm trying to use one of them to use cpu pinning. > > > > I've configured it like this: > > > > *Compute Node (total 24 cpus)* > > */etc/nova/nova.conf* > > vcpu_pin_set=2,3,4,5,6,7,8,9,10,11,12,13,14,15,18,19,22,23 > > > > Changed grub to isolate my cpus: > > #grubby --update-kernel=ALL > > --args="isolcpus=2,3,4,5,6,7,8,9,10,11,12,13,14,15,18,19,22,23" > > > > #grub2-install /dev/sda > > > > *Controller Nodes:* */etc/nova/nova.conf* > > > scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter > > scheduler_available_filters = nova.scheduler.filters.all_filters > > scheduler_available_filters = > > nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter > *Created > > host aggregate performance * #nova aggregate-create performance #nova > > aggregate-set-metadata 1 pinned=true > > > > #nova aggregate-add-host 1 compute03 > > > > *Created host aggregate normal* > > #nova aggregate-create normal > > #nova aggregate-set-metadata 2 pinned=false > > > > #nova aggregate-add-host 2 compute01 > > > > #nova aggregate-add-host 2 compute02 > > > > *Created the flavor with cpu pinning* #nova flavor-create m1.performance > 6 > > 2048 20 4 #nova flavor-key 6 set hw:cpu_policy=dedicated #nova > flavor-key 6 > > set aggregate_instance_extra_specs:pinned=true *The issue is:* With > SR-IOV > > ports it only let's me create instances with 6 vcpus in total with the > conf > > described above. Without SR-IOV, using OVS, I don't have that limitation. > > Is this a bug or something? I've seen this: > > https://bugs.launchpad.net/nova/+bug/1441169, however I have the patch, > and > > as I said it works for the first 6 vcpus with my configuration. > > Adding Nikola and Brent. Do you happen to know if your motherboard chipset > supports NUMA locality of the PCIe devices and if so which NUMA nodes the > SR-IOV cards are associated with? I *believe* numactl --hardware will tell > you if this is the case (I don't presently have a machine in front of me > with support for this). I'm wondering if or how the device locality code > copes at the moment if the instance spans two nodes (obviously the device > is only local to one of them). 
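One low-level way to answer that locality question, assuming the SR-IOV physical function shows up as a regular netdev on the compute node (the interface name and PCI address below are placeholders):

  cat /sys/class/net/enp5s0f0/device/numa_node      # -1 means the platform exposes no locality info
  cat /sys/bus/pci/devices/0000:05:00.0/numa_node   # same check by PCI address
  lstopo-no-graphics                                # if hwloc is installed, shows where the PCI devices hang off

If the card reports node 0 or node 1, SR-IOV guests with hw:cpu_policy=dedicated being confined to that node's share of vcpu_pin_set would be consistent with the behaviour Pedro describes.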
> > > *Some relevant logs:* > > > > */var/log/nova/nova-scheduler.log* > > > > 2015-11-06 11:18:17.955 59494 DEBUG nova.filters > > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > > -] Starting with 3 host(s) get_filtered_objects > > /usr/lib/python2.7/site-packages/nova/filters.py:70 > > > > 2015-11-06 11:18:17.955 59494 DEBUG nova.filters > > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > > -] Filter RetryFilter returned 3 host(s) get_filtered_objects > > /usr/lib/python2.7/site-packages/nova/filters.py:84 > > 2015-11-06 11:18:17.955 59494 DEBUG nova.filters > > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > > -] Filter AvailabilityZoneFilter returned 3 host(s) > > get_filtered_objects > > /usr/lib/python2.7/site-packages/nova/filters.py:84 > > 2015-11-06 11:18:17.955 59494 DEBUG nova.filters > > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > > -] Filter RamFilter returned 3 host(s) get_filtered_objects > > /usr/lib/python2.7/site-packages/nova/filters.py:84 > > 2015-11-06 11:18:17.956 59494 DEBUG nova.filters > > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > > -] Filter ComputeFilter returned 3 host(s) get_filtered_objects > > /usr/lib/python2.7/site-packages/nova/filters.py:84 > > 2015-11-06 11:18:17.956 59494 DEBUG nova.filters > > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > > -] Filter ComputeCapabilitiesFilter returned 3 host(s) > > get_filtered_objects > > /usr/lib/python2.7/site-packages/nova/filters.py:84 > > 2015-11-06 11:18:17.956 59494 DEBUG nova.filters > > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > > -] Filter ImagePropertiesFilter returned 3 host(s) > > get_filtered_objects > > /usr/lib/python2.7/site-packages/nova/filters.py:84 > > 2015-11-06 11:18:17.956 59494 DEBUG nova.filters > > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > > -] Filter ServerGroupAntiAffinityFilter returned 3 host(s) > > get_filtered_objects > > /usr/lib/python2.7/site-packages/nova/filters.py:84 > > 2015-11-06 11:18:17.956 59494 DEBUG nova.filters > > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > > -] Filter ServerGroupAffinityFilter returned 3 host(s) > > get_filtered_objects > > /usr/lib/python2.7/site-packages/nova/filters.py:84 > > 2015-11-06 11:18:17.957 59494 DEBUG nova.filters > > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > > -] Filter PciPassthroughFilter returned 3 host(s) get_filtered_objects > > /usr/lib/python2.7/site-packages/nova/filters.py:84*2015-11-06 > > 11:18:17.959 59494 DEBUG nova.filters > > [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d > > 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - > > -] Filter NUMATopologyFilter returned 2 host(s) get_filtered_objects > > /usr/lib/python2.7/site-packages/nova/filters.py:84* > > > > Any help would be appreciated. > > This looks like a successful run (still 2 hosts returned after > NUMATopologyFilter)? 
Or did were you expecting the host filtered out by > PciPassthroughFilter to still be in scope? > > Thanks, > > -- > Steve Gordon, > Sr. Technical Product Manager, > Red Hat Enterprise Linux OpenStack Platform > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Mon Nov 9 13:38:51 2015 From: dms at redhat.com (David Moreau Simard) Date: Mon, 9 Nov 2015 08:38:51 -0500 Subject: [Rdo-list] =?utf-8?q?Hi=EF=BC=8CI_need_your_help-about_packstack_?= =?utf-8?q?--allinone?= In-Reply-To: References: <2015110713095448487433@netbric.com> Message-ID: Hi, I'm not going to say there's no problems in Kilo but is there any particular reasons why you would like to install Kilo right now ? The Openstack Liberty version was released in October and since it is the latest release, you might want to use this instead if you're just trying out Openstack. I haven't come across this problem in Liberty. Installing the release RPM should provide you with that version as explained in the quickstart: https://www.rdoproject.org/install/quickstart/ If you're still experiencing issues, can you please provide some insight on how we can reproduce it ? Thanks ! David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Sat, Nov 7, 2015 at 12:09 PM, Yaniv Eylon wrote: > adding rdo-list > xiaoguang, it is better to share your findings on the mailing list. > > > > On Sat, Nov 7, 2015 at 7:09 AM, xiaoguang.fan at netbric.com > wrote: >> Hi, >> I want study RDO, when I meet this bug? >> https://bugzilla.redhat.com/show_bug.cgi?id=1254447 >> >> cannot start httpd when do packstack --allinone >> >> howto fix this bug?now I cannot deploy rdo in my centos 7 (vm machine)? >> thanks >> >> ________________________________ >> /******************************************** >> * Name? fanxiaoguang >> * Add: >> * E-Mail: solar_ambitious at 126.com; >> * fanxiaoguang008 at gmail.com >> * Cel: 13716563304 >> * >> ********************************************/ >> > > > > -- > Yaniv. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From ayoung at redhat.com Mon Nov 9 14:50:14 2015 From: ayoung at redhat.com (Adam Young) Date: Mon, 9 Nov 2015 09:50:14 -0500 Subject: [Rdo-list] shade client library In-Reply-To: References: Message-ID: <5640B2A6.2000204@redhat.com> On 11/06/2015 05:29 PM, Jeff Weber wrote: > Are there any plans to package the shade client library as part of > RDO? We are heavy users of Ansible and RDO and the Ansible OpenStack > modules for the upcoming 2.0 release have been being rewritten to > depend on shade. Having a version of shade packaged along with the > other RDO packages in the release would be quite useful. Ansible is already part of Fedora. Shade should be a Fedora-first library. RDO can then make use of it. In general, client tools are going to go into Fedora, but not necessarily all of the server pieces. This makes sense from a Desktop perspective; to interact with an RDO deployment, you need the OpenStack client on your workstation. RDO will be responsbility for only handling the server pieces; the amount of effort it has taken to keep OpenStack services running on Fedora has been deemed a bridge too far; RDO will focus on CentOS based deployments. RDO will then pull whatever client pieces it requires from Fedora. 
Shade does not necessarily have be part of RDO; it could be pulled in via EPEL, but having Shade in and Ansible Openstack modules in RDO seems like a natural fit. > > If this is something where community participation would be helpful > I'd be happy to try to help out if someone had details on what would > need to be done. > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsimmons at krozak.com Mon Nov 9 14:45:47 2015 From: dsimmons at krozak.com (David Simmons) Date: Mon, 9 Nov 2015 06:45:47 -0800 Subject: [Rdo-list] [RDO-Manager] IBM H-Chassis install bare metal Message-ID: Hi, I am installing RDO via RDO-Manager on a set of IBM H-Chassis blades. I need to configure the networking to perform the IPMI operations and PXE boot the overcloud nodes. The problem I am trying to solve is to get the chassis AMM to correctly route the IPMI/PXE operations to the blades. The IBM chassis requires all IPMI networking to use a VLAN that has been specified as the Chassis Internal Network. I have installed the undercloud on on blade on CentOS 7 and have been able to interact with the AMM and the blade IMMs on a vlan. Where I am running into problems is when the undercloud configuration occurs I am no longer able to interact on the specified VLAN and lose access to the Chassis AMM. On each blade I have 4 network ports. 2 1G and 2 10G network switches. I have installed CentOS on the undercloud and use 1 1G port as the management. I have specified a 10G switch as the PXE/Management network. In the undercloud.conf file the 10G port is the port I have specified as the local interface. To use the IPMI interface it has to tag all packets with the Chassis Internal Network VLAN. Are there any configuration tips that I can follow? Thanks, David Simmons -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Mon Nov 9 15:00:03 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 9 Nov 2015 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20151109150003.1CDC260A3FD9@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2015-11-11 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Packaging ](https://etherpad.openstack.org/p/RDO-Packaging) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From marius at remote-lab.net Mon Nov 9 16:25:56 2015 From: marius at remote-lab.net (Marius Cornea) Date: Mon, 9 Nov 2015 17:25:56 +0100 Subject: [Rdo-list] [RDO-Manager] IBM H-Chassis install bare metal In-Reply-To: References: Message-ID: Hi, If I understand it correctly you could set up the undercloud 1G port as trunk, use untagged frames for management and tagged for the Chassis Internal Network VLAN. Here are some instructions to set up a vlan tagged interface: https://docs.fedoraproject.org/en-US/Fedora/18/html/System_Administrators_Guide/s2-networkscripts-interfaces_802.1q-vlan-tagging.html Thanks On Mon, Nov 9, 2015 at 3:45 PM, David Simmons wrote: > Hi, > > I am installing RDO via RDO-Manager on a set of IBM H-Chassis blades. 
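For reference, the tagged sub-interface from the guide Marius links above looks roughly like this on CentOS 7; device name, VLAN ID and addressing are placeholders, and the 8021q module is normally autoloaded:

  # /etc/sysconfig/network-scripts/ifcfg-eth0.40
  DEVICE=eth0.40
  VLAN=yes
  ONBOOT=yes
  BOOTPROTO=none
  IPADDR=192.168.40.10
  PREFIX=24

followed by "ifup eth0.40", while the parent eth0 keeps carrying the untagged management traffic.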
I > need to configure the networking to perform the IPMI operations and PXE boot > the overcloud nodes. The problem I am trying to solve is to get the chassis > AMM to correctly route the IPMI/PXE operations to the blades. The IBM > chassis requires all IPMI networking to use a VLAN that has been specified > as the Chassis Internal Network. I have installed the undercloud on on > blade on CentOS 7 and have been able to interact with the AMM and the blade > IMMs on a vlan. Where I am running into problems is when the undercloud > configuration occurs I am no longer able to interact on the specified VLAN > and lose access to the Chassis AMM. > > On each blade I have 4 network ports. 2 1G and 2 10G network switches. I > have installed CentOS on the undercloud and use 1 1G port as the > management. I have specified a 10G switch as the PXE/Management network. > In the undercloud.conf file the 10G port is the port I have specified as the > local interface. > > To use the IPMI interface it has to tag all packets with the Chassis > Internal Network VLAN. Are there any configuration tips that I can follow? > > Thanks, > > David Simmons > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From alessandro at namecheap.com Mon Nov 9 16:32:39 2015 From: alessandro at namecheap.com (Alessandro Vozza) Date: Mon, 9 Nov 2015 17:32:39 +0100 Subject: [Rdo-list] Cinder multi-backend (NetApp) Message-ID: HI list how would you go about implementing multi backends in cinder via tripleo-templates? I know that upstream puppet modules supports it, but how to specify a second backend (in particular, a NetApp+Ceph deployment)? Any hint appreciated. P.S. I did find https://github.com/openstack/tripleo-heat-templates/blob/master/environments/cinder-netapp-config.yaml , but hot to enable dual backends?) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcornea at redhat.com Mon Nov 9 16:53:03 2015 From: mcornea at redhat.com (Marius Cornea) Date: Mon, 9 Nov 2015 11:53:03 -0500 (EST) Subject: [Rdo-list] Cinder multi-backend (NetApp) In-Reply-To: References: Message-ID: <1451309445.6752609.1447087983535.JavaMail.zimbra@redhat.com> Hi Alessandro, Here are the steps for configuring Netapp as Cinder backend: https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/cinder_netapp.html For deploying it together with Ceph I'd use the same steps for /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph-external.yaml and then deploy the overcloud by passing both the cinder-netapp-config.yaml and puppet-ceph-external.yaml environment files: openstack overcloud deploy --templates -e ~/cinder-netapp-config.yaml -e ~/puppet-ceph-external.yaml I haven't tried this scenario myself but it would be great to get some feedback on it. Thanks, Marius ----- Original Message ----- > From: "Alessandro Vozza" > To: "rdo-list" > Sent: Monday, November 9, 2015 5:32:39 PM > Subject: [Rdo-list] Cinder multi-backend (NetApp) > > > HI list > > how would you go about implementing multi backends in cinder via > tripleo-templates? I know that upstream puppet modules supports it, but how > to specify a second backend (in particular, a NetApp+Ceph deployment)? Any > hint appreciated. > > P.S. 
> > I did find > https://github.com/openstack/tripleo-heat-templates/blob/master/environments/cinder-netapp-config.yaml > , but hot to enable dual backends?) > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From dsimmons at krozak.com Mon Nov 9 16:45:56 2015 From: dsimmons at krozak.com (David Simmons) Date: Mon, 9 Nov 2015 08:45:56 -0800 Subject: [Rdo-list] [RDO-Manager] IBM H-Chassis install bare metal In-Reply-To: References: Message-ID: Thanks for responding. That part I understand and have that working right. The issue I am having is configuring RDO Manager/Ironic to work. In the undercloud.conf file, you specify the local interface. This seems to be used for both PXE boot and IPMI operations. When the undercloud is installed, the networking is not being configured correctly. If the tagged interface isn't set up correctly, I lose access to the AMM and subsequently the IMM of the individual blade. If I could get Ironic to not perform the IPMI operations and I have to reboot manually, that would be fine but I don't seem to see how to do that in the documentation. Dave -----Original Message----- From: Marius Cornea [mailto:marius at remote-lab.net] Sent: Monday, November 9, 2015 11:26 AM To: David Simmons Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] [RDO-Manager] IBM H-Chassis install bare metal Hi, If I understand it correctly you could set up the undercloud 1G port as trunk, use untagged frames for management and tagged for the Chassis Internal Network VLAN. Here are some instructions to set up a vlan tagged interface: https://docs.fedoraproject.org/en-US/Fedora/18/html/System_Administrators_Guide/s2-networkscripts-interfaces_802.1q-vlan-tagging.html Thanks On Mon, Nov 9, 2015 at 3:45 PM, David Simmons wrote: > Hi, > > I am installing RDO via RDO-Manager on a set of IBM H-Chassis blades. > I need to configure the networking to perform the IPMI operations and > PXE boot the overcloud nodes. The problem I am trying to solve is to > get the chassis AMM to correctly route the IPMI/PXE operations to the > blades. The IBM chassis requires all IPMI networking to use a VLAN > that has been specified as the Chassis Internal Network. I have > installed the undercloud on on blade on CentOS 7 and have been able to interact with the AMM and the blade > IMMs on a vlan. Where I am running into problems is when the undercloud > configuration occurs I am no longer able to interact on the specified > VLAN and lose access to the Chassis AMM. > > On each blade I have 4 network ports. 2 1G and 2 10G network > switches. I have installed CentOS on the undercloud and use 1 1G > port as the management. I have specified a 10G switch as the PXE/Management network. > In the undercloud.conf file the 10G port is the port I have specified > as the local interface. > > To use the IPMI interface it has to tag all packets with the Chassis > Internal Network VLAN. Are there any configuration tips that I can follow? > > Thanks, > > David Simmons > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From hbrock at redhat.com Mon Nov 9 16:58:55 2015 From: hbrock at redhat.com (Hugh O. 
Brock) Date: Mon, 9 Nov 2015 17:58:55 +0100 Subject: [Rdo-list] [RDO-Manager] IBM H-Chassis install bare metal In-Reply-To: References: Message-ID: <20151109165854.GL3911@redhat.com> On Mon, Nov 09, 2015 at 08:45:56AM -0800, David Simmons wrote: > Thanks for responding. > > That part I understand and have that working right. The issue I am having is configuring RDO Manager/Ironic to work. > > In the undercloud.conf file, you specify the local interface. This seems to be used for both PXE boot and IPMI operations. When the undercloud is installed, the networking is not being configured correctly. If the tagged interface isn't set up correctly, I lose access to the AMM and subsequently the IMM of the individual blade. > > If I could get Ironic to not perform the IPMI operations and I have to reboot manually, that would be fine but I don't seem to see how to do that in the documentation. > > Dave There is a "fake_pxe" driver for Ironic that does exactly this -- basically, no-ops all the power management stuff. I don't know exactly how you turn it on but a google should help. --Hugh > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Monday, November 9, 2015 11:26 AM > To: David Simmons > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] [RDO-Manager] IBM H-Chassis install bare metal > > Hi, > > If I understand it correctly you could set up the undercloud 1G port as trunk, use untagged frames for management and tagged for the Chassis Internal Network VLAN. > > Here are some instructions to set up a vlan tagged interface: > https://docs.fedoraproject.org/en-US/Fedora/18/html/System_Administrators_Guide/s2-networkscripts-interfaces_802.1q-vlan-tagging.html > > Thanks > > On Mon, Nov 9, 2015 at 3:45 PM, David Simmons wrote: > > Hi, > > > > I am installing RDO via RDO-Manager on a set of IBM H-Chassis blades. > > I need to configure the networking to perform the IPMI operations and > > PXE boot the overcloud nodes. The problem I am trying to solve is to > > get the chassis AMM to correctly route the IPMI/PXE operations to the > > blades. The IBM chassis requires all IPMI networking to use a VLAN > > that has been specified as the Chassis Internal Network. I have > > installed the undercloud on on blade on CentOS 7 and have been able to interact with the AMM and the blade > > IMMs on a vlan. Where I am running into problems is when the undercloud > > configuration occurs I am no longer able to interact on the specified > > VLAN and lose access to the Chassis AMM. > > > > On each blade I have 4 network ports. 2 1G and 2 10G network > > switches. I have installed CentOS on the undercloud and use 1 1G > > port as the management. I have specified a 10G switch as the PXE/Management network. > > In the undercloud.conf file the 10G port is the port I have specified > > as the local interface. > > > > To use the IPMI interface it has to tag all packets with the Chassis > > Internal Network VLAN. Are there any configuration tips that I can follow? 
> > > > Thanks, > > > > David Simmons > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- == Hugh Brock, hbrock at redhat.com == == Senior Engineering Manager, Cloud Engineering == == RDO Manager: Install, configure, and scale OpenStack == == http://rdoproject.org == "I know that you believe you understand what you think I said, but I?m not sure you realize that what you heard is not what I meant." --Robert McCloskey From marius at remote-lab.net Mon Nov 9 17:12:39 2015 From: marius at remote-lab.net (Marius Cornea) Date: Mon, 9 Nov 2015 18:12:39 +0100 Subject: [Rdo-list] [RDO-Manager] IBM H-Chassis install bare metal In-Reply-To: References: Message-ID: On Mon, Nov 9, 2015 at 5:45 PM, David Simmons wrote: > Thanks for responding. > > That part I understand and have that working right. The issue I am having is configuring RDO Manager/Ironic to work. > > In the undercloud.conf file, you specify the local interface. This seems to be used for both PXE boot and IPMI operations. When the undercloud is installed, the networking is not being configured correctly. If the tagged interface isn't set up correctly, I lose access to the AMM and subsequently the IMM of the individual blade. The local_interface is the one used for PXE booting. After undercloud installation you should see the br-ctlplane bridge created, containing the local_interface and the local_ip set on top of the bridge. Regarding the IPMI operations - you should see which interface is used for reaching the IPMI address by checking the routing table ( ip r get $IPMI_ADDRESS). If you're using the PXE nic for reaching the IPMI network via tagged frames then you could try the following workaround after undercloud installation: add a tagged ovs port to the br-ctlplane bridge with the right vlan tag and then assign it an IP address: ovs-vsctl add-port br-ctlplane vlanX tag=X -- set interface vlanX type=internal ip link set dev vlanX up ip addr add x.x.x.x/x dev vlanX > If I could get Ironic to not perform the IPMI operations and I have to reboot manually, that would be fine but I don't seem to see how to do that in the documentation. > > Dave > > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Monday, November 9, 2015 11:26 AM > To: David Simmons > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] [RDO-Manager] IBM H-Chassis install bare metal > > Hi, > > If I understand it correctly you could set up the undercloud 1G port as trunk, use untagged frames for management and tagged for the Chassis Internal Network VLAN. > > Here are some instructions to set up a vlan tagged interface: > https://docs.fedoraproject.org/en-US/Fedora/18/html/System_Administrators_Guide/s2-networkscripts-interfaces_802.1q-vlan-tagging.html > > Thanks > > On Mon, Nov 9, 2015 at 3:45 PM, David Simmons wrote: >> Hi, >> >> I am installing RDO via RDO-Manager on a set of IBM H-Chassis blades. >> I need to configure the networking to perform the IPMI operations and >> PXE boot the overcloud nodes. The problem I am trying to solve is to >> get the chassis AMM to correctly route the IPMI/PXE operations to the >> blades. 
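If that ovs-vsctl workaround does the trick, it can be made persistent across reboots with an OVS internal-port ifcfg file instead of re-adding the port by hand; a sketch with placeholder names and addresses, assuming the openvswitch network scripts shipped with RDO:

  # /etc/sysconfig/network-scripts/ifcfg-vlan40
  DEVICE=vlan40
  DEVICETYPE=ovs
  TYPE=OVSIntPort
  OVS_BRIDGE=br-ctlplane
  OVS_OPTIONS="tag=40"
  ONBOOT=yes
  BOOTPROTO=static
  IPADDR=192.168.40.10
  PREFIX=24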
The IBM chassis requires all IPMI networking to use a VLAN >> that has been specified as the Chassis Internal Network. I have >> installed the undercloud on on blade on CentOS 7 and have been able to interact with the AMM and the blade >> IMMs on a vlan. Where I am running into problems is when the undercloud >> configuration occurs I am no longer able to interact on the specified >> VLAN and lose access to the Chassis AMM. >> >> On each blade I have 4 network ports. 2 1G and 2 10G network >> switches. I have installed CentOS on the undercloud and use 1 1G >> port as the management. I have specified a 10G switch as the PXE/Management network. >> In the undercloud.conf file the 10G port is the port I have specified >> as the local interface. >> >> To use the IPMI interface it has to tag all packets with the Chassis >> Internal Network VLAN. Are there any configuration tips that I can follow? >> >> Thanks, >> >> David Simmons >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com From dtantsur at redhat.com Mon Nov 9 17:13:11 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 9 Nov 2015 18:13:11 +0100 Subject: [Rdo-list] [RDO-Manager] IBM H-Chassis install bare metal In-Reply-To: <20151109165854.GL3911@redhat.com> References: <20151109165854.GL3911@redhat.com> Message-ID: <5640D427.3090505@redhat.com> On 11/09/2015 05:58 PM, Hugh O. Brock wrote: > On Mon, Nov 09, 2015 at 08:45:56AM -0800, David Simmons wrote: >> Thanks for responding. >> >> That part I understand and have that working right. The issue I am having is configuring RDO Manager/Ironic to work. >> >> In the undercloud.conf file, you specify the local interface. This seems to be used for both PXE boot and IPMI operations. When the undercloud is installed, the networking is not being configured correctly. If the tagged interface isn't set up correctly, I lose access to the AMM and subsequently the IMM of the individual blade. >> >> If I could get Ironic to not perform the IPMI operations and I have to reboot manually, that would be fine but I don't seem to see how to do that in the documentation. >> >> Dave > > There is a "fake_pxe" driver for Ironic that does exactly this -- > basically, no-ops all the power management stuff. I don't know exactly > how you turn it on but a google should help. didn't try it, but should be something like: 1. add it to 'enabled_drivers' in ironic.conf 2. restart conductor(s) 3. create nodes with this driver and don't forget to reboot them when needed (look at the virtual console to figure out) > > --Hugh > > >> -----Original Message----- >> From: Marius Cornea [mailto:marius at remote-lab.net] >> Sent: Monday, November 9, 2015 11:26 AM >> To: David Simmons >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] [RDO-Manager] IBM H-Chassis install bare metal >> >> Hi, >> >> If I understand it correctly you could set up the undercloud 1G port as trunk, use untagged frames for management and tagged for the Chassis Internal Network VLAN. >> >> Here are some instructions to set up a vlan tagged interface: >> https://docs.fedoraproject.org/en-US/Fedora/18/html/System_Administrators_Guide/s2-networkscripts-interfaces_802.1q-vlan-tagging.html >> >> Thanks >> >> On Mon, Nov 9, 2015 at 3:45 PM, David Simmons wrote: >>> Hi, >>> >>> I am installing RDO via RDO-Manager on a set of IBM H-Chassis blades. 
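Spelling out the fake_pxe steps Dmitry lists above as undercloud commands (untested here, values are placeholders):

  # 1. in /etc/ironic/ironic.conf, [DEFAULT] section, append the driver to whatever is already enabled, e.g.
  #    enabled_drivers = pxe_ipmitool,pxe_ssh,fake_pxe
  # 2. restart the conductor
  sudo systemctl restart openstack-ironic-conductor
  # 3. switch an already-registered node over to it
  ironic node-update <node-uuid> replace driver=fake_pxe

With fake_pxe the power calls become no-ops, so the blade has to be powered on and off through the AMM whenever introspection or deployment expects a reboot.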
>>> I need to configure the networking to perform the IPMI operations and >>> PXE boot the overcloud nodes. The problem I am trying to solve is to >>> get the chassis AMM to correctly route the IPMI/PXE operations to the >>> blades. The IBM chassis requires all IPMI networking to use a VLAN >>> that has been specified as the Chassis Internal Network. I have >>> installed the undercloud on on blade on CentOS 7 and have been able to interact with the AMM and the blade >>> IMMs on a vlan. Where I am running into problems is when the undercloud >>> configuration occurs I am no longer able to interact on the specified >>> VLAN and lose access to the Chassis AMM. >>> >>> On each blade I have 4 network ports. 2 1G and 2 10G network >>> switches. I have installed CentOS on the undercloud and use 1 1G >>> port as the management. I have specified a 10G switch as the PXE/Management network. >>> In the undercloud.conf file the 10G port is the port I have specified >>> as the local interface. >>> >>> To use the IPMI interface it has to tag all packets with the Chassis >>> Internal Network VLAN. Are there any configuration tips that I can follow? >>> >>> Thanks, >>> >>> David Simmons >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > From jweber at cofront.net Mon Nov 9 18:37:00 2015 From: jweber at cofront.net (Jeff Weber) Date: Mon, 9 Nov 2015 13:37:00 -0500 Subject: [Rdo-list] shade client library In-Reply-To: <5640B2A6.2000204@redhat.com> References: <5640B2A6.2000204@redhat.com> Message-ID: The benefit I saw of shade packaged with RDO is that it depends on the openstack python libs. The integrated configuration provided by RDO where I don't have to depend on EPEL is an advantage from a package management point of view. It is much easier to trace provenance of our build when we're not trying to keep extra stuff from EPEL sneaking in as we do upgrades where version's may not match up on all sources. On Nov 9, 2015 9:50 AM, "Adam Young" wrote: > On 11/06/2015 05:29 PM, Jeff Weber wrote: > > Are there any plans to package the shade client library as part of RDO? We > are heavy users of Ansible and RDO and the Ansible OpenStack modules for > the upcoming 2.0 release have been being rewritten to depend on shade. > Having a version of shade packaged along with the other RDO packages in the > release would be quite useful. > > > > Ansible is already part of Fedora. Shade should be a Fedora-first > library. RDO can then make use of it. > > In general, client tools are going to go into Fedora, but not necessarily > all of the server pieces. This makes sense from a Desktop perspective; to > interact with an RDO deployment, you need the OpenStack client on your > workstation. RDO will be responsbility for only handling the server > pieces; the amount of effort it has taken to keep OpenStack services > running on Fedora has been deemed a bridge too far; RDO will focus on > CentOS based deployments. > > RDO will then pull whatever client pieces it requires from Fedora. 
Shade > does not necessarily have be part of RDO; it could be pulled in via EPEL, > but having Shade in and Ansible Openstack modules in RDO seems like a > natural fit. > > > > > > > If this is something where community participation would be helpful I'd be > happy to try to help out if someone had details on what would need to be > done. > > > _______________________________________________ > Rdo-list mailing listRdo-list at redhat.comhttps://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Mon Nov 9 19:30:45 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 9 Nov 2015 14:30:45 -0500 Subject: [Rdo-list] RDO blogs roundup, week of November 9 Message-ID: <5640F465.3000909@redhat.com> Here's what RDO enthusiasts were blogging about this week. If you were at OpenStack Summit in Tokyo last month, don't forget to write up your experience. RDO Liberty released in CentOS Cloud SIG, by Alan Pevec We are pleased to announce the general availability of the RDO build for OpenStack Liberty for CentOS Linux 7 x86_64, suitable for building private, public and hybrid clouds. OpenStack Liberty is the 12th release of the open source software collaboratively built by a large number of contributors around the OpenStack.org project space. ? read more at http://tm3.org/3d Neutron HA Talk in Tokyo Summit by Assaf Muller Florian Haas, Adam Spiers and myself presented a session in Tokyo about Neutron high availability. We talked about: ? read more at http://tm3.org/3s OpenStack Summit Tokyo by Matthias Runge Over the past week, I attended the OpenStack Summit in Tokyo. My primary focus was on Horizon sessions. Nevertheless, I was lucky to have one or two glimpses at more touristic spots in Tokyo. ? read more at http://tm3.org/3t OpenStack Summit Tokyo ? Final Summary by Gordon Tillmore As I flew home from OpenStack Summit Tokyo last week, I had plenty of time to reflect on what proved to be a truly special event. OpenStack gains more and more traction and maturity with each community release and corresponding Summit, and the 11th semi-annual OpenStack Summit certainly did not disappoint. With more than 5,000 attendees, it was the largest ever OpenStack Summit outside of North America, and there were so many high quality keynotes, session, and industry announcements, I thought it made sense to put together a final trip overview, detailing all of the noteworthy news, Red Hat press releases, and more. ? read more at http://tm3.org/3u Community Meetup at OpenStack Tokyo by Rich Bowen A week ago in Tokyo, we held the third RDO Community Meetup at the OpenStack Summit. We had about 70 people in attendance, and some good discussion. The full agenda for the meeting is at https://etherpad.openstack.org/p/rdo-tokyo and below are some of the things that were discussed. (The complete recording is at the bottom of this post.) ? read more at http://tm3.org/3v Mike Perez: Cinder in Liberty by Rich Bowen Before OpenStack Summit, I interviewed Mike Perez about what's new in Cinder in the LIberty release, and what's coming in Mitaka. Unfortunately, life got a little busy and I didn't get it posted before Summit. 
However, with Liberty still fresh, this is still very timely content. ? read more at http://tm3.org/3w -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From abeekhof at redhat.com Tue Nov 10 01:12:30 2015 From: abeekhof at redhat.com (Andrew Beekhof) Date: Tue, 10 Nov 2015 12:12:30 +1100 Subject: [Rdo-list] RDO-Manager HA Pacemaker in Compute Nodes In-Reply-To: References: Message-ID: <425F6DE3-C250-48EC-9AED-D8DE91E67756@redhat.com> > On 30 Oct 2015, at 10:47 PM, Pedro Sousa wrote: > > Hi all, > > I would like to be able to recover automatically the VMS when a compute node dies as described here: http://blog.clusterlabs.org/blog/2015/openstack-ha-compute/ > > I've checked that I have pacemaker_remote.service and NovaCompute/NovaEvacuate pacemaker resources on my compute nodes, but it's doesn't seem to be configured/running: > > [root at overcloud-novacompute-0 openstack]# systemctl list-unit-files | grep pacemaker > pacemaker.service disabled > pacemaker_remote.service disabled Did you forget this part? chkconfig pacemaker_remote on service pacemaker_remote start And don?t forget to populate /etc/pacemaker/authkey as described just above those commands in that post > > [root at overcloud-novacompute-0 openstack]# pcs status > Error: cluster is not currently running on this node > > Is there a way to activate this on stack deployment? Or do I have to customize it? > > Thanks, > Pedro Sousa > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From mcornea at redhat.com Tue Nov 10 10:33:33 2015 From: mcornea at redhat.com (Marius Cornea) Date: Tue, 10 Nov 2015 05:33:33 -0500 (EST) Subject: [Rdo-list] Cinder multi-backend (NetApp) In-Reply-To: References: <1451309445.6752609.1447087983535.JavaMail.zimbra@redhat.com> Message-ID: <981475608.7198901.1447151613674.JavaMail.zimbra@redhat.com> Adding the list. ----- Original Message ----- > From: "Yogev Rabl" > To: "Marius Cornea" > Sent: Tuesday, November 10, 2015 11:11:45 AM > Subject: Re: [Rdo-list] Cinder multi-backend (NetApp) > > Hi, > > If you're testing whether the second back end is ready for use, check the > parameter enabled_backends in /etc/cinder/cinder.conf, its values are the > section name of each back end. > In addition, check whether a type was created for the second back end with > the command > # cinder extra-specs-list > > Cheers > > > On Mon, Nov 9, 2015 at 6:53 PM, Marius Cornea wrote: > > > Hi Alessandro, > > > > Here are the steps for configuring Netapp as Cinder backend: > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/cinder_netapp.html > > > > For deploying it together with Ceph I'd use the same steps for > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph-external.yaml > > and then deploy the overcloud by passing both the cinder-netapp-config.yaml > > and puppet-ceph-external.yaml environment files: > > > > openstack overcloud deploy --templates -e ~/cinder-netapp-config.yaml -e > > ~/puppet-ceph-external.yaml > > > > I haven't tried this scenario myself but it would be great to get some > > feedback on it. 
> > > > Thanks, > > Marius > > > > ----- Original Message ----- > > > From: "Alessandro Vozza" > > > To: "rdo-list" > > > Sent: Monday, November 9, 2015 5:32:39 PM > > > Subject: [Rdo-list] Cinder multi-backend (NetApp) > > > > > > > > > HI list > > > > > > how would you go about implementing multi backends in cinder via > > > tripleo-templates? I know that upstream puppet modules supports it, but > > how > > > to specify a second backend (in particular, a NetApp+Ceph deployment)? > > Any > > > hint appreciated. > > > > > > P.S. > > > > > > I did find > > > > > https://github.com/openstack/tripleo-heat-templates/blob/master/environments/cinder-netapp-config.yaml > > > , but hot to enable dual backends?) > > > > > > > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > -- > Yogev Rabl > Quality Engineer, Red Hat OSP - Storage > +972-52-4534729 > From alessandro at namecheap.com Tue Nov 10 10:49:08 2015 From: alessandro at namecheap.com (Alessandro Vozza) Date: Tue, 10 Nov 2015 11:49:08 +0100 Subject: [Rdo-list] Cinder multi-backend (NetApp) In-Reply-To: <981475608.7198901.1447151613674.JavaMail.zimbra@redhat.com> References: <1451309445.6752609.1447087983535.JavaMail.zimbra@redhat.com> <981475608.7198901.1447151613674.JavaMail.zimbra@redhat.com> Message-ID: <2D8B8E7E-748C-423C-A2F6-B865C472338F@namecheap.com> Thank you all, we?ll make sure to test this and report back. One question, how to pass that array to cinder/manifests/backends.pp manifest? Something like: CinderEnabledBackends: ?netapp?,?rbd? ? > On 10 Nov 2015, at 11:33, Marius Cornea wrote: > > Adding the list. > > ----- Original Message ----- >> From: "Yogev Rabl" >> To: "Marius Cornea" >> Sent: Tuesday, November 10, 2015 11:11:45 AM >> Subject: Re: [Rdo-list] Cinder multi-backend (NetApp) >> >> Hi, >> >> If you're testing whether the second back end is ready for use, check the >> parameter enabled_backends in /etc/cinder/cinder.conf, its values are the >> section name of each back end. >> In addition, check whether a type was created for the second back end with >> the command >> # cinder extra-specs-list >> >> Cheers >> >> >> On Mon, Nov 9, 2015 at 6:53 PM, Marius Cornea wrote: >> >>> Hi Alessandro, >>> >>> Here are the steps for configuring Netapp as Cinder backend: >>> >>> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/cinder_netapp.html >>> >>> For deploying it together with Ceph I'd use the same steps for >>> /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph-external.yaml >>> and then deploy the overcloud by passing both the cinder-netapp-config.yaml >>> and puppet-ceph-external.yaml environment files: >>> >>> openstack overcloud deploy --templates -e ~/cinder-netapp-config.yaml -e >>> ~/puppet-ceph-external.yaml >>> >>> I haven't tried this scenario myself but it would be great to get some >>> feedback on it. 
>>> >>> Thanks, >>> Marius >>> >>> ----- Original Message ----- >>>> From: "Alessandro Vozza" >>>> To: "rdo-list" >>>> Sent: Monday, November 9, 2015 5:32:39 PM >>>> Subject: [Rdo-list] Cinder multi-backend (NetApp) >>>> >>>> >>>> HI list >>>> >>>> how would you go about implementing multi backends in cinder via >>>> tripleo-templates? I know that upstream puppet modules supports it, but >>> how >>>> to specify a second backend (in particular, a NetApp+Ceph deployment)? >>> Any >>>> hint appreciated. >>>> >>>> P.S. >>>> >>>> I did find >>>> >>> https://github.com/openstack/tripleo-heat-templates/blob/master/environments/cinder-netapp-config.yaml >>>> , but hot to enable dual backends?) >>>> >>>> >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> >> >> -- >> Yogev Rabl >> Quality Engineer, Red Hat OSP - Storage >> +972-52-4534729 >> > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From mcornea at redhat.com Tue Nov 10 11:12:18 2015 From: mcornea at redhat.com (Marius Cornea) Date: Tue, 10 Nov 2015 06:12:18 -0500 (EST) Subject: [Rdo-list] Cinder multi-backend (NetApp) In-Reply-To: <2D8B8E7E-748C-423C-A2F6-B865C472338F@namecheap.com> References: <1451309445.6752609.1447087983535.JavaMail.zimbra@redhat.com> <981475608.7198901.1447151613674.JavaMail.zimbra@redhat.com> <2D8B8E7E-748C-423C-A2F6-B865C472338F@namecheap.com> Message-ID: <981154968.7212389.1447153938355.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Alessandro Vozza" > To: "Marius Cornea" > Cc: "Yogev Rabl" , "rdo-list" , tbeckers at kangaroot.net > Sent: Tuesday, November 10, 2015 11:49:08 AM > Subject: Re: [Rdo-list] Cinder multi-backend (NetApp) > > Thank you all, we?ll make sure to test this and report back. > > One question, how to pass that array to cinder/manifests/backends.pp > manifest? Something like: > > CinderEnabledBackends: ?netapp?,?rbd? > > ? I think you should use the CinderEnableNetappBackend and CinderEnableRbdBackend parameters in cinder-netapp-config.yaml and puppet-ceph-external.yaml environment files. > > > > On 10 Nov 2015, at 11:33, Marius Cornea wrote: > > > > Adding the list. > > > > ----- Original Message ----- > >> From: "Yogev Rabl" > >> To: "Marius Cornea" > >> Sent: Tuesday, November 10, 2015 11:11:45 AM > >> Subject: Re: [Rdo-list] Cinder multi-backend (NetApp) > >> > >> Hi, > >> > >> If you're testing whether the second back end is ready for use, check the > >> parameter enabled_backends in /etc/cinder/cinder.conf, its values are the > >> section name of each back end. 
> >> In addition, check whether a type was created for the second back end with > >> the command > >> # cinder extra-specs-list > >> > >> Cheers > >> > >> > >> On Mon, Nov 9, 2015 at 6:53 PM, Marius Cornea wrote: > >> > >>> Hi Alessandro, > >>> > >>> Here are the steps for configuring Netapp as Cinder backend: > >>> > >>> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/cinder_netapp.html > >>> > >>> For deploying it together with Ceph I'd use the same steps for > >>> /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph-external.yaml > >>> and then deploy the overcloud by passing both the > >>> cinder-netapp-config.yaml > >>> and puppet-ceph-external.yaml environment files: > >>> > >>> openstack overcloud deploy --templates -e ~/cinder-netapp-config.yaml -e > >>> ~/puppet-ceph-external.yaml > >>> > >>> I haven't tried this scenario myself but it would be great to get some > >>> feedback on it. > >>> > >>> Thanks, > >>> Marius > >>> > >>> ----- Original Message ----- > >>>> From: "Alessandro Vozza" > >>>> To: "rdo-list" > >>>> Sent: Monday, November 9, 2015 5:32:39 PM > >>>> Subject: [Rdo-list] Cinder multi-backend (NetApp) > >>>> > >>>> > >>>> HI list > >>>> > >>>> how would you go about implementing multi backends in cinder via > >>>> tripleo-templates? I know that upstream puppet modules supports it, but > >>> how > >>>> to specify a second backend (in particular, a NetApp+Ceph deployment)? > >>> Any > >>>> hint appreciated. > >>>> > >>>> P.S. > >>>> > >>>> I did find > >>>> > >>> https://github.com/openstack/tripleo-heat-templates/blob/master/environments/cinder-netapp-config.yaml > >>>> , but hot to enable dual backends?) > >>>> > >>>> > >>>> > >>>> _______________________________________________ > >>>> Rdo-list mailing list > >>>> Rdo-list at redhat.com > >>>> https://www.redhat.com/mailman/listinfo/rdo-list > >>>> > >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >>> > >>> _______________________________________________ > >>> Rdo-list mailing list > >>> Rdo-list at redhat.com > >>> https://www.redhat.com/mailman/listinfo/rdo-list > >>> > >>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >>> > >> > >> > >> > >> -- > >> Yogev Rabl > >> Quality Engineer, Red Hat OSP - Storage > >> +972-52-4534729 > >> > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > From xzhao at bnl.gov Tue Nov 10 15:09:28 2015 From: xzhao at bnl.gov (Zhao, Xin) Date: Tue, 10 Nov 2015 10:09:28 -0500 Subject: [Rdo-list] swift-ceph-backend Message-ID: <564208A8.5090109@bnl.gov> Hello, I wonder if anyone has successfully used the ceph backend for swift ? The github instruction (https://github.com/openstack/swift-ceph-backend) is too brief ... Thanks, Xin From trown at redhat.com Tue Nov 10 15:39:50 2015 From: trown at redhat.com (John Trowbridge) Date: Tue, 10 Nov 2015 10:39:50 -0500 Subject: [Rdo-list] Blueprint: Delorean & Khaleesi In-Reply-To: References: <20151106162119.GB2555@redhat.com> Message-ID: <56420FC6.2040101@redhat.com> Removed internal list from CC. On 11/07/2015 12:35 PM, Arie Bregman wrote: > On Sat, Nov 7, 2015 at 12:07 AM, David Moreau Simard wrote: > >> Can we extend this to rdo-list ? Sounds relevant to the community. >> > > Sure, good idea. Adding rdo-list. 
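Putting Marius's and Yogev's suggestions together, and assuming the Liberty environment files already set CinderEnableNetappBackend and CinderEnableRbdBackend to true (paths and the volume type name below are illustrative, not verified):

  openstack overcloud deploy --templates \
    -e ~/cinder-netapp-config.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph-external.yaml

  # afterwards, on a controller, enabled_backends in /etc/cinder/cinder.conf
  # should list both sections; a volume type can then be tied to each one
  cinder type-create netapp
  cinder type-key netapp set volume_backend_name=<section name from enabled_backends>
  cinder extra-specs-list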
> > >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] >> >> >> On Fri, Nov 6, 2015 at 4:36 PM, Wesley Hayutin >> wrote: >>> Arie, >>> This looks good. >>> Who is going to maintain the delorean ci job? >> > > Who maintains it today? we'll have to discuss it. It might be good idea to > add 'job owner/maintainer' info as we have in rhos CI. > >> Have you reached out to Derek Higgins about writing a replacement for the >>> current delorean ci? >> > > No. The work on this has just begun. Adding Derek to this mail. > >> >>> Thanks >>> >>> On Fri, Nov 6, 2015 at 11:21 AM, Steve Linabery >> wrote: >>>> >>>> On Fri, Nov 06, 2015 at 05:49:05PM +0200, Arie Bregman wrote: >>>>> Hi everyone, >>>>> >>>>> Not sure if you all familiar with Delorean project[1]. Quick >>>>> introduction >>>>> (CI related) for those who are not: >>>>> >>>>> Delorean is an upstream project that builds and maintains yum >>>>> repositories. >>>>> It builds repository every time patch submitted to one of the upstream >>>>> openstack-packages projects. >>>>> The delorean job is located here: >>>>> https://prod-rdojenkins.rhcloud.com/job/delorean-ci >>>>> >>>>> How the job works at the moment: >>>>> It runs delorean directly on the slave and if the build process >>>>> succeeded, >>>>> it votes with +1 >>>>> >>>>> What I suggest: >>>>> - Move delorean installation and run to khaleesi by creating >> 'delorean' >>>>> role >>>>> - Extend the job to run tests using the rpms delorean built >>>>> >>>>> Why: >>>>> - Main reason: It's important for developers to get immediate feedback >>>>> on >>>>> whether the new packages are good or not. simply run delorean and see My main contention is that this actually would lower the "immediateness" of the feedback. The fastest rdo-manager job we have using tempest smoke is the non-ha job, and it currently takes 90 minutes. So instead of a 6min job to check the general sanity of the package we would have a 90min job running against every packaging change. I am not really convinced that packaging changes break us often enough to warrant that. Maybe that happens more on the core service side than on the RDO-Manager side, but on the RDO-Manager side I do not think I have seen a bad package change break everything in the year I have been working on it. (and there have been plenty of total breakages in that time to constitute a decent sample size) >> if >>>>> build is ok, is not enough. We need to extend the current job. >>>>> >>>>> - Users can use khaleesi to test specs they wrote. This is actually >>>>> pretty >>>>> amazing. users write specs and run khaleesi. khaleesi then handles >>>>> everything - it building the rpms using delorean and run the tests. >>>>> I am not opposed to adding this functionality to khaleesi, as it seems generally useful from a developer perspective. I am opposed to using it to gate packaging changes, unless there is some evidence that we are breaking everything with our packaging changes often enough to warrant a 15x or higher increase in the gate job. >>>>> - We can use delorean to replace our current way to build rpms and >>>>> creating >>>>> repos. delorean doing it in a smart way, using docker and by that it >>>>> creates rpms for several distributions in isolated environment. >>>> >>>> Delorean no longer uses docker. >>>> >>>> >>>> >> https://github.com/openstack-packages/delorean/commit/66571fce45a007bcf49fd54ad7db622fd737874f >> > > Interesting. any idea why this change? adding Alan. 
> We essentially rewrote mock in a docker container. Just using mock made more sense from a maintainability standpoint. > >> >>>> >>>>> >>>>> - Khaleesi awesomeness will increase >>>>> >>>>> There is also no need to add/maintain settings in khaleesi for that. >>>>> delorean properties (version, url, etc) will be provided by extra-vars >>>>> (unless you are in favor of maintaining general settings for delorean >> in >>>>> khaleesi) >>>>> >>>>> The new job work flow: >>>>> 1. Run delorean on slave and save rpms from delorean build process. >>>>> 2. Run provision playbook >>>>> 3. Copy delorean rpms to provisioned nodes and create repo for them on >>>>> each >>>>> node >>>>> 4. run installer playbooks (installer will use delorean rpms) >>>>> 5. Run Tests =D >>>>> 6. Vote +1/-1 according to build process + tests. >>>>> >>>>> step 3 can be replaced. Since delorean creates repository, we can >> simply >>>>> reference each node to the new repository on the slave. >>>>> >>>>> Would love to hear your opinion on that. >>>>> >>>>> Cheers, >>>>> >>>>> Arie >>>>> >>>>> P.S >>>>> started to work on that: https://review.gerrithub.io/#/c/251464 >>>>> >>>>> [1] https://github.com/openstack-packages/delorean >>>> >>> >> > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From dsimmons at krozak.com Tue Nov 10 15:46:01 2015 From: dsimmons at krozak.com (David Simmons) Date: Tue, 10 Nov 2015 07:46:01 -0800 Subject: [Rdo-list] [RDO-Manager] IBM H-Chassis install bare metal In-Reply-To: <5640D427.3090505@redhat.com> References: <20151109165854.GL3911@redhat.com> <5640D427.3090505@redhat.com> Message-ID: So I got the fake-pxe driver to do the inspection run and the results are reported back to the undercloud node. I see the information has been updated with ironic node-show. I was able to set the provision status and the node is now available. Thanks for the help. David Simmons -----Original Message----- From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Dmitry Tantsur Sent: Monday, November 9, 2015 12:13 PM To: rdo-list at redhat.com Subject: Re: [Rdo-list] [RDO-Manager] IBM H-Chassis install bare metal On 11/09/2015 05:58 PM, Hugh O. Brock wrote: > On Mon, Nov 09, 2015 at 08:45:56AM -0800, David Simmons wrote: >> Thanks for responding. >> >> That part I understand and have that working right. The issue I am having is configuring RDO Manager/Ironic to work. >> >> In the undercloud.conf file, you specify the local interface. This seems to be used for both PXE boot and IPMI operations. When the undercloud is installed, the networking is not being configured correctly. If the tagged interface isn't set up correctly, I lose access to the AMM and subsequently the IMM of the individual blade. >> >> If I could get Ironic to not perform the IPMI operations and I have to reboot manually, that would be fine but I don't seem to see how to do that in the documentation. >> >> Dave > > There is a "fake_pxe" driver for Ironic that does exactly this -- > basically, no-ops all the power management stuff. I don't know exactly > how you turn it on but a google should help. didn't try it, but should be something like: 1. add it to 'enabled_drivers' in ironic.conf 2. restart conductor(s) 3. 
create nodes with this driver and don't forget to reboot them when needed (look at the virtual console to figure out) > > --Hugh > > >> -----Original Message----- >> From: Marius Cornea [mailto:marius at remote-lab.net] >> Sent: Monday, November 9, 2015 11:26 AM >> To: David Simmons >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] [RDO-Manager] IBM H-Chassis install bare >> metal >> >> Hi, >> >> If I understand it correctly you could set up the undercloud 1G port as trunk, use untagged frames for management and tagged for the Chassis Internal Network VLAN. >> >> Here are some instructions to set up a vlan tagged interface: >> https://docs.fedoraproject.org/en-US/Fedora/18/html/System_Administra >> tors_Guide/s2-networkscripts-interfaces_802.1q-vlan-tagging.html >> >> Thanks >> >> On Mon, Nov 9, 2015 at 3:45 PM, David Simmons wrote: >>> Hi, >>> >>> I am installing RDO via RDO-Manager on a set of IBM H-Chassis blades. >>> I need to configure the networking to perform the IPMI operations >>> and PXE boot the overcloud nodes. The problem I am trying to solve >>> is to get the chassis AMM to correctly route the IPMI/PXE operations >>> to the blades. The IBM chassis requires all IPMI networking to use >>> a VLAN that has been specified as the Chassis Internal Network. I >>> have installed the undercloud on on blade on CentOS 7 and have been able to interact with the AMM and the blade >>> IMMs on a vlan. Where I am running into problems is when the undercloud >>> configuration occurs I am no longer able to interact on the >>> specified VLAN and lose access to the Chassis AMM. >>> >>> On each blade I have 4 network ports. 2 1G and 2 10G network >>> switches. I have installed CentOS on the undercloud and use 1 1G >>> port as the management. I have specified a 10G switch as the PXE/Management network. >>> In the undercloud.conf file the 10G port is the port I have >>> specified as the local interface. >>> >>> To use the IPMI interface it has to tag all packets with the Chassis >>> Internal Network VLAN. Are there any configuration tips that I can follow? >>> >>> Thanks, >>> >>> David Simmons >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com From rbowen at redhat.com Wed Nov 11 12:27:07 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 11 Nov 2015 07:27:07 -0500 Subject: [Rdo-list] Proposed Mitaka test days Message-ID: <5643341B.3040704@redhat.com> I'd like to propose the following test days for Mitaka. These are each roughly a week after the various milestones. At each test day, we'd be testing the Mitaka packages, if they are available by that milestone. We'd also be doing "installfest" style testing of the existing Liberty packages. And we'd also be doing documentation testing/hacking alongside this. Here's the dates. Please let me know whether any of these have any major conflicts with national holidays or whatever. I've not put a November date in here, but we can add one if desired. 
Dec 10-11 Jan 27-28 Feb 18-19 Mar 10-11 Apr 13-14 Thoughts? -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From dms at redhat.com Wed Nov 11 13:00:00 2015 From: dms at redhat.com (David Moreau Simard) Date: Wed, 11 Nov 2015 08:00:00 -0500 Subject: [Rdo-list] Proposed Mitaka test days In-Reply-To: <5643341B.3040704@redhat.com> References: <5643341B.3040704@redhat.com> Message-ID: We should have proper CI for Mitaka within two weeks or so. There is little incentive in doing a test day before that happens. Ideally we'd love people to provide feedback on issues that are non-trivial - read: not detected by the CI. I think the Dec 10-11 date is a good first target. David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Nov 11, 2015 7:27 AM, "Rich Bowen" wrote: > I'd like to propose the following test days for Mitaka. These are each > roughly a week after the various milestones. > > At each test day, we'd be testing the Mitaka packages, if they are > available by that milestone. We'd also be doing "installfest" style > testing of the existing Liberty packages. And we'd also be doing > documentation testing/hacking alongside this. > > Here's the dates. Please let me know whether any of these have any major > conflicts with national holidays or whatever. I've not put a November > date in here, but we can add one if desired. > > Dec 10-11 > Jan 27-28 > Feb 18-19 > Mar 10-11 > Apr 13-14 > > Thoughts? > > -- > Rich Bowen - rbowen at redhat.com > OpenStack Community Liaison > http://rdoproject.org/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Wed Nov 11 15:35:21 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 11 Nov 2015 10:35:21 -0500 Subject: [Rdo-list] Proposed Mitaka test days In-Reply-To: <5643341B.3040704@redhat.com> References: <5643341B.3040704@redhat.com> Message-ID: <56436039.4080608@redhat.com> On 11/11/2015 07:27 AM, Rich Bowen wrote: > I'd like to propose the following test days for Mitaka. These are each > roughly a week after the various milestones. > > At each test day, we'd be testing the Mitaka packages, if they are > available by that milestone. We'd also be doing "installfest" style > testing of the existing Liberty packages. And we'd also be doing > documentation testing/hacking alongside this. > > Here's the dates. Please let me know whether any of these have any major > conflicts with national holidays or whatever. I've not put a November > date in here, but we can add one if desired. > > Dec 10-11 > Jan 27-28 Haikel points out that this is the day before the RDO meetup at FOSDEM, which would be difficult for many of us. We could move this a week later, or we could plan to actually try to do something in-person in Brussels? > Feb 18-19 > Mar 10-11 > Apr 13-14 > > Thoughts? 
> -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From ihrachys at redhat.com Wed Nov 11 15:41:08 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 11 Nov 2015 16:41:08 +0100 Subject: [Rdo-list] Proposed Mitaka test days In-Reply-To: <56436039.4080608@redhat.com> References: <5643341B.3040704@redhat.com> <56436039.4080608@redhat.com> Message-ID: Rich Bowen wrote: > > > On 11/11/2015 07:27 AM, Rich Bowen wrote: >> I'd like to propose the following test days for Mitaka. These are each >> roughly a week after the various milestones. >> >> At each test day, we'd be testing the Mitaka packages, if they are >> available by that milestone. We'd also be doing "installfest" style >> testing of the existing Liberty packages. And we'd also be doing >> documentation testing/hacking alongside this. >> >> Here's the dates. Please let me know whether any of these have any major >> conflicts with national holidays or whatever. I've not put a November >> date in here, but we can add one if desired. >> >> Dec 10-11 >> Jan 27-28 > > Haikel points out that this is the day before the RDO meetup at FOSDEM, > which would be difficult for many of us. We could move this a week > later, or we could plan to actually try to do something in-person in > Brussels? Note that a week later we have Devconf.cz in Brno that I believe will be visited by some RDO folks. Overall, those two weeks are usually quite busy as seen from where I live. :) Ihar From hguemar at fedoraproject.org Wed Nov 11 11:32:48 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 11 Nov 2015 12:32:48 +0100 Subject: [Rdo-list] shade client library In-Reply-To: References: <5640B2A6.2000204@redhat.com> Message-ID: 2015-11-09 19:37 GMT+01:00 Jeff Weber : > The benefit I saw of shade packaged with RDO is that it depends on the > openstack python libs. The integrated configuration provided by RDO where I > don't have to depend on EPEL is an advantage from a package management point > of view. It is much easier to trace provenance of our build when we're not > trying to keep extra stuff from EPEL sneaking in as we do upgrades where > version's may not match up on all sources. > These libs and clients are on Fedora, actually, I'm considering updating clients there at a faster rate than the one in RDO repositories. I'll probably set up a separate target for clients in CBS too. Regards, H. From hguemar at fedoraproject.org Thu Nov 12 10:42:58 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 12 Nov 2015 11:42:58 +0100 Subject: [Rdo-list] [Mitaka] Automatic restart for services Message-ID: Hi, this has been discussed for a while, so I'd like to set automatic restart for OpenStack services as a goal for Mitaka - milestone 1. These are easy fixes, and by setting it to milestone 1, we'll have more than enough time to test this and optimize. I already created a trello card to track that effort, but I'd like to get your feedback first. https://trello.com/c/HfXMLSTD/106-set-automatic-restart-for-service Your thoughts? Regards, H. 
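With systemd, the change proposed here amounts to adding a Restart= directive to the [Service] section of each unit. A minimal sketch of what that could look like, using openstack-nova-api purely as an example and with illustrative values (the policy actually chosen per service for Mitaka may well differ); a drop-in file is used so the unit shipped by the RPM stays untouched:

    # Example sketch only: the service name, Restart= policy and RestartSec=
    # value are placeholders, not the agreed RDO settings. Run as root.
    mkdir -p /etc/systemd/system/openstack-nova-api.service.d
    printf '[Service]\nRestart=on-failure\nRestartSec=5s\n' \
        > /etc/systemd/system/openstack-nova-api.service.d/restart.conf
    systemctl daemon-reload
    # restart so the running instance is definitely governed by the new policy
    systemctl restart openstack-nova-api

In the packages themselves the same directives would simply be shipped in the unit files; how that interacts with services that Pacemaker controls through those very units is the question raised in the replies below.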
From javier.pena at redhat.com Thu Nov 12 12:07:56 2015 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 12 Nov 2015 07:07:56 -0500 (EST) Subject: [Rdo-list] [Mitaka] Automatic restart for services In-Reply-To: References: Message-ID: <757527244.14040850.1447330076581.JavaMail.zimbra@redhat.com> > Hi, > > this has been discussed for a while, so I'd like to set automatic > restart for OpenStack services as a goal for Mitaka - milestone 1. > These are easy fixes, and by setting it to milestone 1, we'll have > more than enough time to test this and optimize. > > I already created a trello card to track that effort, but I'd like to > get your feedback first. > https://trello.com/c/HfXMLSTD/106-set-automatic-restart-for-service > > Your thoughts? > +1, for me this is a good idea. Javier > Regards, > H. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From pmyers at redhat.com Thu Nov 12 13:00:29 2015 From: pmyers at redhat.com (Perry Myers) Date: Thu, 12 Nov 2015 08:00:29 -0500 Subject: [Rdo-list] [Mitaka] Automatic restart for services In-Reply-To: References: Message-ID: <56448D6D.90704@redhat.com> On 11/12/2015 05:42 AM, Haïkel wrote: > Hi, > > this has been discussed for a while, so I'd like to set automatic > restart for OpenStack services as a goal for Mitaka - milestone 1. > These are easy fixes, and by setting it to milestone 1, we'll have > more than enough time to test this and optimize. > > I already created a trello card to track that effort, but I'd like to > get your feedback first. > https://trello.com/c/HfXMLSTD/106-set-automatic-restart-for-service can you describe what you mean here by automatic restart for services? I'm worried what the implications would be for things that are running under Pacemaker control in an HA environment, so I think we need more details here From apevec at gmail.com Thu Nov 12 13:06:20 2015 From: apevec at gmail.com (Alan Pevec) Date: Thu, 12 Nov 2015 14:06:20 +0100 Subject: [Rdo-list] [Mitaka] Automatic restart for services In-Reply-To: <56448D6D.90704@redhat.com> References: <56448D6D.90704@redhat.com> Message-ID: 2015-11-12 14:00 GMT+01:00 Perry Myers : > can you describe what you mean here by automatic restart for services? > > I'm worried what the implications would be for things that are running > under Pacemaker control in an HA environment, so I think we need more > details here Can we have more details what "under Pacemaker control" means? Haikel's proposal is about modifying systemd service unit files but afaik Pacemaker is not using them but what they call "resource agents" ? Cheers, Alan From pmyers at redhat.com Thu Nov 12 13:30:07 2015 From: pmyers at redhat.com (Perry Myers) Date: Thu, 12 Nov 2015 08:30:07 -0500 Subject: [Rdo-list] [Mitaka] Automatic restart for services In-Reply-To: References: <56448D6D.90704@redhat.com> Message-ID: <5644945F.6060500@redhat.com> On 11/12/2015 08:06 AM, Alan Pevec wrote: > 2015-11-12 14:00 GMT+01:00 Perry Myers : >> can you describe what you mean here by automatic restart for services? >> >> I'm worried what the implications would be for things that are running >> under Pacemaker control in an HA environment, so I think we need more >> details here > > Can we have more details what "under Pacemaker control" means?
> Haikel's proposal is about modifying systemd service unit files but > afaik Pacemaker is not using them but what they call "resource agents" > ? Not correct... not every service _has_ a resource agent. Mariadb/galera and rabbitmq do, but all other services are controlled via Pacemaker via their systemd scripts. So when you tell pacemaker to stop nova-api, it does so via systemd. If something outside of pacemaker's knowledge (like a user from the command line, or rpm %post or whatever) uses systemd directly to stop nova-api, pacemaker says "OMG the service died" and tries to recover. Perry From fdinitto at redhat.com Thu Nov 12 13:38:37 2015 From: fdinitto at redhat.com (Fabio M. Di Nitto) Date: Thu, 12 Nov 2015 14:38:37 +0100 Subject: [Rdo-list] [Mitaka] Automatic restart for services In-Reply-To: References: <56448D6D.90704@redhat.com> Message-ID: <5644965D.5090306@redhat.com> On 11/12/2015 2:06 PM, Alan Pevec wrote: > 2015-11-12 14:00 GMT+01:00 Perry Myers : >> can you describe what you mean here by automatic restart for services? >> >> I'm worried what the implications would be for things that are running >> under Pacemaker control in an HA environment, so I think we need more >> details here > > Can we have more details what "under Pacemaker control" means? > Haikel's proposal is about modifying systemd service unit files but > afaik Pacemaker is not using them but what they call "resource agents" > ? Pacemaker doesn't use resource agents for everything. Some services are managed/monitored by pacemaker by using systemd unit files. You can have a random application that's part of the OS (via systemd unit file) and still manage it in cluster mode via pacemaker. Fabio From mcornea at redhat.com Thu Nov 12 13:46:17 2015 From: mcornea at redhat.com (Marius Cornea) Date: Thu, 12 Nov 2015 08:46:17 -0500 (EST) Subject: [Rdo-list] Horizon crashed on my Kilo environment In-Reply-To: <966267949.8668316.1447335775296.JavaMail.zimbra@redhat.com> Message-ID: <612007878.8673067.1447335977849.JavaMail.zimbra@redhat.com> Hi everyone, I'm running a Kilo environment on CentOS and after recent updates Horizon seems to not load anymore: http://paste.openstack.org/show/478661/ openstack-dashboard-2015.1.1-1.el7.noarch python-django-1.8.3-1.el7.noarch Any suggestions to work around it? Thanks, Marius From chkumar246 at gmail.com Thu Nov 12 12:00:53 2015 From: chkumar246 at gmail.com (Chandan kumar) Date: Thu, 12 Nov 2015 17:30:53 +0530 Subject: [Rdo-list] RDO bug statistics [2015-11-12] Message-ID: # RDO Bugs on 2015-11-12 This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at . To report a new bug against RDO, go to: ## Summary - Open (NEW, ASSIGNED, ON_DEV): 334 - Fixed (MODIFIED, POST, ON_QA): 200 ## Number of open bugs by component diskimage-builder [ 4] ++ distribution [ 14] ++++++++++ dnsmasq [ 1] Documentation [ 4] ++ instack [ 4] ++ instack-undercloud [ 28] ++++++++++++++++++++ iproute [ 1] openstack-ceilometer [ 5] +++ openstack-cinder [ 14] ++++++++++ openstack-foreman-inst... [ 2] + openstack-glance [ 3] ++ openstack-heat [ 3] ++ openstack-horizon [ 2] + openstack-ironic [ 1] openstack-ironic-disco... [ 2] + openstack-keystone [ 8] +++++ openstack-manila [ 12] ++++++++ openstack-neutron [ 10] +++++++ openstack-nova [ 19] +++++++++++++ openstack-packstack [ 56] ++++++++++++++++++++++++++++++++++++++++ openstack-puppet-modules [ 11] +++++++ openstack-selinux [ 10] +++++++ openstack-swift [ 3] ++ openstack-tripleo [ 26] ++++++++++++++++++ openstack-tripleo-heat...
[ 5] +++ openstack-tripleo-imag... [ 2] + openstack-tuskar [ 3] ++ openstack-utils [ 4] ++ openvswitch [ 1] Package Review [ 4] ++ python-glanceclient [ 2] + python-keystonemiddleware [ 1] python-neutronclient [ 3] ++ python-novaclient [ 1] python-openstackclient [ 5] +++ python-oslo-config [ 1] rdo-manager [ 48] ++++++++++++++++++++++++++++++++++ rdo-manager-cli [ 6] ++++ rdopkg [ 1] RFEs [ 3] ++ tempest [ 1] ## Open bugs This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed. (334 bugs) ### diskimage-builder (4 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version ### distribution (14 bugs) [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO: Packages needed to support AMQP1.0 [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-11-10 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages [1266923 ] http://bugzilla.redhat.com/1266923 (NEW) Component: distribution Last change: 2015-10-07 Summary: RDO's hdf5 rpm/yum dependencies conflicts [1271169 ] http://bugzilla.redhat.com/1271169 (NEW) Component: distribution Last change: 2015-10-13 Summary: [doc] virtual environment setup [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1275608 ] http://bugzilla.redhat.com/1275608 (NEW) Component: distribution Last change: 2015-10-27 Summary: EOL'ed rpm file URL not up to date [1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution Last change: 2015-09-16 Summary: Memcached not built with SASL support [1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work [1219890 ] http://bugzilla.redhat.com/1219890 
(ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### Documentation (4 bugs) [1272111 ] http://bugzilla.redhat.com/1272111 (NEW) Component: Documentation Last change: 2015-10-15 Summary: RFE : document how to access horizon in RDO manager VIRT setup [1272108 ] http://bugzilla.redhat.com/1272108 (NEW) Component: Documentation Last change: 2015-10-15 Summary: [DOC] External network should be documents in RDO manager installation [1271793 ] http://bugzilla.redhat.com/1271793 (NEW) Component: Documentation Last change: 2015-10-14 Summary: rdo-manager doc has incomplete /etc/hosts configuration [1271888 ] http://bugzilla.redhat.com/1271888 (NEW) Component: Documentation Last change: 2015-10-15 Summary: step required to build images for overcloud ### instack (4 bugs) [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (28 bugs) [1266451 ] http://bugzilla.redhat.com/1266451 (NEW) Component: instack-undercloud Last change: 2015-09-30 Summary: instack-undercloud fails to setup seed vm, parse error while creating ssh key [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build- images script [1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout [1271200 ] http://bugzilla.redhat.com/1271200 (ASSIGNED) Component: instack-undercloud Last change: 2015-10-20 Summary: Overcloud images contain Kilo repos [1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2015-09-23 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. 
[1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six." [1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1270585 ] http://bugzilla.redhat.com/1270585 (NEW) Component: instack-undercloud Last change: 2015-10-19 Summary: instack isntallation fails with parse error: Invalid string liberty on CentOS [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1266101 ] http://bugzilla.redhat.com/1266101 (NEW) Component: instack-undercloud Last change: 2015-09-29 Summary: instack-virt-setup fails on CentOS7 [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. (4048492 < 4G)" [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. See log for details.', 'os-refresh-config')" [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. 
[1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-08-20 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (5 bugs) [1219372 ] http://bugzilla.redhat.com/1219372 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Info about 'severity' field changes is not displayed via alarm-history call [1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions [1231326 ] http://bugzilla.redhat.com/1231326 (NEW) Component: openstack-ceilometer Last change: 2015-06-12 Summary: kafka publisher requires kafka-python library [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: python-redis is not installed with packstack allinone [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Wrong alarms order on 'severity' field ### openstack-cinder (14 bugs) [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1268182 ] http://bugzilla.redhat.com/1268182 (NEW) Component: openstack-cinder Last change: 2015-10-02 Summary: cinder spontaneously sets instance root device to 'available' [1206864 ] 
http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1229551 ] http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type" [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage ### openstack-foreman-installer (2 bugs) [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (3 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: Split glance-api and glance-registry [1278962 ] http://bugzilla.redhat.com/1278962 (NEW) Component: openstack-glance Last change: 2015-11-09 Summary: python-cryptography requires pyasn1>=0.1.8 but only 0.1.6 is available in Centos [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance- common: python-glance ### openstack-heat (3 bugs) [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null ### openstack-horizon (2 bugs) [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable [1275656 ] http://bugzilla.redhat.com/1275656 (NEW) Component: 
openstack-horizon Last change: 2015-10-28 Summary: FontAwesome lib bad path ### openstack-ironic (1 bug) [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. (HTTP 409) ### openstack-ironic-discoverd (2 bugs) [1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-ironic-discoverd Last change: 2015-04-09 Summary: Introspection times out after more than an hour [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (8 bugs) [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: wrong log directories in /usr/share/keystone/wsgi- keystone.conf [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-10-26 Summary: keystone-all process reaches 100% CPU consumption [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-06-01 Summary: keystone: add token flush cronjob script to keystone package [1280530 ] http://bugzilla.redhat.com/1280530 (NEW) Component: openstack-keystone Last change: 2015-11-12 Summary: Fernet tokens cannot read key files with SELInuxz enabeld [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non- existent class ### openstack-manila (12 bugs) [1278918 ] http://bugzilla.redhat.com/1278918 (NEW) Component: openstack-manila Last change: 2015-11-06 Summary: manila-api fails to start without updates from upstream stable/liberty [1272957 ] http://bugzilla.redhat.com/1272957 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver: same volumes are re-used with vol mapped layout after restarting manila services [1277787 ] http://bugzilla.redhat.com/1277787 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: Glusterfs_driver: Export location for Glusterfs NFS- Ganesha is incorrect [1271138 ] http://bugzilla.redhat.com/1271138 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: puppet module for manila should include service type - shareV2 [1272960 ] http://bugzilla.redhat.com/1272960 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Glusterfs NFS-Ganesha share's export location should be uniform for both nfsv3 & nfsv4 protocols [1277792 ] http://bugzilla.redhat.com/1277792 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: glusterfs_driver: Access-deny for glusterfs driver should be dynamic [1278919 ] http://bugzilla.redhat.com/1278919 (NEW) Component: openstack-manila Last change: 2015-11-06 Summary: 
AvailabilityZoneFilter is not working in manila- scheduler [1272962 ] http://bugzilla.redhat.com/1272962 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Attempt to create share fails ungracefully when backend gluster volumes aren't exported [1272970 ] http://bugzilla.redhat.com/1272970 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_native: cannot connect via SSH using password authentication to multiple gluster clusters with different passwords [1272968 ] http://bugzilla.redhat.com/1272968 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs vol based layout: Deleting a share created from snapshot should also delete its backend gluster volume [1272954 ] http://bugzilla.redhat.com/1272954 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterFS_native_driver: snapshot delete doesn't delete snapshot entries that are in error state [1272958 ] http://bugzilla.redhat.com/1272958 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver - vol based layout: share size may be misleading ### openstack-neutron (10 bugs) [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1269610 ] http://bugzilla.redhat.com/1269610 (ASSIGNED) Component: openstack-neutron Last change: 2015-11-05 Summary: Overcloud deployment fails - openvswitch agent is not running and nova instances end up in error state [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". 
[1266381 ] http://bugzilla.redhat.com/1266381 (NEW) Component: openstack-neutron Last change: 2015-11-12 Summary: OpenStack Liberty QoS feature is not working on EL7 as is need MySQL-python-1.2.5 [1281308 ] http://bugzilla.redhat.com/1281308 (NEW) Component: openstack-neutron Last change: 2015-11-12 Summary: QoS policy is not enforced when using a previously used port [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks [1280258 ] http://bugzilla.redhat.com/1280258 (NEW) Component: openstack-neutron Last change: 2015-11-11 Summary: tenants seem like they are able to detach admin enforced QoS policies from ports or networks [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https ### openstack-nova (19 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: v4-fixed-ip= not working with juno nova networking [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-05-06 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Kilo assigning ipv6 address, even though its disabled. 
[1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: horizon console uses http when horizon is set to use ssl [1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2015-05-22 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: novnc init script doesnt write to log [1271033 ] http://bugzilla.redhat.com/1271033 (NEW) Component: openstack-nova Last change: 2015-10-19 Summary: nova.conf.sample is out of date [1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1278808 ] http://bugzilla.redhat.com/1278808 (NEW) Component: openstack-nova Last change: 2015-11-06 Summary: Guest fails to use more than 1 vCPU with smpboot: do_boot_cpu failed(-1) to wakeup [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Nova - db connection string present on compute nodes [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: nova object store allow get object after date exires [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: nova: fail to edit project quota with DataError from nova [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-10-17 Summary: Ensure translations are installed correctly and picked up at runtime [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Nova AVC messages [1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2015-09-11 Summary: logrotate should copytruncate to avoid oepnstack logging to deleted files [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Installation of openstack-nova-compute fails on PowerKVM ### openstack-packstack (56 bugs) [1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any tenant [1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail [1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel [1271246 ] http://bugzilla.redhat.com/1271246 (NEW) Component: openstack-packstack Last change: 2015-10-13 Summary: packstack failed to start nova.api [1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest [1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-11-06 Summary: Error: 
service-update is not currently supported by the keystone sql driver [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network- scripts/ifcfg-br-ex [1275803 ] http://bugzilla.redhat.com/1275803 (NEW) Component: openstack-packstack Last change: 2015-10-27 Summary: packstack --allinone fails on Fedora 22-3 during _keystone.pp [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack [1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails [1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 Summary: swift: Admin user does not have permissions to see containers created by glance service [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1257352 ] http://bugzilla.redhat.com/1257352 (NEW) Component: openstack-packstack Last change: 2015-09-22 Summary: nss.load missing from packstack, httpd unable to start. [1232455 ] http://bugzilla.redhat.com/1232455 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Errors install kilo on fedora21 [1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config [1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands' [1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail [1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can [1269535 ] http://bugzilla.redhat.com/1269535 (NEW) Component: openstack-packstack Last change: 2015-10-07 Summary: packstack script does not test to see if the rc files *were* created. 
[1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start [1266028 ] http://bugzilla.redhat.com/1266028 (NEW) Component: openstack-packstack Last change: 2015-10-08 Summary: Packstack should use pymysql database driver since Liberty [1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance [1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1266196 ] http://bugzilla.redhat.com/1266196 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Packstack Fails on prescript.pp with "undefined method 'unsafe_load_file' for Psych:Module" [1270770 ] http://bugzilla.redhat.com/1270770 (NEW) Component: openstack-packstack Last change: 2015-10-12 Summary: Packstack generated CONFIG_MANILA_SERVICE_IMAGE_LOCATION points to a dropbox link [1279642 ] http://bugzilla.redhat.com/1279642 (NEW) Component: openstack-packstack Last change: 2015-11-09 Summary: Packstack run fails when running with DEMO [1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest [1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred at during setup Ironic via packstack. Invalid parameter rabbit_user [1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd] [1279641 ] http://bugzilla.redhat.com/1279641 (NEW) Component: openstack-packstack Last change: 2015-11-09 Summary: Packstack run does not install keystoneauth1 [1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-11-09 Summary: Packstack --allinone fails while starting HTTPD service [1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp [1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tm p/packstack/b77f37620d9f4794b6f38730442962b6/manifests/ xxx.xxx.xxx.xxx_swift.pp:90 [1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found [1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image [1276277 ] http://bugzilla.redhat.com/1276277 (NEW) Component: openstack-packstack Last change: 2015-10-31 Summary: packstack --allinone fails on CentOS 7 x86_64 1503-01 [1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting [1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend. 
[1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced [1265816 ] http://bugzilla.redhat.com/1265816 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Manila Puppet Module Expects Glance Endpoint to Be Available for Upload of Service Image [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service [1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service" [1264843 ] http://bugzilla.redhat.com/1264843 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list iptables-ipv6' returned 1: Error: No matching Packages to list [1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack,when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br- eno50:eno50,encounters an error?ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp ?. [1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on [1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters [1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/17 2.16.32.71_mariadb.pp:28 on node [1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in " failed " status when CONFIG_KEYSTONE_SERVICE_NAME=httpd [1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server] ### openstack-puppet-modules (11 bugs) [1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start [1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering [1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm [1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API. 
[1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled [1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication [1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing ### openstack-selinux (10 bugs) [1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied [1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotF ound(config_file=paste_config_value) [1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux [1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages [1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000 [1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal; [1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC [1268124 ] http://bugzilla.redhat.com/1268124 (NEW) Component: openstack-selinux Last change: 2015-10-29 Summary: Nova rootwrap-daemon requires a selinux exception [1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux ### openstack-swift (3 bugs) [1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files [1274308 ] http://bugzilla.redhat.com/1274308 (NEW) Component: openstack-swift Last change: 2015-10-22 Summary: Consistently occurring swift related failures in RDO with a HA deployment [1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server 
configurations ### openstack-tripleo (26 bugs) [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf [1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos [1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic in to TripleO [1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever [1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix [1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted [1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-10-09 Summary: Node registration fails silently if instackenv.json is badly formatted [1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful [1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead [1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse [1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect [1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes [1277980 ] http://bugzilla.redhat.com/1277980 (NEW) Component: openstack-tripleo Last change: 2015-11-04 Summary: missing python-proliantutils [1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created [1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API [1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar [1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials [1056114 
] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI [1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack- build-images [1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter [1277990 ] http://bugzilla.redhat.com/1277990 (NEW) Component: openstack-tripleo Last change: 2015-11-04 Summary: openstack-ironic-inspector-dnsmasq.service: failed to start during undercloud installation [1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6 [1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints [1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack" ### openstack-tripleo-heat-templates (5 bugs) [1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template [1266027 ] http://bugzilla.redhat.com/1266027 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-08 Summary: TripleO should use pymysql database driver since Liberty [1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1 [1271411 ] http://bugzilla.redhat.com/1271411 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-13 Summary: Unable to deploy internal api endpoint for keystone on a different network to admin api [1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template ### openstack-tripleo-image-elements (2 bugs) [1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux [1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist ### openstack-tuskar (3 bugs) [1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails [1229493 ] http://bugzilla.redhat.com/1229493 (ASSIGNED) Component: openstack-tuskar Last change: 2015-07-27 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates [1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state ### openstack-utils (4 bugs) [1211989 ] http://bugzilla.redhat.com/1211989 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status shows 
'disabled on boot' for the mysqld service [1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: Can't enable OpenStack service after openstack-service disable [1270615 ] http://bugzilla.redhat.com/1270615 (NEW) Component: openstack-utils Last change: 2015-10-11 Summary: openstack status still checking mysql not mariadb [1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-service tries to restart neutron-ovs- cleanup.service ### openvswitch (1 bug) [1209003 ] http://bugzilla.redhat.com/1209003 (ASSIGNED) Component: openvswitch Last change: 2015-08-18 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity ### Package Review (4 bugs) [1272524 ] http://bugzilla.redhat.com/1272524 (ASSIGNED) Component: Package Review Last change: 2015-11-05 Summary: Review Request: Mistral - workflow Service for OpenStack cloud [1268372 ] http://bugzilla.redhat.com/1268372 (ASSIGNED) Component: Package Review Last change: 2015-11-09 Summary: Review Request: openstack-app-catalog-ui - openstack horizon plugin for the openstack app-catalog [1272513 ] http://bugzilla.redhat.com/1272513 (ASSIGNED) Component: Package Review Last change: 2015-11-05 Summary: Review Request: Murano - is an application catalog for OpenStack [1279513 ] http://bugzilla.redhat.com/1279513 (ASSIGNED) Component: Package Review Last change: 2015-11-09 Summary: New Package: python-dracclient ### python-glanceclient (2 bugs) [1244291 ] http://bugzilla.redhat.com/1244291 (ASSIGNED) Component: python-glanceclient Last change: 2015-10-21 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py [1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0 ### python-keystonemiddleware (1 bug) [1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-10-26 Summary: Rebase python-keystonemiddleware to version 1.3 ### python-neutronclient (3 bugs) [1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility [1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long [1281352 ] http://bugzilla.redhat.com/1281352 (NEW) Component: python-neutronclient Last change: 2015-11-12 Summary: Internal server error when running qos-bandwidth-limit- rule-update as a tenant Edit ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six ### python-openstackclient (5 bugs) [1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2015-04-16 Summary: Usage is not described accurately for 99% of openstack baremetal [1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2015-04-28 Summary: `openstack ip floating delete` fails if we specify IP address as input [1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2015-06-13 Summary: openstack undercloud install fails due to a missing make target for 
tripleo-selinux-keepalived.pp [1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2015-06-04 Summary: Add --user to project list command to filter projects by user [1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2015-07-10 Summary: appdirs requirement ### python-oslo-config (1 bug) [1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2015-08-28 Summary: oslo_config != oslo.config ### rdo-manager (48 bugs) [1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment [1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable [1269657 ] http://bugzilla.redhat.com/1269657 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support configuration of default subnet pools [1264526 ] http://bugzilla.redhat.com/1264526 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Deployment of Undercloud [1273574 ] http://bugzilla.redhat.com/1273574 (ASSIGNED) Component: rdo-manager Last change: 2015-10-22 Summary: rdo-manager liberty, delete node is failing [1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built [1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS. [1274060 ] http://bugzilla.redhat.com/1274060 (NEW) Component: rdo-manager Last change: 2015-10-23 Summary: [SELinux][RHEL7] openstack-ironic-inspector- dnsmasq.service fails to start with SELinux enabled [1269655 ] http://bugzilla.redhat.com/1269655 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support deploying VPNaaS [1271336 ] http://bugzilla.redhat.com/1271336 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Enable configuration of OVS ARP Responder [1269890 ] http://bugzilla.redhat.com/1269890 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support IPv6 [1270818 ] http://bugzilla.redhat.com/1270818 (NEW) Component: rdo-manager Last change: 2015-10-20 Summary: Two ironic-inspector processes are running on the undercloud, breaking the introspection [1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles [1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP) [1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment [1270370 ] http://bugzilla.redhat.com/1270370 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: [RDO-Manager] bulk introspection moving the nodes from available to manageable too quickly [getting: NodeLocked:] [1269002 ] http://bugzilla.redhat.com/1269002 (ASSIGNED) Component: rdo-manager Last change: 2015-10-14 Summary: instack-undercloud: overcloud HA deployment fails - the rabbitmq doesn't run on the controllers. 
[1271232 ] http://bugzilla.redhat.com/1271232 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: tempest_lib.exceptions.Conflict: An object with that identifier already exists [1270805 ] http://bugzilla.redhat.com/1270805 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: Glance client returning 'Expected endpoint' [1271335 ] http://bugzilla.redhat.com/1271335 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support explicit configuration of L2 population [1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start [1271317 ] http://bugzilla.redhat.com/1271317 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: instack-virt-setup fails: error Running install- packages install [1272376 ] http://bugzilla.redhat.com/1272376 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: Duplicate nova hypervisors after rebooting compute nodes [1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone [1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection [1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. There are not enough hosts available., Code: 500" [1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: rfe: support Keystone HTTPD [1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Read bit set for others for Openstack services directories in /etc [1273121 ] http://bugzilla.redhat.com/1273121 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: openstack help returns errors [1270910 ] http://bugzilla.redhat.com/1270910 (ASSIGNED) Component: rdo-manager Last change: 2015-10-15 Summary: IP address from external subnet gets assigned to br-ex when using default single-nic-vlans templates [1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree" [1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon [1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack [1272167 ] http://bugzilla.redhat.com/1272167 (NEW) Component: rdo-manager Last change: 2015-11-10 Summary: [RFE] Support enabling the port security extension [1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment. 
[1269622 ] http://bugzilla.redhat.com/1269622 (NEW) Component: rdo-manager Last change: 2015-11-10 Summary: [RFE] support override of API and RPC worker counts [1271289 ] http://bugzilla.redhat.com/1271289 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: overcloud-novacompute stuck in spawning state [1269894 ] http://bugzilla.redhat.com/1269894 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Add creation of demo tenant, network and installation of demo images [1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure [1269661 ] http://bugzilla.redhat.com/1269661 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Supporting SR-IOV enabled deployments [1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url" [1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images [1273541 ] http://bugzilla.redhat.com/1273541 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: RDO-Manager needs epel.repo enabled (otherwise undercloud deployment fails.) [1271726 ] http://bugzilla.redhat.com/1271726 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: 1 of the overcloud VMs (nova) is stack in spawning state [1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device- mapper* [1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci [1273680 ] http://bugzilla.redhat.com/1273680 (ASSIGNED) Component: rdo-manager Last change: 2015-10-21 Summary: HA overcloud with network isolation deployment fails [1276097 ] http://bugzilla.redhat.com/1276097 (NEW) Component: rdo-manager Last change: 2015-10-31 Summary: dnsmasq-dhcp: DHCPDISCOVER no address available ### rdo-manager-cli (6 bugs) [1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step [1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the ouptut of openstack management plan show --long command is not readable [1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value [1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL) [1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command [1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling ### rdopkg (1 bug) [1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on 
update download ### RFEs (3 bugs) [1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot [1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool [1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: RFEs Last change: 2015-04-30 Summary: IPMI driver for Ironic should support RAID for operating system/root parition ### tempest (1 bug) [1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo- manager ## Fixed bugs This is a list of "fixed" bugs by component. A "fixed" bug is fixed state MODIFIED, POST, ON_QA and has been fixed. You can help out by testing the fix to make sure it works as intended. (200 bugs) ### diskimage-builder (1 bug) [1228761 ] http://bugzilla.redhat.com/1228761 (MODIFIED) Component: diskimage-builder Last change: 2015-09-23 Summary: DIB_YUM_REPO_CONF points to two files and that breaks imagebuilding ### distribution (6 bugs) [1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack- neutron-*aas [1265690 ] http://bugzilla.redhat.com/1265690 (ON_QA) Component: distribution Last change: 2015-09-28 Summary: Update python-networkx to 1.10 [1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: update el6 icehouse kombu packages for improved performance [1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api [1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr [1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO ### instack-undercloud (2 bugs) [1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six" [1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined ### openstack-ceilometer (8 bugs) [1265708 ] http://bugzilla.redhat.com/1265708 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-05 Summary: Ceilometer requires pymongo>=3.0.2 [1214928 ] http://bugzilla.redhat.com/1214928 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-05 Summary: package ceilometermiddleware missing [1265721 ] http://bugzilla.redhat.com/1265721 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-05 Summary: FIle /etc/ceilometer/meters.yaml missing [1263839 ] http://bugzilla.redhat.com/1263839 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-05 Summary: openstack-ceilometer should requires python-oslo-policy in kilo [1265746 ] http://bugzilla.redhat.com/1265746 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-05 Summary: Options 'disable_non_metric_meters' and 'meter_definitions_cfg_file' are missing from ceilometer.conf [1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: 
openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency [1271002 ] http://bugzilla.redhat.com/1271002 (MODIFIED) Component: openstack-ceilometer Last change: 2015-10-23 Summary: Ceilometer dbsync failing during HA deployment [1265818 ] http://bugzilla.redhat.com/1265818 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-05 Summary: ceilometer polling agent does not start ### openstack-cinder (5 bugs) [1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0] [1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder [1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO) [994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo] [1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume) ### openstack-glance (4 bugs) [1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests [1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue [1268146 ] http://bugzilla.redhat.com/1268146 (ON_QA) Component: openstack-glance Last change: 2015-10-02 Summary: openstack-glance-registry will not start: missing systemd dependency [1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files ### openstack-heat (3 bugs) [1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack- heat [1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE [1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: missing dependency in Heat delorean build ### openstack-horizon (1 bug) [1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing ### openstack-ironic-discoverd (1 bug) [1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-keystone (1 bug) [1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-11-09 Summary: file templated catalogs do not work in protocol v3 ### openstack-neutron (14 bugs) [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) 
Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2015-10-26 Summary: neutron should not specify signing_dir in neutron- dist.conf [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials [1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. neutron-db-manage trying to import systemd [1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev [1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack- neutron-openvswitch installed [1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service [1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id' [1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini [1270325 ] http://bugzilla.redhat.com/1270325 (MODIFIED) Component: openstack-neutron Last change: 2015-10-19 Summary: neutron-ovs-cleanup fails to start with bad path to ovs plugin configuration ### openstack-nova (5 bugs) [1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2014-06-03 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail [1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2015-05-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all [1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2015-05-05 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options [1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2015-04-14 Summary: openstack-nova-compute fails to start because python- psutil is missing after installing with packstack [958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence. 
### openstack-packstack (61 bugs) [1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db. [1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error [1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/ manifests/192.168.122.82_api_nova.pp:41:3 [976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-10-07 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local) [1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed [1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled [964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user [1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Change packstack to use openstack-puppet-modules [1269158 ] http://bugzilla.redhat.com/1269158 (POST) Component: openstack-packstack Last change: 2015-10-19 Summary: Sahara configuration should be affected by heat availability (broken by default right now) [1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: openstack-packstack Last change: 2015-07-29 Summary: Cannot start nova-network on juno - Centos7 [1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher [1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email [1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils [1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes [1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error [958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails [1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and 
configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. [1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack [1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in [1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning [979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules [1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack icehouse doesn't install anything because of repo [1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non- localhost connections to 'guest' user [1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5 [957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios [995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: support setting up apache to serve keystone requests [1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1 [1259354 ] http://bugzilla.redhat.com/1259354 (MODIFIED) Component: openstack-packstack Last change: 2015-11-10 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error [990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rdo release RPM not installed on all fedora hosts [1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required [1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"? 
[1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre [1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed [1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5 [1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl [1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp [1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value [1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack requires 2 runs to install ceilometer [1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available [1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface" [1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive [1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2014-04-23 Summary: packstack configures br-ex to use gateway ip [1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/N ova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[ net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required [1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added [1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond [1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance [1265661 ] http://bugzilla.redhat.com/1265661 (POST) Component: openstack-packstack Last change: 2015-09-23 Summary: Packstack does not install Sahara services (RDO Liberty) [1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-10-23 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7 [974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) 
Component: openstack-packstack Last change: 2015-06-04 Summary: please give greater control over use of EPEL [1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl [1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default [1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent [1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface [1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL [1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32 [991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted [1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7) [1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels [1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution ### openstack-puppet-modules (19 bugs) [1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: cinder modules require glance installed [1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables- services [1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure" [1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support [1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: explicit check for pymongo is incorrect [1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors [1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation [1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD configuration breaks GSSAPI [1270957 ] http://bugzilla.redhat.com/1270957 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-10-13 Summary: Undercloud install fails on Error: Could not find class ::ironic::inspector 
for instack on node instack [1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by packstack for demo tenant is wrongly marked as external [1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: Could not find command 'restorecon' [1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator [1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno [1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6 [1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: netns.py syntax error [1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance [1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance [1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig /network-scripts/ifcfg-br-{int,tun} [1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp does not ensure iptables-services package installation ### openstack-sahara (1 bug) [1268235 ] http://bugzilla.redhat.com/1268235 (MODIFIED) Component: openstack-sahara Last change: 2015-10-02 Summary: rootwrap filter not included in Sahara RPM ### openstack-selinux (13 bugs) [1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7) [1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2015-06-23 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicO ps fails to launch instance w/ selinux enforcing [1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications [1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors [1219406 ] http://bugzilla.redhat.com/1219406 (MODIFIED) Component: openstack-selinux Last change: 2015-11-06 Summary: Glance over nfs fails due to selinux [1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp [1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances, [1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service [1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 
2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?" [1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux policies set for neutron-dhcp-agent [1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later [1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access [1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors ### openstack-swift (1 bug) [997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages ### openstack-tripleo-heat-templates (2 bugs) [1235508 ] http://bugzilla.redhat.com/1235508 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-09-29 Summary: Package update does not take puppet managed packages into account [1272572 ] http://bugzilla.redhat.com/1272572 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-10-28 Summary: Error: Unable to retrieve volume limit information when accessing System Defaults in Horizon ### openstack-trove (3 bugs) [1278608 ] http://bugzilla.redhat.com/1278608 (MODIFIED) Component: openstack-trove Last change: 2015-11-06 Summary: trove-api fails to start [1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies [1219069 ] http://bugzilla.redhat.com/1219069 (POST) Component: openstack-trove Last change: 2015-11-05 Summary: trove-guestagent systemd unit file uses incorrect path for guest_info ### openstack-tuskar (1 bug) [1222718 ] http://bugzilla.redhat.com/1222718 (ON_QA) Component: openstack-tuskar Last change: 2015-07-06 Summary: MySQL Column is Too Small for Heat Template ### openstack-tuskar-ui (3 bugs) [1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails [1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py [1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path ### openstack-utils (2 bugs) [1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: update openstack-status for rdo-manager [1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status as admin falsely shows zero instances ### Package Review (1 bug) [1243550 ] http://bugzilla.redhat.com/1243550 (ON_QA) Component: Package Review Last change: 2015-10-09 Summary: Review Request: openstack-aodh - OpenStack Telemetry Alarming ### python-cinderclient (1 bug) [1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last 
change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run ### python-django-horizon (3 bugs) [1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack- dashboard/static/dashboard/ [1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in openstack-dashboard.conf to show CSS content [1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA) Component: python-django-horizon Last change: 2015-06-24 Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one ### python-glanceclient (2 bugs) [1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock [1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch ### python-heatclient (3 bugs) [1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr [1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO [1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed ### python-keystoneclient (3 bugs) [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs [1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion [971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO] ### python-neutronclient (3 bugs) [1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request [1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info() [1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient ### python-novaclient (1 bug) [947535 ] http://bugzilla.redhat.com/947535 (MODIFIED) Component: python-novaclient Last change: 2015-06-04 Summary: nova commands fail with gnomekeyring IOError ### python-openstackclient (1 bug) [1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2015-03-02 Summary: Rebase python-openstackclient to version 1.0.0 ### python-oslo-config (1 bug) [1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2015-06-04 Summary: oslo.config >=1.2.1 is required for trove-manage ### python-pecan (1 bug) [1265365 ] http://bugzilla.redhat.com/1265365 (MODIFIED) Component: python-pecan Last change: 2015-10-05 Summary: Neutron missing pecan dependency ### python-swiftclient (1 bug) [1126942 ] 
http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation ### python-tuskarclient (2 bugs) [1209395 ] http://bugzilla.redhat.com/1209395 (POST) Component: python-tuskarclient Last change: 2015-06-04 Summary: `tuskar help` is missing a description next to plan- templates [1209431 ] http://bugzilla.redhat.com/1209431 (POST) Component: python-tuskarclient Last change: 2015-06-18 Summary: creating a tuskar plan with the exact name gives the user a traceback ### rdo-manager (9 bugs) [1212351 ] http://bugzilla.redhat.com/1212351 (POST) Component: rdo-manager Last change: 2015-06-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command [1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-15 Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails [1270033 ] http://bugzilla.redhat.com/1270033 (POST) Component: rdo-manager Last change: 2015-10-14 Summary: [RDO-Manager] Node inspection fails when changing the default 'inspection_iprange' value in undecloud.conf. [1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED) Component: rdo-manager Last change: 2015-05-25 Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable [1271433 ] http://bugzilla.redhat.com/1271433 (MODIFIED) Component: rdo-manager Last change: 2015-10-20 Summary: Horizon fails to load [1272180 ] http://bugzilla.redhat.com/1272180 (MODIFIED) Component: rdo-manager Last change: 2015-10-19 Summary: Horizon doesn't load when deploying without pacemaker [1251267 ] http://bugzilla.redhat.com/1251267 (POST) Component: rdo-manager Last change: 2015-08-12 Summary: Overcloud deployment fails for unspecified reason [1268990 ] http://bugzilla.redhat.com/1268990 (POST) Component: rdo-manager Last change: 2015-10-07 Summary: missing from docs Build images fails without : export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo" [1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED) Component: rdo-manager Last change: 2015-11-04 Summary: rdo-manager: fail to discover nodes with "instack- ironic-deployment --discover-nodes": ERROR: Data pre- processing failed ### rdo-manager-cli (10 bugs) [1273197 ] http://bugzilla.redhat.com/1273197 (POST) Component: rdo-manager-cli Last change: 2015-10-20 Summary: VXLAN should be default neutron network type [1233429 ] http://bugzilla.redhat.com/1233429 (POST) Component: rdo-manager-cli Last change: 2015-06-20 Summary: Lack of consistency in specifying plan argument for openstack overcloud commands [1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED) Component: rdo-manager-cli Last change: 2015-08-03 Summary: Node show of unified CLI has bad formatting [1229912 ] http://bugzilla.redhat.com/1229912 (POST) Component: rdo-manager-cli Last change: 2015-06-10 Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once) . 
[1219053 ] http://bugzilla.redhat.com/1219053 (POST) Component: rdo-manager-cli Last change: 2015-06-18 Summary: "list" command doesn't display nodes in some cases [1211190 ] http://bugzilla.redhat.com/1211190 (POST) Component: rdo-manager-cli Last change: 2015-06-04 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI [1230265 ] http://bugzilla.redhat.com/1230265 (POST) Component: rdo-manager-cli Last change: 2015-06-26 Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated. [1278972 ] http://bugzilla.redhat.com/1278972 (POST) Component: rdo-manager-cli Last change: 2015-11-08 Summary: rdo-manager liberty delorean dib failing w/ "No module named passlib.utils" [1232838 ] http://bugzilla.redhat.com/1232838 (POST) Component: rdo-manager-cli Last change: 2015-09-04 Summary: OSC plugin isn't saving plan configuration values [1212367 ] http://bugzilla.redhat.com/1212367 (POST) Component: rdo-manager-cli Last change: 2015-06-16 Summary: Ensure proper nodes states after enroll and before deployment ### rdopkg (1 bug) [1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA) Component: rdopkg Last change: 2015-08-06 Summary: python-manilaclient is missing from kilo RDO repository Thanks, Chandan Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Thu Nov 12 15:28:07 2015 From: apevec at gmail.com (Alan Pevec) Date: Thu, 12 Nov 2015 16:28:07 +0100 Subject: [Rdo-list] Horizon crashed on my Kilo environment In-Reply-To: <612007878.8673067.1447335977849.JavaMail.zimbra@redhat.com> References: <966267949.8668316.1447335775296.JavaMail.zimbra@redhat.com> <612007878.8673067.1447335977849.JavaMail.zimbra@redhat.com> Message-ID: Please enable CloudSIG kilo testing repo, there are updates for both horizon and python-d-o-a, and report back if they fix your issue. Cheers, Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier.pena at redhat.com Thu Nov 12 15:55:05 2015 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 12 Nov 2015 10:55:05 -0500 (EST) Subject: [Rdo-list] [Mitaka] Automatic restart for services In-Reply-To: <56448D6D.90704@redhat.com> References: <56448D6D.90704@redhat.com> Message-ID: <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com> ----- Original Message ----- > On 11/12/2015 05:42 AM, Ha?kel wrote: > > Hi, > > > > this has been discussed for a while, so I'd like to set automatic > > restart for OpenStack services as a goal for Mitaka - milestone 1. > > These are easy fixes, and by setting it to milestone 1, we'll have > > more than enough time to test this and optimize. > > > > I already created a trello card to track that effort, but I'd like to > > get your feedback first. > > https://trello.com/c/HfXMLSTD/106-set-automatic-restart-for-service > > can you describe what you mean here by automatic restart for services? > > I'm worried what the implications would be for things that are running > under Pacemaker control in an HA environment, so I think we need more > details here > FWIW, this option is already enabled in some services, such as nova-api (https://github.com/openstack-packages/nova/blob/rpm-master/openstack-nova-api.service) where Restart=always. I've checked a Juno deployment, and that option was already set back then. 
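For reference, the Restart= setting being discussed is ordinary systemd unit configuration. A minimal sketch of what it amounts to, written as a local drop-in rather than an edit to the packaged unit file, would be the following (illustrative only; the unit name is simply the nova-api example above, and the RestartSec value is an assumption, not something taken from the RDO packaging):

# /etc/systemd/system/openstack-nova-api.service.d/restart.conf  (hypothetical drop-in)
[Service]
Restart=always
RestartSec=2

# reload systemd so the drop-in is picked up
systemctl daemon-reload
systemctl restart openstack-nova-api.service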
Regards, Javier > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From fdinitto at redhat.com Thu Nov 12 15:56:09 2015 From: fdinitto at redhat.com (Fabio M. Di Nitto) Date: Thu, 12 Nov 2015 16:56:09 +0100 Subject: [Rdo-list] [Mitaka] Automatic restart for services In-Reply-To: <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com> References: <56448D6D.90704@redhat.com> <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com> Message-ID: <5644B699.7060605@redhat.com> On 11/12/2015 4:55 PM, Javier Pena wrote: > > > ----- Original Message ----- >> On 11/12/2015 05:42 AM, Ha?kel wrote: >>> Hi, >>> >>> this has been discussed for a while, so I'd like to set automatic >>> restart for OpenStack services as a goal for Mitaka - milestone 1. >>> These are easy fixes, and by setting it to milestone 1, we'll have >>> more than enough time to test this and optimize. >>> >>> I already created a trello card to track that effort, but I'd like to >>> get your feedback first. >>> https://trello.com/c/HfXMLSTD/106-set-automatic-restart-for-service >> >> can you describe what you mean here by automatic restart for services? >> >> I'm worried what the implications would be for things that are running >> under Pacemaker control in an HA environment, so I think we need more >> details here >> > > FWIW, this option is already enabled in some services, such as nova-api (https://github.com/openstack-packages/nova/blob/rpm-master/openstack-nova-api.service) where Restart=always. I've checked a Juno deployment, and that option was already set back then. > It shouldn?t affect pacemaker deployment, because we do override some of those values via dbus config. Fabio > Regards, > Javier > >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> From ihrachys at redhat.com Thu Nov 12 16:01:51 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 12 Nov 2015 17:01:51 +0100 Subject: [Rdo-list] [Mitaka] Automatic restart for services In-Reply-To: <5644B699.7060605@redhat.com> References: <56448D6D.90704@redhat.com> <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com> <5644B699.7060605@redhat.com> Message-ID: <03C72FED-B93F-40EC-9DF1-694E45ED3764@redhat.com> Fabio M. Di Nitto wrote: > > > On 11/12/2015 4:55 PM, Javier Pena wrote: >> ----- Original Message ----- >>> On 11/12/2015 05:42 AM, Ha?kel wrote: >>>> Hi, >>>> >>>> this has been discussed for a while, so I'd like to set automatic >>>> restart for OpenStack services as a goal for Mitaka - milestone 1. >>>> These are easy fixes, and by setting it to milestone 1, we'll have >>>> more than enough time to test this and optimize. >>>> >>>> I already created a trello card to track that effort, but I'd like to >>>> get your feedback first. >>>> https://trello.com/c/HfXMLSTD/106-set-automatic-restart-for-service >>> >>> can you describe what you mean here by automatic restart for services? 
>>> >>> I'm worried what the implications would be for things that are running >>> under Pacemaker control in an HA environment, so I think we need more >>> details here >> >> FWIW, this option is already enabled in some services, such as nova-api >> (https://github.com/openstack-packages/nova/blob/rpm-master/openstack-nova-api.service) >> where Restart=always. I've checked a Juno deployment, and that option >> was already set back then. > > It shouldn?t affect pacemaker deployment, because we do override some of > those values via dbus config. To clarify things, doesn?t it affect just nova-api because you have some special treatment for the service? or it would not affect you if applied to other units? [In that case we could go forward and set automatic restart policy there, while you could stick to your hooks to influence service restart behaviour.] Ihar From fdinitto at redhat.com Thu Nov 12 16:04:21 2015 From: fdinitto at redhat.com (Fabio M. Di Nitto) Date: Thu, 12 Nov 2015 17:04:21 +0100 Subject: [Rdo-list] [Mitaka] Automatic restart for services In-Reply-To: <03C72FED-B93F-40EC-9DF1-694E45ED3764@redhat.com> References: <56448D6D.90704@redhat.com> <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com> <5644B699.7060605@redhat.com> <03C72FED-B93F-40EC-9DF1-694E45ED3764@redhat.com> Message-ID: <5644B885.6050506@redhat.com> On 11/12/2015 5:01 PM, Ihar Hrachyshka wrote: > Fabio M. Di Nitto wrote: > >> >> >> On 11/12/2015 4:55 PM, Javier Pena wrote: >>> ----- Original Message ----- >>>> On 11/12/2015 05:42 AM, Ha?kel wrote: >>>>> Hi, >>>>> >>>>> this has been discussed for a while, so I'd like to set automatic >>>>> restart for OpenStack services as a goal for Mitaka - milestone 1. >>>>> These are easy fixes, and by setting it to milestone 1, we'll have >>>>> more than enough time to test this and optimize. >>>>> >>>>> I already created a trello card to track that effort, but I'd like to >>>>> get your feedback first. >>>>> https://trello.com/c/HfXMLSTD/106-set-automatic-restart-for-service >>>> >>>> can you describe what you mean here by automatic restart for services? >>>> >>>> I'm worried what the implications would be for things that are running >>>> under Pacemaker control in an HA environment, so I think we need more >>>> details here >>> >>> FWIW, this option is already enabled in some services, such as >>> nova-api >>> (https://github.com/openstack-packages/nova/blob/rpm-master/openstack-nova-api.service) >>> where Restart=always. I've checked a Juno deployment, and that option >>> was already set back then. >> >> It shouldn?t affect pacemaker deployment, because we do override some of >> those values via dbus config. > > To clarify things, doesn?t it affect just nova-api because you have some > special treatment for the service? or it would not affect you if applied > to other units? [In that case we could go forward and set automatic > restart policy there, while you could stick to your hooks to influence > service restart behaviour.] all services in pacemaker that are managed/monitored via systemd gets a set of overrides to avoid those kind of issues. 
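To make the override mechanism concrete, the drop-in pacemaker generates for a unit it manages has roughly this shape (a sketch only; the file is written by pacemaker itself at runtime, and the exact path and contents vary by pacemaker version, so the path and unit name here are assumptions for illustration):

# /run/systemd/system/openstack-nova-api.service.d/50-pacemaker.conf
# (generated by pacemaker; shown only to illustrate the mechanism, not to be created by hand)
[Unit]
Description=Cluster Controlled openstack-nova-api

[Service]
Restart=no

Because a later drop-in overrides the packaged unit, a Restart=always shipped by the distribution is effectively neutralized on cluster-managed nodes, while it still takes effect for deployments that do not use Pacemaker.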
Fabio > > Ihar From ihrachys at redhat.com Thu Nov 12 16:06:58 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 12 Nov 2015 17:06:58 +0100 Subject: [Rdo-list] [Mitaka] Automatic restart for services In-Reply-To: <5644B885.6050506@redhat.com> References: <56448D6D.90704@redhat.com> <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com> <5644B699.7060605@redhat.com> <03C72FED-B93F-40EC-9DF1-694E45ED3764@redhat.com> <5644B885.6050506@redhat.com> Message-ID: <47CCAF17-92BA-4DE9-A699-48265ACE4D5B@redhat.com> Fabio M. Di Nitto wrote: > > > On 11/12/2015 5:01 PM, Ihar Hrachyshka wrote: >> Fabio M. Di Nitto wrote: >> >>> On 11/12/2015 4:55 PM, Javier Pena wrote: >>>> ----- Original Message ----- >>>>> On 11/12/2015 05:42 AM, Ha?kel wrote: >>>>>> Hi, >>>>>> >>>>>> this has been discussed for a while, so I'd like to set automatic >>>>>> restart for OpenStack services as a goal for Mitaka - milestone 1. >>>>>> These are easy fixes, and by setting it to milestone 1, we'll have >>>>>> more than enough time to test this and optimize. >>>>>> >>>>>> I already created a trello card to track that effort, but I'd like to >>>>>> get your feedback first. >>>>>> https://trello.com/c/HfXMLSTD/106-set-automatic-restart-for-service >>>>> >>>>> can you describe what you mean here by automatic restart for services? >>>>> >>>>> I'm worried what the implications would be for things that are running >>>>> under Pacemaker control in an HA environment, so I think we need more >>>>> details here >>>> >>>> FWIW, this option is already enabled in some services, such as >>>> nova-api >>>> (https://github.com/openstack-packages/nova/blob/rpm-master/openstack-nova-api.service) >>>> where Restart=always. I've checked a Juno deployment, and that option >>>> was already set back then. >>> >>> It shouldn?t affect pacemaker deployment, because we do override some of >>> those values via dbus config. >> >> To clarify things, doesn?t it affect just nova-api because you have some >> special treatment for the service? or it would not affect you if applied >> to other units? [In that case we could go forward and set automatic >> restart policy there, while you could stick to your hooks to influence >> service restart behaviour.] > > all services in pacemaker that are managed/monitored via systemd gets a > set of overrides to avoid those kind of issues. Then we should be safe to proceed with the feature. Thanks, Ihar From mmosesohn at mirantis.com Thu Nov 12 16:13:55 2015 From: mmosesohn at mirantis.com (Matthew Mosesohn) Date: Thu, 12 Nov 2015 19:13:55 +0300 Subject: [Rdo-list] [Mitaka] Automatic restart for services In-Reply-To: <47CCAF17-92BA-4DE9-A699-48265ACE4D5B@redhat.com> References: <56448D6D.90704@redhat.com> <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com> <5644B699.7060605@redhat.com> <03C72FED-B93F-40EC-9DF1-694E45ED3764@redhat.com> <5644B885.6050506@redhat.com> <47CCAF17-92BA-4DE9-A699-48265ACE4D5B@redhat.com> Message-ID: Typically if you manage services with pacemaker, you would put the affected service into unmanaged state before you run `yum update`, so that RPM can do its thing. Afterwards, just mark the services managed. On Thu, Nov 12, 2015 at 7:06 PM, Ihar Hrachyshka wrote: > Fabio M. Di Nitto wrote: > > >> >> On 11/12/2015 5:01 PM, Ihar Hrachyshka wrote: >> >>> Fabio M. 
Di Nitto wrote: >>> >>> On 11/12/2015 4:55 PM, Javier Pena wrote: >>>> >>>>> ----- Original Message ----- >>>>> >>>>>> On 11/12/2015 05:42 AM, Ha?kel wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> this has been discussed for a while, so I'd like to set automatic >>>>>>> restart for OpenStack services as a goal for Mitaka - milestone 1. >>>>>>> These are easy fixes, and by setting it to milestone 1, we'll have >>>>>>> more than enough time to test this and optimize. >>>>>>> >>>>>>> I already created a trello card to track that effort, but I'd like to >>>>>>> get your feedback first. >>>>>>> https://trello.com/c/HfXMLSTD/106-set-automatic-restart-for-service >>>>>>> >>>>>> >>>>>> can you describe what you mean here by automatic restart for services? >>>>>> >>>>>> I'm worried what the implications would be for things that are running >>>>>> under Pacemaker control in an HA environment, so I think we need more >>>>>> details here >>>>>> >>>>> >>>>> FWIW, this option is already enabled in some services, such as >>>>> nova-api >>>>> ( >>>>> https://github.com/openstack-packages/nova/blob/rpm-master/openstack-nova-api.service >>>>> ) >>>>> where Restart=always. I've checked a Juno deployment, and that option >>>>> was already set back then. >>>>> >>>> >>>> It shouldn?t affect pacemaker deployment, because we do override some of >>>> those values via dbus config. >>>> >>> >>> To clarify things, doesn?t it affect just nova-api because you have some >>> special treatment for the service? or it would not affect you if applied >>> to other units? [In that case we could go forward and set automatic >>> restart policy there, while you could stick to your hooks to influence >>> service restart behaviour.] >>> >> >> all services in pacemaker that are managed/monitored via systemd gets a >> set of overrides to avoid those kind of issues. >> > > Then we should be safe to proceed with the feature. > > Thanks, > > Ihar > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Thu Nov 12 14:05:27 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 12 Nov 2015 15:05:27 +0100 Subject: [Rdo-list] [Mitaka] Automatic restart for services In-Reply-To: <5644945F.6060500@redhat.com> References: <56448D6D.90704@redhat.com> <5644945F.6060500@redhat.com> Message-ID: 2015-11-12 14:30 GMT+01:00 Perry Myers : > On 11/12/2015 08:06 AM, Alan Pevec wrote: >> 2015-11-12 14:00 GMT+01:00 Perry Myers : >>> can you describe what you mean here by automatic restart for services? >>> >>> I'm worried what the implications would be for things that are running >>> under Pacemaker control in an HA environment, so I think we need more >>> details here >> >> Can we have more details what "under Pacemaker control" means? >> Haikel's proposal is about modifying systemd service unit files but >> afaik Pacemaker is not using them but what they call "resource agents" >> ? > > Not correct... not every service _has_ a resource agent. Mariadb/galera > and rabbitmq do, but all other services are controlled via Pacemaker via > their systemd scripts > > So when you tell pacemaker to stop nova-api, it does so via systemd > This is ok, if you stop normally services, they won't be restarted. 
> If something outside of pacemaker knowledge (like user from command > line, or rpm %post or whatever) uses systemd directly to stop nova-api, > pacemaker says "OMG the service died" and tries to recover > > Perry > If Andrew (or anyone else) could confirm, but from my understanding, Pacemaker already disable systemd auto-respawn since 1.1.10 (which is quite old, RHEL7 ship 1.1.12 at least) https://github.com/ClusterLabs/pacemaker/commit/37c6efae3b9d1db18d8d782bd8ff1da7b77f14f3 https://github.com/ClusterLabs/pacemaker/blob/master/lib/services/systemd.c#L564 <= the code generating the override file shows that it does disable that feature/ Concerning the changes, they will considerably improve reliability for people not relying on Pacemaker, so I guess we're good in that aspect. Regards, H. From hguemar at fedoraproject.org Thu Nov 12 17:04:04 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 12 Nov 2015 18:04:04 +0100 Subject: [Rdo-list] [Mitaka] Automatic restart for services In-Reply-To: <47CCAF17-92BA-4DE9-A699-48265ACE4D5B@redhat.com> References: <56448D6D.90704@redhat.com> <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com> <5644B699.7060605@redhat.com> <03C72FED-B93F-40EC-9DF1-694E45ED3764@redhat.com> <5644B885.6050506@redhat.com> <47CCAF17-92BA-4DE9-A699-48265ACE4D5B@redhat.com> Message-ID: 2015-11-12 17:06 GMT+01:00 Ihar Hrachyshka : > > Then we should be safe to proceed with the feature. > > Thanks, > > Ihar > I guess we could start with fixing services to implement that feature. H. From mcornea at redhat.com Thu Nov 12 17:39:15 2015 From: mcornea at redhat.com (Marius Cornea) Date: Thu, 12 Nov 2015 12:39:15 -0500 (EST) Subject: [Rdo-list] Horizon crashed on my Kilo environment In-Reply-To: References: <966267949.8668316.1447335775296.JavaMail.zimbra@redhat.com> <612007878.8673067.1447335977849.JavaMail.zimbra@redhat.com> Message-ID: <1271060555.8886458.1447349955698.JavaMail.zimbra@redhat.com> Where can I get the url for the CloudSIG kilo testing repo? Thanks ----- Original Message ----- > From: "Alan Pevec" > To: "Marius Cornea" > Cc: "Rdo-list at redhat.com" > Sent: Thursday, November 12, 2015 4:28:07 PM > Subject: Re: [Rdo-list] Horizon crashed on my Kilo environment > > Please enable CloudSIG kilo testing repo, there are updates for both > horizon and python-d-o-a, and report back if they fix your issue. > > Cheers, > Alan > From bderzhavets at hotmail.com Fri Nov 13 18:09:05 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 13 Nov 2015 18:09:05 +0000 Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md In-Reply-To: <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com> References: <56448D6D.90704@redhat.com>, <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com> Message-ID: Working on this task I was able to build 3 node HAProxy/Keepalived Controller's cluster , create compute node , launch CirrOS VM, However, I cannot ping floating IP of VM running on compute ( total 4 CentOS 7.1 VMs, nested kvm enabled ) Looks like provider external networks doesn't work for me. 
But , to have eth0 without IP (due to `ovs-vsctl add-port br-eth0 eth0 ) still allowing to ping 10.10.10.1, I need NetworkManager active, rather then network.service [root at hacontroller1 network-scripts]# systemctl status NetworkManager NetworkManager.service - Network Manager Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled) Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago Main PID: 808 (NetworkManager) CGroup: /system.slice/NetworkManager.service ?? 808 /usr/sbin/NetworkManager --no-daemon ??2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0... Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: NetworkManager state is n...L Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s. Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: (eth0): Activation: succe.... Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: startup complete [root at hacontroller1 network-scripts]# systemctl status network.service network.service - LSB: Bring up/down networking Loaded: loaded (/etc/rc.d/init.d/network) Active: inactive (dead) [root at hacontroller1 network-scripts]# cat ifcfg-eth0 TYPE="Ethernet" BOOTPROTO="static" NAME="eth0" DEVICE=eth0 ONBOOT="yes" [root at hacontroller1 network-scripts]# ping -c 3 10.10.10.1 PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data. 64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms 64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms 64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms --- 10.10.10.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms If I disable NetworkManager and enable network this feature will be lost. Eth0 would have to have static IP or dhcp lease, to provide route to 10.10.10.0/24. Thank you. Boris. -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot from 2015-11-12 21-29-48.png Type: image/png Size: 170500 bytes Desc: Screenshot from 2015-11-12 21-29-48.png URL: From apevec at gmail.com Fri Nov 13 18:39:11 2015 From: apevec at gmail.com (Alan Pevec) Date: Fri, 13 Nov 2015 19:39:11 +0100 Subject: [Rdo-list] Horizon crashed on my Kilo environment In-Reply-To: <1271060555.8886458.1447349955698.JavaMail.zimbra@redhat.com> References: <966267949.8668316.1447335775296.JavaMail.zimbra@redhat.com> <612007878.8673067.1447335977849.JavaMail.zimbra@redhat.com> <1271060555.8886458.1447349955698.JavaMail.zimbra@redhat.com> Message-ID: 2015-11-12 18:39 GMT+01:00 Marius Cornea : > Where can I get the url for the CloudSIG kilo testing repo? http://buildlogs.centos.org/centos/7/cloud/openstack-kilo/ It is installed as, disabled by default, centos-openstack-kilo-test repo by centos-release-openstack-kilo RPM. 
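Concretely, pulling the fixes from that testing repo looks something like the following (a sketch; the repo id is the one named above, and the package names are just the Horizon and python-django-openstack-auth updates mentioned earlier in the thread, so adjust as needed):

yum install -y centos-release-openstack-kilo
# the -test repo is disabled by default, so enable it only for this transaction
yum --enablerepo=centos-openstack-kilo-test update openstack-dashboard python-django-openstack-auth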
Cheers, Alan From mcornea at redhat.com Fri Nov 13 18:55:58 2015 From: mcornea at redhat.com (Marius Cornea) Date: Fri, 13 Nov 2015 13:55:58 -0500 (EST) Subject: [Rdo-list] Horizon crashed on my Kilo environment In-Reply-To: References: <966267949.8668316.1447335775296.JavaMail.zimbra@redhat.com> <612007878.8673067.1447335977849.JavaMail.zimbra@redhat.com> <1271060555.8886458.1447349955698.JavaMail.zimbra@redhat.com> Message-ID: <1364572332.10052467.1447440958666.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Alan Pevec" > To: "Marius Cornea" > Cc: "Rdo-list at redhat.com" > Sent: Friday, November 13, 2015 7:39:11 PM > Subject: Re: [Rdo-list] Horizon crashed on my Kilo environment > > 2015-11-12 18:39 GMT+01:00 Marius Cornea : > > Where can I get the url for the CloudSIG kilo testing repo? > > http://buildlogs.centos.org/centos/7/cloud/openstack-kilo/ > It is installed as, disabled by default, centos-openstack-kilo-test > repo by centos-release-openstack-kilo RPM. Thanks! I applied the updates and I can successfully access Horizon. > > Cheers, > Alan > From bderzhavets at hotmail.com Fri Nov 13 19:38:01 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 13 Nov 2015 19:38:01 +0000 Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md In-Reply-To: References: <56448D6D.90704@redhat.com>, <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com>, Message-ID: I understand that in usual situation , creating ifcfg-br-ex and ifcfg-eth2 ( as OVS bridge and OVS port) , `service network restart` should be run to make eth2 (no IP) OVS port of br-ex (any IP which belongs ext net and is available) What bad does NetworkManager when external network provider is used ? Disabling it, I break routing via eth0's interfaces of cluster nodes to 10.10.10.0/24 ( ext net), so nothing is supposed to work :- http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html Either I am missing something here. ________________________________________ From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets Sent: Friday, November 13, 2015 1:09 PM To: Javier Pena Cc: rdo-list at redhat.com Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md Working on this task I was able to build 3 node HAProxy/Keepalived Controller's cluster , create compute node , launch CirrOS VM, However, I cannot ping floating IP of VM running on compute ( total 4 CentOS 7.1 VMs, nested kvm enabled ) Looks like provider external networks doesn't work for me. But , to have eth0 without IP (due to `ovs-vsctl add-port br-eth0 eth0 ) still allowing to ping 10.10.10.1, I need NetworkManager active, rather then network.service [root at hacontroller1 network-scripts]# systemctl status NetworkManager NetworkManager.service - Network Manager Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled) Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago Main PID: 808 (NetworkManager) CGroup: /system.slice/NetworkManager.service ?? 808 /usr/sbin/NetworkManager --no-daemon ??2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0... Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: NetworkManager state is n...L Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s. 
Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: (eth0): Activation: succe.... Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: startup complete [root at hacontroller1 network-scripts]# systemctl status network.service network.service - LSB: Bring up/down networking Loaded: loaded (/etc/rc.d/init.d/network) Active: inactive (dead) [root at hacontroller1 network-scripts]# cat ifcfg-eth0 TYPE="Ethernet" BOOTPROTO="static" NAME="eth0" DEVICE=eth0 ONBOOT="yes" [root at hacontroller1 network-scripts]# ping -c 3 10.10.10.1 PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data. 64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms 64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms 64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms --- 10.10.10.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms If I disable NetworkManager and enable network this feature will be lost. Eth0 would have to have static IP or dhcp lease, to provide route to 10.10.10.0/24. Thank you. Boris. From dsneddon at redhat.com Fri Nov 13 19:46:47 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Fri, 13 Nov 2015 11:46:47 -0800 Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md In-Reply-To: References: <56448D6D.90704@redhat.com>, <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com>, Message-ID: <56463E27.7010909@redhat.com> On 11/13/2015 11:38 AM, Boris Derzhavets wrote: > I understand that in usual situation , creating ifcfg-br-ex and ifcfg-eth2 ( as OVS bridge and OVS port) , > `service network restart` should be run to make eth2 (no IP) OVS port of br-ex (any IP which belongs ext net and is available) > What bad does NetworkManager when external network provider is used ? > Disabling it, I break routing via eth0's interfaces of cluster nodes to 10.10.10.0/24 ( ext net), > so nothing is supposed to work :- > http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ > http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html > Either I am missing something here. > ________________________________________ > From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets > Sent: Friday, November 13, 2015 1:09 PM > To: Javier Pena > Cc: rdo-list at redhat.com > Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md > > Working on this task I was able to build 3 node HAProxy/Keepalived Controller's cluster , create compute node , launch CirrOS VM, > However, I cannot ping floating IP of VM running on compute ( total 4 CentOS 7.1 VMs, nested kvm enabled ) > Looks like provider external networks doesn't work for me. > > But , to have eth0 without IP (due to `ovs-vsctl add-port br-eth0 eth0 ) still allowing to ping 10.10.10.1, > I need NetworkManager active, rather then network.service > > [root at hacontroller1 network-scripts]# systemctl status NetworkManager > NetworkManager.service - Network Manager > Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled) > Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago > Main PID: 808 (NetworkManager) > CGroup: /system.slice/NetworkManager.service > ?? 808 /usr/sbin/NetworkManager --no-daemon > ??2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0... 
> > Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: NetworkManager state is n...L > Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s. > Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: (eth0): Activation: succe.... > Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: startup complete > > [root at hacontroller1 network-scripts]# systemctl status network.service > network.service - LSB: Bring up/down networking > Loaded: loaded (/etc/rc.d/init.d/network) > Active: inactive (dead) > > [root at hacontroller1 network-scripts]# cat ifcfg-eth0 > TYPE="Ethernet" > BOOTPROTO="static" > NAME="eth0" > DEVICE=eth0 > ONBOOT="yes" > > [root at hacontroller1 network-scripts]# ping -c 3 10.10.10.1 > PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data. > 64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms > 64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms > 64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms > > --- 10.10.10.1 ping statistics --- > 3 packets transmitted, 3 received, 0% packet loss, time 1999ms > rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms > > If I disable NetworkManager and enable network this feature will be lost. Eth0 would have to have static IP or dhcp lease, > to provide route to 10.10.10.0/24. > > Thank you. > Boris. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > OK, a few things here. First of all, you don't actually need to have an IP address on the host system to use a VLAN or interface as an external provider network. The Neutron router will have an IP on the right network, and within its namespace will be able to reach the 10.10.10.x network. It looks to me like NetworkManager is running dhclient for eth0, even though you have BOOTPROTO="static". This is causing an IP address to be added to eth0, so you are able to ping 10.10.10.x from the host. When you turn off NetworkManager, this unexpected behavior goes away, *but you should still be able to use provider networks*. Try creating a Neutron router with an IP on 10.10.10.x, and then you should be able to ping that network from the router namespace. If you want to be able to ping 10.10.10.x from the host, then you should put either a static IP or DHCP on the bridge, not on eth0. This should work whether you are running NetworkManager or network.service. -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From dsneddon at redhat.com Fri Nov 13 20:56:29 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Fri, 13 Nov 2015 12:56:29 -0800 Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md In-Reply-To: References: <56448D6D.90704@redhat.com>, <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com>, , <56463E27.7010909@redhat.com> Message-ID: <56464E7D.1020907@redhat.com> Hi Boris, Let's keep this on-list, there may be others who are having similar issues who could find this discussion useful. Answers inline... 
On 11/13/2015 12:17 PM, Boris Derzhavets wrote: > > > ________________________________________ > From: Dan Sneddon > Sent: Friday, November 13, 2015 2:46 PM > To: Boris Derzhavets; Javier Pena > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md > > On 11/13/2015 11:38 AM, Boris Derzhavets wrote: >> I understand that in usual situation , creating ifcfg-br-ex and ifcfg-eth2 ( as OVS bridge and OVS port) , >> `service network restart` should be run to make eth2 (no IP) OVS port of br-ex (any IP which belongs ext net and is available) >> What bad does NetworkManager when external network provider is used ? >> Disabling it, I break routing via eth0's interfaces of cluster nodes to 10.10.10.0/24 ( ext net), >> so nothing is supposed to work :- >> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >> http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html >> Either I am missing something here. >> ________________________________________ >> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets >> Sent: Friday, November 13, 2015 1:09 PM >> To: Javier Pena >> Cc: rdo-list at redhat.com >> Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >> >> Working on this task I was able to build 3 node HAProxy/Keepalived Controller's cluster , create compute node , launch CirrOS VM, >> However, I cannot ping floating IP of VM running on compute ( total 4 CentOS 7.1 VMs, nested kvm enabled ) >> Looks like provider external networks doesn't work for me. >> >> But , to have eth0 without IP (due to `ovs-vsctl add-port br-eth0 eth0 ) still allowing to ping 10.10.10.1, >> I need NetworkManager active, rather then network.service >> >> [root at hacontroller1 network-scripts]# systemctl status NetworkManager >> NetworkManager.service - Network Manager >> Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled) >> Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago >> Main PID: 808 (NetworkManager) >> CGroup: /system.slice/NetworkManager.service >> ?? 808 /usr/sbin/NetworkManager --no-daemon >> ??2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0... >> >> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: NetworkManager state is n...L >> Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s. >> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: (eth0): Activation: succe.... >> Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: startup complete >> >> [root at hacontroller1 network-scripts]# systemctl status network.service >> network.service - LSB: Bring up/down networking >> Loaded: loaded (/etc/rc.d/init.d/network) >> Active: inactive (dead) >> >> [root at hacontroller1 network-scripts]# cat ifcfg-eth0 >> TYPE="Ethernet" >> BOOTPROTO="static" >> NAME="eth0" >> DEVICE=eth0 >> ONBOOT="yes" >> >> [root at hacontroller1 network-scripts]# ping -c 3 10.10.10.1 >> PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data. 
>> 64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms >> 64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms >> 64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms >> >> --- 10.10.10.1 ping statistics --- >> 3 packets transmitted, 3 received, 0% packet loss, time 1999ms >> rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms >> >> If I disable NetworkManager and enable network this feature will be lost. Eth0 would have to have static IP or dhcp lease, >> to provide route to 10.10.10.0/24. >> >> Thank you. >> Boris. >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > OK, a few things here. First of all, you don't actually need to have an > IP address on the host system to use a VLAN or interface as an external > provider network. The Neutron router will have an IP on the right > network, and within its namespace will be able to reach the 10.10.10.x > network. > >> It looks to me like NetworkManager is running dhclient for eth0, even >> though you have BOOTPROTO="static". This is causing an IP address to be >> added to eth0, so you are able to ping 10.10.10.x from the host. When >> you turn off NetworkManager, this unexpected behavior goes away, *but >> you should still be able to use provider networks*. > > Here I am quoting Lars Kellogg Stedman > http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ > The bottom statement in blog post above states :- > "This assumes that eth1 is connected to a network using 10.1.0.0/24 and eth2 is connected to a network using 10.2.0.0/24, and that each network has a gateway sitting at the corresponding .1 address." Right, what Lars means is that eth1 is physically connected to a network with the 10.1.0.0/24 subnet, and eth2 is physically connected to a network with the 10.2.0.0/24 subnet. You might notice that in Lars's instructions, he never puts a host IP on either interface. >> Try creating a Neutron router with an IP on 10.10.10.x, and then you >> should be able to ping that network from the router namespace. > > " When I issue `neutron router-creater --ha True --tenant-id xxxxxx RouterHA` , i cannot specify router's > IP " Let me refer you to this page, which explains the basics of creating and managing Neutron networks: http://docs.openstack.org/user-guide/cli_create_and_manage_networks.html You will have to create an external network, which you will associate with a physical network via a bridge mapping. The default bridge mapping for br-ex is datacentre:br-ex. Using the name of the physical network "datacentre", we can create an external network: [If the external network is on VLAN 104] neutron net-create ext-net --router:external \ --provider:physical_network datacentre \ --provider:network_type vlan \ --provider:segmentation_id 104 [If the external net is on the native VLAN (flat)] neutron net-create ext-net --router:external \ --provider:physical_network datacentre \ --provider:network_type flat Next, you must create a subnet for the network, including the range of floating IPs (allocation pool): neutron subnet-create --name ext-subnet \ --enable_dhcp=False \ --allocation-pool start=10.10.10.50,end=10.10.10.100 \ --gateway 10.10.10.1 \ ext-net 10.10.10.0/24 Next, you have to create a router: neutron router-create ext-router You then add an interface to the router. 
Since Neutron will assign the first address in the subnet to the router by default (10.10.10.1), you will want to first create a port with a specific IP, then assign that port to the router. neutron port-create ext-net --fixed-ip ip_address=10.10.10.254 You will need to note the UUID of the newly created port. You can also see this with "neutron port-list". Now, create the router interface with the port you just created: neutron router-interface-add ext-router port= >> If you want to be able to ping 10.10.10.x from the host, then you >> should put either a static IP or DHCP on the bridge, not on eth0. This >> should work whether you are running NetworkManager or network.service. > > "I do can ping 10.0.0.x from F23 KVM Server (running cluster's VMs as Controller's nodes), > it's just usual non-default libvirt subnet,matching exactly external network creating in Javier's "Howto". > It was created via `virsh net-define openstackvms.xml`, but I cannot ping FIPs belong to > cloud VM on this subnet." I think you will have better luck once you create the external network and router. You can then use namespaces to ping the network from the router: First, obtain the qrouter- from the list of namespaces: sudo ip netns list Then, find the qrouter- and ping from there: ip netns exec qrouter-XXXX-XXXX-XXX-XXX ping 10.10.10.1 -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From dsneddon at redhat.com Fri Nov 13 21:10:26 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Fri, 13 Nov 2015 13:10:26 -0800 Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md In-Reply-To: <56464E7D.1020907@redhat.com> References: <56448D6D.90704@redhat.com>, <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com>, , <56463E27.7010909@redhat.com> <56464E7D.1020907@redhat.com> Message-ID: <564651C2.3070409@redhat.com> On 11/13/2015 12:56 PM, Dan Sneddon wrote: > Hi Boris, > > Let's keep this on-list, there may be others who are having similar > issues who could find this discussion useful. > > Answers inline... > > On 11/13/2015 12:17 PM, Boris Derzhavets wrote: >> >> >> ________________________________________ >> From: Dan Sneddon >> Sent: Friday, November 13, 2015 2:46 PM >> To: Boris Derzhavets; Javier Pena >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >> >> On 11/13/2015 11:38 AM, Boris Derzhavets wrote: >>> I understand that in usual situation , creating ifcfg-br-ex and ifcfg-eth2 ( as OVS bridge and OVS port) , >>> `service network restart` should be run to make eth2 (no IP) OVS port of br-ex (any IP which belongs ext net and is available) >>> What bad does NetworkManager when external network provider is used ? >>> Disabling it, I break routing via eth0's interfaces of cluster nodes to 10.10.10.0/24 ( ext net), >>> so nothing is supposed to work :- >>> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >>> http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html >>> Either I am missing something here. 
>>> ________________________________________ >>> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets >>> Sent: Friday, November 13, 2015 1:09 PM >>> To: Javier Pena >>> Cc: rdo-list at redhat.com >>> Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >>> >>> Working on this task I was able to build 3 node HAProxy/Keepalived Controller's cluster , create compute node , launch CirrOS VM, >>> However, I cannot ping floating IP of VM running on compute ( total 4 CentOS 7.1 VMs, nested kvm enabled ) >>> Looks like provider external networks doesn't work for me. >>> >>> But , to have eth0 without IP (due to `ovs-vsctl add-port br-eth0 eth0 ) still allowing to ping 10.10.10.1, >>> I need NetworkManager active, rather then network.service >>> >>> [root at hacontroller1 network-scripts]# systemctl status NetworkManager >>> NetworkManager.service - Network Manager >>> Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled) >>> Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago >>> Main PID: 808 (NetworkManager) >>> CGroup: /system.slice/NetworkManager.service >>> ?? 808 /usr/sbin/NetworkManager --no-daemon >>> ??2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0... >>> >>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: NetworkManager state is n...L >>> Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s. >>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: (eth0): Activation: succe.... >>> Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: startup complete >>> >>> [root at hacontroller1 network-scripts]# systemctl status network.service >>> network.service - LSB: Bring up/down networking >>> Loaded: loaded (/etc/rc.d/init.d/network) >>> Active: inactive (dead) >>> >>> [root at hacontroller1 network-scripts]# cat ifcfg-eth0 >>> TYPE="Ethernet" >>> BOOTPROTO="static" >>> NAME="eth0" >>> DEVICE=eth0 >>> ONBOOT="yes" >>> >>> [root at hacontroller1 network-scripts]# ping -c 3 10.10.10.1 >>> PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data. >>> 64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms >>> 64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms >>> 64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms >>> >>> --- 10.10.10.1 ping statistics --- >>> 3 packets transmitted, 3 received, 0% packet loss, time 1999ms >>> rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms >>> >>> If I disable NetworkManager and enable network this feature will be lost. Eth0 would have to have static IP or dhcp lease, >>> to provide route to 10.10.10.0/24. >>> >>> Thank you. >>> Boris. >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> OK, a few things here. First of all, you don't actually need to have an >> IP address on the host system to use a VLAN or interface as an external >> provider network. The Neutron router will have an IP on the right >> network, and within its namespace will be able to reach the 10.10.10.x >> network. >> >>> It looks to me like NetworkManager is running dhclient for eth0, even >>> though you have BOOTPROTO="static". This is causing an IP address to be >>> added to eth0, so you are able to ping 10.10.10.x from the host. 
When >>> you turn off NetworkManager, this unexpected behavior goes away, *but >>> you should still be able to use provider networks*. >> >> Here I am quoting Lars Kellogg Stedman >> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >> The bottom statement in blog post above states :- >> "This assumes that eth1 is connected to a network using 10.1.0.0/24 and eth2 is connected to a network using 10.2.0.0/24, and that each network has a gateway sitting at the corresponding .1 address." > > Right, what Lars means is that eth1 is physically connected to a > network with the 10.1.0.0/24 subnet, and eth2 is physically connected > to a network with the 10.2.0.0/24 subnet. > > You might notice that in Lars's instructions, he never puts a host IP > on either interface. > >>> Try creating a Neutron router with an IP on 10.10.10.x, and then you >>> should be able to ping that network from the router namespace. >> >> " When I issue `neutron router-creater --ha True --tenant-id xxxxxx RouterHA` , i cannot specify router's >> IP " > > Let me refer you to this page, which explains the basics of creating > and managing Neutron networks: > > http://docs.openstack.org/user-guide/cli_create_and_manage_networks.html > > You will have to create an external network, which you will associate > with a physical network via a bridge mapping. The default bridge > mapping for br-ex is datacentre:br-ex. > > Using the name of the physical network "datacentre", we can create an > external network: > > [If the external network is on VLAN 104] > neutron net-create ext-net --router:external \ > --provider:physical_network datacentre \ > --provider:network_type vlan \ > --provider:segmentation_id 104 > > [If the external net is on the native VLAN (flat)] > neutron net-create ext-net --router:external \ > --provider:physical_network datacentre \ > --provider:network_type flat > > Next, you must create a subnet for the network, including the range of > floating IPs (allocation pool): > > neutron subnet-create --name ext-subnet \ > --enable_dhcp=False \ > --allocation-pool start=10.10.10.50,end=10.10.10.100 \ > --gateway 10.10.10.1 \ > ext-net 10.10.10.0/24 > > Next, you have to create a router: > > neutron router-create ext-router > > You then add an interface to the router. Since Neutron will assign the > first address in the subnet to the router by default (10.10.10.1), you > will want to first create a port with a specific IP, then assign that > port to the router. > > neutron port-create ext-net --fixed-ip ip_address=10.10.10.254 > > You will need to note the UUID of the newly created port. You can also > see this with "neutron port-list". Now, create the router interface > with the port you just created: > > neutron router-interface-add ext-router port= > >>> If you want to be able to ping 10.10.10.x from the host, then you >>> should put either a static IP or DHCP on the bridge, not on eth0. This >>> should work whether you are running NetworkManager or network.service. >> >> "I do can ping 10.0.0.x from F23 KVM Server (running cluster's VMs as Controller's nodes), >> it's just usual non-default libvirt subnet,matching exactly external network creating in Javier's "Howto". >> It was created via `virsh net-define openstackvms.xml`, but I cannot ping FIPs belong to >> cloud VM on this subnet." > > I think you will have better luck once you create the external network > and router. 
You can then use namespaces to ping the network from the > router: > > First, obtain the qrouter- from the list of namespaces: > > sudo ip netns list > > Then, find the qrouter- and ping from there: > > ip netns exec qrouter-XXXX-XXXX-XXX-XXX ping 10.10.10.1 > One more quick thing to note: In order to use floating IPs, you will also have to attach the external router to the tenant networks where floating IPs will be used. When you go through the steps to create a tenant network, also attach it to the router: 1) Create the network: neutron net-create tenant-net-1 2) Create the subnet: neutron subnet-create --name tenant-subnet-1 tenant-net-1 172.21.0.0/22 3) Attach the external router to the network: neutron router-interface-add tenant-router-1 subnet=tenant-subnet-1 (since no specific port was given in the router-interface-add command, Neutron will automatically choose the first address in the given subnet, so 172.21.0.1 in this example) -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From bderzhavets at hotmail.com Sat Nov 14 08:35:33 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sat, 14 Nov 2015 08:35:33 +0000 Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md In-Reply-To: <564651C2.3070409@redhat.com> References: <56448D6D.90704@redhat.com>, <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com>, , <56463E27.7010909@redhat.com> <56464E7D.1020907@redhat.com>,<564651C2.3070409@redhat.com> Message-ID: ________________________________________ From: Dan Sneddon Sent: Friday, November 13, 2015 4:10 PM To: Boris Derzhavets; rdo-list at redhat.com Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md On 11/13/2015 12:56 PM, Dan Sneddon wrote: > Hi Boris, > > Let's keep this on-list, there may be others who are having similar > issues who could find this discussion useful. > > Answers inline... > > On 11/13/2015 12:17 PM, Boris Derzhavets wrote: >> >> >> ________________________________________ >> From: Dan Sneddon >> Sent: Friday, November 13, 2015 2:46 PM >> To: Boris Derzhavets; Javier Pena >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >> >> On 11/13/2015 11:38 AM, Boris Derzhavets wrote: >>> I understand that in usual situation , creating ifcfg-br-ex and ifcfg-eth2 ( as OVS bridge and OVS port) , >>> `service network restart` should be run to make eth2 (no IP) OVS port of br-ex (any IP which belongs ext net and is available) >>> What bad does NetworkManager when external network provider is used ? >>> Disabling it, I break routing via eth0's interfaces of cluster nodes to 10.10.10.0/24 ( ext net), >>> so nothing is supposed to work :- >>> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >>> http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html >>> Either I am missing something here. 
>>> ________________________________________ >>> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets >>> Sent: Friday, November 13, 2015 1:09 PM >>> To: Javier Pena >>> Cc: rdo-list at redhat.com >>> Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >>> >>> Working on this task I was able to build 3 node HAProxy/Keepalived Controller's cluster , create compute node , launch CirrOS VM, >>> However, I cannot ping floating IP of VM running on compute ( total 4 CentOS 7.1 VMs, nested kvm enabled ) >>> Looks like provider external networks doesn't work for me. >>> >>> But , to have eth0 without IP (due to `ovs-vsctl add-port br-eth0 eth0 ) still allowing to ping 10.10.10.1, >>> I need NetworkManager active, rather then network.service >>> >>> [root at hacontroller1 network-scripts]# systemctl status NetworkManager >>> NetworkManager.service - Network Manager >>> Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled) >>> Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago >>> Main PID: 808 (NetworkManager) >>> CGroup: /system.slice/NetworkManager.service >>> ?? 808 /usr/sbin/NetworkManager --no-daemon >>> ??2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0... >>> >>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: NetworkManager state is n...L >>> Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s. >>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: (eth0): Activation: succe.... >>> Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: startup complete >>> >>> [root at hacontroller1 network-scripts]# systemctl status network.service >>> network.service - LSB: Bring up/down networking >>> Loaded: loaded (/etc/rc.d/init.d/network) >>> Active: inactive (dead) >>> >>> [root at hacontroller1 network-scripts]# cat ifcfg-eth0 >>> TYPE="Ethernet" >>> BOOTPROTO="static" >>> NAME="eth0" >>> DEVICE=eth0 >>> ONBOOT="yes" >>> >>> [root at hacontroller1 network-scripts]# ping -c 3 10.10.10.1 >>> PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data. >>> 64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms >>> 64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms >>> 64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms >>> >>> --- 10.10.10.1 ping statistics --- >>> 3 packets transmitted, 3 received, 0% packet loss, time 1999ms >>> rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms >>> >>> If I disable NetworkManager and enable network this feature will be lost. Eth0 would have to have static IP or dhcp lease, >>> to provide route to 10.10.10.0/24. >>> >>> Thank you. >>> Boris. >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> OK, a few things here. First of all, you don't actually need to have an >> IP address on the host system to use a VLAN or interface as an external >> provider network. The Neutron router will have an IP on the right >> network, and within its namespace will be able to reach the 10.10.10.x >> network. >> >>> It looks to me like NetworkManager is running dhclient for eth0, even >>> though you have BOOTPROTO="static". This is causing an IP address to be >>> added to eth0, so you are able to ping 10.10.10.x from the host. 
When >>> you turn off NetworkManager, this unexpected behavior goes away, *but >>> you should still be able to use provider networks*. >> >> Here I am quoting Lars Kellogg Stedman >> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >> The bottom statement in blog post above states :- >> "This assumes that eth1 is connected to a network using 10.1.0.0/24 and eth2 is connected to a network using 10.2.0.0/24, and that each network has a gateway sitting at the corresponding .1 address." > > Right, what Lars means is that eth1 is physically connected to a > network with the 10.1.0.0/24 subnet, and eth2 is physically connected > to a network with the 10.2.0.0/24 subnet. > > You might notice that in Lars's instructions, he never puts a host IP > on either interface. > >>> Try creating a Neutron router with an IP on 10.10.10.x, and then you >>> should be able to ping that network from the router namespace. >> >> " When I issue `neutron router-creater --ha True --tenant-id xxxxxx RouterHA` , i cannot specify router's >> IP " > > Let me refer you to this page, which explains the basics of creating > and managing Neutron networks: > > http://docs.openstack.org/user-guide/cli_create_and_manage_networks.html > > You will have to create an external network, which you will associate > with a physical network via a bridge mapping. The default bridge > mapping for br-ex is datacentre:br-ex. > > Using the name of the physical network "datacentre", we can create an 1. Javier is using external network provider ( and so did I , following him) #. /root/keystonerc_admin # neutron net-create public --provider:network_type flat --provider:physical_network physnet1 --router:external # neutron subnet-create --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.150 --disable-dhcp --name public_subnet public 10.10.10.0/24 HA Neutron router and tenant's subnet have been created. Then interface to tenant's network was activated as well as gateway to public. Security rules were implemented as usual. Cloud VM was launched, it obtained private IP and committed cloud-init OK. Then I assigned FIP from public to cloud VM , it should be ping able from from F23 Visualization Host 2. All traffic to/from external network flows through br-int when provider external networks has been involved. No br-ex is needed. When in Javier does `ovs-vsctl add-port br-eth0 eth0` , eth0 (which is inside VM ,running Controller node) should be on 10.10.10.X/24. It doesn't happen when service network is active (and NM disabled) . In this case eth0 doesn't have any kind of IP assigned to provide route to Libvirt's subnet 10.10.10.X/24 ( pre created by myself) In meantime I am under impression that ovs bridge br-eth0 and OVS port eth0 would work when IP is assigned to port eth0, not to bridge. OVS release =>2.3.1 seems to allow that. 
Tested here (VM's case ) :- http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html If neither one of br-eth0 and eth0 would have IP then packets won't be forwarded to external net > external network: > > [If the external network is on VLAN 104] > neutron net-create ext-net --router:external \ > --provider:physical_network datacentre \ > --provider:network_type vlan \ > --provider:segmentation_id 104 > > [If the external net is on the native VLAN (flat)] > neutron net-create ext-net --router:external \ > --provider:physical_network datacentre \ > --provider:network_type flat > > Next, you must create a subnet for the network, including the range of > floating IPs (allocation pool): > > neutron subnet-create --name ext-subnet \ > --enable_dhcp=False \ > --allocation-pool start=10.10.10.50,end=10.10.10.100 \ > --gateway 10.10.10.1 \ > ext-net 10.10.10.0/24 > > Next, you have to create a router: > > neutron router-create ext-router > > You then add an interface to the router. Since Neutron will assign the > first address in the subnet to the router by default (10.10.10.1), you > will want to first create a port with a specific IP, then assign that > port to the router. > > neutron port-create ext-net --fixed-ip ip_address=10.10.10.254 > > You will need to note the UUID of the newly created port. You can also > see this with "neutron port-list". Now, create the router interface > with the port you just created: > > neutron router-interface-add ext-router port= > >>> If you want to be able to ping 10.10.10.x from the host, then you >>> should put either a static IP or DHCP on the bridge, not on eth0. This >>> should work whether you are running NetworkManager or network.service. >> >> "I do can ping 10.0.0.x from F23 KVM Server (running cluster's VMs as Controller's nodes), >> it's just usual non-default libvirt subnet,matching exactly external network creating in Javier's "Howto". >> It was created via `virsh net-define openstackvms.xml`, but I cannot ping FIPs belong to >> cloud VM on this subnet." > > I think you will have better luck once you create the external network > and router. You can then use namespaces to ping the network from the > router: > > First, obtain the qrouter- from the list of namespaces: > > sudo ip netns list > > Then, find the qrouter- and ping from there: > > ip netns exec qrouter-XXXX-XXXX-XXX-XXX ping 10.10.10.1 > One more quick thing to note: In order to use floating IPs, you will also have to attach the external router to the tenant networks where floating IPs will be used. 
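For completeness, once the external network exists and the router is attached to a tenant network (the tenant-network steps follow below), allocating a floating IP and binding it to an instance port is roughly the following; the UUIDs are placeholders you would take from the command output:

neutron floatingip-create ext-net

neutron port-list
(note the UUID of the instance's port on the tenant network)

neutron floatingip-associate FLOATING_IP_UUID INSTANCE_PORT_UUID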
When you go through the steps to create a tenant network, also attach it to the router: 1) Create the network: neutron net-create tenant-net-1 2) Create the subnet: neutron subnet-create --name tenant-subnet-1 tenant-net-1 172.21.0.0/22 3) Attach the external router to the network: neutron router-interface-add tenant-router-1 subnet=tenant-subnet-1 (since no specific port was given in the router-interface-add command, Neutron will automatically choose the first address in the given subnet, so 172.21.0.1 in this example) -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From alessandro at namecheap.com Sat Nov 14 09:26:06 2015 From: alessandro at namecheap.com (Alessandro Vozza) Date: Sat, 14 Nov 2015 10:26:06 +0100 Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md In-Reply-To: References: <56448D6D.90704@redhat.com> <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com> <56463E27.7010909@redhat.com> <56464E7D.1020907@redhat.com> <564651C2.3070409@redhat.com> Message-ID: I happened to have deployed following the exact same guide a cloud on bare metal composed of 3 controllers, 2 haproxy nodes and N computes, with external provider networks. What I did was: -) (at https://github.com/beekhof/osp-ha-deploy/blob/master/keepalived/neutron-config.md): On controller and compute, define the external vlan interface and its bridge: cat < /etc/sysconfig/network-scripts/ifcfg-bond0.102 DEVICE=bond0.102 ONBOOT=yes DEVICETYPE=ovs TYPE=OVSPort OVS_BRIDGE=br-bond0.102 ONBOOT=yes BOOTPROTO=none VLAN=yes MTU="1500" NM_CONTROLLED=no EOF cat < /etc/sysconfig/network-scripts/ifcfg-br-bond0.102 DEVICE=br-bond0.102 DEVICETYPE=ovs OVSBOOTPROTO=none TYPE=OVSBridge ONBOOT=yes BOOTPROTO=static MTU="1500" NM_CONTROLLED=no EOF then, make sure that everywhere exists /etc/neutron/plugins/ml2/openvswitch_agent.ini as: [ovs] enable_tunneling = True tunnel_id_ranges = 1:1000 tenant_network_type = vxlan integration_bridge = br-int tunnel_bridge = br-tun local_ip = bridge_mappings = physnet1:br-bond0.102 network_vlan_ranges = physnet1 [agent] tunnel_types = vxlan vxlan_udp_port = 4789 l2_population = False [securitygroup] firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver restart neutron-openvswitch-agent everywhere to make it work: Hope it helps Alessandro > On 14 Nov 2015, at 09:35, Boris Derzhavets wrote: > > > > ________________________________________ > From: Dan Sneddon > > Sent: Friday, November 13, 2015 4:10 PM > To: Boris Derzhavets; rdo-list at redhat.com > Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md > > On 11/13/2015 12:56 PM, Dan Sneddon wrote: >> Hi Boris, >> >> Let's keep this on-list, there may be others who are having similar >> issues who could find this discussion useful. >> >> Answers inline... 
>> >> On 11/13/2015 12:17 PM, Boris Derzhavets wrote: >>> >>> >>> ________________________________________ >>> From: Dan Sneddon >>> Sent: Friday, November 13, 2015 2:46 PM >>> To: Boris Derzhavets; Javier Pena >>> Cc: rdo-list at redhat.com >>> Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >>> >>> On 11/13/2015 11:38 AM, Boris Derzhavets wrote: >>>> I understand that in usual situation , creating ifcfg-br-ex and ifcfg-eth2 ( as OVS bridge and OVS port) , >>>> `service network restart` should be run to make eth2 (no IP) OVS port of br-ex (any IP which belongs ext net and is available) >>>> What bad does NetworkManager when external network provider is used ? >>>> Disabling it, I break routing via eth0's interfaces of cluster nodes to 10.10.10.0/24 ( ext net), >>>> so nothing is supposed to work :- >>>> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >>>> http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html >>>> Either I am missing something here. >>>> ________________________________________ >>>> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets >>>> Sent: Friday, November 13, 2015 1:09 PM >>>> To: Javier Pena >>>> Cc: rdo-list at redhat.com >>>> Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >>>> >>>> Working on this task I was able to build 3 node HAProxy/Keepalived Controller's cluster , create compute node , launch CirrOS VM, >>>> However, I cannot ping floating IP of VM running on compute ( total 4 CentOS 7.1 VMs, nested kvm enabled ) >>>> Looks like provider external networks doesn't work for me. >>>> >>>> But , to have eth0 without IP (due to `ovs-vsctl add-port br-eth0 eth0 ) still allowing to ping 10.10.10.1, >>>> I need NetworkManager active, rather then network.service >>>> >>>> [root at hacontroller1 network-scripts]# systemctl status NetworkManager >>>> NetworkManager.service - Network Manager >>>> Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled) >>>> Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago >>>> Main PID: 808 (NetworkManager) >>>> CGroup: /system.slice/NetworkManager.service >>>> ?? 808 /usr/sbin/NetworkManager --no-daemon >>>> ??2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0... >>>> >>>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: NetworkManager state is n...L >>>> Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s. >>>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: (eth0): Activation: succe.... >>>> Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: startup complete >>>> >>>> [root at hacontroller1 network-scripts]# systemctl status network.service >>>> network.service - LSB: Bring up/down networking >>>> Loaded: loaded (/etc/rc.d/init.d/network) >>>> Active: inactive (dead) >>>> >>>> [root at hacontroller1 network-scripts]# cat ifcfg-eth0 >>>> TYPE="Ethernet" >>>> BOOTPROTO="static" >>>> NAME="eth0" >>>> DEVICE=eth0 >>>> ONBOOT="yes" >>>> >>>> [root at hacontroller1 network-scripts]# ping -c 3 10.10.10.1 >>>> PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data. 
>>>> 64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms >>>> 64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms >>>> 64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms >>>> >>>> --- 10.10.10.1 ping statistics --- >>>> 3 packets transmitted, 3 received, 0% packet loss, time 1999ms >>>> rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms >>>> >>>> If I disable NetworkManager and enable network this feature will be lost. Eth0 would have to have static IP or dhcp lease, >>>> to provide route to 10.10.10.0/24. >>>> >>>> Thank you. >>>> Boris. >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> >>> >>> OK, a few things here. First of all, you don't actually need to have an >>> IP address on the host system to use a VLAN or interface as an external >>> provider network. The Neutron router will have an IP on the right >>> network, and within its namespace will be able to reach the 10.10.10.x >>> network. >>> >>>> It looks to me like NetworkManager is running dhclient for eth0, even >>>> though you have BOOTPROTO="static". This is causing an IP address to be >>>> added to eth0, so you are able to ping 10.10.10.x from the host. When >>>> you turn off NetworkManager, this unexpected behavior goes away, *but >>>> you should still be able to use provider networks*. >>> >>> Here I am quoting Lars Kellogg Stedman >>> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >>> The bottom statement in blog post above states :- >>> "This assumes that eth1 is connected to a network using 10.1.0.0/24 and eth2 is connected to a network using 10.2.0.0/24, and that each network has a gateway sitting at the corresponding .1 address." >> >> Right, what Lars means is that eth1 is physically connected to a >> network with the 10.1.0.0/24 subnet, and eth2 is physically connected >> to a network with the 10.2.0.0/24 subnet. >> >> You might notice that in Lars's instructions, he never puts a host IP >> on either interface. >> >>>> Try creating a Neutron router with an IP on 10.10.10.x, and then you >>>> should be able to ping that network from the router namespace. >>> >>> " When I issue `neutron router-creater --ha True --tenant-id xxxxxx RouterHA` , i cannot specify router's >>> IP " >> >> Let me refer you to this page, which explains the basics of creating >> and managing Neutron networks: >> >> http://docs.openstack.org/user-guide/cli_create_and_manage_networks.html >> >> You will have to create an external network, which you will associate >> with a physical network via a bridge mapping. The default bridge >> mapping for br-ex is datacentre:br-ex. >> >> Using the name of the physical network "datacentre", we can create an > > 1. Javier is using external network provider ( and so did I , following him) > > #. /root/keystonerc_admin > # neutron net-create public --provider:network_type flat --provider:physical_network physnet1 --router:external > # neutron subnet-create --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.150 --disable-dhcp --name public_subnet public 10.10.10.0/24 > > HA Neutron router and tenant's subnet have been created. > Then interface to tenant's network was activated as well as gateway to public. > Security rules were implemented as usual. > Cloud VM was launched, it obtained private IP and committed cloud-init OK. 
> Then I assigned FIP from public to cloud VM , it should be ping able from from F23 Visualization > Host > > 2. All traffic to/from external network flows through br-int when provider external networks has been involved. No br-ex is needed. > When in Javier does `ovs-vsctl add-port br-eth0 eth0` , eth0 (which is inside VM ,running Controller node) > should be on 10.10.10.X/24. It doesn't happen when service network is active (and NM disabled) . > In this case eth0 doesn't have any kind of IP assigned to provide route to Libvirt's subnet 10.10.10.X/24 ( pre created by myself) > > In meantime I am under impression that ovs bridge br-eth0 and OVS port eth0 > would work when IP is assigned to port eth0, not to bridge. OVS release =>2.3.1 seems to allow that. > Tested here (VM's case ) :- http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html > If neither one of br-eth0 and eth0 would have IP then packets won't be forwarded to external net > > >> external network: >> >> [If the external network is on VLAN 104] >> neutron net-create ext-net --router:external \ >> --provider:physical_network datacentre \ >> --provider:network_type vlan \ >> --provider:segmentation_id 104 >> >> [If the external net is on the native VLAN (flat)] >> neutron net-create ext-net --router:external \ >> --provider:physical_network datacentre \ >> --provider:network_type flat >> >> Next, you must create a subnet for the network, including the range of >> floating IPs (allocation pool): >> >> neutron subnet-create --name ext-subnet \ >> --enable_dhcp=False \ >> --allocation-pool start=10.10.10.50,end=10.10.10.100 \ >> --gateway 10.10.10.1 \ >> ext-net 10.10.10.0/24 >> >> Next, you have to create a router: >> >> neutron router-create ext-router >> >> You then add an interface to the router. Since Neutron will assign the >> first address in the subnet to the router by default (10.10.10.1), you >> will want to first create a port with a specific IP, then assign that >> port to the router. >> >> neutron port-create ext-net --fixed-ip ip_address=10.10.10.254 >> >> You will need to note the UUID of the newly created port. You can also >> see this with "neutron port-list". Now, create the router interface >> with the port you just created: >> >> neutron router-interface-add ext-router port= >> >>>> If you want to be able to ping 10.10.10.x from the host, then you >>>> should put either a static IP or DHCP on the bridge, not on eth0. This >>>> should work whether you are running NetworkManager or network.service. >>> >>> "I do can ping 10.0.0.x from F23 KVM Server (running cluster's VMs as Controller's nodes), >>> it's just usual non-default libvirt subnet,matching exactly external network creating in Javier's "Howto". >>> It was created via `virsh net-define openstackvms.xml`, but I cannot ping FIPs belong to >>> cloud VM on this subnet." >> >> I think you will have better luck once you create the external network >> and router. You can then use namespaces to ping the network from the >> router: >> >> First, obtain the qrouter- from the list of namespaces: >> >> sudo ip netns list >> >> Then, find the qrouter- and ping from there: >> >> ip netns exec qrouter-XXXX-XXXX-XXX-XXX ping 10.10.10.1 >> > > One more quick thing to note: > > In order to use floating IPs, you will also have to attach the external > router to the tenant networks where floating IPs will be used. 
> > When you go through the steps to create a tenant network, also attach > it to the router: > > 1) Create the network: > > neutron net-create tenant-net-1 > > 2) Create the subnet: > > neutron subnet-create --name tenant-subnet-1 tenant-net-1 172.21.0.0/22 > > 3) Attach the external router to the network: > > neutron router-interface-add tenant-router-1 subnet=tenant-subnet-1 > > (since no specific port was given in the router-interface-add command, > Neutron will automatically choose the first address in the given > subnet, so 172.21.0.1 in this example) > > -- > Dan Sneddon | Principal OpenStack Engineer > dsneddon at redhat.com | redhat.com/openstack > 650.254.4025 | dsneddon:irc @dxs:twitter > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PastedGraphic-1.png Type: image/png Size: 25835 bytes Desc: not available URL: From ashraf.hassan at t-mobile.nl Sat Nov 14 11:17:51 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Sat, 14 Nov 2015 12:17:51 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> Message-ID: Thank you so much for your help for the previous steps, it has been a long time, and now I am trying to upload the images, I am executing the command, which is the next step for me (as per the link https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html ) : [stack at rdo01 ~]$ openstack overcloud image upload But I am getting back: Missing parameter(s): Set a username with --os-username, OS_USERNAME, or auth.username Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url Set a scope, such as a project or domain, set a project scope with --os-project- name, OS_PROJECT_NAME or auth.project_name, set a domain scope with --os-domain- name, OS_DOMAIN_NAME or auth.domain_name Do you prefer to start a new mailing thread? Met vriendelijke groet, Ashraf Hassan Sr. VAS System Analyst Telefoon mobiel: +316 2409 5907 E-mail: ashraf.hassan at t-mobile.nl T-Mobile Netherlands BV ??????????????????????????????????????????????????????????????????????????????? Waldorpstraat 60 2521 CC Den Haag http://www.t-mobile.nl? http://www.facebook.com/tmobilenl https://twitter.com/TMobile_NL ? Life is for sharing. ? PLEASE CONSIDER THE ENVIRONMENT?BEFORE PRINTING THIS EMAIL -----Original Message----- From: Marius Cornea [mailto:marius at remote-lab.net] Sent: Sunday, November 08, 2015 10:23 PM To: Hassan, Ashraf Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] Trying to install the RDO If the overcloud-full, deploy and ironic python agent images were created I'd say you're ready for the next steps. > On 08 Nov 2015, at 22:15, Hassan, Ashraf wrote: > > Now I have managed to build the images , but I got 2 errors, one is "Grubby fatal error" and the other if the failure to download the delta packages, these errors occurred many times during the building process, and I am not sure how serious they are, what do you? Is it safe to upload the images? As I can from the next steps it will be difficult to make the rerun. 
> Ending of the build process: http://pastebin.com/ZLmWhzxb Grubby > error: http://pastebin.com/gDckQW9D Delta packages errors: > http://pastebin.com/ViTYbibr Deploy overcloud log: > http://pastebin.com/ujFakbvW > > Thanks, > > Ashraf Hassan > > ********************************************************************** > ********** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke > VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > > This e-mail and its contents are subject to a DISCLAIMER with > important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > > ********************************************************************** > ********** ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** From marius at remote-lab.net Sat Nov 14 11:24:32 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sat, 14 Nov 2015 12:24:32 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> Message-ID: This means that you need to load the undercloud credentials (source stackrc). On Sat, Nov 14, 2015 at 12:17 PM, Hassan, Ashraf wrote: > Thank you so much for your help for the previous steps, it has been a long time, and now I am trying to upload the images, I am executing the command, which is the next step for me (as per the link https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html ) : > [stack at rdo01 ~]$ openstack overcloud image upload > But I am getting back: > Missing parameter(s): > Set a username with --os-username, OS_USERNAME, or auth.username > Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url > Set a scope, such as a project or domain, set a project scope with --os-project- name, OS_PROJECT_NAME or auth.project_name, set a domain scope with --os-domain- name, OS_DOMAIN_NAME or auth.domain_name > Do you prefer to start a new mailing thread? > > > Met vriendelijke groet, > > Ashraf Hassan > Sr. VAS System Analyst > Telefoon mobiel: +316 2409 5907 > E-mail: ashraf.hassan at t-mobile.nl > > T-Mobile Netherlands BV > > Waldorpstraat 60 > 2521 CC Den Haag > http://www.t-mobile.nl > http://www.facebook.com/tmobilenl > https://twitter.com/TMobile_NL > > Life is for sharing. > > ? PLEASE CONSIDER THE ENVIRONMENT BEFORE PRINTING THIS EMAIL > > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Sunday, November 08, 2015 10:23 PM > To: Hassan, Ashraf > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] Trying to install the RDO > > If the overcloud-full, deploy and ironic python agent images were created I'd say you're ready for the next steps. > >> On 08 Nov 2015, at 22:15, Hassan, Ashraf wrote: >> >> Now I have managed to build the images , but I got 2 errors, one is "Grubby fatal error" and the other if the failure to download the delta packages, these errors occurred many times during the building process, and I am not sure how serious they are, what do you? Is it safe to upload the images? As I can from the next steps it will be difficult to make the rerun. 
>> Ending of the build process: http://pastebin.com/ZLmWhzxb Grubby >> error: http://pastebin.com/gDckQW9D Delta packages errors: >> http://pastebin.com/ViTYbibr Deploy overcloud log: >> http://pastebin.com/ujFakbvW >> >> Thanks, >> >> Ashraf Hassan >> >> ********************************************************************** >> ********** >> >> N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke >> VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer >> >> >> This e-mail and its contents are subject to a DISCLAIMER with >> important RESERVATIONS: see http://www.t-mobile.nl/disclaimer >> >> >> >> ********************************************************************** >> ********** > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** From bderzhavets at hotmail.com Sat Nov 14 12:58:54 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sat, 14 Nov 2015 12:58:54 +0000 Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md In-Reply-To: References: <56448D6D.90704@redhat.com> <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com> <56463E27.7010909@redhat.com> <56464E7D.1020907@redhat.com> <564651C2.3070409@redhat.com> , Message-ID: ________________________________ From: Alessandro Vozza Sent: Saturday, November 14, 2015 4:26 AM To: Boris Derzhavets Cc: Dan Sneddon; rdo-list at redhat.com Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md I happened to have deployed following the exact same guide a cloud on bare metal composed of 3 controllers, 2 haproxy nodes and N computes, with external provider networks. What I did was: -) (at https://github.com/beekhof/osp-ha-deploy/blob/master/keepalived/neutron-config.md): [https://avatars0.githubusercontent.com/u/114726?v=3&s=400] beekhof/osp-ha-deploy osp-ha-deploy - scripts for deploying a HA install of OSP Read more... 
On controller and compute, define the external vlan interface and its bridge: cat < /etc/sysconfig/network-scripts/ifcfg-bond0.102 DEVICE=bond0.102 ONBOOT=yes DEVICETYPE=ovs TYPE=OVSPort OVS_BRIDGE=br-bond0.102 ONBOOT=yes BOOTPROTO=none VLAN=yes MTU="1500" NM_CONTROLLED=no EOF cat < /etc/sysconfig/network-scripts/ifcfg-br-bond0.102 DEVICE=br-bond0.102 DEVICETYPE=ovs OVSBOOTPROTO=none TYPE=OVSBridge ONBOOT=yes BOOTPROTO=static MTU="1500" NM_CONTROLLED=no EOF then, make sure that everywhere exists /etc/neutron/plugins/ml2/openvswitch_agent.ini as: [ovs] enable_tunneling = True tunnel_id_ranges = 1:1000 tenant_network_type = vxlan integration_bridge = br-int tunnel_bridge = br-tun local_ip = bridge_mappings = physnet1:br-bond0.102 network_vlan_ranges = physnet1 [agent] tunnel_types = vxlan vxlan_udp_port = 4789 l2_population = False [securitygroup] firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver restart neutron-openvswitch-agent everywhere to make it work: [cid:762776E3-E3BC-4FA0-B1CF-3DAC1E1EBD91 at fritz.box] Hope it helps Alessandro > I would guess , that external net was created similar to this way controller# neutron net-create --router:external=True \ --provider:network_type=vlan --provider:segmentation_id=102 ext-network controller# neutron subnet-create --name ext-subnet --disable-dhcp \ --allocation-pool start 10.10.10.100,end=10.10.10.150 \ ext_network 10.10.10.0/24 of VLAN type not FLAT Could you be so kind to share yours ml2_conf.ini on controllers in cluster. I also believe that tuning your switch contains a kind of :- switchport trunk allowed vlan 100,102,104 Thank you Boris. > 14 Nov 2015, at 09:35, Boris Derzhavets > wrote: ________________________________________ From: Dan Sneddon > Sent: Friday, November 13, 2015 4:10 PM To: Boris Derzhavets; rdo-list at redhat.com Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md On 11/13/2015 12:56 PM, Dan Sneddon wrote: Hi Boris, Let's keep this on-list, there may be others who are having similar issues who could find this discussion useful. Answers inline... On 11/13/2015 12:17 PM, Boris Derzhavets wrote: ________________________________________ From: Dan Sneddon > Sent: Friday, November 13, 2015 2:46 PM To: Boris Derzhavets; Javier Pena Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md On 11/13/2015 11:38 AM, Boris Derzhavets wrote: I understand that in usual situation , creating ifcfg-br-ex and ifcfg-eth2 ( as OVS bridge and OVS port) , `service network restart` should be run to make eth2 (no IP) OVS port of br-ex (any IP which belongs ext net and is available) What bad does NetworkManager when external network provider is used ? Disabling it, I break routing via eth0's interfaces of cluster nodes to 10.10.10.0/24 ( ext net), so nothing is supposed to work :- http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html Either I am missing something here. 
________________________________________ From: rdo-list-bounces at redhat.com > on behalf of Boris Derzhavets > Sent: Friday, November 13, 2015 1:09 PM To: Javier Pena Cc: rdo-list at redhat.com Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md Working on this task I was able to build 3 node HAProxy/Keepalived Controller's cluster , create compute node , launch CirrOS VM, However, I cannot ping floating IP of VM running on compute ( total 4 CentOS 7.1 VMs, nested kvm enabled ) Looks like provider external networks doesn't work for me. But , to have eth0 without IP (due to `ovs-vsctl add-port br-eth0 eth0 ) still allowing to ping 10.10.10.1, I need NetworkManager active, rather then network.service [root at hacontroller1 network-scripts]# systemctl status NetworkManager NetworkManager.service - Network Manager Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled) Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago Main PID: 808 (NetworkManager) CGroup: /system.slice/NetworkManager.service ?? 808 /usr/sbin/NetworkManager --no-daemon ??2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0... Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: NetworkManager state is n...L Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s. Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: (eth0): Activation: succe.... Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: startup complete [root at hacontroller1 network-scripts]# systemctl status network.service network.service - LSB: Bring up/down networking Loaded: loaded (/etc/rc.d/init.d/network) Active: inactive (dead) [root at hacontroller1 network-scripts]# cat ifcfg-eth0 TYPE="Ethernet" BOOTPROTO="static" NAME="eth0" DEVICE=eth0 ONBOOT="yes" [root at hacontroller1 network-scripts]# ping -c 3 10.10.10.1 PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data. 64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms 64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms 64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms --- 10.10.10.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms If I disable NetworkManager and enable network this feature will be lost. Eth0 would have to have static IP or dhcp lease, to provide route to 10.10.10.0/24. Thank you. Boris. _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com OK, a few things here. First of all, you don't actually need to have an IP address on the host system to use a VLAN or interface as an external provider network. The Neutron router will have an IP on the right network, and within its namespace will be able to reach the 10.10.10.x network. It looks to me like NetworkManager is running dhclient for eth0, even though you have BOOTPROTO="static". This is causing an IP address to be added to eth0, so you are able to ping 10.10.10.x from the host. When you turn off NetworkManager, this unexpected behavior goes away, *but you should still be able to use provider networks*. 
Here I am quoting Lars Kellogg Stedman http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ The bottom statement in blog post above states :- "This assumes that eth1 is connected to a network using 10.1.0.0/24 and eth2 is connected to a network using 10.2.0.0/24, and that each network has a gateway sitting at the corresponding .1 address." Right, what Lars means is that eth1 is physically connected to a network with the 10.1.0.0/24 subnet, and eth2 is physically connected to a network with the 10.2.0.0/24 subnet. You might notice that in Lars's instructions, he never puts a host IP on either interface. Try creating a Neutron router with an IP on 10.10.10.x, and then you should be able to ping that network from the router namespace. " When I issue `neutron router-creater --ha True --tenant-id xxxxxx RouterHA` , i cannot specify router's IP " Let me refer you to this page, which explains the basics of creating and managing Neutron networks: http://docs.openstack.org/user-guide/cli_create_and_manage_networks.html You will have to create an external network, which you will associate with a physical network via a bridge mapping. The default bridge mapping for br-ex is datacentre:br-ex. Using the name of the physical network "datacentre", we can create an 1. Javier is using external network provider ( and so did I , following him) #. /root/keystonerc_admin # neutron net-create public --provider:network_type flat --provider:physical_network physnet1 --router:external # neutron subnet-create --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.150 --disable-dhcp --name public_subnet public 10.10.10.0/24 HA Neutron router and tenant's subnet have been created. Then interface to tenant's network was activated as well as gateway to public. Security rules were implemented as usual. Cloud VM was launched, it obtained private IP and committed cloud-init OK. Then I assigned FIP from public to cloud VM , it should be ping able from from F23 Visualization Host 2. All traffic to/from external network flows through br-int when provider external networks has been involved. No br-ex is needed. When in Javier does `ovs-vsctl add-port br-eth0 eth0` , eth0 (which is inside VM ,running Controller node) should be on 10.10.10.X/24. It doesn't happen when service network is active (and NM disabled) . In this case eth0 doesn't have any kind of IP assigned to provide route to Libvirt's subnet 10.10.10.X/24 ( pre created by myself) In meantime I am under impression that ovs bridge br-eth0 and OVS port eth0 would work when IP is assigned to port eth0, not to bridge. OVS release =>2.3.1 seems to allow that. 
Tested here (VM's case ) :- http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html If neither one of br-eth0 and eth0 would have IP then packets won't be forwarded to external net external network: [If the external network is on VLAN 104] neutron net-create ext-net --router:external \ --provider:physical_network datacentre \ --provider:network_type vlan \ --provider:segmentation_id 104 [If the external net is on the native VLAN (flat)] neutron net-create ext-net --router:external \ --provider:physical_network datacentre \ --provider:network_type flat Next, you must create a subnet for the network, including the range of floating IPs (allocation pool): neutron subnet-create --name ext-subnet \ --enable_dhcp=False \ --allocation-pool start=10.10.10.50,end=10.10.10.100 \ --gateway 10.10.10.1 \ ext-net 10.10.10.0/24 Next, you have to create a router: neutron router-create ext-router You then add an interface to the router. Since Neutron will assign the first address in the subnet to the router by default (10.10.10.1), you will want to first create a port with a specific IP, then assign that port to the router. neutron port-create ext-net --fixed-ip ip_address=10.10.10.254 You will need to note the UUID of the newly created port. You can also see this with "neutron port-list". Now, create the router interface with the port you just created: neutron router-interface-add ext-router port= If you want to be able to ping 10.10.10.x from the host, then you should put either a static IP or DHCP on the bridge, not on eth0. This should work whether you are running NetworkManager or network.service. "I do can ping 10.0.0.x from F23 KVM Server (running cluster's VMs as Controller's nodes), it's just usual non-default libvirt subnet,matching exactly external network creating in Javier's "Howto". It was created via `virsh net-define openstackvms.xml`, but I cannot ping FIPs belong to cloud VM on this subnet." I think you will have better luck once you create the external network and router. You can then use namespaces to ping the network from the router: First, obtain the qrouter- from the list of namespaces: sudo ip netns list Then, find the qrouter- and ping from there: ip netns exec qrouter-XXXX-XXXX-XXX-XXX ping 10.10.10.1 One more quick thing to note: In order to use floating IPs, you will also have to attach the external router to the tenant networks where floating IPs will be used. When you go through the steps to create a tenant network, also attach it to the router: 1) Create the network: neutron net-create tenant-net-1 2) Create the subnet: neutron subnet-create --name tenant-subnet-1 tenant-net-1 172.21.0.0/22 3) Attach the external router to the network: neutron router-interface-add tenant-router-1 subnet=tenant-subnet-1 (since no specific port was given in the router-interface-add command, Neutron will automatically choose the first address in the given subnet, so 172.21.0.1 in this example) -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: PastedGraphic-1.png Type: image/png Size: 25835 bytes Desc: PastedGraphic-1.png URL: From ashraf.hassan at t-mobile.nl Sat Nov 14 13:35:55 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Sat, 14 Nov 2015 14:35:55 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> Message-ID: Hi Marius, Now I have uploaded the images, but when I try register the baremetal it cannot find instackenv.json , when I search the file I do not find anywhere, what do you think? [stack at rdo01 ~]$ openstack baremetal import --json instackenv.json usage: openstack baremetal import [-h] [-s SERVICE_HOST] [--json] [--csv] file_in openstack baremetal import: error: argument file_in: can't open 'instackenv.json': [Errno 2] No such file or directory: 'instackenv.json' Ashraf Hassan ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** From marius at remote-lab.net Sat Nov 14 13:49:38 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sat, 14 Nov 2015 14:49:38 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> Message-ID: You need to build it as described in the documentation: https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/environments/baremetal.html On Sat, Nov 14, 2015 at 2:35 PM, Hassan, Ashraf wrote: > Hi Marius, > Now I have uploaded the images, but when I try register the baremetal it cannot find instackenv.json , when I search the file I do not find anywhere, what do you think? 
> [stack at rdo01 ~]$ openstack baremetal import --json instackenv.json > usage: openstack baremetal import [-h] [-s SERVICE_HOST] [--json] [--csv] > file_in > openstack baremetal import: error: argument file_in: can't open 'instackenv.json': [Errno 2] No such file or directory: 'instackenv.json' > > Ashraf Hassan > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** From bderzhavets at hotmail.com Sat Nov 14 14:03:53 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sat, 14 Nov 2015 14:03:53 +0000 Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md In-Reply-To: References: <56448D6D.90704@redhat.com> <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com> <56463E27.7010909@redhat.com> <56464E7D.1020907@redhat.com> <564651C2.3070409@redhat.com> , , Message-ID: ________________________________ From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets Sent: Saturday, November 14, 2015 7:58 AM To: Alessandro Vozza Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md ________________________________ From: Alessandro Vozza Sent: Saturday, November 14, 2015 4:26 AM To: Boris Derzhavets Cc: Dan Sneddon; rdo-list at redhat.com Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md I happened to have deployed following the exact same guide a cloud on bare metal composed of 3 controllers, 2 haproxy nodes and N computes, with external provider networks. What I did was: -) (at https://github.com/beekhof/osp-ha-deploy/blob/master/keepalived/neutron-config.md): [https://avatars0.githubusercontent.com/u/114726?v=3&s=400] beekhof/osp-ha-deploy osp-ha-deploy - scripts for deploying a HA install of OSP Read more... Actually, using bondings and external network provider of VLAN type, controller nodes may have 3 VLAN's : bond0.100(management network), bond0.101 (tunnel network) and bond0.102(external-network). Then tune haproxy with keepalived for a virtual ip setup on bond0.100 This excellent approach, but question which is my concern is a bit different. 
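Just to spell out the layout I mean (the addresses and keepalived values below are placeholders I made up; only the VLAN ID matches the example above): each controller would carry a VLAN sub-interface per network, e.g. for management

# /etc/sysconfig/network-scripts/ifcfg-bond0.100
DEVICE=bond0.100
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.100.11
PREFIX=24
VLAN=yes
NM_CONTROLLED=no

and keepalived would hold the shared virtual IP on that interface, roughly

vrrp_instance VIP_MGMT {
    state MASTER
    interface bond0.100
    virtual_router_id 51
    priority 101
    advert_int 1
    virtual_ipaddress {
        192.168.100.10/24
    }
}

with haproxy then binding its frontends to 192.168.100.10.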
Thanks Boris On controller and compute, define the external vlan interface and its bridge: cat < /etc/sysconfig/network-scripts/ifcfg-bond0.102 DEVICE=bond0.102 ONBOOT=yes DEVICETYPE=ovs TYPE=OVSPort OVS_BRIDGE=br-bond0.102 ONBOOT=yes BOOTPROTO=none VLAN=yes MTU="1500" NM_CONTROLLED=no EOF cat < /etc/sysconfig/network-scripts/ifcfg-br-bond0.102 DEVICE=br-bond0.102 DEVICETYPE=ovs OVSBOOTPROTO=none TYPE=OVSBridge ONBOOT=yes BOOTPROTO=static MTU="1500" NM_CONTROLLED=no EOF then, make sure that everywhere exists /etc/neutron/plugins/ml2/openvswitch_agent.ini as: [ovs] enable_tunneling = True tunnel_id_ranges = 1:1000 tenant_network_type = vxlan integration_bridge = br-int tunnel_bridge = br-tun local_ip = bridge_mappings = physnet1:br-bond0.102 network_vlan_ranges = physnet1 [agent] tunnel_types = vxlan vxlan_udp_port = 4789 l2_population = False [securitygroup] firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver restart neutron-openvswitch-agent everywhere to make it work: [cid:762776E3-E3BC-4FA0-B1CF-3DAC1E1EBD91 at fritz.box] Hope it helps Alessandro > I would guess , that external net was created similar to this way controller# neutron net-create --router:external=True \ --provider:network_type=vlan --provider:segmentation_id=102 ext-network controller# neutron subnet-create --name ext-subnet --disable-dhcp \ --allocation-pool start 10.10.10.100,end=10.10.10.150 \ ext_network 10.10.10.0/24 of VLAN type not FLAT Could you be so kind to share yours ml2_conf.ini on controllers in cluster. I also believe that tuning your switch contains a kind of :- switchport trunk allowed vlan 100,102,104 Thank you Boris. > 14 Nov 2015, at 09:35, Boris Derzhavets > wrote: ________________________________________ From: Dan Sneddon > Sent: Friday, November 13, 2015 4:10 PM To: Boris Derzhavets; rdo-list at redhat.com Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md On 11/13/2015 12:56 PM, Dan Sneddon wrote: Hi Boris, Let's keep this on-list, there may be others who are having similar issues who could find this discussion useful. Answers inline... On 11/13/2015 12:17 PM, Boris Derzhavets wrote: ________________________________________ From: Dan Sneddon > Sent: Friday, November 13, 2015 2:46 PM To: Boris Derzhavets; Javier Pena Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md On 11/13/2015 11:38 AM, Boris Derzhavets wrote: I understand that in usual situation , creating ifcfg-br-ex and ifcfg-eth2 ( as OVS bridge and OVS port) , `service network restart` should be run to make eth2 (no IP) OVS port of br-ex (any IP which belongs ext net and is available) What bad does NetworkManager when external network provider is used ? Disabling it, I break routing via eth0's interfaces of cluster nodes to 10.10.10.0/24 ( ext net), so nothing is supposed to work :- http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html Either I am missing something here. 
________________________________________ From: rdo-list-bounces at redhat.com > on behalf of Boris Derzhavets > Sent: Friday, November 13, 2015 1:09 PM To: Javier Pena Cc: rdo-list at redhat.com Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md Working on this task I was able to build 3 node HAProxy/Keepalived Controller's cluster , create compute node , launch CirrOS VM, However, I cannot ping floating IP of VM running on compute ( total 4 CentOS 7.1 VMs, nested kvm enabled ) Looks like provider external networks doesn't work for me. But , to have eth0 without IP (due to `ovs-vsctl add-port br-eth0 eth0 ) still allowing to ping 10.10.10.1, I need NetworkManager active, rather then network.service [root at hacontroller1 network-scripts]# systemctl status NetworkManager NetworkManager.service - Network Manager Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled) Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago Main PID: 808 (NetworkManager) CGroup: /system.slice/NetworkManager.service ?? 808 /usr/sbin/NetworkManager --no-daemon ??2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0... Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: NetworkManager state is n...L Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s. Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: (eth0): Activation: succe.... Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: startup complete [root at hacontroller1 network-scripts]# systemctl status network.service network.service - LSB: Bring up/down networking Loaded: loaded (/etc/rc.d/init.d/network) Active: inactive (dead) [root at hacontroller1 network-scripts]# cat ifcfg-eth0 TYPE="Ethernet" BOOTPROTO="static" NAME="eth0" DEVICE=eth0 ONBOOT="yes" [root at hacontroller1 network-scripts]# ping -c 3 10.10.10.1 PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data. 64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms 64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms 64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms --- 10.10.10.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms If I disable NetworkManager and enable network this feature will be lost. Eth0 would have to have static IP or dhcp lease, to provide route to 10.10.10.0/24. Thank you. Boris. _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com OK, a few things here. First of all, you don't actually need to have an IP address on the host system to use a VLAN or interface as an external provider network. The Neutron router will have an IP on the right network, and within its namespace will be able to reach the 10.10.10.x network. It looks to me like NetworkManager is running dhclient for eth0, even though you have BOOTPROTO="static". This is causing an IP address to be added to eth0, so you are able to ping 10.10.10.x from the host. When you turn off NetworkManager, this unexpected behavior goes away, *but you should still be able to use provider networks*. 
Here I am quoting Lars Kellogg Stedman http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ The bottom statement in blog post above states :- "This assumes that eth1 is connected to a network using 10.1.0.0/24 and eth2 is connected to a network using 10.2.0.0/24, and that each network has a gateway sitting at the corresponding .1 address." Right, what Lars means is that eth1 is physically connected to a network with the 10.1.0.0/24 subnet, and eth2 is physically connected to a network with the 10.2.0.0/24 subnet. You might notice that in Lars's instructions, he never puts a host IP on either interface. Try creating a Neutron router with an IP on 10.10.10.x, and then you should be able to ping that network from the router namespace. " When I issue `neutron router-creater --ha True --tenant-id xxxxxx RouterHA` , i cannot specify router's IP " Let me refer you to this page, which explains the basics of creating and managing Neutron networks: http://docs.openstack.org/user-guide/cli_create_and_manage_networks.html You will have to create an external network, which you will associate with a physical network via a bridge mapping. The default bridge mapping for br-ex is datacentre:br-ex. Using the name of the physical network "datacentre", we can create an 1. Javier is using external network provider ( and so did I , following him) #. /root/keystonerc_admin # neutron net-create public --provider:network_type flat --provider:physical_network physnet1 --router:external # neutron subnet-create --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.150 --disable-dhcp --name public_subnet public 10.10.10.0/24 HA Neutron router and tenant's subnet have been created. Then interface to tenant's network was activated as well as gateway to public. Security rules were implemented as usual. Cloud VM was launched, it obtained private IP and committed cloud-init OK. Then I assigned FIP from public to cloud VM , it should be ping able from from F23 Visualization Host 2. All traffic to/from external network flows through br-int when provider external networks has been involved. No br-ex is needed. When in Javier does `ovs-vsctl add-port br-eth0 eth0` , eth0 (which is inside VM ,running Controller node) should be on 10.10.10.X/24. It doesn't happen when service network is active (and NM disabled) . In this case eth0 doesn't have any kind of IP assigned to provide route to Libvirt's subnet 10.10.10.X/24 ( pre created by myself) In meantime I am under impression that ovs bridge br-eth0 and OVS port eth0 would work when IP is assigned to port eth0, not to bridge. OVS release =>2.3.1 seems to allow that. 
Tested here (VM's case ) :- http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html If neither one of br-eth0 and eth0 would have IP then packets won't be forwarded to external net external network: [If the external network is on VLAN 104] neutron net-create ext-net --router:external \ --provider:physical_network datacentre \ --provider:network_type vlan \ --provider:segmentation_id 104 [If the external net is on the native VLAN (flat)] neutron net-create ext-net --router:external \ --provider:physical_network datacentre \ --provider:network_type flat Next, you must create a subnet for the network, including the range of floating IPs (allocation pool): neutron subnet-create --name ext-subnet \ --enable_dhcp=False \ --allocation-pool start=10.10.10.50,end=10.10.10.100 \ --gateway 10.10.10.1 \ ext-net 10.10.10.0/24 Next, you have to create a router: neutron router-create ext-router You then add an interface to the router. Since Neutron will assign the first address in the subnet to the router by default (10.10.10.1), you will want to first create a port with a specific IP, then assign that port to the router. neutron port-create ext-net --fixed-ip ip_address=10.10.10.254 You will need to note the UUID of the newly created port. You can also see this with "neutron port-list". Now, create the router interface with the port you just created: neutron router-interface-add ext-router port= If you want to be able to ping 10.10.10.x from the host, then you should put either a static IP or DHCP on the bridge, not on eth0. This should work whether you are running NetworkManager or network.service. "I do can ping 10.0.0.x from F23 KVM Server (running cluster's VMs as Controller's nodes), it's just usual non-default libvirt subnet,matching exactly external network creating in Javier's "Howto". It was created via `virsh net-define openstackvms.xml`, but I cannot ping FIPs belong to cloud VM on this subnet." I think you will have better luck once you create the external network and router. You can then use namespaces to ping the network from the router: First, obtain the qrouter- from the list of namespaces: sudo ip netns list Then, find the qrouter- and ping from there: ip netns exec qrouter-XXXX-XXXX-XXX-XXX ping 10.10.10.1 One more quick thing to note: In order to use floating IPs, you will also have to attach the external router to the tenant networks where floating IPs will be used. When you go through the steps to create a tenant network, also attach it to the router: 1) Create the network: neutron net-create tenant-net-1 2) Create the subnet: neutron subnet-create --name tenant-subnet-1 tenant-net-1 172.21.0.0/22 3) Attach the external router to the network: neutron router-interface-add tenant-router-1 subnet=tenant-subnet-1 (since no specific port was given in the router-interface-add command, Neutron will automatically choose the first address in the given subnet, so 172.21.0.1 in this example) -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: PastedGraphic-1.png Type: image/png Size: 25835 bytes Desc: PastedGraphic-1.png URL: From mohammed.arafa at gmail.com Sat Nov 14 17:26:29 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Sat, 14 Nov 2015 12:26:29 -0500 Subject: [Rdo-list] [rdo-manager] RFE: Documentation update - logging into overcloud nodes Message-ID: The documentation doesn't mention how to log in to the overcloud nodes once provisioned. It is: from the stack user on the undercloud node, run ssh heat-admin@ thanks -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From alessandro at namecheap.com Sat Nov 14 17:56:23 2015 From: alessandro at namecheap.com (Alessandro Vozza) Date: Sat, 14 Nov 2015 18:56:23 +0100 Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md In-Reply-To: References: <56448D6D.90704@redhat.com> <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com> <56463E27.7010909@redhat.com> <56464E7D.1020907@redhat.com> <564651C2.3070409@redhat.com> Message-ID: <883ECEDA-D59B-4FBC-BD66-A2EF90843B2C@namecheap.com> Hi, cutting some html; responses inline: > Actually, using bondings and external network provider of VLAN type, > controller nodes may have 3 VLAN's : bond0.100(management network), > bond0.101 (tunnel network) and bond0.102(external-network). > Then tune haproxy with keepalived for a virtual ip setup on bond0.100 In my case, I even collapse the tunnel network with the general/management/provisioning network (where my foreman smart proxy lives and provisions bare metal); it's a routed but secure network. My nodes thus have two interfaces: -bond0 (untagged native vlan for provisioning) -bond0.112 (external traffic, no nodes have an IP) My haproxys have two interfaces: -bond0 -bond0.111 (external API access, routed) This way I isolate and/or allow traffic from instances (and the rest of the organisation) to the openstack APIs, secured by terminating SSL at the loadbalancers (see diagram) > This excellent approach, but question which is my concern > is a bit different.
> > Thanks > Boris > > On controller and compute, define the external vlan interface and its bridge: > > cat < /etc/sysconfig/network-scripts/ifcfg-bond0.102 > DEVICE=bond0.102 > ONBOOT=yes > DEVICETYPE=ovs > TYPE=OVSPort > OVS_BRIDGE=br-bond0.102 > ONBOOT=yes > BOOTPROTO=none > VLAN=yes > MTU="1500" > NM_CONTROLLED=no > EOF > > cat < /etc/sysconfig/network-scripts/ifcfg-br-bond0.102 > DEVICE=br-bond0.102 > DEVICETYPE=ovs > OVSBOOTPROTO=none > TYPE=OVSBridge > ONBOOT=yes > BOOTPROTO=static > MTU="1500" > NM_CONTROLLED=no > EOF > > then, make sure that everywhere exists /etc/neutron/plugins/ml2/openvswitch_agent.ini as: > > [ovs] > enable_tunneling = True > tunnel_id_ranges = 1:1000 > tenant_network_type = vxlan > integration_bridge = br-int > tunnel_bridge = br-tun > local_ip = > bridge_mappings = physnet1:br-bond0.102 > network_vlan_ranges = physnet1 > [agent] > tunnel_types = vxlan > vxlan_udp_port = 4789 > l2_population = False > [securitygroup] > firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver > > restart neutron-openvswitch-agent everywhere to make it work: > > > Hope it helps > Alessandro > > > > I would guess , that external net was created similar to this way > > controller# neutron net-create --router:external=True \ > --provider:network_type=vlan --provider:segmentation_id=102 ext-network > No, external provider network is a flat network (because it uses an already-tagged interface, bond0.112) > controller# neutron subnet-create --name ext-subnet --disable-dhcp \ > --allocation-pool start 10.10.10.100,end=10.10.10.150 \ > ext_network 10.10.10.0/24 > I do use neutron DHCP: some instances have only one interface in the provider network, thus they won?t be accessible if I don?t provide them an IP+metadata > of VLAN type not FLAT > It would be a VLAN-provider network if you would add bond0 interface to the bridge, and let OVS tag packets. In this case it?s ?flat?, as OVS is not aware of any VLAN tagging, which happens downstream at the interface. But there?s dozens of more skilled network dudes&gals on the list that may correct me :) > Could you be so kind to share yours ml2_conf.ini on controllers in cluster. a very simple one: [ml2] type_drivers = flat,vxlan,vlan tenant_network_types = vxlan mechanism_drivers = openvswitch [ml2_type_flat] flat_networks = * [ml2_type_vlan] [ml2_type_gre] [ml2_type_vxlan] vni_ranges = 10:10000 vxlan_group = 224.0.0.1 [ml2_type_geneve] [securitygroup] enable_security_group = True firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver > I also believe that tuning your switch contains a kind of :- > switchport trunk allowed vlan 100,102,104 > Indeed, switch ports are trunked. > Thank you > Boris. > > > > > 14 Nov 2015, at 09:35, Boris Derzhavets wrote: >> >> >> >> ________________________________________ >> From: Dan Sneddon >> Sent: Friday, November 13, 2015 4:10 PM >> To: Boris Derzhavets; rdo-list at redhat.com >> Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >> >> On 11/13/2015 12:56 PM, Dan Sneddon wrote: >>> Hi Boris, >>> >>> Let's keep this on-list, there may be others who are having similar >>> issues who could find this discussion useful. >>> >>> Answers inline... 
>>> >>> On 11/13/2015 12:17 PM, Boris Derzhavets wrote: >>>> >>>> >>>> ________________________________________ >>>> From: Dan Sneddon >>>> Sent: Friday, November 13, 2015 2:46 PM >>>> To: Boris Derzhavets; Javier Pena >>>> Cc: rdo-list at redhat.com >>>> Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >>>> >>>> On 11/13/2015 11:38 AM, Boris Derzhavets wrote: >>>>> I understand that in usual situation , creating ifcfg-br-ex and ifcfg-eth2 ( as OVS bridge and OVS port) , >>>>> `service network restart` should be run to make eth2 (no IP) OVS port of br-ex (any IP which belongs ext net and is available) >>>>> What bad does NetworkManager when external network provider is used ? >>>>> Disabling it, I break routing via eth0's interfaces of cluster nodes to 10.10.10.0/24 ( ext net), >>>>> so nothing is supposed to work :- >>>>> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >>>>> http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html >>>>> Either I am missing something here. >>>>> ________________________________________ >>>>> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets >>>>> Sent: Friday, November 13, 2015 1:09 PM >>>>> To: Javier Pena >>>>> Cc: rdo-list at redhat.com >>>>> Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >>>>> >>>>> Working on this task I was able to build 3 node HAProxy/Keepalived Controller's cluster , create compute node , launch CirrOS VM, >>>>> However, I cannot ping floating IP of VM running on compute ( total 4 CentOS 7.1 VMs, nested kvm enabled ) >>>>> Looks like provider external networks doesn't work for me. >>>>> >>>>> But , to have eth0 without IP (due to `ovs-vsctl add-port br-eth0 eth0 ) still allowing to ping 10.10.10.1, >>>>> I need NetworkManager active, rather then network.service >>>>> >>>>> [root at hacontroller1 network-scripts]# systemctl status NetworkManager >>>>> NetworkManager.service - Network Manager >>>>> Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled) >>>>> Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago >>>>> Main PID: 808 (NetworkManager) >>>>> CGroup: /system.slice/NetworkManager.service >>>>> ?? 808 /usr/sbin/NetworkManager --no-daemon >>>>> ??2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0... >>>>> >>>>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: NetworkManager state is n...L >>>>> Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s. >>>>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: (eth0): Activation: succe.... >>>>> Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: startup complete >>>>> >>>>> [root at hacontroller1 network-scripts]# systemctl status network.service >>>>> network.service - LSB: Bring up/down networking >>>>> Loaded: loaded (/etc/rc.d/init.d/network) >>>>> Active: inactive (dead) >>>>> >>>>> [root at hacontroller1 network-scripts]# cat ifcfg-eth0 >>>>> TYPE="Ethernet" >>>>> BOOTPROTO="static" >>>>> NAME="eth0" >>>>> DEVICE=eth0 >>>>> ONBOOT="yes" >>>>> >>>>> [root at hacontroller1 network-scripts]# ping -c 3 10.10.10.1 >>>>> PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data. 
>>>>> 64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms >>>>> 64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms >>>>> 64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms >>>>> >>>>> --- 10.10.10.1 ping statistics --- >>>>> 3 packets transmitted, 3 received, 0% packet loss, time 1999ms >>>>> rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms >>>>> >>>>> If I disable NetworkManager and enable network this feature will be lost. Eth0 would have to have static IP or dhcp lease, >>>>> to provide route to 10.10.10.0/24. >>>>> >>>>> Thank you. >>>>> Boris. >>>>> >>>>> _______________________________________________ >>>>> Rdo-list mailing list >>>>> Rdo-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>>> >>>> >>>> OK, a few things here. First of all, you don't actually need to have an >>>> IP address on the host system to use a VLAN or interface as an external >>>> provider network. The Neutron router will have an IP on the right >>>> network, and within its namespace will be able to reach the 10.10.10.x >>>> network. >>>> >>>>> It looks to me like NetworkManager is running dhclient for eth0, even >>>>> though you have BOOTPROTO="static". This is causing an IP address to be >>>>> added to eth0, so you are able to ping 10.10.10.x from the host. When >>>>> you turn off NetworkManager, this unexpected behavior goes away, *but >>>>> you should still be able to use provider networks*. >>>> >>>> Here I am quoting Lars Kellogg Stedman >>>> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >>>> The bottom statement in blog post above states :- >>>> "This assumes that eth1 is connected to a network using 10.1.0.0/24 and eth2 is connected to a network using 10.2.0.0/24, and that each network has a gateway sitting at the corresponding .1 address." >>> >>> Right, what Lars means is that eth1 is physically connected to a >>> network with the 10.1.0.0/24 subnet, and eth2 is physically connected >>> to a network with the 10.2.0.0/24 subnet. >>> >>> You might notice that in Lars's instructions, he never puts a host IP >>> on either interface. >>> >>>>> Try creating a Neutron router with an IP on 10.10.10.x, and then you >>>>> should be able to ping that network from the router namespace. >>>> >>>> " When I issue `neutron router-creater --ha True --tenant-id xxxxxx RouterHA` , i cannot specify router's >>>> IP " >>> >>> Let me refer you to this page, which explains the basics of creating >>> and managing Neutron networks: >>> >>> http://docs.openstack.org/user-guide/cli_create_and_manage_networks.html >>> >>> You will have to create an external network, which you will associate >>> with a physical network via a bridge mapping. The default bridge >>> mapping for br-ex is datacentre:br-ex. >>> >>> Using the name of the physical network "datacentre", we can create an >> >> 1. Javier is using external network provider ( and so did I , following him) >> >> #. /root/keystonerc_admin >> # neutron net-create public --provider:network_type flat --provider:physical_network physnet1 --router:external >> # neutron subnet-create --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.150 --disable-dhcp --name public_subnet public 10.10.10.0/24 >> >> HA Neutron router and tenant's subnet have been created. >> Then interface to tenant's network was activated as well as gateway to public. >> Security rules were implemented as usual. 
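[For reference, the "usual" security rules mentioned above normally amount to opening ICMP and SSH in the tenant's default security group. A minimal sketch with the neutron CLI, assuming the group is simply named "default":

# allow ping into instances in the default group
neutron security-group-rule-create --protocol icmp --direction ingress default
# allow SSH (TCP/22) into instances in the default group
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default

Without these two rules a floating IP will not answer ping or SSH even when the external network and router are wired correctly.]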
>> Cloud VM was launched, it obtained private IP and committed cloud-init OK. >> Then I assigned FIP from public to cloud VM , it should be ping able from from F23 Visualization >> Host >> >> 2. All traffic to/from external network flows through br-int when provider external networks has been involved. No br-ex is needed. >> When in Javier does `ovs-vsctl add-port br-eth0 eth0` , eth0 (which is inside VM ,running Controller node) >> should be on 10.10.10.X/24. It doesn't happen when service network is active (and NM disabled) . >> In this case eth0 doesn't have any kind of IP assigned to provide route to Libvirt's subnet 10.10.10.X/24 ( pre created by myself) >> >> In meantime I am under impression that ovs bridge br-eth0 and OVS port eth0 >> would work when IP is assigned to port eth0, not to bridge. OVS release =>2.3.1 seems to allow that. >> Tested here (VM's case ) :- http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html >> If neither one of br-eth0 and eth0 would have IP then packets won't be forwarded to external net >> >> >>> external network: >>> >>> [If the external network is on VLAN 104] >>> neutron net-create ext-net --router:external \ >>> --provider:physical_network datacentre \ >>> --provider:network_type vlan \ >>> --provider:segmentation_id 104 >>> >>> [If the external net is on the native VLAN (flat)] >>> neutron net-create ext-net --router:external \ >>> --provider:physical_network datacentre \ >>> --provider:network_type flat >>> >>> Next, you must create a subnet for the network, including the range of >>> floating IPs (allocation pool): >>> >>> neutron subnet-create --name ext-subnet \ >>> --enable_dhcp=False \ >>> --allocation-pool start=10.10.10.50,end=10.10.10.100 \ >>> --gateway 10.10.10.1 \ >>> ext-net 10.10.10.0/24 >>> >>> Next, you have to create a router: >>> >>> neutron router-create ext-router >>> >>> You then add an interface to the router. Since Neutron will assign the >>> first address in the subnet to the router by default (10.10.10.1), you >>> will want to first create a port with a specific IP, then assign that >>> port to the router. >>> >>> neutron port-create ext-net --fixed-ip ip_address=10.10.10.254 >>> >>> You will need to note the UUID of the newly created port. You can also >>> see this with "neutron port-list". Now, create the router interface >>> with the port you just created: >>> >>> neutron router-interface-add ext-router port= >>> >>>>> If you want to be able to ping 10.10.10.x from the host, then you >>>>> should put either a static IP or DHCP on the bridge, not on eth0. This >>>>> should work whether you are running NetworkManager or network.service. >>>> >>>> "I do can ping 10.0.0.x from F23 KVM Server (running cluster's VMs as Controller's nodes), >>>> it's just usual non-default libvirt subnet,matching exactly external network creating in Javier's "Howto". >>>> It was created via `virsh net-define openstackvms.xml`, but I cannot ping FIPs belong to >>>> cloud VM on this subnet." >>> >>> I think you will have better luck once you create the external network >>> and router. 
You can then use namespaces to ping the network from the >>> router: >>> >>> First, obtain the qrouter- from the list of namespaces: >>> >>> sudo ip netns list >>> >>> Then, find the qrouter- and ping from there: >>> >>> ip netns exec qrouter-XXXX-XXXX-XXX-XXX ping 10.10.10.1 >>> >> >> One more quick thing to note: >> >> In order to use floating IPs, you will also have to attach the external >> router to the tenant networks where floating IPs will be used. >> >> When you go through the steps to create a tenant network, also attach >> it to the router: >> >> 1) Create the network: >> >> neutron net-create tenant-net-1 >> >> 2) Create the subnet: >> >> neutron subnet-create --name tenant-subnet-1 tenant-net-1 172.21.0.0/22 >> >> 3) Attach the external router to the network: >> >> neutron router-interface-add tenant-router-1 subnet=tenant-subnet-1 >> >> (since no specific port was given in the router-interface-add command, >> Neutron will automatically choose the first address in the given >> subnet, so 172.21.0.1 in this example) >> >> -- >> Dan Sneddon | Principal OpenStack Engineer >> dsneddon at redhat.com | redhat.com/openstack >> 650.254.4025 | dsneddon:irc @dxs:twitter >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: NCC-1.0-diagram.png Type: image/png Size: 66268 bytes Desc: not available URL: From qasims at plumgrid.com Sat Nov 14 19:15:48 2015 From: qasims at plumgrid.com (Qasim Sarfraz) Date: Sun, 15 Nov 2015 00:15:48 +0500 Subject: [Rdo-list] Multiple flavors for overcloud controller nodes Message-ID: Folks, I am deploying 3 controller and 1 compute in HA using RDO-Manager. I have controllers with different hardware spec. How do I specify flavor for each of them at the time of deployment? -- Regards, Qasim Sarfraz -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashraf.hassan at t-mobile.nl Sat Nov 14 22:27:08 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Sat, 14 Nov 2015 23:27:08 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> Message-ID: Well first I am very very very sorry, because I have many questions, but the link is not very clear to me, sorry again for my stupid questions. I am trying to compose the file, but I have some questions not clear to me: 1- I understood that the new installation requires 3 controller node, what I see in the link you sent me earlier, is states that the undercloud node is different node from the controller nodes? Does it mean that I need 3 controller nodes + the undercloud node, or I need 2 controller node + the undercloud node? 2- I have a node which I will use as a storage HW, but this node can be connected to the PXE/Boot VLAN over a trunk interface, that mean it cannot be PXE booted (I cannot PXE boot over a trunk interface as far as I know), it means I need to install it separately configure the VLANs on this interface, and then let it join the controller, is that possible? 
3- Finally for the Jason file itself here is a sample section: { "pm_type":"pxe_ipmitool", "mac":[ "00:17:a4:77:78:30" ], "cpu":"2", "memory":"49152", "disk":"300", "arch":"x86_64", "pm_user":"admin", "pm_password":"password", "pm_addr":"10.0.0.8" }, a- I understand that the MAC address for the PXE boot interface for the different nodes in the overcloud is that correct? b- In the link it says that pm_addr is " node BMC IP address ", but that is not clear to me, does it mean it is the IP of the "local_ip" in the undercloud.conf, which is always the IP of the eth1 of the undercloud node in the link you sent me earlier, or it is not the case? c- In the link it say that pm_user, and pm_password are "node BMC credentials" , are these the credentials I need to configure manually per node, do they need to be the same for all nodes? Or they are already defined in the stackrc file? Thanks, Ashraf ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** From bderzhavets at hotmail.com Sun Nov 15 09:11:53 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sun, 15 Nov 2015 09:11:53 +0000 Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md In-Reply-To: References: <56448D6D.90704@redhat.com>, <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com>, , <56463E27.7010909@redhat.com> <56464E7D.1020907@redhat.com>, <564651C2.3070409@redhat.com>, Message-ID: ________________________________________ From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets Sent: Saturday, November 14, 2015 3:35 AM To: Dan Sneddon; rdo-list at redhat.com Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md ________________________________________ From: Dan Sneddon Sent: Friday, November 13, 2015 4:10 PM To: Boris Derzhavets; rdo-list at redhat.com Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md On 11/13/2015 12:56 PM, Dan Sneddon wrote: > Hi Boris, > > Let's keep this on-list, there may be others who are having similar > issues who could find this discussion useful. > > Answers inline... > > On 11/13/2015 12:17 PM, Boris Derzhavets wrote: >> >> >> ________________________________________ >> From: Dan Sneddon >> Sent: Friday, November 13, 2015 2:46 PM >> To: Boris Derzhavets; Javier Pena >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >> >> On 11/13/2015 11:38 AM, Boris Derzhavets wrote: >>> I understand that in usual situation , creating ifcfg-br-ex and ifcfg-eth2 ( as OVS bridge and OVS port) , >>> `service network restart` should be run to make eth2 (no IP) OVS port of br-ex (any IP which belongs ext net and is available) >>> What bad does NetworkManager when external network provider is used ? 
>>> Disabling it, I break routing via eth0's interfaces of cluster nodes to 10.10.10.0/24 ( ext net), >>> so nothing is supposed to work :- >>> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >>> http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html >>> Either I am missing something here. >>> ________________________________________ >>> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets >>> Sent: Friday, November 13, 2015 1:09 PM >>> To: Javier Pena >>> Cc: rdo-list at redhat.com >>> Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >>> >>> Working on this task I was able to build 3 node HAProxy/Keepalived Controller's cluster , create compute node , launch CirrOS VM, >>> However, I cannot ping floating IP of VM running on compute ( total 4 CentOS 7.1 VMs, nested kvm enabled ) >>> Looks like provider external networks doesn't work for me. >>> >>> But , to have eth0 without IP (due to `ovs-vsctl add-port br-eth0 eth0 ) still allowing to ping 10.10.10.1, >>> I need NetworkManager active, rather then network.service >>> >>> [root at hacontroller1 network-scripts]# systemctl status NetworkManager >>> NetworkManager.service - Network Manager >>> Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled) >>> Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago >>> Main PID: 808 (NetworkManager) >>> CGroup: /system.slice/NetworkManager.service >>> ?? 808 /usr/sbin/NetworkManager --no-daemon >>> ??2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0... >>> >>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: NetworkManager state is n...L >>> Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s. >>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: (eth0): Activation: succe.... >>> Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: startup complete >>> >>> [root at hacontroller1 network-scripts]# systemctl status network.service >>> network.service - LSB: Bring up/down networking >>> Loaded: loaded (/etc/rc.d/init.d/network) >>> Active: inactive (dead) >>> >>> [root at hacontroller1 network-scripts]# cat ifcfg-eth0 >>> TYPE="Ethernet" >>> BOOTPROTO="static" >>> NAME="eth0" >>> DEVICE=eth0 >>> ONBOOT="yes" >>> >>> [root at hacontroller1 network-scripts]# ping -c 3 10.10.10.1 >>> PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data. >>> 64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms >>> 64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms >>> 64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms >>> >>> --- 10.10.10.1 ping statistics --- >>> 3 packets transmitted, 3 received, 0% packet loss, time 1999ms >>> rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms >>> >>> If I disable NetworkManager and enable network this feature will be lost. Eth0 would have to have static IP or dhcp lease, >>> to provide route to 10.10.10.0/24. >>> >>> Thank you. >>> Boris. >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> OK, a few things here. First of all, you don't actually need to have an >> IP address on the host system to use a VLAN or interface as an external >> provider network. 
The Neutron router will have an IP on the right >> network, and within its namespace will be able to reach the 10.10.10.x >> network. >> >>> It looks to me like NetworkManager is running dhclient for eth0, even >>> though you have BOOTPROTO="static". This is causing an IP address to be >>> added to eth0, so you are able to ping 10.10.10.x from the host. When >>> you turn off NetworkManager, this unexpected behavior goes away, *but >>> you should still be able to use provider networks*. >> >> Here I am quoting Lars Kellogg Stedman >> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >> The bottom statement in blog post above states :- >> "This assumes that eth1 is connected to a network using 10.1.0.0/24 and eth2 is connected to a network using 10.2.0.0/24, and that each network has a gateway sitting at the corresponding .1 address." > > Right, what Lars means is that eth1 is physically connected to a > network with the 10.1.0.0/24 subnet, and eth2 is physically connected > to a network with the 10.2.0.0/24 subnet. > > You might notice that in Lars's instructions, he never puts a host IP > on either interface. > >>> Try creating a Neutron router with an IP on 10.10.10.x, and then you >>> should be able to ping that network from the router namespace. >> >> " When I issue `neutron router-creater --ha True --tenant-id xxxxxx RouterHA` , i cannot specify router's >> IP " > > Let me refer you to this page, which explains the basics of creating > and managing Neutron networks: > > http://docs.openstack.org/user-guide/cli_create_and_manage_networks.html > > You will have to create an external network, which you will associate > with a physical network via a bridge mapping. The default bridge > mapping for br-ex is datacentre:br-ex. > > Using the name of the physical network "datacentre", we can create an 1. Javier is using external network provider ( and so did I , following him) #. /root/keystonerc_admin # neutron net-create public --provider:network_type flat --provider:physical_network physnet1 --router:external # neutron subnet-create --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.150 --disable-dhcp --name public_subnet public 10.10.10.0/24 HA Neutron router and tenant's subnet have been created. Then interface to tenant's network was activated as well as gateway to public. Security rules were implemented as usual. Cloud VM was launched, it obtained private IP and committed cloud-init OK. Then I assigned FIP from public to cloud VM , it should be ping able from from F23 Virtualization Host /* 2. All traffic to/from external network flows through br-int when provider external networks has been involved. No br-ex is needed. When in Javier does `ovs-vsctl add-port br-eth0 eth0` , eth0 (which is inside VM ,running Controller node) should be on 10.10.10.X/24. It doesn't happen when service network is active (and NM disabled) . In this case eth0 doesn't have any kind of IP assigned to provide route to Libvirt's subnet 10.10.10.X/24 ( pre created by myself) */ /* In meantime I am under impression that ovs bridge br-eth0 and OVS port eth0 would work when IP is assigned to port eth0, not to bridge. OVS release =>2.3.1 seems to allow that. 
Tested here (VM's case ) :- http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html If neither one of br-eth0 and eth0 would have IP then packets won't be forwarded to external net */ I've double checked schema from http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ It works whether eth3 has IP or hasn't. I was wrong about (2). > external network: > > [If the external network is on VLAN 104] > neutron net-create ext-net --router:external \ > --provider:physical_network datacentre \ > --provider:network_type vlan \ > --provider:segmentation_id 104 > > [If the external net is on the native VLAN (flat)] > neutron net-create ext-net --router:external \ > --provider:physical_network datacentre \ > --provider:network_type flat > > Next, you must create a subnet for the network, including the range of > floating IPs (allocation pool): > > neutron subnet-create --name ext-subnet \ > --enable_dhcp=False \ > --allocation-pool start=10.10.10.50,end=10.10.10.100 \ > --gateway 10.10.10.1 \ > ext-net 10.10.10.0/24 > > Next, you have to create a router: > > neutron router-create ext-router > > You then add an interface to the router. Since Neutron will assign the > first address in the subnet to the router by default (10.10.10.1), you > will want to first create a port with a specific IP, then assign that > port to the router. > > neutron port-create ext-net --fixed-ip ip_address=10.10.10.254 > > You will need to note the UUID of the newly created port. You can also > see this with "neutron port-list". Now, create the router interface > with the port you just created: > > neutron router-interface-add ext-router port= > >>> If you want to be able to ping 10.10.10.x from the host, then you >>> should put either a static IP or DHCP on the bridge, not on eth0. This >>> should work whether you are running NetworkManager or network.service. >> >> "I do can ping 10.0.0.x from F23 KVM Server (running cluster's VMs as Controller's nodes), >> it's just usual non-default libvirt subnet,matching exactly external network creating in Javier's "Howto". >> It was created via `virsh net-define openstackvms.xml`, but I cannot ping FIPs belong to >> cloud VM on this subnet." > > I think you will have better luck once you create the external network > and router. You can then use namespaces to ping the network from the > router: > > First, obtain the qrouter- from the list of namespaces: > > sudo ip netns list > > Then, find the qrouter- and ping from there: > > ip netns exec qrouter-XXXX-XXXX-XXX-XXX ping 10.10.10.1 > One more quick thing to note: In order to use floating IPs, you will also have to attach the external router to the tenant networks where floating IPs will be used. 
When you go through the steps to create a tenant network, also attach it to the router: 1) Create the network: neutron net-create tenant-net-1 2) Create the subnet: neutron subnet-create --name tenant-subnet-1 tenant-net-1 172.21.0.0/22 3) Attach the external router to the network: neutron router-interface-add tenant-router-1 subnet=tenant-subnet-1 (since no specific port was given in the router-interface-add command, Neutron will automatically choose the first address in the given subnet, so 172.21.0.1 in this example) -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com From marius at remote-lab.net Sun Nov 15 09:54:27 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sun, 15 Nov 2015 10:54:27 +0100 Subject: [Rdo-list] Multiple flavors for overcloud controller nodes In-Reply-To: References: Message-ID: Hi Qasim, You can use manual tagging as described in the docs: https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/profile_matching.html#optional-manually-add-the-profiles-to-the-nodes On Sat, Nov 14, 2015 at 8:15 PM, Qasim Sarfraz wrote: > Folks, > > I am deploying 3 controller and 1 compute in HA using RDO-Manager. I have > controllers with different hardware spec. How do I specify flavor for each > of them at the time of deployment? > > > -- > Regards, > Qasim Sarfraz > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From marius at remote-lab.net Sun Nov 15 10:09:13 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sun, 15 Nov 2015 11:09:13 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> Message-ID: On Sat, Nov 14, 2015 at 11:27 PM, Hassan, Ashraf wrote: > Well first I am very very very sorry, because I have many questions, but the link is not very clear to me, sorry again for my stupid questions. > I am trying to compose the file, but I have some questions not clear to me: > 1- I understood that the new installation requires 3 controller node, what I see in the link you sent me earlier, is states that the undercloud node is different node from the controller nodes? Does it mean that I need 3 controller nodes + the undercloud node, or I need 2 controller node + the undercloud node? Undercloud node is different from the overcloud controller nodes. You require 3 overcloud controllers when you deploy a HA control plane. In that case yes, you need the undercloud + 3 controller nodes deployed by the undercloud. > 2- I have a node which I will use as a storage HW, but this node can be connected to the PXE/Boot VLAN over a trunk interface, that mean it cannot be PXE booted (I cannot PXE boot over a trunk interface as far as I know), it means I need to install it separately configure the VLANs on this interface, and then let it join the controller, is that possible? PXE boot can work on a trunk interface as long as it's running over the native vlan. 
> 3- Finally for the Jason file itself here is a sample section: > { > "pm_type":"pxe_ipmitool", > "mac":[ > "00:17:a4:77:78:30" > ], > "cpu":"2", > "memory":"49152", > "disk":"300", > "arch":"x86_64", > "pm_user":"admin", > "pm_password":"password", > "pm_addr":"10.0.0.8" > }, > > a- I understand that the MAC address for the PXE boot interface for the different nodes in the overcloud is that correct? Yes, that's correct. > b- In the link it says that pm_addr is " node BMC IP address ", but that is not clear to me, does it mean it is the IP of the "local_ip" in the undercloud.conf, which is always the IP of the eth1 of the undercloud node in the link you sent me earlier, or it is not the case? This is the IPMI address of the server used for power operations. > c- In the link it say that pm_user, and pm_password are "node BMC credentials" , are these the credentials I need to configure manually per node, do they need to be the same for all nodes? Or they are already defined in the stackrc file? Again, these are the IPMI credentials used for logging into pm_addr . They need to be configured on the servers to be able to manage them. > Thanks, > Ashraf > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** From qasims at plumgrid.com Sun Nov 15 10:13:15 2015 From: qasims at plumgrid.com (Qasim Sarfraz) Date: Sun, 15 Nov 2015 15:13:15 +0500 Subject: [Rdo-list] Multiple flavors for overcloud controller nodes In-Reply-To: References: Message-ID: Thanks Marius. The link was useful. It turned out that I had to set the flavor to lower bound of all the hardware specs and it worked. On Sun, Nov 15, 2015 at 2:54 PM, Marius Cornea wrote: > Hi Qasim, > > You can use manual tagging as described in the docs: > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/profile_matching.html#optional-manually-add-the-profiles-to-the-nodes > > On Sat, Nov 14, 2015 at 8:15 PM, Qasim Sarfraz > wrote: > > Folks, > > > > I am deploying 3 controller and 1 compute in HA using RDO-Manager. I have > > controllers with different hardware spec. How do I specify flavor for > each > > of them at the time of deployment? > > > > > > -- > > Regards, > > Qasim Sarfraz > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Regards, Qasim Sarfraz -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashraf.hassan at t-mobile.nl Sun Nov 15 12:20:02 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Sun, 15 Nov 2015 13:20:02 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> Message-ID: Thank you so much for the help and info. For the pm_addr, pm_user, and pm_password, they are not yet fully clear to me, sorry for that. You said that the pm_addr is the " This is the IPMI address of the server used for power operations." Is the IP of the undercloud which or the local IP of the baremetal of the overclouds? 
My confusion is because of: If they are the IP of the baremetals, shouldn't be that be done using the DHCP pool configured in the undercloud.conf? Same confusion for pm_user, and pm_password, are these for the undercloud baremetal? If they are for the overcloud baremetals, they are currently blank uninstalled nodes they have nothing on them. Thanks, Ashraf ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** From marius at remote-lab.net Sun Nov 15 12:26:20 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sun, 15 Nov 2015 13:26:20 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> Message-ID: The IPMI interfaces are used for out of band management: https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface The IP for the baremetals will indeed be served via DHCP during deployment according to the configuration in undercloud.conf. These IPs will be configured on the provisioning interface so they don't relate to the IPMI interfaces. What type of baremetals are you using? On Sun, Nov 15, 2015 at 1:20 PM, Hassan, Ashraf wrote: > Thank you so much for the help and info. > For the pm_addr, pm_user, and pm_password, they are not yet fully clear to me, sorry for that. > You said that the pm_addr is the " This is the IPMI address of the server used for power operations." Is the IP of the undercloud which or the local IP of the baremetal of the overclouds? My confusion is because of: > If they are the IP of the baremetals, shouldn't be that be done using the DHCP pool configured in the undercloud.conf? > > Same confusion for pm_user, and pm_password, are these for the undercloud baremetal? If they are for the overcloud baremetals, they are currently blank uninstalled nodes they have nothing on them. > > Thanks, > Ashraf > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** From ashraf.hassan at t-mobile.nl Sun Nov 15 12:41:16 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Sun, 15 Nov 2015 13:41:16 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> Message-ID: I am using HP G6 baremetals, for all nodes, for the undercloud I have assigned a node of 48GB Memory and 300GB HDD, with the assumption it is needed for the controller but it appears now it might be to much for the undercloud, what do you think? So the first NIC will be the one used for PXE boot. I have read for the Wikipedia article, if I understand correctly that the IPMI IP which will be used for the power management, and it is different than the DHCP IP will be used for the deployment correct? Can they be in the same VLAN? 
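[Marius's reply just below confirms they can share a VLAN as long as the iLO/IPMI addresses stay outside the undercloud's DHCP pools. Roughly, in undercloud.conf terms, with purely illustrative values (not taken from this thread):

# undercloud.conf fragment -- illustrative values only
local_ip = 192.168.1.1/24                           # provisioning address of the undercloud
dhcp_start = 192.168.1.50                           # pool handed out during deployment
dhcp_end = 192.168.1.100
inspection_iprange = 192.168.1.150,192.168.1.180    # pool handed out during introspection

The iLO/IPMI addresses themselves (say 192.168.1.201-210 in this sketch) are configured on the servers, not in undercloud.conf, and just need to sit outside both ranges.]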
Thanks, Ashraf ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** From marius at remote-lab.net Sun Nov 15 13:02:30 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sun, 15 Nov 2015 14:02:30 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> Message-ID: On Sun, Nov 15, 2015 at 1:41 PM, Hassan, Ashraf wrote: > I am using HP G6 baremetals, for all nodes, for the undercloud I have assigned a node of 48GB Memory and 300GB HDD, with the assumption it is needed for the controller but it appears now it might be to much for the undercloud, what do you think? If you're using HP servers then the IPMI ips/credentials are the iLO ones. I usually use a 16GB node for the undercloud. Most probably you can go lower than that. > So the first NIC will be the one used for PXE boot. > I have read for the Wikipedia article, if I understand correctly that the IPMI IP which will be used for the power management, and it is different than the DHCP IP will be used for the deployment correct? Can they be in the same VLAN? They can, just make sure they're out of the dhcp allocation ranges. > Thanks, > Ashraf > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** From ashraf.hassan at t-mobile.nl Sun Nov 15 13:25:19 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Sun, 15 Nov 2015 14:25:19 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> Message-ID: That is new info for me :-), then for the mac parameter in this case the mac address for the PXE boot interface (Which is NIC1) or the ILO interface? Thanks, Ashraf ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** From ibravo at ltgfederal.com Sun Nov 15 14:16:12 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Sun, 15 Nov 2015 09:16:12 -0500 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> Message-ID: <3BACC95A-C80F-4807-ACFB-16EF2B68DCA6@ltgfederal.com> Hassan, I'm also deploying on HP hardware. This is the basic summary of how the deployment works: You install the undercloud in a stand alone machine. This can be either physical or virtual. I went first the physical route and ended with a virtual machine as the undercloud just need to have a big HDD space to store the images. 
You add to the undercloud all the credentials for the ILO servers in the json file that you will be using in the overcloud. User, password, IP and MAC address of the iLO network. I usually leave the values for Ram and hdd in that file un modified and these values are stored in ironic when you upload the json file. Through the process of introspection, the undercloud will use these credentials to get the details of each overcloud node: HDD space, RAM, number of NIC cards, etc. and will update these values in ironic. You can use ironic node-list to see the details. Then you will need to modify a couple of yaml files to accommodate to your particular deployment. Most likely the one related to network. The it is just a OpenStack deploy command away to getting the best OpenStack installation possible! Enjoy. IB > On Nov 15, 2015, at 8:25 AM, Hassan, Ashraf wrote: > > That is new info for me :-), then for the mac parameter in this case the mac address for the PXE boot interface (Which is NIC1) or the ILO interface? > > Thanks, > Ashraf > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From ibravo at ltgfederal.com Sun Nov 15 14:29:50 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Sun, 15 Nov 2015 09:29:50 -0500 Subject: [Rdo-list] [rdo-manager] RFE: Documentation update - logging into overcloud nodes In-Reply-To: References: Message-ID: <9BBD4E35-3C5A-4A58-A5FB-3519F7EB897D@ltgfederal.com> All, The installation creates ssh keys just for the stack user to login to the overcloud as described by Mohammed. This means that if you loose the undercloud, you loose any chance to login to your overcloud as well. What I did was to create a new user in each overcloud node so you can login to these servers directly without having to tunnel ssh into the undercloud first. Additionally, guard the undercloud with your life. That's why I went the VM route for the undercloud where I can create a snapshot and save it in a safe place. The documentation talks about backing up the undercloud, but I haven't read in detail through that section yet. IB > On Nov 14, 2015, at 12:26 PM, Mohammed Arafa wrote: > > The documentation doesnt mention how to login to the overcloud nodes once provisioned. > it is > from the stack at undercloud node run ssh heat-admin@ > thanks > > -- > > > > > > > 805010942448935 > > GR750055912MA > > Link to me on LinkedIn > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ashraf.hassan at t-mobile.nl Sun Nov 15 14:47:35 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Sun, 15 Nov 2015 15:47:35 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: <3BACC95A-C80F-4807-ACFB-16EF2B68DCA6@ltgfederal.com> References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> <3BACC95A-C80F-4807-ACFB-16EF2B68DCA6@ltgfederal.com> Message-ID: Hi Ignacio, That was really helpful summary I will change the mac in my json file to the ILOs' mac, you mentioned one point about you leave the ram and HDD empty where it will be updated by the introspection, now I am planning to move rams from the undercloud BM to other nodes (I presume that should be simple and have no impact on the undercloud), but to I need to repeat the node introspection again after I change the rams? In the link it says: " It's not recommended to delete nodes and/or rerun this command after you have proceeded to the next steps. Particularly, if you start introspection and then re-register nodes, you won't be able to retry introspection until the previous one times out (1 hour by default). If you are having issues with nodes after registration, please follow " Or is it easier not to introspect the nodes until I finish upgrade their memories? Further for HA, do I need to have more than one undercloud as well? Thanks, Ashraf ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** From ashraf.hassan at t-mobile.nl Sun Nov 15 18:37:58 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Sun, 15 Nov 2015 19:37:58 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: <3BACC95A-C80F-4807-ACFB-16EF2B68DCA6@ltgfederal.com> References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> <3BACC95A-C80F-4807-ACFB-16EF2B68DCA6@ltgfederal.com> Message-ID: I executed the command: openstack baremetal import --json instackenv.json I did not see any progress on the screen, I was looking for the logs, and I only found: /var/log/ironic/ ironic-conductor.log It says it is successful, but my concern here is that the I can see that the undercloud is trying to reach ILO from the local interface 192.168.1.1 (I made this interface only to boot the PXE), while the IPMIs (ILOs) of the new nodes can be only reached through the normal management interface of the undercloud. Is there a command to verify the status of the nodes registered in the ironic db ? 
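[For reference, registration status can be checked from the undercloud once the stackrc credentials are sourced; a quick sketch, with a hypothetical UUID:

source ~/stackrc                            # undercloud credentials
ironic node-list                            # power state, provision state and maintenance flag per node
ironic node-show <uuid>                     # full driver_info (pm_addr, pm_user, ...) for one node
ironic node-validate <uuid>                 # re-checks that the IPMI/iLO power credentials actually work
ironic node-set-maintenance <uuid> false    # clears the maintenance flag after fixing a bad pm_addr

Ignacio's reply further down points at the same node-list/node-show commands.]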
/var/log/ironic/ ironic-conductor.log: http://pastebin.com/0w32f2xY Thanks, Ashraf ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** From ibravo at ltgfederal.com Mon Nov 16 02:06:59 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Sun, 15 Nov 2015 21:06:59 -0500 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> <3BACC95A-C80F-4807-ACFB-16EF2B68DCA6@ltgfederal.com> Message-ID: <158C2A77-28E8-4D17-85E5-093211204AA7@ltgfederal.com> The import takes literally seconds. After that you can reach the node details by doing 'ironic node-list' and ironic node-show uuid to see the values for each node, IB > On Nov 15, 2015, at 1:37 PM, Hassan, Ashraf wrote: > > I executed the command: > openstack baremetal import --json instackenv.json > > I did not see any progress on the screen, I was looking for the logs, and I only found: > /var/log/ironic/ ironic-conductor.log > > It says it is successful, but my concern here is that the I can see that the undercloud is trying to reach ILO from the local interface 192.168.1.1 (I made this interface only to boot the PXE), while the IPMIs (ILOs) of the new nodes can be only reached through the normal management interface of the undercloud. > Is there a command to verify the status of the nodes registered in the ironic db ? > > > /var/log/ironic/ ironic-conductor.log: http://pastebin.com/0w32f2xY > > Thanks, > Ashraf > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** > From mohammed.arafa at gmail.com Mon Nov 16 02:57:41 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Sun, 15 Nov 2015 21:57:41 -0500 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: <158C2A77-28E8-4D17-85E5-093211204AA7@ltgfederal.com> References: <38666FD3-8234-40C9-8A97-41D13F360987@remote-lab.net> <3BACC95A-C80F-4807-ACFB-16EF2B68DCA6@ltgfederal.com> <158C2A77-28E8-4D17-85E5-093211204AA7@ltgfederal.com> Message-ID: the command openstack baremetal import --json instackenv.json has only one purpose to import the BMC details into the ironic database _later_ the BMC details will be used to power on and off the servers during the introspection phase and the deployment phase (via heat templates) for the next stage pls double check your boot order. make sure the nic is first and the hard disk or your storage device is later on ps. i used ipmitool to verify the credentials and ip before putting them into the json file. pps. there is also an ironic command that escapes me that can verify your json file's syntax. it is mentioned in one of the threads on this mailing list. On Sun, Nov 15, 2015 at 9:06 PM, Ignacio Bravo wrote: > The import takes literally seconds. 
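[The ipmitool check Mohammed mentions can be as simple as the following, run from the undercloud against each entry in instackenv.json; the address and credentials here are just the sample values quoted earlier in the thread:

ipmitool -I lanplus -H 10.0.0.8 -U admin -P password chassis power status
ipmitool -I lanplus -H 10.0.0.8 -U admin -P password chassis status

If these time out or fail to authenticate, the import itself will still succeed (it only writes the ironic database), but introspection will later be unable to power the node on.]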
> After that you can reach the node details by doing 'ironic node-list' and > ironic node-show uuid to see the values for each node, > > IB > > > On Nov 15, 2015, at 1:37 PM, Hassan, Ashraf > wrote: > > > > I executed the command: > > openstack baremetal import --json instackenv.json > > > > I did not see any progress on the screen, I was looking for the logs, > and I only found: > > /var/log/ironic/ ironic-conductor.log > > > > It says it is successful, but my concern here is that the I can see that > the undercloud is trying to reach ILO from the local interface 192.168.1.1 > (I made this interface only to boot the PXE), while the IPMIs (ILOs) of the > new nodes can be only reached through the normal management interface of > the undercloud. > > Is there a command to verify the status of the nodes registered in the > ironic db ? > > > > > > /var/log/ironic/ ironic-conductor.log: http://pastebin.com/0w32f2xY > > > > Thanks, > > Ashraf > > > > > ******************************************************************************** > > > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke > VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer < > http://www.t-mobile.nl/disclaimer> > > > > This e-mail and its contents are subject to a DISCLAIMER with important > RESERVATIONS: see http://www.t-mobile.nl/disclaimer < > http://www.t-mobile.nl/disclaimer> > > > > > > > ******************************************************************************** > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashraf.hassan at t-mobile.nl Mon Nov 16 12:05:09 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Mon, 16 Nov 2015 13:05:09 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <158C2A77-28E8-4D17-85E5-093211204AA7@ltgfederal.com> Message-ID: Thank you for your help, I have tested the IPMI command line and looks working fine. Further I tried for Ironic commands, and it look ok except one node (UUID ff09865c-c909-4cb4-b24c-f7d9372b5f05), where the IPMI address was wrong, I updated using the command: When I list the node I still see they are in maintenance, what should I do for that? Ironic commands output: http://pastebin.com/BjdNahLR Thanks, Ashraf ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hguemar at fedoraproject.org Mon Nov 16 15:00:03 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 16 Nov 2015 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20151116150003.0A65660A3FD9@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2015-11-18 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Packaging ](https://etherpad.openstack.org/p/RDO-Packaging) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From dsneddon at redhat.com Mon Nov 16 16:39:24 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Mon, 16 Nov 2015 08:39:24 -0800 Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md In-Reply-To: References: <56448D6D.90704@redhat.com>, <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com>, , <56463E27.7010909@redhat.com> <56464E7D.1020907@redhat.com>, <564651C2.3070409@redhat.com> Message-ID: <564A06BC.1030301@redhat.com> Answers inline... On 11/14/2015 12:35 AM, Boris Derzhavets wrote: > > > ________________________________________ > From: Dan Sneddon > Sent: Friday, November 13, 2015 4:10 PM > To: Boris Derzhavets; rdo-list at redhat.com > Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md > > On 11/13/2015 12:56 PM, Dan Sneddon wrote: >> Hi Boris, >> >> Let's keep this on-list, there may be others who are having similar >> issues who could find this discussion useful. >> >> Answers inline... >> >> On 11/13/2015 12:17 PM, Boris Derzhavets wrote: >>> >>> >>> ________________________________________ >>> From: Dan Sneddon >>> Sent: Friday, November 13, 2015 2:46 PM >>> To: Boris Derzhavets; Javier Pena >>> Cc: rdo-list at redhat.com >>> Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >>> >>> On 11/13/2015 11:38 AM, Boris Derzhavets wrote: >>>> I understand that in usual situation , creating ifcfg-br-ex and ifcfg-eth2 ( as OVS bridge and OVS port) , >>>> `service network restart` should be run to make eth2 (no IP) OVS port of br-ex (any IP which belongs ext net and is available) >>>> What bad does NetworkManager when external network provider is used ? >>>> Disabling it, I break routing via eth0's interfaces of cluster nodes to 10.10.10.0/24 ( ext net), >>>> so nothing is supposed to work :- >>>> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >>>> http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html >>>> Either I am missing something here. >>>> ________________________________________ >>>> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets >>>> Sent: Friday, November 13, 2015 1:09 PM >>>> To: Javier Pena >>>> Cc: rdo-list at redhat.com >>>> Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >>>> >>>> Working on this task I was able to build 3 node HAProxy/Keepalived Controller's cluster , create compute node , launch CirrOS VM, >>>> However, I cannot ping floating IP of VM running on compute ( total 4 CentOS 7.1 VMs, nested kvm enabled ) >>>> Looks like provider external networks doesn't work for me. 
>>>> >>>> But , to have eth0 without IP (due to `ovs-vsctl add-port br-eth0 eth0 ) still allowing to ping 10.10.10.1, >>>> I need NetworkManager active, rather then network.service >>>> >>>> [root at hacontroller1 network-scripts]# systemctl status NetworkManager >>>> NetworkManager.service - Network Manager >>>> Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled) >>>> Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago >>>> Main PID: 808 (NetworkManager) >>>> CGroup: /system.slice/NetworkManager.service >>>> ?? 808 /usr/sbin/NetworkManager --no-daemon >>>> ??2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0... >>>> >>>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: NetworkManager state is n...L >>>> Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s. >>>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: (eth0): Activation: succe.... >>>> Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: startup complete >>>> >>>> [root at hacontroller1 network-scripts]# systemctl status network.service >>>> network.service - LSB: Bring up/down networking >>>> Loaded: loaded (/etc/rc.d/init.d/network) >>>> Active: inactive (dead) >>>> >>>> [root at hacontroller1 network-scripts]# cat ifcfg-eth0 >>>> TYPE="Ethernet" >>>> BOOTPROTO="static" >>>> NAME="eth0" >>>> DEVICE=eth0 >>>> ONBOOT="yes" >>>> >>>> [root at hacontroller1 network-scripts]# ping -c 3 10.10.10.1 >>>> PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data. >>>> 64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms >>>> 64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms >>>> 64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms >>>> >>>> --- 10.10.10.1 ping statistics --- >>>> 3 packets transmitted, 3 received, 0% packet loss, time 1999ms >>>> rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms >>>> >>>> If I disable NetworkManager and enable network this feature will be lost. Eth0 would have to have static IP or dhcp lease, >>>> to provide route to 10.10.10.0/24. >>>> >>>> Thank you. >>>> Boris. >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> >>> >>> OK, a few things here. First of all, you don't actually need to have an >>> IP address on the host system to use a VLAN or interface as an external >>> provider network. The Neutron router will have an IP on the right >>> network, and within its namespace will be able to reach the 10.10.10.x >>> network. >>> >>>> It looks to me like NetworkManager is running dhclient for eth0, even >>>> though you have BOOTPROTO="static". This is causing an IP address to be >>>> added to eth0, so you are able to ping 10.10.10.x from the host. When >>>> you turn off NetworkManager, this unexpected behavior goes away, *but >>>> you should still be able to use provider networks*. >>> >>> Here I am quoting Lars Kellogg Stedman >>> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >>> The bottom statement in blog post above states :- >>> "This assumes that eth1 is connected to a network using 10.1.0.0/24 and eth2 is connected to a network using 10.2.0.0/24, and that each network has a gateway sitting at the corresponding .1 address." 
>> >> Right, what Lars means is that eth1 is physically connected to a >> network with the 10.1.0.0/24 subnet, and eth2 is physically connected >> to a network with the 10.2.0.0/24 subnet. >> >> You might notice that in Lars's instructions, he never puts a host IP >> on either interface. >> >>>> Try creating a Neutron router with an IP on 10.10.10.x, and then you >>>> should be able to ping that network from the router namespace. >>> >>> " When I issue `neutron router-creater --ha True --tenant-id xxxxxx RouterHA` , i cannot specify router's >>> IP " >> >> Let me refer you to this page, which explains the basics of creating >> and managing Neutron networks: >> >> http://docs.openstack.org/user-guide/cli_create_and_manage_networks.html >> >> You will have to create an external network, which you will associate >> with a physical network via a bridge mapping. The default bridge >> mapping for br-ex is datacentre:br-ex. >> >> Using the name of the physical network "datacentre", we can create an > > 1. Javier is using external network provider ( and so did I , following him) > > #. /root/keystonerc_admin > # neutron net-create public --provider:network_type flat --provider:physical_network physnet1 --router:external > # neutron subnet-create --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.150 --disable-dhcp --name public_subnet public 10.10.10.0/24 That looks like it would be OK if physnet1 is a flat connection (native VLAN on the interface). If you want to create a provider network on, for example VLAN 104, you can use this command: neutron net-create --provider:physical_network physnet1 --provider:network_type vlan --provider:segmentation_id 104 --router:external public Your subnet-create statement looks correct. > HA Neutron router and tenant's subnet have been created. > Then interface to tenant's network was activated as well as gateway to public. > Security rules were implemented as usual. > Cloud VM was launched, it obtained private IP and committed cloud-init OK. > Then I assigned FIP from public to cloud VM , it should be ping able from from F23 Visualization > Host > > 2. All traffic to/from external network flows through br-int when provider external networks has been involved. No br-ex is needed. > When in Javier does `ovs-vsctl add-port br-eth0 eth0` , eth0 (which is inside VM ,running Controller node) > should be on 10.10.10.X/24. It doesn't happen when service network is active (and NM disabled) . > In this case eth0 doesn't have any kind of IP assigned to provide route to Libvirt's subnet 10.10.10.X/24 ( pre created by myself) That's OK that eth0 doesn't have any kind of IP assigned or routes. The IP gets assigned to the Neutron router, and the routing table exists only inside of the router namespace. Once you have created the router, you will see a "qrouter-XXXX" entry for the router when you run the command: sudo ip netns list Copy the name of the namespace that starts with "qrouter" (you might have more than one if you have more than one Neutron router), then try pinging the external network from inside the namespace: sudo ip netns exec qrouter-c333bd80-ccc3-43ba-99e4-8df471ed8b9e ping 10.10.10.1 > In meantime I am under impression that ovs bridge br-eth0 and OVS port eth0 > would work when IP is assigned to port eth0, not to bridge. OVS release =>2.3.1 seems to allow that. 
> Tested here (VM's case ) :- http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html > If neither one of br-eth0 and eth0 would have IP then packets won't be forwarded to external net For Provider networks, you shouldn't have to assign an IP address to eth0 or to the bridge. The IP address lives on the router inside of the router namespace. -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter > >> external network: >> >> [If the external network is on VLAN 104] >> neutron net-create ext-net --router:external \ >> --provider:physical_network datacentre \ >> --provider:network_type vlan \ >> --provider:segmentation_id 104 >> >> [If the external net is on the native VLAN (flat)] >> neutron net-create ext-net --router:external \ >> --provider:physical_network datacentre \ >> --provider:network_type flat >> >> Next, you must create a subnet for the network, including the range of >> floating IPs (allocation pool): >> >> neutron subnet-create --name ext-subnet \ >> --enable_dhcp=False \ >> --allocation-pool start=10.10.10.50,end=10.10.10.100 \ >> --gateway 10.10.10.1 \ >> ext-net 10.10.10.0/24 >> >> Next, you have to create a router: >> >> neutron router-create ext-router >> >> You then add an interface to the router. Since Neutron will assign the >> first address in the subnet to the router by default (10.10.10.1), you >> will want to first create a port with a specific IP, then assign that >> port to the router. >> >> neutron port-create ext-net --fixed-ip ip_address=10.10.10.254 >> >> You will need to note the UUID of the newly created port. You can also >> see this with "neutron port-list". Now, create the router interface >> with the port you just created: >> >> neutron router-interface-add ext-router port= >> >>>> If you want to be able to ping 10.10.10.x from the host, then you >>>> should put either a static IP or DHCP on the bridge, not on eth0. This >>>> should work whether you are running NetworkManager or network.service. >>> >>> "I do can ping 10.0.0.x from F23 KVM Server (running cluster's VMs as Controller's nodes), >>> it's just usual non-default libvirt subnet,matching exactly external network creating in Javier's "Howto". >>> It was created via `virsh net-define openstackvms.xml`, but I cannot ping FIPs belong to >>> cloud VM on this subnet." >> >> I think you will have better luck once you create the external network >> and router. You can then use namespaces to ping the network from the >> router: >> >> First, obtain the qrouter- from the list of namespaces: >> >> sudo ip netns list >> >> Then, find the qrouter- and ping from there: >> >> ip netns exec qrouter-XXXX-XXXX-XXX-XXX ping 10.10.10.1 >> > > One more quick thing to note: > > In order to use floating IPs, you will also have to attach the external > router to the tenant networks where floating IPs will be used. 
> > When you go through the steps to create a tenant network, also attach > it to the router: > > 1) Create the network: > > neutron net-create tenant-net-1 > > 2) Create the subnet: > > neutron subnet-create --name tenant-subnet-1 tenant-net-1 172.21.0.0/22 > > 3) Attach the external router to the network: > > neutron router-interface-add tenant-router-1 subnet=tenant-subnet-1 > > (since no specific port was given in the router-interface-add command, > Neutron will automatically choose the first address in the given > subnet, so 172.21.0.1 in this example) > > -- > Dan Sneddon | Principal OpenStack Engineer > dsneddon at redhat.com | redhat.com/openstack > 650.254.4025 | dsneddon:irc @dxs:twitter > From sasha at redhat.com Mon Nov 16 17:08:38 2015 From: sasha at redhat.com (Sasha Chuzhoy) Date: Mon, 16 Nov 2015 12:08:38 -0500 (EST) Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <158C2A77-28E8-4D17-85E5-093211204AA7@ltgfederal.com> Message-ID: <1227894863.13551943.1447693718002.JavaMail.zimbra@redhat.com> So it seems like the IPMI IP for node ff09865c-c909-4cb4-b24c-f7d9372b5f05 is wrong: 110.254.103.25 I assume, you meant "10" in the first octet. run "ironic node-delete ff09865c-c909-4cb4-b24c-f7d9372b5f05" Fix the entry in the instackenv.json file and then run again the import command. Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Ashraf Hassan" > To: "Mohammed Arafa" , "Ignacio Bravo" > Cc: rdo-list at redhat.com > Sent: Monday, November 16, 2015 7:05:09 AM > Subject: Re: [Rdo-list] Trying to install the RDO > > > > Thank you for your help, I have tested the IPMI command line and looks > working fine. > > Further I tried for Ironic commands, and it look ok except one node (UUID > ff09865c-c909-4cb4-b24c-f7d9372b5f05 ) , where the > > IPMI address was wrong, I updated using the command: > > When I list the node I still see they are in maintenance, what should I do > for that? > > Ironic commands output: http://pastebin.com/BjdNahLR > > Thanks, > > Ashraf > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke > VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important > RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From rbowen at redhat.com Mon Nov 16 19:45:01 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 16 Nov 2015 14:45:01 -0500 Subject: [Rdo-list] This week's meetups - November 16 Message-ID: <564A323D.9030106@redhat.com> The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/Events If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered. 
--Rich * Monday November 16 in Amsterdam, NL: Meetup & Hear from KB Singh, CentOS Project Lead - http://www.meetup.com/CentOS-Netherlands/events/226589559/ * Monday November 16 in Manchester, 18, GB: November Meetup - http://www.meetup.com/Manchester-OpenStack-Meetup/events/225442374/ * Tuesday November 17 in New York, NY, US: Intro to OpenStack - http://www.meetup.com/nycnetworkers/events/226008746/ * Tuesday November 17 in Houston, TX, US: Deploying OpenStack Using OSAD - http://www.meetup.com/openstackhoustonmeetup/events/226009298/ * Tuesday November 17 in Santa Clara, CA, US: Kuryr - Connecting containers to OpenStack Neutron & Intel Open Network Platform - http://www.meetup.com/openvswitch/events/226518209/ * Tuesday November 17 in London, 17, GB: Arista and OpenStack SDN with Nuage - http://www.meetup.com/Arista-Networks-London-Meetup-Group/events/226533916/ * Tuesday November 17 in Frederick, MD, US: Discuss the Changes in Openstack releases from Juno to Kilo to Liberty - http://www.meetup.com/Frederick-MD-OpenStack-Meetup/events/226653740/ * Wednesday November 18 in Prague, CZ: Cloud - Openstack technical overview - http://www.meetup.com/Morning-Talks/events/225867303/ * Wednesday November 18 in Paris, FR: Meetup#18 - OpenStack Big Tent : usine d'innovations @POSS - http://www.meetup.com/OpenStack-France/events/226167226/ * Wednesday November 18 in Chicago, IL, US: Lessons learned from the field with Mirantis - http://www.meetup.com/meetup-group-NjZdcegA/events/226599397/ * Wednesday November 18 in London, 17, GB: Tokyo Aftermath Meetup - http://www.meetup.com/Openstack-London/events/226314483/ * Wednesday November 18 in Porto Alegre, BR: 8? Hangout OpenStack Brasil - http://www.meetup.com/Openstack-Brasil/events/226770126/ * Thursday November 19 in Xian, CN: ?????????IBM??????? ?? - http://www.meetup.com/Xian-OpenStack-Meetup/events/226749667/ * Thursday November 19 in Sofia, BG: Fourth OpenStack UG Meetup in Bulgaria - http://www.meetup.com/OpenStack-Bulgaria/events/226587732/ * Thursday November 19 in Bangalore, IN: OpenStack mini-conf at OSI Days India - http://www.meetup.com/Indian-OpenStack-User-Group/events/226157966/ * Thursday November 19 in Portland, OR, US: OpenStack PDX Meetup - http://www.meetup.com/openstack-pdx/events/226168331/ * Thursday November 19 in Washington, DC, US: OpenStack in the Space Industry (#28) - http://www.meetup.com/OpenStackDC/events/224954125/ * Thursday November 19 in Phoenix, AZ, US: Discuss and learn about OpenStack - http://www.meetup.com/OpenStack-Phoenix/events/226264510/ * Thursday November 19 in Littleton, CO, US: Hands-on with OpenStack - http://www.meetup.com/OpenStack-Denver/events/225567012/ * Thursday November 19 in Chesterfield, MO, US: OpenStack Storage - http://www.meetup.com/OpenStack-STL/events/226660023/ * Thursday November 19 in Boston, MA, US: OpenStack Neutron Advanced Services Deep Dive - http://www.meetup.com/Openstack-Boston/events/226422816/ * Thursday November 19 in Atlanta, GA, US: OpenStack Meetup (Topic TBD) - http://www.meetup.com/openstack-atlanta/events/226387040/ * Thursday November 19 in Herriman, UT, US: Open Stack Monthly Meetup - http://www.meetup.com/openstack-utah/events/225939602/ * Saturday November 21 in Shanghai, CN: 11/21 ?? openstack meetup - ? ????? summit keynote,Container, Neutron, Cinder... - http://www.meetup.com/Shanghai-OpenStack-Meetup/events/226674732/ * Saturday November 21 in Beijing, CN: OpenStack???????????? ?????????????? 
- http://www.meetup.com/China-OpenStack-User-Group/events/226675511/ * Sunday November 22 in Bangalore, IN: OpenStack & SDN/NFV at JNU - http://www.meetup.com/Indian-OpenStack-User-Group/events/226534245/ * Sunday November 22 in Delhi, IN: SDN/NFV & OpenStack @JNU - http://www.meetup.com/SDN-OpenDayLight-Delhi-User-Group/events/226560720/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From bderzhavets at hotmail.com Mon Nov 16 20:10:34 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 16 Nov 2015 20:10:34 +0000 Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md In-Reply-To: <564A06BC.1030301@redhat.com> References: <56448D6D.90704@redhat.com>, <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com>, , <56463E27.7010909@redhat.com> <56464E7D.1020907@redhat.com>,<564651C2.3070409@redhat.com> , <564A06BC.1030301@redhat.com> Message-ID: ________________________________________ From: Dan Sneddon Sent: Monday, November 16, 2015 11:39 AM To: Boris Derzhavets; rdo-list at redhat.com Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md Answers inline... > Thank you so much for your support. In meantime I can manage HAProxy/Keepalived 3 VMs Controller and one Compute VM ( nested kvm enabled ) via Nova && Neutron CLI with no problems (RDO Liberty). Dashboard is extremely slow ( i7 4790 CPU, 32 GB RAM). I still believe that problem is 4 Core desktop CPUs limitations. As soon as 3 Controllers get in sync and start working ( 4 VCPUs each one ,4 GB RAM ) graphics slows down immediately. Testing RDO Manager on desktops is hardly possible. > On 11/14/2015 12:35 AM, Boris Derzhavets wrote: > > > ________________________________________ > From: Dan Sneddon > Sent: Friday, November 13, 2015 4:10 PM > To: Boris Derzhavets; rdo-list at redhat.com > Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md > > On 11/13/2015 12:56 PM, Dan Sneddon wrote: >> Hi Boris, >> >> Let's keep this on-list, there may be others who are having similar >> issues who could find this discussion useful. >> >> Answers inline... >> >> On 11/13/2015 12:17 PM, Boris Derzhavets wrote: >>> >>> >>> ________________________________________ >>> From: Dan Sneddon >>> Sent: Friday, November 13, 2015 2:46 PM >>> To: Boris Derzhavets; Javier Pena >>> Cc: rdo-list at redhat.com >>> Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >>> >>> On 11/13/2015 11:38 AM, Boris Derzhavets wrote: >>>> I understand that in usual situation , creating ifcfg-br-ex and ifcfg-eth2 ( as OVS bridge and OVS port) , >>>> `service network restart` should be run to make eth2 (no IP) OVS port of br-ex (any IP which belongs ext net and is available) >>>> What bad does NetworkManager when external network provider is used ? >>>> Disabling it, I break routing via eth0's interfaces of cluster nodes to 10.10.10.0/24 ( ext net), >>>> so nothing is supposed to work :- >>>> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >>>> http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html >>>> Either I am missing something here. 
>>>> ________________________________________ >>>> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets >>>> Sent: Friday, November 13, 2015 1:09 PM >>>> To: Javier Pena >>>> Cc: rdo-list at redhat.com >>>> Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >>>> >>>> Working on this task I was able to build 3 node HAProxy/Keepalived Controller's cluster , create compute node , launch CirrOS VM, >>>> However, I cannot ping floating IP of VM running on compute ( total 4 CentOS 7.1 VMs, nested kvm enabled ) >>>> Looks like provider external networks doesn't work for me. >>>> >>>> But , to have eth0 without IP (due to `ovs-vsctl add-port br-eth0 eth0 ) still allowing to ping 10.10.10.1, >>>> I need NetworkManager active, rather then network.service >>>> >>>> [root at hacontroller1 network-scripts]# systemctl status NetworkManager >>>> NetworkManager.service - Network Manager >>>> Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled) >>>> Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago >>>> Main PID: 808 (NetworkManager) >>>> CGroup: /system.slice/NetworkManager.service >>>> ?? 808 /usr/sbin/NetworkManager --no-daemon >>>> ??2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0... >>>> >>>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: NetworkManager state is n...L >>>> Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s. >>>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: (eth0): Activation: succe.... >>>> Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: startup complete >>>> >>>> [root at hacontroller1 network-scripts]# systemctl status network.service >>>> network.service - LSB: Bring up/down networking >>>> Loaded: loaded (/etc/rc.d/init.d/network) >>>> Active: inactive (dead) >>>> >>>> [root at hacontroller1 network-scripts]# cat ifcfg-eth0 >>>> TYPE="Ethernet" >>>> BOOTPROTO="static" >>>> NAME="eth0" >>>> DEVICE=eth0 >>>> ONBOOT="yes" >>>> >>>> [root at hacontroller1 network-scripts]# ping -c 3 10.10.10.1 >>>> PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data. >>>> 64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms >>>> 64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms >>>> 64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms >>>> >>>> --- 10.10.10.1 ping statistics --- >>>> 3 packets transmitted, 3 received, 0% packet loss, time 1999ms >>>> rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms >>>> >>>> If I disable NetworkManager and enable network this feature will be lost. Eth0 would have to have static IP or dhcp lease, >>>> to provide route to 10.10.10.0/24. >>>> >>>> Thank you. >>>> Boris. >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> >>> >>> OK, a few things here. First of all, you don't actually need to have an >>> IP address on the host system to use a VLAN or interface as an external >>> provider network. The Neutron router will have an IP on the right >>> network, and within its namespace will be able to reach the 10.10.10.x >>> network. >>> >>>> It looks to me like NetworkManager is running dhclient for eth0, even >>>> though you have BOOTPROTO="static". 
This is causing an IP address to be >>>> added to eth0, so you are able to ping 10.10.10.x from the host. When >>>> you turn off NetworkManager, this unexpected behavior goes away, *but >>>> you should still be able to use provider networks*. >>> >>> Here I am quoting Lars Kellogg Stedman >>> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >>> The bottom statement in blog post above states :- >>> "This assumes that eth1 is connected to a network using 10.1.0.0/24 and eth2 is connected to a network using 10.2.0.0/24, and that each network has a gateway sitting at the corresponding .1 address." >> >> Right, what Lars means is that eth1 is physically connected to a >> network with the 10.1.0.0/24 subnet, and eth2 is physically connected >> to a network with the 10.2.0.0/24 subnet. >> >> You might notice that in Lars's instructions, he never puts a host IP >> on either interface. >> >>>> Try creating a Neutron router with an IP on 10.10.10.x, and then you >>>> should be able to ping that network from the router namespace. >>> >>> " When I issue `neutron router-creater --ha True --tenant-id xxxxxx RouterHA` , i cannot specify router's >>> IP " >> >> Let me refer you to this page, which explains the basics of creating >> and managing Neutron networks: >> >> http://docs.openstack.org/user-guide/cli_create_and_manage_networks.html >> >> You will have to create an external network, which you will associate >> with a physical network via a bridge mapping. The default bridge >> mapping for br-ex is datacentre:br-ex. >> >> Using the name of the physical network "datacentre", we can create an > > 1. Javier is using external network provider ( and so did I , following him) > > #. /root/keystonerc_admin > # neutron net-create public --provider:network_type flat --provider:physical_network physnet1 --router:external > # neutron subnet-create --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.150 --disable-dhcp --name public_subnet public 10.10.10.0/24 That looks like it would be OK if physnet1 is a flat connection (native VLAN on the interface). If you want to create a provider network on, for example VLAN 104, you can use this command: neutron net-create --provider:physical_network physnet1 --provider:network_type vlan --provider:segmentation_id 104 --router:external public Your subnet-create statement looks correct. > HA Neutron router and tenant's subnet have been created. > Then interface to tenant's network was activated as well as gateway to public. > Security rules were implemented as usual. > Cloud VM was launched, it obtained private IP and committed cloud-init OK. > Then I assigned FIP from public to cloud VM , it should be ping able from from F23 Visualization > Host > > 2. All traffic to/from external network flows through br-int when provider external networks has been involved. No br-ex is needed. > When in Javier does `ovs-vsctl add-port br-eth0 eth0` , eth0 (which is inside VM ,running Controller node) > should be on 10.10.10.X/24. It doesn't happen when service network is active (and NM disabled) . > In this case eth0 doesn't have any kind of IP assigned to provide route to Libvirt's subnet 10.10.10.X/24 ( pre created by myself) That's OK that eth0 doesn't have any kind of IP assigned or routes. The IP gets assigned to the Neutron router, and the routing table exists only inside of the router namespace. 
Once you have created the router, you will see a "qrouter-XXXX" entry for the router when you run the command: sudo ip netns list Copy the name of the namespace that starts with "qrouter" (you might have more than one if you have more than one Neutron router), then try pinging the external network from inside the namespace: sudo ip netns exec qrouter-c333bd80-ccc3-43ba-99e4-8df471ed8b9e ping 10.10.10.1 > In meantime I am under impression that ovs bridge br-eth0 and OVS port eth0 > would work when IP is assigned to port eth0, not to bridge. OVS release =>2.3.1 seems to allow that. > Tested here (VM's case ) :- http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html > If neither one of br-eth0 and eth0 would have IP then packets won't be forwarded to external net For Provider networks, you shouldn't have to assign an IP address to eth0 or to the bridge. The IP address lives on the router inside of the router namespace. -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter > >> external network: >> >> [If the external network is on VLAN 104] >> neutron net-create ext-net --router:external \ >> --provider:physical_network datacentre \ >> --provider:network_type vlan \ >> --provider:segmentation_id 104 >> >> [If the external net is on the native VLAN (flat)] >> neutron net-create ext-net --router:external \ >> --provider:physical_network datacentre \ >> --provider:network_type flat >> >> Next, you must create a subnet for the network, including the range of >> floating IPs (allocation pool): >> >> neutron subnet-create --name ext-subnet \ >> --enable_dhcp=False \ >> --allocation-pool start=10.10.10.50,end=10.10.10.100 \ >> --gateway 10.10.10.1 \ >> ext-net 10.10.10.0/24 >> >> Next, you have to create a router: >> >> neutron router-create ext-router >> >> You then add an interface to the router. Since Neutron will assign the >> first address in the subnet to the router by default (10.10.10.1), you >> will want to first create a port with a specific IP, then assign that >> port to the router. >> >> neutron port-create ext-net --fixed-ip ip_address=10.10.10.254 >> >> You will need to note the UUID of the newly created port. You can also >> see this with "neutron port-list". Now, create the router interface >> with the port you just created: >> >> neutron router-interface-add ext-router port= >> >>>> If you want to be able to ping 10.10.10.x from the host, then you >>>> should put either a static IP or DHCP on the bridge, not on eth0. This >>>> should work whether you are running NetworkManager or network.service. >>> >>> "I do can ping 10.0.0.x from F23 KVM Server (running cluster's VMs as Controller's nodes), >>> it's just usual non-default libvirt subnet,matching exactly external network creating in Javier's "Howto". >>> It was created via `virsh net-define openstackvms.xml`, but I cannot ping FIPs belong to >>> cloud VM on this subnet." >> >> I think you will have better luck once you create the external network >> and router. You can then use namespaces to ping the network from the >> router: >> >> First, obtain the qrouter- from the list of namespaces: >> >> sudo ip netns list >> >> Then, find the qrouter- and ping from there: >> >> ip netns exec qrouter-XXXX-XXXX-XXX-XXX ping 10.10.10.1 >> > > One more quick thing to note: > > In order to use floating IPs, you will also have to attach the external > router to the tenant networks where floating IPs will be used. 
> > When you go through the steps to create a tenant network, also attach > it to the router: > > 1) Create the network: > > neutron net-create tenant-net-1 > > 2) Create the subnet: > > neutron subnet-create --name tenant-subnet-1 tenant-net-1 172.21.0.0/22 > > 3) Attach the external router to the network: > > neutron router-interface-add tenant-router-1 subnet=tenant-subnet-1 > > (since no specific port was given in the router-interface-add command, > Neutron will automatically choose the first address in the given > subnet, so 172.21.0.1 in this example) > > -- > Dan Sneddon | Principal OpenStack Engineer > dsneddon at redhat.com | redhat.com/openstack > 650.254.4025 | dsneddon:irc @dxs:twitter > From ashraf.hassan at t-mobile.nl Tue Nov 17 18:30:19 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Tue, 17 Nov 2015 19:30:19 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: <1227894863.13551943.1447693718002.JavaMail.zimbra@redhat.com> References: <158C2A77-28E8-4D17-85E5-093211204AA7@ltgfederal.com> <1227894863.13551943.1447693718002.JavaMail.zimbra@redhat.com> Message-ID: Thanks I fixed the nodes, I want now to define the DNSs. I have 2 DNS. I followed the link and I defined one DNS and it worked: neutron subnet-update 82930541-414b-4da2-8ec9-083ae7a88b2e --dns-nameserver 192.168.1.48 But I want to add another DNS, I tried: neutron subnet-update 82930541-414b-4da2-8ec9-083ae7a88b2e --dns-nameserver 192.168.1.49 but it changes the existing one I tried the following commands but neither worked: neutron subnet-update 82930541-414b-4da2-8ec9-083ae7a88b2e --dns-nameserver 192.168.1.48 192.168.1.49 neutron subnet-update 82930541-414b-4da2-8ec9-083ae7a88b2e --dns-nameserver list=true 192.168.1.48 192.168.1.49 How can I add another DNS? Thanks, Ashraf ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** From marius at remote-lab.net Tue Nov 17 18:35:35 2015 From: marius at remote-lab.net (Marius Cornea) Date: Tue, 17 Nov 2015 19:35:35 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <158C2A77-28E8-4D17-85E5-093211204AA7@ltgfederal.com> <1227894863.13551943.1447693718002.JavaMail.zimbra@redhat.com> Message-ID: On Tue, Nov 17, 2015 at 7:30 PM, Hassan, Ashraf wrote: > Thanks I fixed the nodes, I want now to define the DNSs. > I have 2 DNS. > I followed the link and I defined one DNS and it worked: > neutron subnet-update 82930541-414b-4da2-8ec9-083ae7a88b2e --dns-nameserver 192.168.1.48 > > But I want to add another DNS, I tried: > neutron subnet-update 82930541-414b-4da2-8ec9-083ae7a88b2e --dns-nameserver 192.168.1.49 > > but it changes the existing one > > I tried the following commands but neither worked: > neutron subnet-update 82930541-414b-4da2-8ec9-083ae7a88b2e --dns-nameserver 192.168.1.48 192.168.1.49 > neutron subnet-update 82930541-414b-4da2-8ec9-083ae7a88b2e --dns-nameserver list=true 192.168.1.48 192.168.1.49 > > How can I add another DNS? 
neutron subnet-update 82930541-414b-4da2-8ec9-083ae7a88b2e --dns-nameserver 192.168.1.48 --dns-nameserver 192.168.1.49 > Thanks, > Ashraf > > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From sasha at redhat.com Tue Nov 17 18:41:09 2015 From: sasha at redhat.com (Sasha Chuzhoy) Date: Tue, 17 Nov 2015 13:41:09 -0500 (EST) Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <158C2A77-28E8-4D17-85E5-093211204AA7@ltgfederal.com> <1227894863.13551943.1447693718002.JavaMail.zimbra@redhat.com> Message-ID: <1810869669.14478354.1447785669298.JavaMail.zimbra@redhat.com> use the --dns-nameserver argument twice: --dns-nameserver 192.168.1.48 --dns-nameserver 192.168.1.49 Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Ashraf Hassan" > To: "Sasha Chuzhoy" > Cc: "Mohammed Arafa" , "Ignacio Bravo" , rdo-list at redhat.com > Sent: Tuesday, November 17, 2015 1:30:19 PM > Subject: RE: [Rdo-list] Trying to install the RDO > > Thanks I fixed the nodes, I want now to define the DNSs. > I have 2 DNS. > I followed the link and I defined one DNS and it worked: > neutron subnet-update 82930541-414b-4da2-8ec9-083ae7a88b2e --dns-nameserver > 192.168.1.48 > > But I want to add another DNS, I tried: > neutron subnet-update 82930541-414b-4da2-8ec9-083ae7a88b2e --dns-nameserver > 192.168.1.49 > > but it changes the existing one > > I tried the following commands but neither worked: > neutron subnet-update 82930541-414b-4da2-8ec9-083ae7a88b2e --dns-nameserver > 192.168.1.48 192.168.1.49 > neutron subnet-update 82930541-414b-4da2-8ec9-083ae7a88b2e --dns-nameserver > list=true 192.168.1.48 192.168.1.49 > > How can I add another DNS? > Thanks, > Ashraf > > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke > VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > > This e-mail and its contents are subject to a DISCLAIMER with important > RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > > ******************************************************************************** > From ashraf.hassan at t-mobile.nl Tue Nov 17 18:43:59 2015 From: ashraf.hassan at t-mobile.nl (Hassan, Ashraf) Date: Tue, 17 Nov 2015 19:43:59 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <158C2A77-28E8-4D17-85E5-093211204AA7@ltgfederal.com> <1227894863.13551943.1447693718002.JavaMail.zimbra@redhat.com> Message-ID: Thank that worked :-) To deploy the overcloud it says by default if you run: openstack overcloud deploy --templates You will get one controller node and one compute node, but I have defined 5 nodes, I can see them in ironice node-list, is there a guide how to modify the templates to create 2 controllers, and 3 computing nodes, and state which one is a controller and which is a compute? 
Also can I create the rest of the subnets later after deploying the overcloud? Right now I have only the subnet which will deploy using the PXE boot. Thanks, Ashraf ******************************************************************************** N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer ******************************************************************************** From marius at remote-lab.net Tue Nov 17 18:52:22 2015 From: marius at remote-lab.net (Marius Cornea) Date: Tue, 17 Nov 2015 19:52:22 +0100 Subject: [Rdo-list] Trying to install the RDO In-Reply-To: References: <158C2A77-28E8-4D17-85E5-093211204AA7@ltgfederal.com> <1227894863.13551943.1447693718002.JavaMail.zimbra@redhat.com> Message-ID: On Tue, Nov 17, 2015 at 7:43 PM, Hassan, Ashraf wrote: > Thank that worked :-) > To deploy the overcloud it says by default if you run: > openstack overcloud deploy --templates > > You will get one controller node and one compute node, but I have defined 5 nodes, I can see them in ironice node-list, is there a guide how to modify the templates to create 2 controllers, and 3 computing nodes, and state which one is a controller and which is a compute? Also can I create the rest of the subnets later after deploying the overcloud? Right now I have only the subnet which will deploy using the PXE boot. For nodes tagging( note that for ha a minimum of 3 controllers is required): https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/profile_matching.html#optional-manually-add-the-profiles-to-the-nodes If you want to use multiple isolated networks then you need to prepare some template before deployment. See the docs here: https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/network_isolation.html > > Thanks, > Ashraf > > ******************************************************************************** > > N.B.: op (de inhoud van) deze e-mail is een DISCLAIMER met belangrijke VOORBEHOUDEN van toepassing: zie http://www.t-mobile.nl/disclaimer > > This e-mail and its contents are subject to a DISCLAIMER with important RESERVATIONS: see http://www.t-mobile.nl/disclaimer > > > ******************************************************************************** From mkkang at isi.edu Tue Nov 17 20:32:12 2015 From: mkkang at isi.edu (Mikyung Kang) Date: Tue, 17 Nov 2015 12:32:12 -0800 (PST) Subject: [Rdo-list] [RDO-Manager] deploy In-Reply-To: <23050389.275.1446738281644.JavaMail.mkang@guest246.east.isi.edu> Message-ID: <7047548.2211.1447792329115.JavaMail.mkang@guest246.east.isi.edu> Hello, I'm trying RDO-manager:Liberty version on CentOS7.1. https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html After adding /tftpboot/pxelinux.cfg/default [Using IPA] as follows, up to introspection step, it's OK (error=None, finished=True). [root at test tftpboot]# cat pxelinux.cfg/default (10.0.1.6 = undercloud IP) default introspect label introspect kernel agent.kernel append initrd=agent.ramdisk ipa-inspection-callback-url=http://10.0.1.6:5050/v1/continue systemd.journald.forward_to_console=yes ipappend 3 But, when deploying 1 controller and 1 compute, those systems couldn't be booted from right deploy images. 
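Coming back to the controller/compute counts asked about in the "Trying to install the RDO" thread above: the deploy command accepts per-role scale and flavor arguments, so a sketch of a non-default layout might look like the following (the flavor names "control" and "compute" are assumed to exist from the profile-matching step; per Marius' note, an HA control plane needs at least 3 controllers):

openstack overcloud deploy --templates \
  --control-scale 3 --compute-scale 2 \
  --control-flavor control --compute-flavor compute

Additional isolated networks and subnets are typically layered on at deploy time with the network-isolation environment files from the second link, rather than added by hand afterwards.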
I can see two instances are spawned (1 controller-node instance and 1 compute-node instance) based on the default heat template. Then, the provisioning state is changed from available to deploying. On this deploying step, I can see deploy images/config are put to each instance's UUID directory @/httpboot/ directory. And then, the provisioning state is changed from [deploying] to [wait call-back]. Even though ipmitool turns on the system, those systems can't find deploy images. Actually, I have another dhcp server @other machine. It includes RDO testbeds' MAC and IP. So, I setup RDO testbeds' next-server as RDO undercloud IP @dhcpd.conf. Then, overcloud nodes could boot from agent.kernel/ramdisk from undercloud:/tftpboot properly. But, I don't know how overcloud nodes can get deploy/overcloud images. If above pxelinux.cfg/default is put as-is @undercloud, agent kernel/ramdisk is loaded again, not from deploy image. Then deploying step can't be proceeded further and then goes to timeout error. If that default file is removed, system is unable to locate tftp configuration. How can I make controller/compute boot from right deploy images? Should I setup something for the httpboot/ipxe? Thanks, Mikyung From dsneddon at redhat.com Tue Nov 17 20:58:37 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Tue, 17 Nov 2015 12:58:37 -0800 Subject: [Rdo-list] [RDO-Manager] deploy In-Reply-To: <7047548.2211.1447792329115.JavaMail.mkang@guest246.east.isi.edu> References: <7047548.2211.1447792329115.JavaMail.mkang@guest246.east.isi.edu> Message-ID: <564B94FD.3080805@redhat.com> On 11/17/2015 12:32 PM, Mikyung Kang wrote: > Hello, > > I'm trying RDO-manager:Liberty version on CentOS7.1. > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html > > After adding /tftpboot/pxelinux.cfg/default [Using IPA] as follows, up to introspection step, it's OK (error=None, finished=True). > > [root at test tftpboot]# cat pxelinux.cfg/default (10.0.1.6 = undercloud IP) > default introspect > label introspect > kernel agent.kernel > append initrd=agent.ramdisk ipa-inspection-callback-url=http://10.0.1.6:5050/v1/continue systemd.journald.forward_to_console=yes > ipappend 3 > > But, when deploying 1 controller and 1 compute, those systems couldn't be booted from right deploy images. > > I can see two instances are spawned (1 controller-node instance and 1 compute-node instance) based on the default heat template. Then, the provisioning state is changed from available to deploying. On this deploying step, I can see deploy images/config are put to each instance's UUID directory @/httpboot/ directory. And then, the provisioning state is changed from [deploying] to [wait call-back]. Even though ipmitool turns on the system, those systems can't find deploy images. > > Actually, I have another dhcp server @other machine. It includes RDO testbeds' MAC and IP. So, I setup RDO testbeds' next-server as RDO undercloud IP @dhcpd.conf. Then, overcloud nodes could boot from agent.kernel/ramdisk from undercloud:/tftpboot properly. But, I don't know how overcloud nodes can get deploy/overcloud images. > > If above pxelinux.cfg/default is put as-is @undercloud, agent kernel/ramdisk is loaded again, not from deploy image. Then deploying step can't be proceeded further and then goes to timeout error. If that default file is removed, system is unable to locate tftp configuration. How can I make controller/compute boot from right deploy images? 
Should I setup something for the httpboot/ipxe? > > Thanks, > Mikyung > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > What is supposed to happen when introspection completes is that the Undercloud will add the MAC address of the newly-discovered system to iptables in order to block DHCP requests from reaching ironic-discovery's dnsmasq. If that doesn't happen, then you get a loop where the discovery image boots instead of the deploy image. Check your iptables and make sure that you see the MAC addresses added to the "discovery" chain, like this: Chain discovery (1 references) target prot opt source destination DROP all -- anywhere anywhere MAC 00:21:BA:17:0D:2B DROP all -- anywhere anywhere MAC 00:3C:A6:BB:68:FC DROP all -- anywhere anywhere MAC 00:92:5D:AE:62:37 Also, make sure that iptables is running, and that you don't have more than one interface attached to the provisioning network on the overcloud nodes. If you do, there is a workaround, but it's cleanest to just make sure you have only one interface attached to the provisioning interface. -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From mkkang at isi.edu Tue Nov 17 21:32:38 2015 From: mkkang at isi.edu (Mikyung Kang) Date: Tue, 17 Nov 2015 13:32:38 -0800 (PST) Subject: [Rdo-list] [RDO-Manager] deploy In-Reply-To: <564B94FD.3080805@redhat.com> Message-ID: <15751834.2267.1447795954089.JavaMail.mkang@guest246.east.isi.edu> Hi Dan, Thanks for the description. As you described, iptables is running and MAC addresses for overcloud nodes are added as DROP rule properly. Only one interface is attached to the provisioning interface. But, overcloud nodes still load agent.kernel/ramdisk images, not deploy_kernel/ramdisk. Should I disable dhcp server @ other machine and setup new dhcp server on the same undercloud node? This is my log: [stack at gpu6 ~]$ openstack baremetal introspection bulk start Setting available nodes to manageable... Starting introspection of node: ffe9edca-fa5e-45bf-97df-f49a2cce0c92 Starting introspection of node: 1dc404db-0352-4355-ba64-67fae456f12a Waiting for introspection to finish... Introspection for UUID ffe9edca-fa5e-45bf-97df-f49a2cce0c92 finished successfully. Introspection for UUID 1dc404db-0352-4355-ba64-67fae456f12a finished successfully. Setting manageable nodes to available... Node ffe9edca-fa5e-45bf-97df-f49a2cce0c92 has been set to available. Node 1dc404db-0352-4355-ba64-67fae456f12a has been set to available. Introspection completed. 
[stack at gpu6 ~]$ ironic node-list +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | ffe9edca-fa5e-45bf-97df-f49a2cce0c92 | None | None | power off | available | False | | 1dc404db-0352-4355-ba64-67fae456f12a | None | None | power off | available | False | +--------------------------------------+------+---------------+-------------+--------------------+-------------+ [stack at gpu6 ~]$ openstack baremetal introspection bulk status +--------------------------------------+----------+-------+ | Node UUID | Finished | Error | +--------------------------------------+----------+-------+ | ffe9edca-fa5e-45bf-97df-f49a2cce0c92 | True | None | | 1dc404db-0352-4355-ba64-67fae456f12a | True | None | +--------------------------------------+----------+-------+ Chain ironic-inspector (1 references) target prot opt source destination DROP all -- anywhere anywhere MAC 00:9C:02:A7:EA:36 DROP all -- anywhere anywhere MAC 00:9C:02:A5:4A:DA ACCEPT all -- anywhere anywhere [stack at gpu6 ~]$ openstack overcloud deploy --templates Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates hang....? [root at gpu6 tftpboot]# nova list +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ | 9a0c3ba9-5502-4bd7-a7f1-c2109200c19e | overcloud-controller-0 | BUILD | spawning | NOSTATE | ctlplane=192.0.2.9 | | 0e740a33-fbca-4690-a938-980fbe623223 | overcloud-novacompute-0 | BUILD | spawning | NOSTATE | ctlplane=192.0.2.8 | +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ [root at gpu6 tftpboot]# ls -al /httpboot/ffe9edca-fa5e-45bf-97df-f49a2cce0c92/ total 92524 drwxr-xr-x. 2 ironic ironic 87 Nov 17 16:17 . drwxr-xr-x. 5 ironic ironic 4096 Nov 17 16:17 .. -rw-r--r--. 1 ironic ironic 956 Nov 17 16:17 config -rw-r--r--. 3 ironic ironic 5029328 Nov 6 14:44 deploy_kernel -rw-r--r--. 3 ironic ironic 50630736 Nov 6 14:44 deploy_ramdisk -rw-r--r--. 3 ironic ironic 5029328 Nov 6 14:44 kernel -rw-r--r--. 3 ironic ironic 34038813 Nov 6 14:44 ramdisk [root at gpu6 tftpboot]# ls -al /httpboot/1dc404db-0352-4355-ba64-67fae456f12a/ total 92524 drwxr-xr-x. 2 ironic ironic 87 Nov 17 16:17 . drwxr-xr-x. 5 ironic ironic 4096 Nov 17 16:17 .. -rw-r--r--. 1 ironic ironic 956 Nov 17 16:17 config -rw-r--r--. 3 ironic ironic 5029328 Nov 6 14:44 deploy_kernel -rw-r--r--. 3 ironic ironic 50630736 Nov 6 14:44 deploy_ramdisk -rw-r--r--. 3 ironic ironic 5029328 Nov 6 14:44 kernel -rw-r--r--. 
3 ironic ironic 34038813 Nov 6 14:44 ramdisk [root at gpu6 tftpboot]# cat /httpboot/ffe9edca-fa5e-45bf-97df-f49a2cce0c92/config #!ipxe dhcp goto deploy :deploy kernel http://192.0.2.1:8088/ffe9edca-fa5e-45bf-97df-f49a2cce0c92/deploy_kernel selinux=0 disk=cciss/c0d0,sda,hda,vda iscsi_target_iqn=iqn.2008-10.org.openstack:ffe9edca-fa5e-45bf-97df-f49a2cce0c92 deployment_id=ffe9edca-fa5e-45bf-97df-f49a2cce0c92 deployment_key=LIUI374KDMT55F8ATYY56BIDFWY0RRA1 ironic_api_url=http://192.0.2.1:6385 troubleshoot=0 text nofb nomodeset vga=normal boot_option=local ip=${ip}:${next-server}:${gateway}:${netmask} BOOTIF=${mac} ipa-api-url=http://192.0.2.1:6385 ipa-driver-name=pxe_ipmitool coreos.configdrive=0 initrd http://192.0.2.1:8088/ffe9edca-fa5e-45bf-97df-f49a2cce0c92/deploy_ramdisk boot :boot_partition kernel http://192.0.2.1:8088/ffe9edca-fa5e-45bf-97df-f49a2cce0c92/kernel root={{ ROOT }} ro text nofb nomodeset vga=normal initrd http://192.0.2.1:8088/ffe9edca-fa5e-45bf-97df-f49a2cce0c92/ramdisk boot :boot_whole_disk kernel chain.c32 append mbr:{{ DISK_IDENTIFIER }} boot [root at gpu6 tftpboot]# ironic node-list +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ | ffe9edca-fa5e-45bf-97df-f49a2cce0c92 | None | 0e740a33-fbca-4690-a938-980fbe623223 | power on | wait call-back | False | | 1dc404db-0352-4355-ba64-67fae456f12a | None | 9a0c3ba9-5502-4bd7-a7f1-c2109200c19e | power on | wait call-back | False | +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ Thanks, Mikyung ----- Original Message ----- From: "Dan Sneddon" To: "Mikyung Kang" , rdo-list at redhat.com Sent: Tuesday, November 17, 2015 3:58:37 PM Subject: Re: [Rdo-list] [RDO-Manager] deploy On 11/17/2015 12:32 PM, Mikyung Kang wrote: > Hello, > > I'm trying RDO-manager:Liberty version on CentOS7.1. > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html > > After adding /tftpboot/pxelinux.cfg/default [Using IPA] as follows, up to introspection step, it's OK (error=None, finished=True). > > [root at test tftpboot]# cat pxelinux.cfg/default (10.0.1.6 = undercloud IP) > default introspect > label introspect > kernel agent.kernel > append initrd=agent.ramdisk ipa-inspection-callback-url=http://10.0.1.6:5050/v1/continue systemd.journald.forward_to_console=yes > ipappend 3 > > But, when deploying 1 controller and 1 compute, those systems couldn't be booted from right deploy images. > > I can see two instances are spawned (1 controller-node instance and 1 compute-node instance) based on the default heat template. Then, the provisioning state is changed from available to deploying. On this deploying step, I can see deploy images/config are put to each instance's UUID directory @/httpboot/ directory. And then, the provisioning state is changed from [deploying] to [wait call-back]. Even though ipmitool turns on the system, those systems can't find deploy images. > > Actually, I have another dhcp server @other machine. It includes RDO testbeds' MAC and IP. So, I setup RDO testbeds' next-server as RDO undercloud IP @dhcpd.conf. 
Then, overcloud nodes could boot from agent.kernel/ramdisk from undercloud:/tftpboot properly. But, I don't know how overcloud nodes can get deploy/overcloud images. > > If above pxelinux.cfg/default is put as-is @undercloud, agent kernel/ramdisk is loaded again, not from deploy image. Then deploying step can't be proceeded further and then goes to timeout error. If that default file is removed, system is unable to locate tftp configuration. How can I make controller/compute boot from right deploy images? Should I setup something for the httpboot/ipxe? > > Thanks, > Mikyung > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > What is supposed to happen when introspection completes is that the Undercloud will add the MAC address of the newly-discovered system to iptables in order to block DHCP requests from reaching ironic-discovery's dnsmasq. If that doesn't happen, then you get a loop where the discovery image boots instead of the deploy image. Check your iptables and make sure that you see the MAC addresses added to the "discovery" chain, like this: Chain discovery (1 references) target prot opt source destination DROP all -- anywhere anywhere MAC 00:21:BA:17:0D:2B DROP all -- anywhere anywhere MAC 00:3C:A6:BB:68:FC DROP all -- anywhere anywhere MAC 00:92:5D:AE:62:37 Also, make sure that iptables is running, and that you don't have more than one interface attached to the provisioning network on the overcloud nodes. If you do, there is a workaround, but it's cleanest to just make sure you have only one interface attached to the provisioning interface. -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From apevec at gmail.com Tue Nov 17 22:46:28 2015 From: apevec at gmail.com (Alan Pevec) Date: Tue, 17 Nov 2015 23:46:28 +0100 Subject: [Rdo-list] [meeting] RDO meeting (2015-11-11) Message-ID: ============================== #rdo: RDO meeting (2015-11-11) ============================== Meeting started by apevec at 15:00:33 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2015-11-11/rdo.2015-11-11-15.00.log.html . Meeting summary --------------- * roll call (apevec, 15:00:50) * agenda https://etherpad.openstack.org/p/RDO-Packaging (apevec, 15:01:03) * client separate target/repo in CBS? (apevec, 15:03:40) * AGREED: have latest clients in rawhide + separate mitaka target in CBS (number80, 15:14:29) * TODO: inheritance of the mitaka clients target by mitaka target (number80, 15:15:02) * Mitaka test days suggestion (apevec, 15:19:41) * LINK: https://www.redhat.com/archives/rdo-list/2015-November/msg00096.html (apevec, 15:19:57) * FOSDEM RDO Day (apevec, 15:22:34) * ACTION: number80 submit a delorean hands-on for RDO meetup (number80, 15:23:31) * FOSDEM IaaS devroom CFP (apevec, 15:25:36) * LINK: http://community.redhat.com/blog/2015/10/call-for-proposals-fosdem16-virtualization-iaas-devroom/ (apevec, 15:25:46) * loads of py3.5 FTBFS (apevec, 15:28:04) * wait releng to finish py3.5 mass rebuild (apevec, 15:29:19) * LINK: https://fedoraproject.org/wiki/Fails_to_build_from_source (apevec, 15:29:22) * open floor (apevec, 15:29:47) Meeting ended at 15:34:28 UTC. 
Action Items
------------
* number80 submit a delorean hands-on for RDO meetup

Action Items, by person
-----------------------
* number80
  * number80 submit a delorean hands-on for RDO meetup
* **UNASSIGNED**
  * (none)

People Present (lines said)
---------------------------
* apevec (64)
* number80 (33)
* rbowen (28)
* dmsimard (18)
* jruzicka (16)
* zodbot (6)
* trown (5)
* elmiko (5)
* kashyap (4)

Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot

From bderzhavets at hotmail.com  Wed Nov 18 10:00:10 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Wed, 18 Nov 2015 10:00:10 +0000
Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md
In-Reply-To:
References: <56448D6D.90704@redhat.com>, <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com>, , <56463E27.7010909@redhat.com> <56464E7D.1020907@redhat.com>,<564651C2.3070409@redhat.com> , <564A06BC.1030301@redhat.com>,
Message-ID:

An embedded and charset-unspecified text was scrubbed...
Name: disclaimer.txt
URL:
-------------- next part --------------
I would guess that the Kilo page :-
https://github.com/beekhof/osp-ha-deploy/blob/Kilo-RDO7/keepalived/neutron-config.md

systemctl enable openvswitch
systemctl start openvswitch
ovs-vsctl add-br br-int
ovs-vsctl add-br br-eth0
Note: we have seen issues when trying to configure an IP on br-eth0 (especially ARP problems), so it is not recommended.
ovs-vsctl add-port br-eth0 eth0

would be better brought in line with the Liberty page -
https://github.com/beekhof/osp-ha-deploy/blob/master/keepalived/neutron-config.md,
updated 4 days ago by Alessandro Vozza as follows:

Assuming eth0 is your interface attached to the external network, create two files in /etc/sysconfig/network-scripts/ as follows (change MTU if you need):

cat << EOF > /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-eth0
BOOTPROTO=none
VLAN=yes
MTU="9000"
NM_CONTROLLED=no
EOF

cat << EOF > /etc/sysconfig/network-scripts/ifcfg-br-eth0
DEVICE=br-eth0
DEVICETYPE=ovs
OVSBOOTPROTO=none
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
MTU="9000"
NM_CONTROLLED=no
EOF

In other words, the patch https://github.com/beekhof/osp-ha-deploy/commit/b2e01e86ca93cfad9ad01d533b386b4c9607c60d should be applied to the Kilo OVS bridging configuration. I believe so.

Thank you very much.
Boris

________________________________________
From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets
Sent: Monday, November 16, 2015 3:10 PM
To: Dan Sneddon; rdo-list at redhat.com
Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md

________________________________________
From: Dan Sneddon
Sent: Monday, November 16, 2015 11:39 AM
To: Boris Derzhavets; rdo-list at redhat.com
Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md

Answers inline...

> Thank you so much for your support. In meantime I can manage HAProxy/Keepalived 3 VMs Controller and one Compute VM ( nested kvm enabled ) via Nova && Neutron CLI with no problems (RDO Liberty). Dashboard is extremely slow ( i7 4790 CPU, 32 GB RAM). I still believe that problem is 4 Core desktop CPUs limitations. As soon as 3 Controllers get in sync and start working ( 4 VCPUs each one ,4 GB RAM ) graphics slows down immediately. Testing RDO Manager on desktops is hardly possible.
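On the nested-KVM point just above, a quick sanity check that nesting is really active on the virtualization host (this assumes an Intel CPU; on AMD the module and option are kvm_amd):

cat /sys/module/kvm_intel/parameters/nested
# should print Y (or 1); if not, enable it and reload the module with no VMs running:
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm_intel.conf
modprobe -r kvm_intel && modprobe kvm_intel

Without nesting, instances launched inside the nested compute node fall back to plain QEMU emulation, which is far slower than KVM.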
> On 11/14/2015 12:35 AM, Boris Derzhavets wrote: > > > ________________________________________ > From: Dan Sneddon > Sent: Friday, November 13, 2015 4:10 PM > To: Boris Derzhavets; rdo-list at redhat.com > Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md > > On 11/13/2015 12:56 PM, Dan Sneddon wrote: >> Hi Boris, >> >> Let's keep this on-list, there may be others who are having similar >> issues who could find this discussion useful. >> >> Answers inline... >> >> On 11/13/2015 12:17 PM, Boris Derzhavets wrote: >>> >>> >>> ________________________________________ >>> From: Dan Sneddon >>> Sent: Friday, November 13, 2015 2:46 PM >>> To: Boris Derzhavets; Javier Pena >>> Cc: rdo-list at redhat.com >>> Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >>> >>> On 11/13/2015 11:38 AM, Boris Derzhavets wrote: >>>> I understand that in usual situation , creating ifcfg-br-ex and ifcfg-eth2 ( as OVS bridge and OVS port) , >>>> `service network restart` should be run to make eth2 (no IP) OVS port of br-ex (any IP which belongs ext net and is available) >>>> What bad does NetworkManager when external network provider is used ? >>>> Disabling it, I break routing via eth0's interfaces of cluster nodes to 10.10.10.0/24 ( ext net), >>>> so nothing is supposed to work :- >>>> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >>>> http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html >>>> Either I am missing something here. >>>> ________________________________________ >>>> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets >>>> Sent: Friday, November 13, 2015 1:09 PM >>>> To: Javier Pena >>>> Cc: rdo-list at redhat.com >>>> Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md >>>> >>>> Working on this task I was able to build 3 node HAProxy/Keepalived Controller's cluster , create compute node , launch CirrOS VM, >>>> However, I cannot ping floating IP of VM running on compute ( total 4 CentOS 7.1 VMs, nested kvm enabled ) >>>> Looks like provider external networks doesn't work for me. >>>> >>>> But , to have eth0 without IP (due to `ovs-vsctl add-port br-eth0 eth0 ) still allowing to ping 10.10.10.1, >>>> I need NetworkManager active, rather then network.service >>>> >>>> [root at hacontroller1 network-scripts]# systemctl status NetworkManager >>>> NetworkManager.service - Network Manager >>>> Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled) >>>> Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago >>>> Main PID: 808 (NetworkManager) >>>> CGroup: /system.slice/NetworkManager.service >>>> ?? 808 /usr/sbin/NetworkManager --no-daemon >>>> ??2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0... >>>> >>>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: NetworkManager state is n...L >>>> Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s. >>>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: (eth0): Activation: succe.... 
>>>> Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: startup complete >>>> >>>> [root at hacontroller1 network-scripts]# systemctl status network.service >>>> network.service - LSB: Bring up/down networking >>>> Loaded: loaded (/etc/rc.d/init.d/network) >>>> Active: inactive (dead) >>>> >>>> [root at hacontroller1 network-scripts]# cat ifcfg-eth0 >>>> TYPE="Ethernet" >>>> BOOTPROTO="static" >>>> NAME="eth0" >>>> DEVICE=eth0 >>>> ONBOOT="yes" >>>> >>>> [root at hacontroller1 network-scripts]# ping -c 3 10.10.10.1 >>>> PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data. >>>> 64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms >>>> 64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms >>>> 64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms >>>> >>>> --- 10.10.10.1 ping statistics --- >>>> 3 packets transmitted, 3 received, 0% packet loss, time 1999ms >>>> rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms >>>> >>>> If I disable NetworkManager and enable network this feature will be lost. Eth0 would have to have static IP or dhcp lease, >>>> to provide route to 10.10.10.0/24. >>>> >>>> Thank you. >>>> Boris. >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> >>> >>> OK, a few things here. First of all, you don't actually need to have an >>> IP address on the host system to use a VLAN or interface as an external >>> provider network. The Neutron router will have an IP on the right >>> network, and within its namespace will be able to reach the 10.10.10.x >>> network. >>> >>>> It looks to me like NetworkManager is running dhclient for eth0, even >>>> though you have BOOTPROTO="static". This is causing an IP address to be >>>> added to eth0, so you are able to ping 10.10.10.x from the host. When >>>> you turn off NetworkManager, this unexpected behavior goes away, *but >>>> you should still be able to use provider networks*. >>> >>> Here I am quoting Lars Kellogg Stedman >>> http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/ >>> The bottom statement in blog post above states :- >>> "This assumes that eth1 is connected to a network using 10.1.0.0/24 and eth2 is connected to a network using 10.2.0.0/24, and that each network has a gateway sitting at the corresponding .1 address." >> >> Right, what Lars means is that eth1 is physically connected to a >> network with the 10.1.0.0/24 subnet, and eth2 is physically connected >> to a network with the 10.2.0.0/24 subnet. >> >> You might notice that in Lars's instructions, he never puts a host IP >> on either interface. >> >>>> Try creating a Neutron router with an IP on 10.10.10.x, and then you >>>> should be able to ping that network from the router namespace. >>> >>> " When I issue `neutron router-creater --ha True --tenant-id xxxxxx RouterHA` , i cannot specify router's >>> IP " >> >> Let me refer you to this page, which explains the basics of creating >> and managing Neutron networks: >> >> http://docs.openstack.org/user-guide/cli_create_and_manage_networks.html >> >> You will have to create an external network, which you will associate >> with a physical network via a bridge mapping. The default bridge >> mapping for br-ex is datacentre:br-ex. >> >> Using the name of the physical network "datacentre", we can create an > > 1. Javier is using external network provider ( and so did I , following him) > > #. 
/root/keystonerc_admin
> # neutron net-create public --provider:network_type flat --provider:physical_network physnet1 --router:external
> # neutron subnet-create --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.150 --disable-dhcp --name public_subnet public 10.10.10.0/24

That looks like it would be OK if physnet1 is a flat connection (native VLAN on the interface). If you want to create a provider network on, for example, VLAN 104, you can use this command:

neutron net-create --provider:physical_network physnet1 --provider:network_type vlan --provider:segmentation_id 104 --router:external public

Your subnet-create statement looks correct.

> HA Neutron router and tenant's subnet have been created.
> Then the interface to the tenant's network was activated, as well as the gateway to public.
> Security rules were implemented as usual.
> A cloud VM was launched; it obtained a private IP and completed cloud-init OK.
> Then I assigned a FIP from public to the cloud VM; it should be pingable from the F23 Virtualization
> Host
>
> 2. All traffic to/from the external network flows through br-int when provider external networks are involved. No br-ex is needed.
> When Javier does `ovs-vsctl add-port br-eth0 eth0`, eth0 (which is inside the VM running the Controller node)
> should be on 10.10.10.X/24. It doesn't happen when the network service is active (and NM disabled).
> In this case eth0 doesn't have any kind of IP assigned to provide a route to Libvirt's subnet 10.10.10.X/24 ( pre created by myself)

It's OK that eth0 doesn't have any IP address or routes assigned. The IP gets assigned to the Neutron router, and the routing table exists only inside of the router namespace. Once you have created the router, you will see a "qrouter-XXXX" entry for the router when you run the command:

sudo ip netns list

Copy the name of the namespace that starts with "qrouter" (you might have more than one if you have more than one Neutron router), then try pinging the external network from inside the namespace:

sudo ip netns exec qrouter-c333bd80-ccc3-43ba-99e4-8df471ed8b9e ping 10.10.10.1

> In the meantime I am under the impression that the OVS bridge br-eth0 and OVS port eth0
> would work when the IP is assigned to the port eth0, not to the bridge. OVS release >= 2.3.1 seems to allow that.
> Tested here (VM's case) :- http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html
> If neither br-eth0 nor eth0 has an IP, then packets won't be forwarded to the external net

For Provider networks, you shouldn't have to assign an IP address to eth0 or to the bridge. The IP address lives on the router inside of the router namespace.
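To see exactly what the router namespace holds, here is a minimal sketch (assuming the HA router created earlier is named RouterHA and reusing the example namespace ID above; with an HA router, run this on the controller currently hosting the active instance):

sudo ip netns list
sudo ip netns exec qrouter-c333bd80-ccc3-43ba-99e4-8df471ed8b9e ip addr show   # the qg-* interface should carry an address from 10.10.10.0/24
sudo ip netns exec qrouter-c333bd80-ccc3-43ba-99e4-8df471ed8b9e ip route       # default route should point at 10.10.10.1
neutron router-port-list RouterHA                                              # lists the router's ports and their fixed IPs

If the qg-* interface has no address from the external subnet, the router gateway was probably never set (neutron router-gateway-set RouterHA public).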
-- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter > >> external network: >> >> [If the external network is on VLAN 104] >> neutron net-create ext-net --router:external \ >> --provider:physical_network datacentre \ >> --provider:network_type vlan \ >> --provider:segmentation_id 104 >> >> [If the external net is on the native VLAN (flat)] >> neutron net-create ext-net --router:external \ >> --provider:physical_network datacentre \ >> --provider:network_type flat >> >> Next, you must create a subnet for the network, including the range of >> floating IPs (allocation pool): >> >> neutron subnet-create --name ext-subnet \ >> --enable_dhcp=False \ >> --allocation-pool start=10.10.10.50,end=10.10.10.100 \ >> --gateway 10.10.10.1 \ >> ext-net 10.10.10.0/24 >> >> Next, you have to create a router: >> >> neutron router-create ext-router >> >> You then add an interface to the router. Since Neutron will assign the >> first address in the subnet to the router by default (10.10.10.1), you >> will want to first create a port with a specific IP, then assign that >> port to the router. >> >> neutron port-create ext-net --fixed-ip ip_address=10.10.10.254 >> >> You will need to note the UUID of the newly created port. You can also >> see this with "neutron port-list". Now, create the router interface >> with the port you just created: >> >> neutron router-interface-add ext-router port= >> >>>> If you want to be able to ping 10.10.10.x from the host, then you >>>> should put either a static IP or DHCP on the bridge, not on eth0. This >>>> should work whether you are running NetworkManager or network.service. >>> >>> "I do can ping 10.0.0.x from F23 KVM Server (running cluster's VMs as Controller's nodes), >>> it's just usual non-default libvirt subnet,matching exactly external network creating in Javier's "Howto". >>> It was created via `virsh net-define openstackvms.xml`, but I cannot ping FIPs belong to >>> cloud VM on this subnet." >> >> I think you will have better luck once you create the external network >> and router. You can then use namespaces to ping the network from the >> router: >> >> First, obtain the qrouter- from the list of namespaces: >> >> sudo ip netns list >> >> Then, find the qrouter- and ping from there: >> >> ip netns exec qrouter-XXXX-XXXX-XXX-XXX ping 10.10.10.1 >> > > One more quick thing to note: > > In order to use floating IPs, you will also have to attach the external > router to the tenant networks where floating IPs will be used. 
> > When you go through the steps to create a tenant network, also attach > it to the router: > > 1) Create the network: > > neutron net-create tenant-net-1 > > 2) Create the subnet: > > neutron subnet-create --name tenant-subnet-1 tenant-net-1 172.21.0.0/22 > > 3) Attach the external router to the network: > > neutron router-interface-add tenant-router-1 subnet=tenant-subnet-1 > > (since no specific port was given in the router-interface-add command, > Neutron will automatically choose the first address in the given > subnet, so 172.21.0.1 in this example) > > -- > Dan Sneddon | Principal OpenStack Engineer > dsneddon at redhat.com | redhat.com/openstack > 650.254.4025 | dsneddon:irc @dxs:twitter > _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com From chkumar246 at gmail.com Thu Nov 19 05:28:41 2015 From: chkumar246 at gmail.com (Chandan kumar) Date: Thu, 19 Nov 2015 10:58:41 +0530 Subject: [Rdo-list] RDO Bugs stats on 2015-11-19 Message-ID: # RDO Bugs on 2015-11-19 This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at . To report a new bug against RDO, go to: ## Summary - Open (NEW, ASSIGNED, ON_DEV): 338 - Fixed (MODIFIED, POST, ON_QA): 201 ## Number of open bugs by component diskimage-builder [ 4] ++ distribution [ 14] ++++++++++ dnsmasq [ 1] Documentation [ 4] ++ instack [ 4] ++ instack-undercloud [ 28] ++++++++++++++++++++ iproute [ 1] openstack-ceilometer [ 5] +++ openstack-cinder [ 14] ++++++++++ openstack-foreman-inst... [ 2] + openstack-glance [ 2] + openstack-heat [ 3] ++ openstack-horizon [ 2] + openstack-ironic [ 1] openstack-ironic-disco... [ 2] + openstack-keystone [ 8] +++++ openstack-manila [ 12] ++++++++ openstack-neutron [ 11] +++++++ openstack-nova [ 19] +++++++++++++ openstack-packstack [ 56] ++++++++++++++++++++++++++++++++++++++++ openstack-puppet-modules [ 11] +++++++ openstack-selinux [ 10] +++++++ openstack-swift [ 3] ++ openstack-tripleo [ 26] ++++++++++++++++++ openstack-tripleo-heat... [ 5] +++ openstack-tripleo-imag... [ 2] + openstack-tuskar [ 3] ++ openstack-utils [ 4] ++ openvswitch [ 1] Package Review [ 6] ++++ python-glanceclient [ 2] + python-keystonemiddleware [ 1] python-neutronclient [ 3] ++ python-novaclient [ 1] python-openstackclient [ 5] +++ python-oslo-config [ 2] + rdo-manager [ 49] +++++++++++++++++++++++++++++++++++ rdo-manager-cli [ 6] ++++ rdopkg [ 1] RFEs [ 3] ++ tempest [ 1] ## Open bugs This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed. 
(338 bugs) ### diskimage-builder (4 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version ### distribution (14 bugs) [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO: Packages needed to support AMQP1.0 [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-11-17 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages [1266923 ] http://bugzilla.redhat.com/1266923 (NEW) Component: distribution Last change: 2015-10-07 Summary: RDO's hdf5 rpm/yum dependencies conflicts [1271169 ] http://bugzilla.redhat.com/1271169 (NEW) Component: distribution Last change: 2015-10-13 Summary: [doc] virtual environment setup [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1275608 ] http://bugzilla.redhat.com/1275608 (NEW) Component: distribution Last change: 2015-10-27 Summary: EOL'ed rpm file URL not up to date [1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution Last change: 2015-09-16 Summary: Memcached not built with SASL support [1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### Documentation (4 bugs) [1272111 ] http://bugzilla.redhat.com/1272111 (NEW) Component: Documentation Last change: 2015-10-15 Summary: RFE : document how to access horizon in RDO manager VIRT setup [1272108 ] 
http://bugzilla.redhat.com/1272108 (NEW) Component: Documentation Last change: 2015-10-15 Summary: [DOC] External network should be documents in RDO manager installation [1271793 ] http://bugzilla.redhat.com/1271793 (NEW) Component: Documentation Last change: 2015-10-14 Summary: rdo-manager doc has incomplete /etc/hosts configuration [1271888 ] http://bugzilla.redhat.com/1271888 (NEW) Component: Documentation Last change: 2015-10-15 Summary: step required to build images for overcloud ### instack (4 bugs) [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (28 bugs) [1266451 ] http://bugzilla.redhat.com/1266451 (NEW) Component: instack-undercloud Last change: 2015-09-30 Summary: instack-undercloud fails to setup seed vm, parse error while creating ssh key [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build- images script [1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout [1271200 ] http://bugzilla.redhat.com/1271200 (ASSIGNED) Component: instack-undercloud Last change: 2015-10-20 Summary: Overcloud images contain Kilo repos [1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2015-09-23 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six." 
[1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1270585 ] http://bugzilla.redhat.com/1270585 (NEW) Component: instack-undercloud Last change: 2015-10-19 Summary: instack isntallation fails with parse error: Invalid string liberty on CentOS [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1266101 ] http://bugzilla.redhat.com/1266101 (NEW) Component: instack-undercloud Last change: 2015-09-29 Summary: instack-virt-setup fails on CentOS7 [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. (4048492 < 4G)" [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. See log for details.', 'os-refresh-config')" [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. 
[1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-08-20 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (5 bugs) [1219372 ] http://bugzilla.redhat.com/1219372 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Info about 'severity' field changes is not displayed via alarm-history call [1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions [1231326 ] http://bugzilla.redhat.com/1231326 (NEW) Component: openstack-ceilometer Last change: 2015-06-12 Summary: kafka publisher requires kafka-python library [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: python-redis is not installed with packstack allinone [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Wrong alarms order on 'severity' field ### openstack-cinder (14 bugs) [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1268182 ] http://bugzilla.redhat.com/1268182 (NEW) Component: openstack-cinder Last change: 2015-10-02 Summary: cinder spontaneously sets instance root device to 'available' [1206864 ] 
http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1229551 ] http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type" [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage ### openstack-foreman-installer (2 bugs) [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance- common: python-glance ### openstack-heat (3 bugs) [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null ### openstack-horizon (2 bugs) [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable [1275656 ] http://bugzilla.redhat.com/1275656 (NEW) Component: openstack-horizon Last change: 2015-10-28 Summary: FontAwesome lib bad path ### openstack-ironic (1 bug) [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last 
change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. (HTTP 409) ### openstack-ironic-discoverd (2 bugs) [1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-ironic-discoverd Last change: 2015-04-09 Summary: Introspection times out after more than an hour [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (8 bugs) [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: wrong log directories in /usr/share/keystone/wsgi- keystone.conf [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-10-26 Summary: keystone-all process reaches 100% CPU consumption [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-06-01 Summary: keystone: add token flush cronjob script to keystone package [1280530 ] http://bugzilla.redhat.com/1280530 (NEW) Component: openstack-keystone Last change: 2015-11-12 Summary: Fernet tokens cannot read key files with SELInuxz enabeld [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non- existent class ### openstack-manila (12 bugs) [1278918 ] http://bugzilla.redhat.com/1278918 (NEW) Component: openstack-manila Last change: 2015-11-06 Summary: manila-api fails to start without updates from upstream stable/liberty [1272957 ] http://bugzilla.redhat.com/1272957 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver: same volumes are re-used with vol mapped layout after restarting manila services [1277787 ] http://bugzilla.redhat.com/1277787 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: Glusterfs_driver: Export location for Glusterfs NFS- Ganesha is incorrect [1271138 ] http://bugzilla.redhat.com/1271138 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: puppet module for manila should include service type - shareV2 [1272960 ] http://bugzilla.redhat.com/1272960 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Glusterfs NFS-Ganesha share's export location should be uniform for both nfsv3 & nfsv4 protocols [1277792 ] http://bugzilla.redhat.com/1277792 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: glusterfs_driver: Access-deny for glusterfs driver should be dynamic [1278919 ] http://bugzilla.redhat.com/1278919 (NEW) Component: openstack-manila Last change: 2015-11-12 Summary: AvailabilityZoneFilter is not working in manila- scheduler [1272962 ] http://bugzilla.redhat.com/1272962 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: 
Attempt to create share fails ungracefully when backend gluster volumes aren't exported [1272970 ] http://bugzilla.redhat.com/1272970 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_native: cannot connect via SSH using password authentication to multiple gluster clusters with different passwords [1272968 ] http://bugzilla.redhat.com/1272968 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs vol based layout: Deleting a share created from snapshot should also delete its backend gluster volume [1272954 ] http://bugzilla.redhat.com/1272954 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterFS_native_driver: snapshot delete doesn't delete snapshot entries that are in error state [1272958 ] http://bugzilla.redhat.com/1272958 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver - vol based layout: share size may be misleading ### openstack-neutron (11 bugs) [1282403 ] http://bugzilla.redhat.com/1282403 (NEW) Component: openstack-neutron Last change: 2015-11-16 Summary: Errors when running tempest.api.network.test_ports with IPAM reference driver enabled [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1269610 ] http://bugzilla.redhat.com/1269610 (ASSIGNED) Component: openstack-neutron Last change: 2015-11-05 Summary: Overcloud deployment fails - openvswitch agent is not running and nova instances end up in error state [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". 
[1266381 ] http://bugzilla.redhat.com/1266381 (NEW) Component: openstack-neutron Last change: 2015-11-12 Summary: OpenStack Liberty QoS feature is not working on EL7 as is need MySQL-python-1.2.5 [1281308 ] http://bugzilla.redhat.com/1281308 (NEW) Component: openstack-neutron Last change: 2015-11-12 Summary: QoS policy is not enforced when using a previously used port [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks [1280258 ] http://bugzilla.redhat.com/1280258 (NEW) Component: openstack-neutron Last change: 2015-11-11 Summary: tenants seem like they are able to detach admin enforced QoS policies from ports or networks [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https ### openstack-nova (19 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: v4-fixed-ip= not working with juno nova networking [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-05-06 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Kilo assigning ipv6 address, even though its disabled. 
[1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: horizon console uses http when horizon is set to use ssl [1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2015-05-22 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: novnc init script doesnt write to log [1271033 ] http://bugzilla.redhat.com/1271033 (NEW) Component: openstack-nova Last change: 2015-10-19 Summary: nova.conf.sample is out of date [1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1278808 ] http://bugzilla.redhat.com/1278808 (NEW) Component: openstack-nova Last change: 2015-11-06 Summary: Guest fails to use more than 1 vCPU with smpboot: do_boot_cpu failed(-1) to wakeup [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Nova - db connection string present on compute nodes [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: nova object store allow get object after date exires [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: nova: fail to edit project quota with DataError from nova [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-10-17 Summary: Ensure translations are installed correctly and picked up at runtime [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Nova AVC messages [1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2015-09-11 Summary: logrotate should copytruncate to avoid oepnstack logging to deleted files [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Installation of openstack-nova-compute fails on PowerKVM ### openstack-packstack (56 bugs) [1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any tenant [1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail [1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel [1271246 ] http://bugzilla.redhat.com/1271246 (NEW) Component: openstack-packstack Last change: 2015-10-13 Summary: packstack failed to start nova.api [1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest [1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-11-06 Summary: Error: 
service-update is not currently supported by the keystone sql driver [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network- scripts/ifcfg-br-ex [1275803 ] http://bugzilla.redhat.com/1275803 (NEW) Component: openstack-packstack Last change: 2015-10-27 Summary: packstack --allinone fails on Fedora 22-3 during _keystone.pp [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack [1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails [1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 Summary: swift: Admin user does not have permissions to see containers created by glance service [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1257352 ] http://bugzilla.redhat.com/1257352 (NEW) Component: openstack-packstack Last change: 2015-09-22 Summary: nss.load missing from packstack, httpd unable to start. [1232455 ] http://bugzilla.redhat.com/1232455 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Errors install kilo on fedora21 [1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config [1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands' [1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail [1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can [1269535 ] http://bugzilla.redhat.com/1269535 (NEW) Component: openstack-packstack Last change: 2015-10-07 Summary: packstack script does not test to see if the rc files *were* created. 
[1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start [1266028 ] http://bugzilla.redhat.com/1266028 (NEW) Component: openstack-packstack Last change: 2015-10-08 Summary: Packstack should use pymysql database driver since Liberty [1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance [1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1266196 ] http://bugzilla.redhat.com/1266196 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Packstack Fails on prescript.pp with "undefined method 'unsafe_load_file' for Psych:Module" [1270770 ] http://bugzilla.redhat.com/1270770 (NEW) Component: openstack-packstack Last change: 2015-10-12 Summary: Packstack generated CONFIG_MANILA_SERVICE_IMAGE_LOCATION points to a dropbox link [1279642 ] http://bugzilla.redhat.com/1279642 (NEW) Component: openstack-packstack Last change: 2015-11-09 Summary: Packstack run fails when running with DEMO [1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest [1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred at during setup Ironic via packstack. Invalid parameter rabbit_user [1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd] [1279641 ] http://bugzilla.redhat.com/1279641 (NEW) Component: openstack-packstack Last change: 2015-11-09 Summary: Packstack run does not install keystoneauth1 [1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-11-09 Summary: Packstack --allinone fails while starting HTTPD service [1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp [1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tm p/packstack/b77f37620d9f4794b6f38730442962b6/manifests/ xxx.xxx.xxx.xxx_swift.pp:90 [1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found [1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image [1276277 ] http://bugzilla.redhat.com/1276277 (NEW) Component: openstack-packstack Last change: 2015-10-31 Summary: packstack --allinone fails on CentOS 7 x86_64 1503-01 [1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting [1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend. 
[1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced [1265816 ] http://bugzilla.redhat.com/1265816 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Manila Puppet Module Expects Glance Endpoint to Be Available for Upload of Service Image [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service [1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service" [1264843 ] http://bugzilla.redhat.com/1264843 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list iptables-ipv6' returned 1: Error: No matching Packages to list [1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack,when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br- eno50:eno50,encounters an error?ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp ?. [1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on [1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters [1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/17 2.16.32.71_mariadb.pp:28 on node [1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in " failed " status when CONFIG_KEYSTONE_SERVICE_NAME=httpd [1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server] ### openstack-puppet-modules (11 bugs) [1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start [1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering [1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm [1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API. 
[1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled [1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication [1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing ### openstack-selinux (10 bugs) [1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied [1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotF ound(config_file=paste_config_value) [1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux [1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages [1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000 [1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal; [1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC [1268124 ] http://bugzilla.redhat.com/1268124 (NEW) Component: openstack-selinux Last change: 2015-10-29 Summary: Nova rootwrap-daemon requires a selinux exception [1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux ### openstack-swift (3 bugs) [1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files [1274308 ] http://bugzilla.redhat.com/1274308 (NEW) Component: openstack-swift Last change: 2015-10-22 Summary: Consistently occurring swift related failures in RDO with a HA deployment [1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server 
configurations ### openstack-tripleo (26 bugs) [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf [1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos [1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic in to TripleO [1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever [1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix [1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted [1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-10-09 Summary: Node registration fails silently if instackenv.json is badly formatted [1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful [1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead [1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse [1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect [1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes [1277980 ] http://bugzilla.redhat.com/1277980 (NEW) Component: openstack-tripleo Last change: 2015-11-04 Summary: missing python-proliantutils [1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created [1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API [1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar [1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials [1056114 
] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI [1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack- build-images [1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter [1277990 ] http://bugzilla.redhat.com/1277990 (NEW) Component: openstack-tripleo Last change: 2015-11-04 Summary: openstack-ironic-inspector-dnsmasq.service: failed to start during undercloud installation [1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6 [1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints [1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack" ### openstack-tripleo-heat-templates (5 bugs) [1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template [1266027 ] http://bugzilla.redhat.com/1266027 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-08 Summary: TripleO should use pymysql database driver since Liberty [1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1 [1271411 ] http://bugzilla.redhat.com/1271411 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-13 Summary: Unable to deploy internal api endpoint for keystone on a different network to admin api [1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template ### openstack-tripleo-image-elements (2 bugs) [1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux [1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist ### openstack-tuskar (3 bugs) [1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails [1229493 ] http://bugzilla.redhat.com/1229493 (ASSIGNED) Component: openstack-tuskar Last change: 2015-07-27 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates [1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state ### openstack-utils (4 bugs) [1211989 ] http://bugzilla.redhat.com/1211989 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status shows 
'disabled on boot' for the mysqld service [1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: Can't enable OpenStack service after openstack-service disable [1270615 ] http://bugzilla.redhat.com/1270615 (NEW) Component: openstack-utils Last change: 2015-10-11 Summary: openstack status still checking mysql not mariadb [1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-service tries to restart neutron-ovs- cleanup.service ### openvswitch (1 bug) [1209003 ] http://bugzilla.redhat.com/1209003 (ASSIGNED) Component: openvswitch Last change: 2015-08-18 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity ### Package Review (6 bugs) [1283295 ] http://bugzilla.redhat.com/1283295 (NEW) Component: Package Review Last change: 2015-11-18 Summary: Review Request: CloudKitty - Rating as a Service [1272524 ] http://bugzilla.redhat.com/1272524 (ASSIGNED) Component: Package Review Last change: 2015-11-18 Summary: Review Request: Mistral - workflow Service for OpenStack cloud [1282912 ] http://bugzilla.redhat.com/1282912 (NEW) Component: Package Review Last change: 2015-11-17 Summary: Review Request: Python-kafka - Python client for Apache Kafka [1268372 ] http://bugzilla.redhat.com/1268372 (ASSIGNED) Component: Package Review Last change: 2015-11-18 Summary: Review Request: openstack-app-catalog-ui - openstack horizon plugin for the openstack app-catalog [1272513 ] http://bugzilla.redhat.com/1272513 (ASSIGNED) Component: Package Review Last change: 2015-11-05 Summary: Review Request: Murano - is an application catalog for OpenStack [1279513 ] http://bugzilla.redhat.com/1279513 (ASSIGNED) Component: Package Review Last change: 2015-11-13 Summary: New Package: python-dracclient ### python-glanceclient (2 bugs) [1244291 ] http://bugzilla.redhat.com/1244291 (ASSIGNED) Component: python-glanceclient Last change: 2015-10-21 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py [1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0 ### python-keystonemiddleware (1 bug) [1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-10-26 Summary: Rebase python-keystonemiddleware to version 1.3 ### python-neutronclient (3 bugs) [1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility [1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long [1281352 ] http://bugzilla.redhat.com/1281352 (NEW) Component: python-neutronclient Last change: 2015-11-12 Summary: Internal server error when running qos-bandwidth-limit- rule-update as a tenant Edit ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six ### python-openstackclient (5 bugs) [1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2015-04-16 Summary: Usage is not described accurately for 99% of openstack baremetal [1212091 ] http://bugzilla.redhat.com/1212091 (NEW) 
Component: python-openstackclient Last change: 2015-04-28 Summary: `openstack ip floating delete` fails if we specify IP address as input [1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2015-06-13 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp [1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2015-06-04 Summary: Add --user to project list command to filter projects by user [1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2015-07-10 Summary: appdirs requirement ### python-oslo-config (2 bugs) [1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2015-08-28 Summary: oslo_config != oslo.config [1282093 ] http://bugzilla.redhat.com/1282093 (NEW) Component: python-oslo-config Last change: 2015-11-14 Summary: please rebase oslo.log to 1.12.0 ### rdo-manager (49 bugs) [1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment [1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable [1269657 ] http://bugzilla.redhat.com/1269657 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support configuration of default subnet pools [1264526 ] http://bugzilla.redhat.com/1264526 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Deployment of Undercloud [1273574 ] http://bugzilla.redhat.com/1273574 (ASSIGNED) Component: rdo-manager Last change: 2015-10-22 Summary: rdo-manager liberty, delete node is failing [1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built [1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS. 
[1274060 ] http://bugzilla.redhat.com/1274060 (NEW) Component: rdo-manager Last change: 2015-10-23 Summary: [SELinux][RHEL7] openstack-ironic-inspector- dnsmasq.service fails to start with SELinux enabled [1269655 ] http://bugzilla.redhat.com/1269655 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support deploying VPNaaS [1271336 ] http://bugzilla.redhat.com/1271336 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Enable configuration of OVS ARP Responder [1269890 ] http://bugzilla.redhat.com/1269890 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support IPv6 [1270818 ] http://bugzilla.redhat.com/1270818 (NEW) Component: rdo-manager Last change: 2015-10-20 Summary: Two ironic-inspector processes are running on the undercloud, breaking the introspection [1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles [1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP) [1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment [1270370 ] http://bugzilla.redhat.com/1270370 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: [RDO-Manager] bulk introspection moving the nodes from available to manageable too quickly [getting: NodeLocked:] [1269002 ] http://bugzilla.redhat.com/1269002 (ASSIGNED) Component: rdo-manager Last change: 2015-10-14 Summary: instack-undercloud: overcloud HA deployment fails - the rabbitmq doesn't run on the controllers. [1271232 ] http://bugzilla.redhat.com/1271232 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: tempest_lib.exceptions.Conflict: An object with that identifier already exists [1270805 ] http://bugzilla.redhat.com/1270805 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: Glance client returning 'Expected endpoint' [1271335 ] http://bugzilla.redhat.com/1271335 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support explicit configuration of L2 population [1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start [1271317 ] http://bugzilla.redhat.com/1271317 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: instack-virt-setup fails: error Running install- packages install [1272376 ] http://bugzilla.redhat.com/1272376 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: Duplicate nova hypervisors after rebooting compute nodes [1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone [1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection [1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. 
There are not enough hosts available., Code: 500" [1272180 ] http://bugzilla.redhat.com/1272180 (ASSIGNED) Component: rdo-manager Last change: 2015-11-13 Summary: Horizon doesn't load when deploying without pacemaker [1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: rfe: support Keystone HTTPD [1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Read bit set for others for Openstack services directories in /etc [1273121 ] http://bugzilla.redhat.com/1273121 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: openstack help returns errors [1270910 ] http://bugzilla.redhat.com/1270910 (ASSIGNED) Component: rdo-manager Last change: 2015-10-15 Summary: IP address from external subnet gets assigned to br-ex when using default single-nic-vlans templates [1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree" [1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon [1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack [1272167 ] http://bugzilla.redhat.com/1272167 (NEW) Component: rdo-manager Last change: 2015-11-16 Summary: [RFE] Support enabling the port security extension [1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment. [1269622 ] http://bugzilla.redhat.com/1269622 (NEW) Component: rdo-manager Last change: 2015-11-16 Summary: [RFE] support override of API and RPC worker counts [1271289 ] http://bugzilla.redhat.com/1271289 (NEW) Component: rdo-manager Last change: 2015-11-18 Summary: overcloud-novacompute stuck in spawning state [1269894 ] http://bugzilla.redhat.com/1269894 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Add creation of demo tenant, network and installation of demo images [1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure [1269661 ] http://bugzilla.redhat.com/1269661 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Supporting SR-IOV enabled deployments [1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url" [1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images [1273541 ] http://bugzilla.redhat.com/1273541 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: RDO-Manager needs epel.repo enabled (otherwise undercloud deployment fails.) 
[1271726 ] http://bugzilla.redhat.com/1271726 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: 1 of the overcloud VMs (nova) is stack in spawning state [1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device- mapper* [1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci [1273680 ] http://bugzilla.redhat.com/1273680 (ASSIGNED) Component: rdo-manager Last change: 2015-10-21 Summary: HA overcloud with network isolation deployment fails [1276097 ] http://bugzilla.redhat.com/1276097 (NEW) Component: rdo-manager Last change: 2015-10-31 Summary: dnsmasq-dhcp: DHCPDISCOVER no address available ### rdo-manager-cli (6 bugs) [1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step [1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the ouptut of openstack management plan show --long command is not readable [1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value [1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL) [1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command [1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling ### rdopkg (1 bug) [1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download ### RFEs (3 bugs) [1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot [1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool [1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: RFEs Last change: 2015-04-30 Summary: IPMI driver for Ironic should support RAID for operating system/root parition ### tempest (1 bug) [1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo- manager ## Fixed bugs This is a list of "fixed" bugs by component. A "fixed" bug is fixed state MODIFIED, POST, ON_QA and has been fixed. You can help out by testing the fix to make sure it works as intended. 
(201 bugs) ### diskimage-builder (1 bug) [1228761 ] http://bugzilla.redhat.com/1228761 (MODIFIED) Component: diskimage-builder Last change: 2015-09-23 Summary: DIB_YUM_REPO_CONF points to two files and that breaks imagebuilding ### distribution (6 bugs) [1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack- neutron-*aas [1265690 ] http://bugzilla.redhat.com/1265690 (ON_QA) Component: distribution Last change: 2015-09-28 Summary: Update python-networkx to 1.10 [1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: update el6 icehouse kombu packages for improved performance [1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api [1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr [1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO ### instack-undercloud (2 bugs) [1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six" [1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined ### openstack-ceilometer (8 bugs) [1265708 ] http://bugzilla.redhat.com/1265708 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-05 Summary: Ceilometer requires pymongo>=3.0.2 [1214928 ] http://bugzilla.redhat.com/1214928 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-15 Summary: package ceilometermiddleware missing [1265721 ] http://bugzilla.redhat.com/1265721 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-15 Summary: FIle /etc/ceilometer/meters.yaml missing [1263839 ] http://bugzilla.redhat.com/1263839 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-05 Summary: openstack-ceilometer should requires python-oslo-policy in kilo [1265746 ] http://bugzilla.redhat.com/1265746 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-05 Summary: Options 'disable_non_metric_meters' and 'meter_definitions_cfg_file' are missing from ceilometer.conf [1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency [1271002 ] http://bugzilla.redhat.com/1271002 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-15 Summary: Ceilometer dbsync failing during HA deployment [1265818 ] http://bugzilla.redhat.com/1265818 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-05 Summary: ceilometer polling agent does not start ### openstack-cinder (5 bugs) [1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0] [1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder [1081022 ] 
http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO) [994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo] [1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume) ### openstack-glance (5 bugs) [1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests [1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue [1278962 ] http://bugzilla.redhat.com/1278962 (ON_QA) Component: openstack-glance Last change: 2015-11-13 Summary: python-cryptography requires pyasn1>=0.1.8 but only 0.1.6 is available in Centos [1268146 ] http://bugzilla.redhat.com/1268146 (ON_QA) Component: openstack-glance Last change: 2015-10-02 Summary: openstack-glance-registry will not start: missing systemd dependency [1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files ### openstack-heat (3 bugs) [1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack- heat [1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE [1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: missing dependency in Heat delorean build ### openstack-horizon (1 bug) [1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing ### openstack-ironic-discoverd (1 bug) [1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-keystone (1 bug) [1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-11-17 Summary: file templated catalogs do not work in protocol v3 ### openstack-neutron (15 bugs) [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2015-10-26 Summary: neutron should not specify signing_dir in neutron- dist.conf [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials 
[1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. neutron-db-manage trying to import systemd [1281920 ] http://bugzilla.redhat.com/1281920 (POST) Component: openstack-neutron Last change: 2015-11-16 Summary: neutron-server will not start: fails with pbr version issue [1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev [1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack- neutron-openvswitch installed [1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service [1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id' [1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini [1270325 ] http://bugzilla.redhat.com/1270325 (MODIFIED) Component: openstack-neutron Last change: 2015-10-19 Summary: neutron-ovs-cleanup fails to start with bad path to ovs plugin configuration ### openstack-nova (5 bugs) [1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2014-06-03 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail [1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2015-05-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all [1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2015-05-05 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options [1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2015-04-14 Summary: openstack-nova-compute fails to start because python- psutil is missing after installing with packstack [958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence. ### openstack-packstack (61 bugs) [1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db. 
[1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error [1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/ manifests/192.168.122.82_api_nova.pp:41:3 [976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-10-07 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local) [1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed [1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled [964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user [1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Change packstack to use openstack-puppet-modules [1269158 ] http://bugzilla.redhat.com/1269158 (POST) Component: openstack-packstack Last change: 2015-10-19 Summary: Sahara configuration should be affected by heat availability (broken by default right now) [1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: openstack-packstack Last change: 2015-07-29 Summary: Cannot start nova-network on juno - Centos7 [1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher [1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email [1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils [1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes [1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error [958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails [1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. 
[1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack [1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in [1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning [979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules [1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack icehouse doesn't install anything because of repo [1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non- localhost connections to 'guest' user [1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5 [957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios [995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: support setting up apache to serve keystone requests [1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1 [1259354 ] http://bugzilla.redhat.com/1259354 (MODIFIED) Component: openstack-packstack Last change: 2015-11-10 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error [990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rdo release RPM not installed on all fedora hosts [1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required [1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"? 
[1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre [1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed [1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5 [1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl [1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp [1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value [1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack requires 2 runs to install ceilometer [1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available [1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface" [1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive [1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2014-04-23 Summary: packstack configures br-ex to use gateway ip [1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/N ova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[ net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required [1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added [1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond [1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance [1265661 ] http://bugzilla.redhat.com/1265661 (POST) Component: openstack-packstack Last change: 2015-09-23 Summary: Packstack does not install Sahara services (RDO Liberty) [1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-10-23 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7 [974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) 
Component: openstack-packstack Last change: 2015-11-12 Summary: please give greater control over use of EPEL [1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl [1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default [1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent [1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface [1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL [1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32 [991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted [1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7) [1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels [1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution ### openstack-puppet-modules (19 bugs) [1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: cinder modules require glance installed [1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables- services [1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure" [1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support [1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: explicit check for pymongo is incorrect [1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors [1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation [1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD configuration breaks GSSAPI [1270957 ] http://bugzilla.redhat.com/1270957 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-10-13 Summary: Undercloud install fails on Error: Could not find class ::ironic::inspector 
for instack on node instack [1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by packstack for demo tenant is wrongly marked as external [1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: Could not find command 'restorecon' [1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator [1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno [1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6 [1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: netns.py syntax error [1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance [1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance [1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig /network-scripts/ifcfg-br-{int,tun} [1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp does not ensure iptables-services package installation ### openstack-sahara (1 bug) [1268235 ] http://bugzilla.redhat.com/1268235 (MODIFIED) Component: openstack-sahara Last change: 2015-10-02 Summary: rootwrap filter not included in Sahara RPM ### openstack-selinux (13 bugs) [1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7) [1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2015-06-23 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicO ps fails to launch instance w/ selinux enforcing [1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications [1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors [1219406 ] http://bugzilla.redhat.com/1219406 (MODIFIED) Component: openstack-selinux Last change: 2015-11-06 Summary: Glance over nfs fails due to selinux [1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp [1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances, [1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service [1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 
2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?" [1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux policies set for neutron-dhcp-agent [1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later [1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access [1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors ### openstack-swift (1 bug) [997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages ### openstack-tripleo-heat-templates (2 bugs) [1235508 ] http://bugzilla.redhat.com/1235508 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-09-29 Summary: Package update does not take puppet managed packages into account [1272572 ] http://bugzilla.redhat.com/1272572 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-11-15 Summary: Error: Unable to retrieve volume limit information when accessing System Defaults in Horizon ### openstack-trove (3 bugs) [1278608 ] http://bugzilla.redhat.com/1278608 (MODIFIED) Component: openstack-trove Last change: 2015-11-06 Summary: trove-api fails to start [1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies [1219069 ] http://bugzilla.redhat.com/1219069 (POST) Component: openstack-trove Last change: 2015-11-05 Summary: trove-guestagent systemd unit file uses incorrect path for guest_info ### openstack-tuskar (1 bug) [1222718 ] http://bugzilla.redhat.com/1222718 (ON_QA) Component: openstack-tuskar Last change: 2015-07-06 Summary: MySQL Column is Too Small for Heat Template ### openstack-tuskar-ui (3 bugs) [1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails [1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py [1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path ### openstack-utils (2 bugs) [1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: update openstack-status for rdo-manager [1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status as admin falsely shows zero instances ### python-cinderclient (1 bug) [1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run ### python-django-horizon (3 bugs) [1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) 
Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack- dashboard/static/dashboard/ [1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in openstack-dashboard.conf to show CSS content [1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA) Component: python-django-horizon Last change: 2015-06-24 Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one ### python-glanceclient (2 bugs) [1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock [1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch ### python-heatclient (3 bugs) [1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr [1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO [1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed ### python-keystoneclient (3 bugs) [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs [1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion [971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO] ### python-neutronclient (3 bugs) [1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info() [1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient [1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request ### python-novaclient (1 bug) [947535 ] http://bugzilla.redhat.com/947535 (MODIFIED) Component: python-novaclient Last change: 2015-06-04 Summary: nova commands fail with gnomekeyring IOError ### python-openstackclient (1 bug) [1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2015-03-02 Summary: Rebase python-openstackclient to version 1.0.0 ### python-oslo-config (1 bug) [1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2015-06-04 Summary: oslo.config >=1.2.1 is required for trove-manage ### python-pecan (1 bug) [1265365 ] http://bugzilla.redhat.com/1265365 (MODIFIED) Component: python-pecan Last change: 2015-10-05 Summary: Neutron missing pecan dependency ### python-swiftclient (1 bug) [1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation ### python-tuskarclient (2 bugs) [1209395 ] 
http://bugzilla.redhat.com/1209395 (POST) Component: python-tuskarclient Last change: 2015-06-04 Summary: `tuskar help` is missing a description next to plan- templates [1209431 ] http://bugzilla.redhat.com/1209431 (POST) Component: python-tuskarclient Last change: 2015-06-18 Summary: creating a tuskar plan with the exact name gives the user a traceback ### rdo-manager (8 bugs) [1212351 ] http://bugzilla.redhat.com/1212351 (POST) Component: rdo-manager Last change: 2015-06-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command [1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-15 Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails [1270033 ] http://bugzilla.redhat.com/1270033 (POST) Component: rdo-manager Last change: 2015-10-14 Summary: [RDO-Manager] Node inspection fails when changing the default 'inspection_iprange' value in undecloud.conf. [1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED) Component: rdo-manager Last change: 2015-05-25 Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable [1271433 ] http://bugzilla.redhat.com/1271433 (MODIFIED) Component: rdo-manager Last change: 2015-10-20 Summary: Horizon fails to load [1251267 ] http://bugzilla.redhat.com/1251267 (POST) Component: rdo-manager Last change: 2015-08-12 Summary: Overcloud deployment fails for unspecified reason [1268990 ] http://bugzilla.redhat.com/1268990 (POST) Component: rdo-manager Last change: 2015-10-07 Summary: missing from docs Build images fails without : export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo" [1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED) Component: rdo-manager Last change: 2015-11-04 Summary: rdo-manager: fail to discover nodes with "instack- ironic-deployment --discover-nodes": ERROR: Data pre- processing failed ### rdo-manager-cli (10 bugs) [1273197 ] http://bugzilla.redhat.com/1273197 (POST) Component: rdo-manager-cli Last change: 2015-10-20 Summary: VXLAN should be default neutron network type [1233429 ] http://bugzilla.redhat.com/1233429 (POST) Component: rdo-manager-cli Last change: 2015-06-20 Summary: Lack of consistency in specifying plan argument for openstack overcloud commands [1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED) Component: rdo-manager-cli Last change: 2015-08-03 Summary: Node show of unified CLI has bad formatting [1229912 ] http://bugzilla.redhat.com/1229912 (POST) Component: rdo-manager-cli Last change: 2015-06-10 Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once) . [1219053 ] http://bugzilla.redhat.com/1219053 (POST) Component: rdo-manager-cli Last change: 2015-06-18 Summary: "list" command doesn't display nodes in some cases [1211190 ] http://bugzilla.redhat.com/1211190 (POST) Component: rdo-manager-cli Last change: 2015-06-04 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI [1230265 ] http://bugzilla.redhat.com/1230265 (POST) Component: rdo-manager-cli Last change: 2015-06-26 Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated. 
[1278972 ] http://bugzilla.redhat.com/1278972 (POST) Component: rdo-manager-cli Last change: 2015-11-08 Summary: rdo-manager liberty delorean dib failing w/ "No module named passlib.utils" [1232838 ] http://bugzilla.redhat.com/1232838 (POST) Component: rdo-manager-cli Last change: 2015-09-04 Summary: OSC plugin isn't saving plan configuration values [1212367 ] http://bugzilla.redhat.com/1212367 (POST) Component: rdo-manager-cli Last change: 2015-06-16 Summary: Ensure proper nodes states after enroll and before deployment ### rdopkg (1 bug) [1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA) Component: rdopkg Last change: 2015-08-06 Summary: python-manilaclient is missing from kilo RDO repository ### tempest (1 bug) [1272289 ] http://bugzilla.redhat.com/1272289 (MODIFIED) Component: tempest Last change: 2015-11-18 Summary: rdo-manager tempest smoke test failing on "floating ip pool not found' Thanks, Chandan Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom at buskey.name Thu Nov 19 15:02:53 2015 From: tom at buskey.name (Tom Buskey) Date: Thu, 19 Nov 2015 10:02:53 -0500 Subject: [Rdo-list] Kilo Horizon session timeout and cookie In-Reply-To: <56371F4C.2040306@redhat.com> References: <56315C5B.9000605@redhat.com> <56333F7F.5020205@redhat.com> <56371F4C.2040306@redhat.com> Message-ID: The testing rpm + adding the add AUTH_USER_MODE and SESSION_ENGINE variables has been working. https://bugzilla.redhat.com/show_bug.cgi?id=1218894 has been closed as it's been pushed to F23 stable. Is the testing rpm ever going to be promoted to the rdo repo for kilo at http://repos.fedorapeople.org/repos/openstack/openstack-kilo/el7? On Mon, Nov 2, 2015 at 3:31 AM, Matthias Runge wrote: > On 30/10/15 14:18, Tom Buskey wrote: > > The bug reports say you need to add AUTH_USER_MODE and SESSION_ENGINE to > > /etc/openstack-dashboard/local_settings but neither rpm does. The only > > way to know about it is to read the bug reports > > > > On Fri, Oct 30, 2015 at 5:59 AM, Martin Pavl?sek > > wrote: > > > > In my understanding, both reports differ, and esp. reasons differ. > > Matthias > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From weiler at soe.ucsc.edu Fri Nov 20 03:43:06 2015 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Thu, 19 Nov 2015 19:43:06 -0800 Subject: [Rdo-list] Quick rabbitmq question.... Message-ID: <564E96CA.5010405@soe.ucsc.edu> Hi Y'all, I'm sure someone has encountered this issue... basically my rabbitmq instance on my controller node is running out of file descriptors, this is on RHEL 7. I've upped the max file descriptors (nofile) to 1000000 in /etc/security/limits.conf, and my sysctl limit for file descriptors is equally huge. Yet, I can't get my rabbitmq process to get it's limit's past 1000 or so: [root at os-con-01 ~]# ps -afe | grep rabbit rabbitmq 4989 1 4 16:42 ? 
00:07:10 /usr/lib64/erlang/erts-5.10.4/bin/beam.smp -W w -K true -A30 -P 1048576 -- -root /usr/lib64/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.3.5/sbin/../ebin -noshell -noinput -s rabbit boot -sname rabbit at os-con-01 -boot start_sasl -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit error_logger {file,"/var/log/rabbitmq/rabbit at os-con-01.log"} -rabbit sasl_error_logger {file,"/var/log/rabbitmq/rabbit at os-con-01-sasl.log"} -rabbit enabled_plugins_file "/etc/rabbitmq/enabled_plugins" -rabbit plugins_dir "/usr/lib/rabbitmq/lib/rabbitmq_server-3.3.5/sbin/../plugins" -rabbit plugins_expand_dir "/var/lib/rabbitmq/mnesia/rabbit at os-con-01-plugins-expand" -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit at os-con-01" -kernel inet_dist_listen_min 25672 -kernel inet_dist_listen_max 25672 rabbitmq 5004 1 0 16:42 ? 00:00:00 /usr/lib64/erlang/erts-5.10.4/bin/epmd -daemon rabbitmq 5129 4989 0 16:42 ? 00:00:00 inet_gethost 4 rabbitmq 5130 5129 0 16:42 ? 00:00:00 inet_gethost 4 root 17470 17403 0 19:34 pts/0 00:00:00 grep --color=auto rabbit [root at os-con-01 ~]# cat /proc/4989/limits Limit Soft Limit Hard Limit Units Max cpu time unlimited unlimited seconds Max file size unlimited unlimited bytes Max data size unlimited unlimited bytes Max stack size 8388608 unlimited bytes Max core file size 0 unlimited bytes Max resident set unlimited unlimited bytes Max processes 127788 127788 processes Max open files 1024 4096 files Max locked memory 65536 65536 bytes Max address space unlimited unlimited bytes Max file locks unlimited unlimited locks Max pending signals 127788 127788 signals Max msgqueue size 819200 819200 bytes Max nice priority 0 0 Max realtime priority 0 0 Max realtime timeout unlimited unlimited us [root at os-con-01 ~]# This is causing huge problems in my OpenStack cluster (Kilo Release). I've read that you can set this limit in /etc/rabbitmq/rabbitmq-env.conf or /etc/rabbitmq/rabbitmq.config but no matter what I do there I get nothing, after restarting rabbitmq many times. Does this have something to do with systemd? [root at os-con-01 ~]# rabbitmqctl status Status of node 'rabbit at os-con-01' ... [{pid,4989}, {running_applications,[{rabbit,"RabbitMQ","3.3.5"}, {os_mon,"CPO CXC 138 46","2.2.14"}, {mnesia,"MNESIA CXC 138 12","4.11"}, {xmerl,"XML parser","1.3.6"}, {sasl,"SASL CXC 138 11","2.3.4"}, {stdlib,"ERTS CXC 138 10","1.19.4"}, {kernel,"ERTS CXC 138 10","2.16.4"}]}, {os,{unix,linux}}, {erlang_version,"Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [smp:32:32] [async-threads:30] [hipe] [kernel-poll:true]\n"}, {memory,[{total,645523200}, {connection_procs,32257624}, {queue_procs,48513416}, {plugins,0}, {other_proc,15448376}, {mnesia,1209984}, {mgmt_db,0}, {msg_index,292800}, {other_ets,1991744}, {binary,517865992}, {code,16698259}, {atom,602729}, {other_system,10642276}]}, {alarms,[]}, {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]}, {vm_memory_high_watermark,0.4}, {vm_memory_limit,13406973132}, {disk_free_limit,50000000}, {disk_free,37354610688}, {file_descriptors,[{total_limit,924}, <----- ?????? {total_used,831}, {sockets_limit,829}, {sockets_used,829}]}, {processes,[{limit,1048576},{used,8121}]}, {run_queue,0}, {uptime,10537}] ...done. Anyone know how to get the file descriptor limits up for rabbitmq? 
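A quick way to confirm which limit the running broker actually has (and to re-check it after any change) is to compare the kernel-enforced limit with what RabbitMQ itself reports. A minimal sketch, assuming the beam.smp process from the ps output above is the one to inspect:

# find the RabbitMQ Erlang VM and compare kernel vs. broker-reported limits
PID=$(pgrep -f beam.smp | head -n 1)
grep 'Max open files' /proc/"$PID"/limits
rabbitmqctl status | grep -A 4 file_descriptors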
I've only got like 40 nodes in my OpenStack cluster, and it's choking, and I need to add several hundred more nodes... Any help much appreciated!!! I looked around the list and couldn't find anything on this, and I've RTFM'd as much as I could... cheers, erich From jeckersb at redhat.com Fri Nov 20 03:56:26 2015 From: jeckersb at redhat.com (John Eckersberg) Date: Thu, 19 Nov 2015 22:56:26 -0500 Subject: [Rdo-list] Quick rabbitmq question.... In-Reply-To: <564E96CA.5010405@soe.ucsc.edu> References: <564E96CA.5010405@soe.ucsc.edu> Message-ID: <87ziy9e32t.fsf@redhat.com> Erich Weiler writes: > Does this have something to do with systemd? Bingo. Try... mkdir /etc/systemd/system/rabbitmq-server.service.d cat <<EOF > /etc/systemd/system/rabbitmq-server.service.d/limits.conf [Service] LimitNOFILE=16384 EOF eck From weiler at soe.ucsc.edu Fri Nov 20 05:32:49 2015 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Thu, 19 Nov 2015 21:32:49 -0800 Subject: [Rdo-list] Quick rabbitmq question.... In-Reply-To: References: <564E96CA.5010405@soe.ucsc.edu> Message-ID: <564EB081.3030701@soe.ucsc.edu> Totally fixed it. Thanks man. I learn something new about systemd every day.... On 11/19/15 7:55 PM, Jeff Weber wrote: > I struggled with this as well until I found out the limits.conf entries > don't apply to systemd managed services. > > If you create a /etc/systemd/system/rabbitmq-server.service.d directory > and place a limits.conf file in there with contents similar to > > [Service] > LimitNOFILE=4096 > > Then reload + restart > > systemctl daemon-reload > systemctl restart rabbitmq-server > > https://fedoraproject.org/wiki/Systemd#How_do_I_customize_a_unit_file.2F_add_a_custom_unit_file.3F > > > > On Thu, Nov 19, 2015 at 10:43 PM, Erich Weiler > wrote: > > Hi Y'all, > > I'm sure someone has encountered this issue... basically my > rabbitmq instance on my controller node is running out of file > descriptors, this is on RHEL 7. I've upped the max file descriptors > (nofile) to 1000000 in /etc/security/limits.conf, and my sysctl > limit for file descriptors is equally huge. Yet, I can't get my > rabbitmq process to get it's limit's past 1000 or so: > > [root at os-con-01 ~]# ps -afe | grep rabbit > rabbitmq 4989 1 4 16:42 ? 00:07:10 > /usr/lib64/erlang/erts-5.10.4/bin/beam.smp -W w -K true -A30 -P > 1048576 -- -root /usr/lib64/erlang -progname erl -- -home > /var/lib/rabbitmq -- -pa > /usr/lib/rabbitmq/lib/rabbitmq_server-3.3.5/sbin/../ebin -noshell > -noinput -s rabbit boot -sname rabbit at os-con-01 -boot start_sasl > -kernel inet_default_connect_options [{nodelay,true}] -sasl > errlog_type error -sasl sasl_error_logger false -rabbit error_logger > {file,"/var/log/rabbitmq/rabbit at os-con-01.log"} -rabbit > sasl_error_logger > {file,"/var/log/rabbitmq/rabbit at os-con-01-sasl.log"} -rabbit > enabled_plugins_file "/etc/rabbitmq/enabled_plugins" -rabbit > plugins_dir > "/usr/lib/rabbitmq/lib/rabbitmq_server-3.3.5/sbin/../plugins" > -rabbit plugins_expand_dir > "/var/lib/rabbitmq/mnesia/rabbit at os-con-01-plugins-expand" -os_mon > start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup > false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit at os-con-01" > -kernel inet_dist_listen_min 25672 -kernel inet_dist_listen_max 25672 > rabbitmq 5004 1 0 16:42 ? 00:00:00 > /usr/lib64/erlang/erts-5.10.4/bin/epmd -daemon > rabbitmq 5129 4989 0 16:42 ? 00:00:00 inet_gethost 4 > rabbitmq 5130 5129 0 16:42 ?
00:00:00 inet_gethost 4 > root 17470 17403 0 19:34 pts/0 00:00:00 grep --color=auto rabbit > > [root at os-con-01 ~]# cat /proc/4989/limits > Limit Soft Limit Hard Limit > Units > Max cpu time unlimited unlimited > seconds > Max file size unlimited unlimited > bytes > Max data size unlimited unlimited > bytes > Max stack size 8388608 unlimited > bytes > Max core file size 0 unlimited > bytes > Max resident set unlimited unlimited > bytes > Max processes 127788 127788 processes > Max open files 1024 4096 > files > Max locked memory 65536 65536 > bytes > Max address space unlimited unlimited > bytes > Max file locks unlimited unlimited > locks > Max pending signals 127788 127788 > signals > Max msgqueue size 819200 819200 > bytes > Max nice priority 0 0 > Max realtime priority 0 0 > Max realtime timeout unlimited unlimited us > [root at os-con-01 ~]# > > This is causing huge problems in my OpenStack cluster (Kilo > Release). I've read that you can set this limit in > /etc/rabbitmq/rabbitmq-env.conf or /etc/rabbitmq/rabbitmq.config but > no matter what I do there I get nothing, after restarting rabbitmq > many times. Does this have something to do with systemd? > > [root at os-con-01 ~]# rabbitmqctl status > Status of node 'rabbit at os-con-01' ... > [{pid,4989}, > {running_applications,[{rabbit,"RabbitMQ","3.3.5"}, > {os_mon,"CPO CXC 138 46","2.2.14"}, > {mnesia,"MNESIA CXC 138 12","4.11"}, > {xmerl,"XML parser","1.3.6"}, > {sasl,"SASL CXC 138 11","2.3.4"}, > {stdlib,"ERTS CXC 138 10","1.19.4"}, > {kernel,"ERTS CXC 138 10","2.16.4"}]}, > {os,{unix,linux}}, > {erlang_version,"Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] > [smp:32:32] [async-threads:30] [hipe] [kernel-poll:true]\n"}, > {memory,[{total,645523200}, > {connection_procs,32257624}, > {queue_procs,48513416}, > {plugins,0}, > {other_proc,15448376}, > {mnesia,1209984}, > {mgmt_db,0}, > {msg_index,292800}, > {other_ets,1991744}, > {binary,517865992}, > {code,16698259}, > {atom,602729}, > {other_system,10642276}]}, > {alarms,[]}, > {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]}, > {vm_memory_high_watermark,0.4}, > {vm_memory_limit,13406973132}, > {disk_free_limit,50000000}, > {disk_free,37354610688}, > {file_descriptors,[{total_limit,924}, <----- ?????? > {total_used,831}, > {sockets_limit,829}, > {sockets_used,829}]}, > {processes,[{limit,1048576},{used,8121}]}, > {run_queue,0}, > {uptime,10537}] > ...done. > > Anyone know how to get the file descriptor limits up for rabbitmq? > I've only got like 40 nodes in my OpenStack cluster, and it's > choking, and I need to add several hundred more nodes... > > Any help much appreciated!!! I looked around the list and couldn't > find anything on this, and I've RTFM'd as much as I could... > > cheers, > erich > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > From dradez at redhat.com Fri Nov 20 21:15:04 2015 From: dradez at redhat.com (Dan Radez) Date: Fri, 20 Nov 2015 16:15:04 -0500 Subject: [Rdo-list] [apex] OPNFV Project Apex Update Message-ID: <564F8D58.1080704@redhat.com> OPNFV Apex is a project that is working towards using RDO Project's RDO Manager installation tool to deploy OpenStack and OpenDaylight according to the Requirements and Standards that are defined by the OPNFV Project. In the past couple weeks we have merged a large number of patches working towards our second release, Bramaputra, due in February. 
More work and patching will come before the Bramaputra release, though we have come to a point that many people have been waiting for: an installation document! Currently there is one outstanding patch, comprising mostly the installation doc itself: https://gerrit.opnfv.org/gerrit/#/c/3455/ The install doc will be viewable via OPNFV's gitweb once that patch merges. https://gerrit.opnfv.org/gerrit/gitweb?p=apex.git;a=blob;f=docs/src/installation-instructions.rst Alternatively you can view it on my github: https://github.com/radez/apex/blob/master/docs/src/installation-instructions.rst Our daily builds are uploaded to artifacts.opnfv.org. Last night's build, which the installation documentation can be used with, is available at: http://artifacts.opnfv.org/apex//opnfv-2015-11-20_03-00-46.iso or http://artifacts.opnfv.org/apex//opnfv-apex-2.2-20151120030046.noarch.rpm The iso includes CentOS 7 and the rpm. Alternatively, the rpm can be installed onto a Virtualization Host install of CentOS 7. Just a note, the baremetal deployment has not been fully tested or documented yet. I would recommend starting with the Virtualized deployment. We will be adding information and links to the Apex page on the OPNFV wiki: https://wiki.opnfv.org/apex as we collect and publish more. The installation instructions link on the wiki page is the same one included above and is currently broken. It will link properly once the above patch merges, which should happen in the next day or two. If you have questions or need help with an opnfv-apex deployment please ask on the opnfv-users at list.opnfv.org list. We'll be watching this list and happy to help you get started if needed. Dan Radez Sr Software Engineer Red Hat Inc. freenode: radez From weiler at soe.ucsc.edu Fri Nov 20 22:51:27 2015 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Fri, 20 Nov 2015 14:51:27 -0800 Subject: [Rdo-list] Max VM filesystem size? Message-ID: <564FA3EF.50900@soe.ucsc.edu> Hi Y'all, I have a bunch of OpenStack Nova nodes, each with 11TB of local disk for Cloud VMs. This is all on RDO - RHEL 7 on OpenStack Kilo. That local storage is configured with an XFS filesystem. If I specify 8TB for 'root filesystem size' in a flavor, it only seems to be able to make a 2TB root filesystem on the guest. No matter what guest OS I use (I've tried CentOS 7.1, Ubuntu 15.10, others) it only seems to be able to create a 2TB root filesystem. I've also tried making a 5GB root filesystem and specifying 8TB for ephemeral disk, but the ephemeral disk that shows up on /tmp in the guest is only 2TB as well. Is there a 2TB KVM limit on VM size? Max file size in ext4 and XFS is well beyond 2TB, so I don't think that's it... Maybe a limit in KVM or libvirt? The qemu xml file on the node does specify an 8TB disk.... Any insight most welcome!! cheers, erich From weiler at soe.ucsc.edu Fri Nov 20 23:37:41 2015 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Fri, 20 Nov 2015 15:37:41 -0800 Subject: [Rdo-list] Max VM filesystem size? In-Reply-To: <564FA3EF.50900@soe.ucsc.edu> References: <564FA3EF.50900@soe.ucsc.edu> Message-ID: <564FAEC5.1000101@soe.ucsc.edu> Ah, it appears that the limit is 2TB because the VMs are created with a 'msdos' partition table on the virtual disks. I think we need them to be 'gpt'.... parted shows the disks are big, just the filesystems are small: [centos at bigtest2 ~]$ sudo parted /dev/vda GNU Parted 3.1 Using /dev/vda Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p Model: Virtio Block Device (virtblk) Disk /dev/vda: 10.7TB Sector size (logical/physical): 512B/512B Partition Table: msdos Disk Flags: Number Start End Size Type File system Flags 1 1049kB 1941GB 1941GB primary xfs boot Still digging for info on how to use gpt with OpenStack/libvirt.... Nothing yet... On 11/20/2015 02:51 PM, Erich Weiler wrote: > Hi Y'all, > > I have a bunch of OpenStack Nova nodes, each with 11TB of local disk for > Cloud VMs. This is all on RDO - RHEL 7 on OpenStack Kilo. That local > storage is configured with a XFS filesystem. > > If I specify 8TB on 'root filesystem size' in a flavor, it only seems to > be able to make a 2TB root filesystem on the guest. No matter what > guest OS I use (I've tried CentOS 7.1, Ubuntu 15.10, others) it only > seems to be able to create a 2TB root filesystem. I've also tried > making a 5GB root filesystem and specifying 8TB for ephemeral disk, but > the ephemeral disk that shows up on /tmp in the guest is only 2TB as well. > > Is there a 2TB KVM limit on VM size? Max file size in ext4 and XFS is > well beyond 2TB, so I don't think that's it... Maybe a limit in KVM or > libvirt? > > The qemu xml file on the node does specify a 8TB disk.... > > Any insight most welcome!! > > cheers, > erich From weiler at soe.ucsc.edu Sat Nov 21 15:37:08 2015 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Sat, 21 Nov 2015 07:37:08 -0800 Subject: [Rdo-list] OpenStack GPT Disk Label for VMs (was Re: Max VM filesystem size?) In-Reply-To: <564FAEC5.1000101@soe.ucsc.edu> References: <564FA3EF.50900@soe.ucsc.edu> <564FAEC5.1000101@soe.ucsc.edu> Message-ID: <56508FA4.10305@soe.ucsc.edu> Looks like there has been a little chatter on this topic: https://review.openstack.org/#/c/225556 I would guess the disk label is configured through nova somehow? Maybe there would be a way I could patch in something to switch my default disk label to GPT, or somehow make it an option? Has anyone run into this before? On 11/20/15 3:37 PM, Erich Weiler wrote: > Ah, it appears that the limit is 2TB because the VMs are created with a > 'msdos' partition table on the virtual disks. I think we need them to > be 'gpt'.... parted shows the disks are big, just the filesystems are > small: > > [centos at bigtest2 ~]$ sudo parted /dev/vda > GNU Parted 3.1 > Using /dev/vda > Welcome to GNU Parted! Type 'help' to view a list of commands. > (parted) p > Model: Virtio Block Device (virtblk) > Disk /dev/vda: 10.7TB > Sector size (logical/physical): 512B/512B > Partition Table: msdos > Disk Flags: > > Number Start End Size Type File system Flags > 1 1049kB 1941GB 1941GB primary xfs boot > > Still digging for info on how to use gpt with OpenStack/libvirt.... > Nothing yet... > > On 11/20/2015 02:51 PM, Erich Weiler wrote: >> Hi Y'all, >> >> I have a bunch of OpenStack Nova nodes, each with 11TB of local disk for >> Cloud VMs. This is all on RDO - RHEL 7 on OpenStack Kilo. That local >> storage is configured with a XFS filesystem. >> >> If I specify 8TB on 'root filesystem size' in a flavor, it only seems to >> be able to make a 2TB root filesystem on the guest. No matter what >> guest OS I use (I've tried CentOS 7.1, Ubuntu 15.10, others) it only >> seems to be able to create a 2TB root filesystem. I've also tried >> making a 5GB root filesystem and specifying 8TB for ephemeral disk, but >> the ephemeral disk that shows up on /tmp in the guest is only 2TB as >> well. >> >> Is there a 2TB KVM limit on VM size? 
Max file size in ext4 and XFS is >> well beyond 2TB, so I don't think that's it... Maybe a limit in KVM or >> libvirt? >> >> The qemu xml file on the node does specify a 8TB disk.... >> >> Any insight most welcome!! >> >> cheers, >> erich From qasims at plumgrid.com Sat Nov 21 19:09:46 2015 From: qasims at plumgrid.com (Qasim Sarfraz) Date: Sun, 22 Nov 2015 00:09:46 +0500 Subject: [Rdo-list] Skipping resources while running stack update Message-ID: Folks, I have an overcloud stack with three controller and three computes. I want to scale out the cluster by adding new compute nodes. But I don't want to update the resources on already installed overcloud nodes. Is there a way to do this? -- Regards, Qasim Sarfraz -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Mon Nov 23 10:42:21 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 23 Nov 2015 10:42:21 +0000 Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md In-Reply-To: <883ECEDA-D59B-4FBC-BD66-A2EF90843B2C@namecheap.com> References: <56448D6D.90704@redhat.com> <192411615.14182004.1447343705347.JavaMail.zimbra@redhat.com> <56463E27.7010909@redhat.com> <56464E7D.1020907@redhat.com> <564651C2.3070409@redhat.com> , <883ECEDA-D59B-4FBC-BD66-A2EF90843B2C@namecheap.com> Message-ID: Alessandro, I did neutron work flow check on controllers 1,2 hosting HA neutron router. FIRST [root at hacontroller1 ~(keystone_admin)]# ovs-ofctl show br-eth0 OFPT_FEATURES_REPLY (xid=0x2): dpid:0000baf0db1a854f n_tables:254, n_buffers:256 capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst 1(eth0): addr:52:54:00:aa:0e:fc config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max 2(phy-br-eth0): addr:46:c0:e0:30:72:92 <====== config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max LOCAL(br-eth0): addr:ba:f0:db:1a:85:4f config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0 [root at hacontroller1 ~(keystone_admin)]# ovs-ofctl dump-flows br-eth0 NXST_FLOW reply (xid=0x4): cookie=0x0, duration=15577.057s, table=0, n_packets=50441, n_bytes=3262529, idle_age=2, priority=4,in_port=2,dl_vlan=3 actions=strip_vlan,NORMAL <===== cookie=0x0, duration=15765.938s, table=0, n_packets=31225, n_bytes=1751795, idle_age=0, priority=2,in_port=2 actions=drop cookie=0x0, duration=15765.974s, table=0, n_packets=39982, n_bytes=42838752, idle_age=1, priority=0 actions=NORMAL Check `ovs-vsctl show` Bridge br-int fail_mode: secure Port "tapc8488877-45" tag: 4 Interface "tapc8488877-45" type: internal Port br-int Interface br-int type: internal Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Port "tap14aa6eeb-70" tag: 2 Interface "tap14aa6eeb-70" type: internal Port "qr-8f5b3f4a-45" tag: 2 Interface "qr-8f5b3f4a-45" type: internal Port "int-br-eth0" Interface "int-br-eth0" type: patch options: {peer="phy-br-eth0"} Port "qg-34893aa0-17" <===== tag: 3 SECOND [root at hacontroller2 ~(keystone_demo)]# ovs-ofctl show br-eth0 OFPT_FEATURES_REPLY (xid=0x2): dpid:0000b6bfa2bafd45 n_tables:254, n_buffers:256 capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst 1(eth0): addr:52:54:00:73:df:29 
config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max 2(phy-br-eth0): addr:be:89:61:87:56:20 <======= config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max LOCAL(br-eth0): addr:b6:bf:a2:ba:fd:45 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0 [root at hacontroller2 ~(keystone_demo)]# ovs-ofctl dump-flows br-eth0 NXST_FLOW reply (xid=0x4): cookie=0x0, duration=15810.746s, table=0, n_packets=0, n_bytes=0, idle_age=15810, priority=4,in_port=2,dl_vlan=2 actions=strip_vlan,NORMAL <======== cookie=0x0, duration=16105.662s, table=0, n_packets=31849, n_bytes=1786827, idle_age=0, priority=2,in_port=2 actions=drop cookie=0x0, duration=16105.696s, table=0, n_packets=39762, n_bytes=2100763, idle_age=0, priority=0 actions=NORMAL Check `ovs-vsctl show` Bridge br-int fail_mode: secure Port "qg-34893aa0-17" tag: 2 <===== Interface "qg-34893aa0-17" type: internal It looks like qrouter's namespace output interface qg-xxxxxx sends vlan tagged packets to eth0 (which has VLAN=yes) , but OVS bridge br-eth0 is not aware of vlan tagging (as you wrote) , it strips tags before sending packets outside into external flat network. In case of external network provider qg-xxxxxx are on Br-int, that is normal. That's why your patch works so stable. If my logic is wrong,please, let me know. Thank you once again. Boris. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Mon Nov 23 15:00:03 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 23 Nov 2015 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20151123150003.86AC660A3FD9@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2015-11-25 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Packaging ](https://etherpad.openstack.org/p/RDO-Packaging) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From rbowen at redhat.com Mon Nov 23 19:03:43 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 23 Nov 2015 14:03:43 -0500 Subject: [Rdo-list] OpenStack meetups, week of November 23rd Message-ID: <5653630F.4030002@redhat.com> The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/Events If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered. 
--Rich * Monday November 23 in Guadalajara, MX: De Marconi a Zaqar, la evoluci?n del sistema de mensajer?a y notificaciones - http://www.meetup.com/OpenStack-GDL/events/226680581/ * Wednesday November 25 in Buenos Aires, AR: Flavio Percoco en Buenos Aires - http://www.meetup.com/openstack-argentina/events/226912425/ * Thursday November 26 in Amersfoort, NL: OpenStack Orchestratie 2015 - http://www.meetup.com/Openstack-Netherlands/events/222368737/ * Thursday November 26 in Roma, RM, IT: From Liberty to Mitaka - http://www.meetup.com/OpenStack-User-Group-Italia/events/226324667/ * Saturday November 28 in Bangalore, IN: Red Hat Openstack and Ceph Meetup, Pune - http://www.meetup.com/Indian-OpenStack-User-Group/events/226100785/ * Monday November 30 in Sydney, AU: Australian OpenStack User Group - Quarterly Brisbane Meetup - http://www.meetup.com/Australian-OpenStack-User-Group/events/224772759/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From rbowen at redhat.com Mon Nov 23 21:24:49 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 23 Nov 2015 16:24:49 -0500 Subject: [Rdo-list] RDO blog roundup, week of November 23 Message-ID: <56538421.80400@redhat.com> Here's what RDO engineers have been blogging about lately: Automated API testing workflow by Tristan Cacqueray Services exposed to a network of any sort are in risk of security exploits. The API is the primary target of such attacks, and it is often abused by input that developers did not anticipate. ? read more at http://tm3.org/3y RDO Community Day @ FOSDEM by Rich Bowen We're pleased to announce that we'll be holding an RDO Community Day in conjunction with the CentOS Dojo on the day before FOSDEM. This event will be held at the IBM Client Center in Brussels, Belgium, on Friday, January 29th, 2016. ? read more at http://tm3.org/3z Translating Between RDO/RHOS and Upstream OpenStack releases by Adam Young There is a straight forward mapping between the version numbers used for RDO and Red Hat Enterprise Linux OpenStack Platform release numbers, and the upstream releases of OpenStack. I can never keep them straight. So, I write code. ? read more at http://tm3.org/3- Does cloud-native have to mean all-in? by Gordon Haff Cloud-native application architectures promise improved business agility and the ability to innovate more rapidly than ever before. However, many existing conventional applications will provide important business value for many years. Does an organization have to commit 100% to one architecture versus another to realize true business benefits? ? read more at http://tm3.org/40 -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From weiler at soe.ucsc.edu Mon Nov 23 22:56:07 2015 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Mon, 23 Nov 2015 14:56:07 -0800 Subject: [Rdo-list] Possible bug? Horizon/Floating IPs Message-ID: <56539987.4040501@soe.ucsc.edu> Just thought I'd throw this out there as a possible bug... I'm running RHEL 7.1 and OpenStack Kilo RDO. It seems that when I terminate an instance through Horizon that has an associated floating IP, the floating IP is *not* disassociated upon the instance's termination. I have to manually disassociate the floating ip after I terminate the instance through Horizon via: neutron floatingip-disassociate e28051c5-7fb1-4887-ade9-f1b062523ad7 for example. Then it frees up. 
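A minimal sketch of scripting that disassociation by floating IP address rather than by UUID, assuming the Kilo-era neutron CLI and its default floatingip-list column order of id, fixed_ip_address, floating_ip_address, port_id (the address below is only a placeholder):

# look up the floating IP's UUID by its address, then detach it from whatever port it still references
FIP_ADDR=203.0.113.10
FIP_ID=$(neutron floatingip-list | awk -F'|' -v ip="$FIP_ADDR" \
  '{ gsub(/ /, "", $2); gsub(/ /, "", $4) } $4 == ip { print $2 }')
neutron floatingip-disassociate "$FIP_ID"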
Back when I was playing with the Icehouse release of RDO OpenStack, the floating IPs were released automatically when I terminated an instance through Horizon, so I was surprised when I did not see the same behaviour here. [root at os-con-01 ~]# rpm -q python-django-horizon python-django-horizon-2015.1.0-5.el7.noarch Just a heads up... cheers, erich From Kevin.Fox at pnnl.gov Mon Nov 23 23:59:28 2015 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Mon, 23 Nov 2015 23:59:28 +0000 Subject: [Rdo-list] Possible bug? Horizon/Floating IPs In-Reply-To: <56539987.4040501@soe.ucsc.edu> References: <56539987.4040501@soe.ucsc.edu> Message-ID: <1A3C52DFCD06494D8528644858247BF01B83EB97@EX10MBOX03.pnnl.gov> Autorelease is kind of dangerous. There is no way to get an ip back, and it may be associated in dns and you may give it back to the pool before unregistering. I always have disabled it on all my clouds to fail safe for the users. Kevin ________________________________ From: rdo-list-bounces at redhat.com on behalf of Erich Weiler Sent: Monday, November 23, 2015 2:56:07 PM To: rdo-list at redhat.com Subject: [Rdo-list] Possible bug? Horizon/Floating IPs Just thought I'd throw this out there as a possible bug... I'm running RHEL 7.1 and OpenStack Kilo RDO. It seems that when I terminate an instance through Horizon that has an associated floating IP, the floating IP is *not* disassociated upon the instance's termination. I have to manually disassociate the floating ip after I terminate the instance through Horizon via: neutron floatingip-disassociate e28051c5-7fb1-4887-ade9-f1b062523ad7 for example. Then it frees up. Back when I was playing with the Icehouse release of RDO OpenStack, the floating IPs were released automatically when I terminated an instance through Horizon, so I was surprised when I did not see the same behaviour here. [root at os-con-01 ~]# rpm -q python-django-horizon python-django-horizon-2015.1.0-5.el7.noarch Just a heads up... cheers, erich _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From weiler at soe.ucsc.edu Tue Nov 24 00:10:45 2015 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Mon, 23 Nov 2015 16:10:45 -0800 Subject: [Rdo-list] Possible bug? Horizon/Floating IPs In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B83EB97@EX10MBOX03.pnnl.gov> References: <56539987.4040501@soe.ucsc.edu> <1A3C52DFCD06494D8528644858247BF01B83EB97@EX10MBOX03.pnnl.gov> Message-ID: <5653AB05.8060209@soe.ucsc.edu> Hi Kevin, Thanks for the reply! I think we may be talking about two different things however... I'm not looking to release the IP back to the public pool, I'm just looking to disassociate from a deleted VM when the instance is terminated. After I terminate the instance, it still shows as stuck to the deleted instance, which is kind of pointless. After I disassociate the floating IP via 'neutron floatingip-disassociate', it releases the IP back to my own allocated IP list, so I most definitely can re-use that IP for another instance easily. I agree that if I unallocated it with 'neutron floatingip-delete', then I may not get it back, but with 'neutron floatingip-disassociate' I still own it as a user of a project. Just checking if the decision was made to stop dissociating the floating IPs upon instance termination after Icehouse, or if it was overlooked in Kilo? 
Thanks for the reply! cheers, erich On 11/23/2015 03:59 PM, Fox, Kevin M wrote: > Autorelease is kind of dangerous. There is no way to get an ip back, and > it may be associated in dns and you may give it back to the pool before > unregistering. I always have disabled it on all my clouds to fail safe > for the users. > > Kevin * > * > ------------------------------------------------------------------------ > *From:* rdo-list-bounces at redhat.com on behalf of Erich Weiler > *Sent:* Monday, November 23, 2015 2:56:07 PM > *To:* rdo-list at redhat.com > *Subject:* [Rdo-list] Possible bug? Horizon/Floating IPs > > Just thought I'd throw this out there as a possible bug... I'm running > RHEL 7.1 and OpenStack Kilo RDO. > > It seems that when I terminate an instance through Horizon that has an > associated floating IP, the floating IP is *not* disassociated upon the > instance's termination. I have to manually disassociate the floating ip > after I terminate the instance through Horizon via: > > neutron floatingip-disassociate e28051c5-7fb1-4887-ade9-f1b062523ad7 > > for example. Then it frees up. Back when I was playing with the > Icehouse release of RDO OpenStack, the floating IPs were released > automatically when I terminated an instance through Horizon, so I was > surprised when I did not see the same behaviour here. > > [root at os-con-01 ~]# rpm -q python-django-horizon > python-django-horizon-2015.1.0-5.el7.noarch > > Just a heads up... > > cheers, > erich > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From Kevin.Fox at pnnl.gov Tue Nov 24 00:21:53 2015 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 24 Nov 2015 00:21:53 +0000 Subject: [Rdo-list] Possible bug? Horizon/Floating IPs In-Reply-To: <5653AB05.8060209@soe.ucsc.edu> References: <56539987.4040501@soe.ucsc.edu> <1A3C52DFCD06494D8528644858247BF01B83EB97@EX10MBOX03.pnnl.gov>, <5653AB05.8060209@soe.ucsc.edu> Message-ID: <1A3C52DFCD06494D8528644858247BF01B83EBD2@EX10MBOX03.pnnl.gov> There is a config value in horizon to autorelease floating ips. Ive never seen nova not autodissassociate on delete. That may be a misconfiguration between nova and neutron? I dont think horizon is involved in that path. Maybe try from the cli to double check? Thanks, Kevin ________________________________ From: Erich Weiler Sent: Monday, November 23, 2015 4:10:45 PM To: Fox, Kevin M; rdo-list at redhat.com Subject: Re: [Rdo-list] Possible bug? Horizon/Floating IPs Hi Kevin, Thanks for the reply! I think we may be talking about two different things however... I'm not looking to release the IP back to the public pool, I'm just looking to disassociate from a deleted VM when the instance is terminated. After I terminate the instance, it still shows as stuck to the deleted instance, which is kind of pointless. After I disassociate the floating IP via 'neutron floatingip-disassociate', it releases the IP back to my own allocated IP list, so I most definitely can re-use that IP for another instance easily. I agree that if I unallocated it with 'neutron floatingip-delete', then I may not get it back, but with 'neutron floatingip-disassociate' I still own it as a user of a project. Just checking if the decision was made to stop dissociating the floating IPs upon instance termination after Icehouse, or if it was overlooked in Kilo? Thanks for the reply! 
cheers, erich On 11/23/2015 03:59 PM, Fox, Kevin M wrote: > Autorelease is kind of dangerous. There is no way to get an ip back, and > it may be associated in dns and you may give it back to the pool before > unregistering. I always have disabled it on all my clouds to fail safe > for the users. > > Kevin * > * > ------------------------------------------------------------------------ > *From:* rdo-list-bounces at redhat.com on behalf of Erich Weiler > *Sent:* Monday, November 23, 2015 2:56:07 PM > *To:* rdo-list at redhat.com > *Subject:* [Rdo-list] Possible bug? Horizon/Floating IPs > > Just thought I'd throw this out there as a possible bug... I'm running > RHEL 7.1 and OpenStack Kilo RDO. > > It seems that when I terminate an instance through Horizon that has an > associated floating IP, the floating IP is *not* disassociated upon the > instance's termination. I have to manually disassociate the floating ip > after I terminate the instance through Horizon via: > > neutron floatingip-disassociate e28051c5-7fb1-4887-ade9-f1b062523ad7 > > for example. Then it frees up. Back when I was playing with the > Icehouse release of RDO OpenStack, the floating IPs were released > automatically when I terminated an instance through Horizon, so I was > surprised when I did not see the same behaviour here. > > [root at os-con-01 ~]# rpm -q python-django-horizon > python-django-horizon-2015.1.0-5.el7.noarch > > Just a heads up... > > cheers, > erich > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From weiler at soe.ucsc.edu Tue Nov 24 00:31:43 2015 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Mon, 23 Nov 2015 16:31:43 -0800 Subject: [Rdo-list] Possible bug? Horizon/Floating IPs In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B83EBD2@EX10MBOX03.pnnl.gov> References: <56539987.4040501@soe.ucsc.edu> <1A3C52DFCD06494D8528644858247BF01B83EB97@EX10MBOX03.pnnl.gov> <5653AB05.8060209@soe.ucsc.edu> <1A3C52DFCD06494D8528644858247BF01B83EBD2@EX10MBOX03.pnnl.gov> Message-ID: <5653AFEF.2000900@soe.ucsc.edu> Hmmm.. When I delete an instance from the command line via: nova delete 33190d5c-da79-41f7-bc55-899f8f15cb7c It deletes the instance, but the floating IP remains stuck to that deleted instance even after it's deleted. I guess it has nothing to do with horizon.... I wonder where I misconfigured it... On 11/23/2015 04:21 PM, Fox, Kevin M wrote: > There is a config value in horizon to autorelease floating ips. Ive > never seen nova not autodissassociate on delete. That may be a > misconfiguration between nova and neutron? I dont think horizon is > involved in that path. Maybe try from the cli to double check? > > Thanks, > Kevin * > * > ------------------------------------------------------------------------ > *From:* Erich Weiler > *Sent:* Monday, November 23, 2015 4:10:45 PM > *To:* Fox, Kevin M; rdo-list at redhat.com > *Subject:* Re: [Rdo-list] Possible bug? Horizon/Floating IPs > > Hi Kevin, > > Thanks for the reply! I think we may be talking about two different > things however... I'm not looking to release the IP back to the public > pool, I'm just looking to disassociate from a deleted VM when the > instance is terminated. After I terminate the instance, it still shows > as stuck to the deleted instance, which is kind of pointless. 
> > After I disassociate the floating IP via 'neutron > floatingip-disassociate', it releases the IP back to my own allocated IP > list, so I most definitely can re-use that IP for another instance > easily. I agree that if I unallocated it with 'neutron > floatingip-delete', then I may not get it back, but with 'neutron > floatingip-disassociate' I still own it as a user of a project. > > Just checking if the decision was made to stop dissociating the floating > IPs upon instance termination after Icehouse, or if it was overlooked in > Kilo? > > Thanks for the reply! > > cheers, > erich > > On 11/23/2015 03:59 PM, Fox, Kevin M wrote: >> Autorelease is kind of dangerous. There is no way to get an ip back, and >> it may be associated in dns and you may give it back to the pool before >> unregistering. I always have disabled it on all my clouds to fail safe >> for the users. >> >> Kevin * >> * >> ------------------------------------------------------------------------ >> *From:* rdo-list-bounces at redhat.com on behalf of Erich Weiler >> *Sent:* Monday, November 23, 2015 2:56:07 PM >> *To:* rdo-list at redhat.com >> *Subject:* [Rdo-list] Possible bug? Horizon/Floating IPs >> >> Just thought I'd throw this out there as a possible bug... I'm running >> RHEL 7.1 and OpenStack Kilo RDO. >> >> It seems that when I terminate an instance through Horizon that has an >> associated floating IP, the floating IP is *not* disassociated upon the >> instance's termination. I have to manually disassociate the floating ip >> after I terminate the instance through Horizon via: >> >> neutron floatingip-disassociate e28051c5-7fb1-4887-ade9-f1b062523ad7 >> >> for example. Then it frees up. Back when I was playing with the >> Icehouse release of RDO OpenStack, the floating IPs were released >> automatically when I terminated an instance through Horizon, so I was >> surprised when I did not see the same behaviour here. >> >> [root at os-con-01 ~]# rpm -q python-django-horizon >> python-django-horizon-2015.1.0-5.el7.noarch >> >> Just a heads up... >> >> cheers, >> erich >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >>https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com From mrunge at redhat.com Tue Nov 24 15:11:22 2015 From: mrunge at redhat.com (Matthias Runge) Date: Tue, 24 Nov 2015 16:11:22 +0100 Subject: [Rdo-list] Possible bug? Horizon/Floating IPs In-Reply-To: <5653AFEF.2000900@soe.ucsc.edu> References: <56539987.4040501@soe.ucsc.edu> <1A3C52DFCD06494D8528644858247BF01B83EB97@EX10MBOX03.pnnl.gov> <5653AB05.8060209@soe.ucsc.edu> <1A3C52DFCD06494D8528644858247BF01B83EBD2@EX10MBOX03.pnnl.gov> <5653AFEF.2000900@soe.ucsc.edu> Message-ID: <56547E1A.9060204@redhat.com> On 24/11/15 01:31, Erich Weiler wrote: > Hmmm.. When I delete an instance from the command line via: > > nova delete 33190d5c-da79-41f7-bc55-899f8f15cb7c > > It deletes the instance, but the floating IP remains stuck to that > deleted instance even after it's deleted. I guess it has nothing to do > with horizon.... > > I wonder where I misconfigured it... > My reasoning here and with my horizon upstream hat on: I see horizon as alternate input method to cli. Thus horizon should not implement additional functionality here for basic tasks. 
Matthias From radecki.rafal at gmail.com Tue Nov 24 15:18:50 2015 From: radecki.rafal at gmail.com (=?UTF-8?Q?Rafa=C5=82_Radecki?=) Date: Tue, 24 Nov 2015 16:18:50 +0100 Subject: [Rdo-list] Manual creation of stripped lvm volume in cinder. Message-ID: Hi All. I am using RDO Juno on CentOS 7 and am currently trying to change an existing volume attached to an openstack instance. The cinder node is separate from the compute node hosting the mentioned instance. On the cinder node I am using the LVM backend with iSCSI. I noticed that when I create a volume in cinder it is created as a linear one in LVM, and I would like to change it to a striped one to take advantage of the multiple physical disks available. Is there a way to manually create an LVM volume, notify the iSCSI daemon on the cinder node to export it, and then mount it on the target instance? Any howto about that? ;) Or maybe is there a way to tell cinder to create striped logical volumes in the LVM backend by default (instead of linear ones)? BR, Rafal. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cems at ebi.ac.uk Tue Nov 24 15:39:33 2015 From: cems at ebi.ac.uk (Charles Short) Date: Tue, 24 Nov 2015 15:39:33 +0000 Subject: [Rdo-list] Possible bug? Horizon/Floating IPs In-Reply-To: <56539987.4040501@soe.ucsc.edu> References: <56539987.4040501@soe.ucsc.edu> Message-ID: <565484B5.3000808@ebi.ac.uk> Hi, We had a 'similar' persistent floating ip issue a while ago that was reported to RH and fixed. All caused by HA routers. https://bugs.launchpad.net/neutron/+bug/1505700 https://review.openstack.org/#/c/234247/ Summary - Created a new user/project/router. Created an instance with a floating ip. The floating ip correctly appeared on the qrouter and the instance was accessible via ssh externally. We disassociated the floating ip from the instance BUT the floating ip was still bound to the qrouter. We released the floating ip BUT the floating ip was still bound to the qrouter. We logged into Horizon as a different user/project (different qrouter). We managed to allocate the floating ip still bound to the other qrouter to this qrouter. This ip was then associated to an instance. So the same floating ip was bound to two different qrouters and effectively associated to two different instances in two separate projects. When you ssh to the floating ip you connected to either instance depending on the ARP cache. Charles On 23/11/2015 22:56, Erich Weiler wrote: > Just thought I'd throw this out there as a possible bug... I'm > running RHEL 7.1 and OpenStack Kilo RDO. > > It seems that when I terminate an instance through Horizon that has an > associated floating IP, the floating IP is *not* disassociated upon > the instance's termination. I have to manually disassociate the > floating ip after I terminate the instance through Horizon via: > > neutron floatingip-disassociate e28051c5-7fb1-4887-ade9-f1b062523ad7 > > for example. Then it frees up.
> > cheers, > erich > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Charles Short Cloud Engineer Virtualization and Cloud Team European Bioinformatics Institute (EMBL-EBI) Tel: +44 (0)1223 494205 From Gal.Uzan at emc.com Tue Nov 24 16:23:07 2015 From: Gal.Uzan at emc.com (Uzan, Gal) Date: Tue, 24 Nov 2015 16:23:07 +0000 Subject: [Rdo-list] liberty packstack Error: Could not find data item CONFIG_USE_SUBNETS in any Hiera data file Message-ID: <3B061B7CCBAB1B44A65805A2C8DB72230126BD@MX202CL02.corp.emc.com> Hi, During installation of liberty RDO with packstack, installation fails Exact same actions worked just fine yesterday I can see in the repo that some packages were updated : http://mirror.centos.org/centos/7/cloud/x86_64/openstack-liberty/?C=M;O=D There also the same problem reported in Q&A : https://ask.openstack.org/en/question/85014/error-could-not-find-data-item-config_use_subnets-in-any-hiera-data-file/ [root at lgdrm141 ~]# packstack --answer-file=answers.txt Welcome to the Packstack setup utility The installation log file is available at: /var/tmp/packstack/20151124-165422-J_gajl/openstack-setup.log Installing: Clean Up [ DONE ] Discovering ip protocol version [ DONE ] Setting up ssh keys [ DONE ] Preparing servers [ DONE ] Pre installing Puppet and discovering hosts' details [ DONE ] Adding pre install manifest e Adding post install manifest entries [ DONE ] {some more steps} Copying Puppet modules and manifests [ DONE ] 10.103.232.27_prescript.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 10.103.232.27_prescript.pp Error: Could not find data item CONFIG_USE_SUBNETS in any Hiera data file and no default supplied at /var/tmp/packstack/7d10fe5796764c818d31e968493482a3/manifests/10.103.232.27_prescript.pp:2 on node lgdrm141.xiodrm.lab.emc.com You will find full trace in log /var/tmp/packstack/20151124-165422-J_gajl/manifests/10.103.232.27_prescript.pp.log Please check log file /var/tmp/packstack/20151124-165422-J_gajl/openstack-setup.log for more information Additional information: * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components. * File /root/keystonerc_admin has been created on OpenStack client host 10.103.232.27. To use the command line tools you need to source the file. * To access the OpenStack Dashboard browse to http://10.103.232.27/dashboard . Please, find your login credentials stored in the keystonerc_admin in your home directory. Any help is appreciated Thanks, Gal Uzan -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Tue Nov 24 16:56:04 2015 From: dms at redhat.com (David Moreau Simard) Date: Tue, 24 Nov 2015 11:56:04 -0500 Subject: [Rdo-list] liberty packstack Error: Could not find data item CONFIG_USE_SUBNETS in any Hiera data file In-Reply-To: <3B061B7CCBAB1B44A65805A2C8DB72230126BD@MX202CL02.corp.emc.com> References: <3B061B7CCBAB1B44A65805A2C8DB72230126BD@MX202CL02.corp.emc.com> Message-ID: Hi, This is a known issue and is being looked at in https://bugzilla.redhat.com/show_bug.cgi?id=1284978. The root cause of the problem is an update from EPEL on the Hiera package, a workaround is included in the bug while the fix is deployed. 
David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Tue, Nov 24, 2015 at 11:23 AM, Uzan, Gal wrote: > Hi, > > During installation of liberty RDO with packstack, installation fails > > Exact same actions worked just fine yesterday > > > > I can see in the repo that some packages were updated : > http://mirror.centos.org/centos/7/cloud/x86_64/openstack-liberty/?C=M;O=D > > There also the same problem reported in Q&A : > https://ask.openstack.org/en/question/85014/error-could-not-find-data-item-config_use_subnets-in-any-hiera-data-file/ > > > > > > [root at lgdrm141 ~]# packstack --answer-file=answers.txt > > Welcome to the Packstack setup utility > > The installation log file is available at: > /var/tmp/packstack/20151124-165422-J_gajl/openstack-setup.log > > Installing: > > Clean Up [ DONE ] > > Discovering ip protocol version [ DONE ] > > Setting up ssh keys [ DONE ] > > Preparing servers [ DONE ] > > Pre installing Puppet and discovering hosts' details [ DONE ] > > Adding pre install manifest e > > Adding post install manifest entries [ DONE ] > > {some more steps} > > Copying Puppet modules and manifests [ DONE ] > > 10.103.232.27_prescript.pp: [ ERROR ] > > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 10.103.232.27_prescript.pp > > Error: Could not find data item CONFIG_USE_SUBNETS in any Hiera data file > and no default supplied at > /var/tmp/packstack/7d10fe5796764c818d31e968493482a3/manifests/10.103.232.27_prescript.pp:2 > on node lgdrm141.xiodrm.lab.emc.com > > You will find full trace in log > /var/tmp/packstack/20151124-165422-J_gajl/manifests/10.103.232.27_prescript.pp.log > > Please check log file > /var/tmp/packstack/20151124-165422-J_gajl/openstack-setup.log for more > information > > Additional information: > > * Time synchronization installation was skipped. Please note that > unsynchronized time on server instances might be problem for some OpenStack > components. > > * File /root/keystonerc_admin has been created on OpenStack client host > 10.103.232.27. To use the command line tools you need to source the file. > > * To access the OpenStack Dashboard browse to http://10.103.232.27/dashboard > . > > Please, find your login credentials stored in the keystonerc_admin in your > home directory. > > > > > > Any help is appreciated > > Thanks, > > Gal Uzan > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From Gal.Uzan at emc.com Tue Nov 24 17:01:51 2015 From: Gal.Uzan at emc.com (Uzan, Gal) Date: Tue, 24 Nov 2015 17:01:51 +0000 Subject: [Rdo-list] liberty packstack Error: Could not find data item CONFIG_USE_SUBNETS in any Hiera data file In-Reply-To: References: <3B061B7CCBAB1B44A65805A2C8DB72230126BD@MX202CL02.corp.emc.com> Message-ID: <3B061B7CCBAB1B44A65805A2C8DB722301291F@MX202CL02.corp.emc.com> Thank you David -----Original Message----- From: David Moreau Simard [mailto:dms at redhat.com] Sent: Tuesday, November 24, 2015 6:56 PM To: Uzan, Gal Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] liberty packstack Error: Could not find data item CONFIG_USE_SUBNETS in any Hiera data file Hi, This is a known issue and is being looked at in https://bugzilla.redhat.com/show_bug.cgi?id=1284978. The root cause of the problem is an update from EPEL on the Hiera package, a workaround is included in the bug while the fix is deployed. 
David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Tue, Nov 24, 2015 at 11:23 AM, Uzan, Gal wrote: > Hi, > > During installation of liberty RDO with packstack, installation fails > > Exact same actions worked just fine yesterday > > > > I can see in the repo that some packages were updated : > http://mirror.centos.org/centos/7/cloud/x86_64/openstack-liberty/?C=M; > O=D > > There also the same problem reported in Q&A : > https://ask.openstack.org/en/question/85014/error-could-not-find-data- > item-config_use_subnets-in-any-hiera-data-file/ > > > > > > [root at lgdrm141 ~]# packstack --answer-file=answers.txt > > Welcome to the Packstack setup utility > > The installation log file is available at: > /var/tmp/packstack/20151124-165422-J_gajl/openstack-setup.log > > Installing: > > Clean Up [ DONE ] > > Discovering ip protocol version [ DONE ] > > Setting up ssh keys [ DONE ] > > Preparing servers [ DONE ] > > Pre installing Puppet and discovering hosts' details [ DONE ] > > Adding pre install manifest e > > Adding post install manifest entries [ DONE ] > > {some more steps} > > Copying Puppet modules and manifests [ DONE ] > > 10.103.232.27_prescript.pp: [ ERROR ] > > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 10.103.232.27_prescript.pp > > Error: Could not find data item CONFIG_USE_SUBNETS in any Hiera data > file and no default supplied at > /var/tmp/packstack/7d10fe5796764c818d31e968493482a3/manifests/10.103.2 > 32.27_prescript.pp:2 > on node lgdrm141.xiodrm.lab.emc.com > > You will find full trace in log > /var/tmp/packstack/20151124-165422-J_gajl/manifests/10.103.232.27_pres > cript.pp.log > > Please check log file > /var/tmp/packstack/20151124-165422-J_gajl/openstack-setup.log for more > information > > Additional information: > > * Time synchronization installation was skipped. Please note that > unsynchronized time on server instances might be problem for some > OpenStack components. > > * File /root/keystonerc_admin has been created on OpenStack client > host 10.103.232.27. To use the command line tools you need to source the file. > > * To access the OpenStack Dashboard browse to > http://10.103.232.27/dashboard . > > Please, find your login credentials stored in the keystonerc_admin in > your home directory. > > > > > > Any help is appreciated > > Thanks, > > Gal Uzan > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From chkumar246 at gmail.com Wed Nov 25 11:58:22 2015 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 25 Nov 2015 17:28:22 +0530 Subject: [Rdo-list] RDO Bugs Statistics on 2015-11-25 Message-ID: # RDO Bugs on 2015-11-25 This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at . To report a new bug against RDO, go to: ## Summary - Open (NEW, ASSIGNED, ON_DEV): 343 - Fixed (MODIFIED, POST, ON_QA): 200 ## Number of open bugs by component dib-utils [ 1] diskimage-builder [ 4] ++ distribution [ 14] +++++++++ dnsmasq [ 1] Documentation [ 4] ++ instack [ 4] ++ instack-undercloud [ 28] +++++++++++++++++++ iproute [ 1] openstack-ceilometer [ 5] +++ openstack-cinder [ 14] +++++++++ openstack-foreman-inst... [ 2] + openstack-glance [ 2] + openstack-heat [ 3] ++ openstack-horizon [ 2] + openstack-ironic [ 1] openstack-ironic-disco... 
[ 2] + openstack-keystone [ 9] ++++++ openstack-manila [ 10] ++++++ openstack-neutron [ 11] +++++++ openstack-nova [ 19] +++++++++++++ openstack-packstack [ 58] ++++++++++++++++++++++++++++++++++++++++ openstack-puppet-modules [ 12] ++++++++ openstack-selinux [ 11] +++++++ openstack-swift [ 3] ++ openstack-tripleo [ 28] +++++++++++++++++++ openstack-tripleo-heat... [ 5] +++ openstack-tripleo-imag... [ 2] + openstack-tuskar [ 3] ++ openstack-utils [ 4] ++ openvswitch [ 1] Package Review [ 5] +++ python-glanceclient [ 2] + python-keystonemiddleware [ 1] python-neutronclient [ 3] ++ python-novaclient [ 1] python-openstackclient [ 5] +++ python-oslo-config [ 2] + rdo-manager [ 49] +++++++++++++++++++++++++++++++++ rdo-manager-cli [ 6] ++++ rdopkg [ 1] RFEs [ 3] ++ tempest [ 1] ## Open bugs This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed. (343 bugs) ### dib-utils (1 bug) [1283812 ] http://bugzilla.redhat.com/1283812 (NEW) Component: dib-utils Last change: 2015-11-20 Summary: local_interface=bond0.120 in undercloud.conf create broken network configuration ### diskimage-builder (4 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version ### distribution (14 bugs) [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO: Packages needed to support AMQP1.0 [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-11-17 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages [1266923 ] http://bugzilla.redhat.com/1266923 (NEW) Component: distribution Last change: 2015-10-07 Summary: RDO's hdf5 rpm/yum dependencies conflicts [1271169 ] http://bugzilla.redhat.com/1271169 (NEW) Component: distribution Last change: 2015-10-13 Summary: [doc] virtual environment setup [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1275608 ] http://bugzilla.redhat.com/1275608 (NEW) Component: distribution Last change: 2015-10-27 Summary: EOL'ed rpm file URL not up to date [1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution 
Last change: 2015-09-16 Summary: Memcached not built with SASL support [1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### Documentation (4 bugs) [1272111 ] http://bugzilla.redhat.com/1272111 (NEW) Component: Documentation Last change: 2015-10-15 Summary: RFE : document how to access horizon in RDO manager VIRT setup [1272108 ] http://bugzilla.redhat.com/1272108 (NEW) Component: Documentation Last change: 2015-10-15 Summary: [DOC] External network should be documents in RDO manager installation [1271793 ] http://bugzilla.redhat.com/1271793 (NEW) Component: Documentation Last change: 2015-10-14 Summary: rdo-manager doc has incomplete /etc/hosts configuration [1271888 ] http://bugzilla.redhat.com/1271888 (NEW) Component: Documentation Last change: 2015-10-15 Summary: step required to build images for overcloud ### instack (4 bugs) [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (28 bugs) [1266451 ] http://bugzilla.redhat.com/1266451 (NEW) Component: instack-undercloud Last change: 2015-09-30 Summary: instack-undercloud fails to setup seed vm, parse error while creating ssh key [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build- images script [1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout [1271200 ] http://bugzilla.redhat.com/1271200 (ASSIGNED) Component: instack-undercloud Last change: 2015-10-20 Summary: Overcloud images contain Kilo repos [1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2015-09-23 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for 
instack-undercloud have an incorrect network topology [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six." [1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1270585 ] http://bugzilla.redhat.com/1270585 (NEW) Component: instack-undercloud Last change: 2015-10-19 Summary: instack isntallation fails with parse error: Invalid string liberty on CentOS [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1266101 ] http://bugzilla.redhat.com/1266101 (NEW) Component: instack-undercloud Last change: 2015-09-29 Summary: instack-virt-setup fails on CentOS7 [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. (4048492 < 4G)" [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. See log for details.', 'os-refresh-config')" [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. 
[1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-08-20 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (5 bugs) [1219372 ] http://bugzilla.redhat.com/1219372 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Info about 'severity' field changes is not displayed via alarm-history call [1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions [1231326 ] http://bugzilla.redhat.com/1231326 (NEW) Component: openstack-ceilometer Last change: 2015-06-12 Summary: kafka publisher requires kafka-python library [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: python-redis is not installed with packstack allinone [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Wrong alarms order on 'severity' field ### openstack-cinder (14 bugs) [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1268182 ] http://bugzilla.redhat.com/1268182 (NEW) Component: openstack-cinder Last change: 2015-10-02 Summary: cinder spontaneously sets instance root device to 'available' [1206864 ] 
http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1229551 ] http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type" [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage ### openstack-foreman-installer (2 bugs) [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance- common: python-glance ### openstack-heat (3 bugs) [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null ### openstack-horizon (2 bugs) [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable [1275656 ] http://bugzilla.redhat.com/1275656 (NEW) Component: openstack-horizon Last change: 2015-10-28 Summary: FontAwesome lib bad path ### openstack-ironic (1 bug) [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last 
change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. (HTTP 409) ### openstack-ironic-discoverd (2 bugs) [1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-ironic-discoverd Last change: 2015-04-09 Summary: Introspection times out after more than an hour [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (9 bugs) [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-11-24 Summary: wrong log directories in /usr/share/keystone/wsgi- keystone.conf [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-10-26 Summary: keystone-all process reaches 100% CPU consumption [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-06-01 Summary: keystone: add token flush cronjob script to keystone package [1280530 ] http://bugzilla.redhat.com/1280530 (NEW) Component: openstack-keystone Last change: 2015-11-12 Summary: Fernet tokens cannot read key files with SELInuxz enabeld [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1284871 ] http://bugzilla.redhat.com/1284871 (NEW) Component: openstack-keystone Last change: 2015-11-24 Summary: /usr/share/keystone/wsgi-keystone.conf is missing group=keystone [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non- existent class ### openstack-manila (10 bugs) [1278918 ] http://bugzilla.redhat.com/1278918 (NEW) Component: openstack-manila Last change: 2015-11-06 Summary: manila-api fails to start without updates from upstream stable/liberty [1272957 ] http://bugzilla.redhat.com/1272957 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver: same volumes are re-used with vol mapped layout after restarting manila services [1277787 ] http://bugzilla.redhat.com/1277787 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: Glusterfs_driver: Export location for Glusterfs NFS- Ganesha is incorrect [1272960 ] http://bugzilla.redhat.com/1272960 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Glusterfs NFS-Ganesha share's export location should be uniform for both nfsv3 & nfsv4 protocols [1277792 ] http://bugzilla.redhat.com/1277792 (NEW) Component: openstack-manila Last change: 2015-11-04 Summary: glusterfs_driver: Access-deny for glusterfs driver should be dynamic [1278919 ] http://bugzilla.redhat.com/1278919 (NEW) Component: openstack-manila Last change: 2015-11-12 Summary: AvailabilityZoneFilter is not working in manila- scheduler [1272962 ] http://bugzilla.redhat.com/1272962 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: 
glusterfs_driver: Attempt to create share fails ungracefully when backend gluster volumes aren't exported [1272970 ] http://bugzilla.redhat.com/1272970 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_native: cannot connect via SSH using password authentication to multiple gluster clusters with different passwords [1272968 ] http://bugzilla.redhat.com/1272968 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs vol based layout: Deleting a share created from snapshot should also delete its backend gluster volume [1272958 ] http://bugzilla.redhat.com/1272958 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver - vol based layout: share size may be misleading ### openstack-neutron (11 bugs) [1282403 ] http://bugzilla.redhat.com/1282403 (NEW) Component: openstack-neutron Last change: 2015-11-23 Summary: Errors when running tempest.api.network.test_ports with IPAM reference driver enabled [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1269610 ] http://bugzilla.redhat.com/1269610 (ASSIGNED) Component: openstack-neutron Last change: 2015-11-19 Summary: Overcloud deployment fails - openvswitch agent is not running and nova instances end up in error state [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". 
[1266381 ] http://bugzilla.redhat.com/1266381 (NEW) Component: openstack-neutron Last change: 2015-11-12 Summary: OpenStack Liberty QoS feature is not working on EL7 as is need MySQL-python-1.2.5 [1281308 ] http://bugzilla.redhat.com/1281308 (NEW) Component: openstack-neutron Last change: 2015-11-12 Summary: QoS policy is not enforced when using a previously used port [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks [1280258 ] http://bugzilla.redhat.com/1280258 (NEW) Component: openstack-neutron Last change: 2015-11-11 Summary: tenants seem like they are able to detach admin enforced QoS policies from ports or networks [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https ### openstack-nova (19 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: v4-fixed-ip= not working with juno nova networking [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-05-06 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Kilo assigning ipv6 address, even though its disabled. 
[1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: horizon console uses http when horizon is set to use ssl [1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2015-05-22 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: novnc init script doesnt write to log [1271033 ] http://bugzilla.redhat.com/1271033 (NEW) Component: openstack-nova Last change: 2015-10-19 Summary: nova.conf.sample is out of date [1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1278808 ] http://bugzilla.redhat.com/1278808 (NEW) Component: openstack-nova Last change: 2015-11-06 Summary: Guest fails to use more than 1 vCPU with smpboot: do_boot_cpu failed(-1) to wakeup [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Nova - db connection string present on compute nodes [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: nova object store allow get object after date exires [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: nova: fail to edit project quota with DataError from nova [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-10-17 Summary: Ensure translations are installed correctly and picked up at runtime [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Nova AVC messages [1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2015-09-11 Summary: logrotate should copytruncate to avoid oepnstack logging to deleted files [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Installation of openstack-nova-compute fails on PowerKVM ### openstack-packstack (58 bugs) [1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any tenant [1284182 ] http://bugzilla.redhat.com/1284182 (NEW) Component: openstack-packstack Last change: 2015-11-21 Summary: Unable start Keystone, core dump [1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel [1271246 ] http://bugzilla.redhat.com/1271246 (NEW) Component: openstack-packstack Last change: 2015-10-13 Summary: packstack failed to start nova.api [1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest [1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-11-06 Summary: Error: 
service-update is not currently supported by the keystone sql driver [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network- scripts/ifcfg-br-ex [1275803 ] http://bugzilla.redhat.com/1275803 (NEW) Component: openstack-packstack Last change: 2015-10-27 Summary: packstack --allinone fails on Fedora 22-3 during _keystone.pp [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack [1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails [1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 Summary: swift: Admin user does not have permissions to see containers created by glance service [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1257352 ] http://bugzilla.redhat.com/1257352 (NEW) Component: openstack-packstack Last change: 2015-09-22 Summary: nss.load missing from packstack, httpd unable to start. [1232455 ] http://bugzilla.redhat.com/1232455 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Errors install kilo on fedora21 [1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config [1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands' [1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail [1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can [1269535 ] http://bugzilla.redhat.com/1269535 (NEW) Component: openstack-packstack Last change: 2015-10-07 Summary: packstack script does not test to see if the rc files *were* created. 
[1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start [1266028 ] http://bugzilla.redhat.com/1266028 (NEW) Component: openstack-packstack Last change: 2015-10-08 Summary: Packstack should use pymysql database driver since Liberty [1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance [1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1266196 ] http://bugzilla.redhat.com/1266196 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Packstack Fails on prescript.pp with "undefined method 'unsafe_load_file' for Psych:Module" [1270770 ] http://bugzilla.redhat.com/1270770 (NEW) Component: openstack-packstack Last change: 2015-10-12 Summary: Packstack generated CONFIG_MANILA_SERVICE_IMAGE_LOCATION points to a dropbox link [1279642 ] http://bugzilla.redhat.com/1279642 (NEW) Component: openstack-packstack Last change: 2015-11-09 Summary: Packstack run fails when running with DEMO [1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest [1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred at during setup Ironic via packstack. Invalid parameter rabbit_user [1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd] [1279641 ] http://bugzilla.redhat.com/1279641 (NEW) Component: openstack-packstack Last change: 2015-11-09 Summary: Packstack run does not install keystoneauth1 [1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-11-21 Summary: Packstack --allinone fails while starting HTTPD service [1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp [1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tm p/packstack/b77f37620d9f4794b6f38730442962b6/manifests/ xxx.xxx.xxx.xxx_swift.pp:90 [1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found [1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image [1276277 ] http://bugzilla.redhat.com/1276277 (NEW) Component: openstack-packstack Last change: 2015-10-31 Summary: packstack --allinone fails on CentOS 7 x86_64 1503-01 [1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting [1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend. 
[1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced [1265816 ] http://bugzilla.redhat.com/1265816 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Manila Puppet Module Expects Glance Endpoint to Be Available for Upload of Service Image [1284984 ] http://bugzilla.redhat.com/1284984 (NEW) Component: openstack-packstack Last change: 2015-11-24 Summary: Could not find data item CONFIG_USE_SUBNETS in any Hiera data [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service [1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service" [1264843 ] http://bugzilla.redhat.com/1264843 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list iptables-ipv6' returned 1: Error: No matching Packages to list [1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack,when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br- eno50:eno50,encounters an error?ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp ?. [1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on [1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters [1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/17 2.16.32.71_mariadb.pp:28 on node [1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in " failed " status when CONFIG_KEYSTONE_SERVICE_NAME=httpd [1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server] [1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail ### openstack-puppet-modules (12 bugs) [1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start [1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering [1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm [1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API. 
[1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled [1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication [1271138 ] http://bugzilla.redhat.com/1271138 (NEW) Component: openstack-puppet-modules Last change: 2015-11-23 Summary: puppet module for manila should include service type - shareV2 [1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing ### openstack-selinux (11 bugs) [1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied [1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotF ound(config_file=paste_config_value) [1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux [1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages [1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000 [1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal; [1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC [1284879 ] http://bugzilla.redhat.com/1284879 (NEW) Component: openstack-selinux Last change: 2015-11-24 Summary: Keystone via mod_wsgi is missing permission to read /etc/keystone/fernet-keys [1268124 ] http://bugzilla.redhat.com/1268124 (NEW) Component: openstack-selinux Last change: 2015-10-29 Summary: Nova rootwrap-daemon requires a selinux exception [1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux ### openstack-swift (3 bugs) [1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files [1274308 ] 
http://bugzilla.redhat.com/1274308 (NEW) Component: openstack-swift Last change: 2015-10-22 Summary: Consistently occurring swift related failures in RDO with a HA deployment [1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations ### openstack-tripleo (28 bugs) [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf [1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos [1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic in to TripleO [1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever [1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix [1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted [1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-10-09 Summary: Node registration fails silently if instackenv.json is badly formatted [1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful [1282328 ] http://bugzilla.redhat.com/1282328 (ASSIGNED) Component: openstack-tripleo Last change: 2015-11-23 Summary: Facter version 3+ is required for tripleo to provide IPv6 support [1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead [1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse [1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect [1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes [1277980 ] http://bugzilla.redhat.com/1277980 (NEW) Component: openstack-tripleo Last change: 2015-11-04 Summary: missing python-proliantutils [1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: 
instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created [1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API [1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar [1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials [1284664 ] http://bugzilla.redhat.com/1284664 (NEW) Component: openstack-tripleo Last change: 2015-11-23 Summary: NtpServer is passed as string by "openstack overcloud deploy" [1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI [1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack- build-images [1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter [1277990 ] http://bugzilla.redhat.com/1277990 (NEW) Component: openstack-tripleo Last change: 2015-11-04 Summary: openstack-ironic-inspector-dnsmasq.service: failed to start during undercloud installation [1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6 [1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints [1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack" ### openstack-tripleo-heat-templates (5 bugs) [1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template [1266027 ] http://bugzilla.redhat.com/1266027 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-08 Summary: TripleO should use pymysql database driver since Liberty [1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1 [1271411 ] http://bugzilla.redhat.com/1271411 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-13 Summary: Unable to deploy internal api endpoint for keystone on a different network to admin api [1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template ### openstack-tripleo-image-elements (2 bugs) [1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux [1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 
Summary: mariadb my.cnf socket path does not exist ### openstack-tuskar (3 bugs) [1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails [1229493 ] http://bugzilla.redhat.com/1229493 (ASSIGNED) Component: openstack-tuskar Last change: 2015-07-27 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates [1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state ### openstack-utils (4 bugs) [1211989 ] http://bugzilla.redhat.com/1211989 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status shows 'disabled on boot' for the mysqld service [1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: Can't enable OpenStack service after openstack-service disable [1270615 ] http://bugzilla.redhat.com/1270615 (NEW) Component: openstack-utils Last change: 2015-10-11 Summary: openstack status still checking mysql not mariadb [1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-service tries to restart neutron-ovs- cleanup.service ### openvswitch (1 bug) [1209003 ] http://bugzilla.redhat.com/1209003 (ASSIGNED) Component: openvswitch Last change: 2015-08-18 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity ### Package Review (5 bugs) [1283295 ] http://bugzilla.redhat.com/1283295 (NEW) Component: Package Review Last change: 2015-11-18 Summary: Review Request: CloudKitty - Rating as a Service [1272524 ] http://bugzilla.redhat.com/1272524 (ASSIGNED) Component: Package Review Last change: 2015-11-19 Summary: Review Request: openstack-mistral - workflow Service for OpenStack cloud [1268372 ] http://bugzilla.redhat.com/1268372 (ASSIGNED) Component: Package Review Last change: 2015-11-24 Summary: Review Request: openstack-app-catalog-ui - openstack horizon plugin for the openstack app-catalog [1272513 ] http://bugzilla.redhat.com/1272513 (ASSIGNED) Component: Package Review Last change: 2015-11-05 Summary: Review Request: Murano - is an application catalog for OpenStack [1279513 ] http://bugzilla.redhat.com/1279513 (ASSIGNED) Component: Package Review Last change: 2015-11-13 Summary: New Package: python-dracclient ### python-glanceclient (2 bugs) [1244291 ] http://bugzilla.redhat.com/1244291 (ASSIGNED) Component: python-glanceclient Last change: 2015-10-21 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py [1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0 ### python-keystonemiddleware (1 bug) [1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-10-26 Summary: Rebase python-keystonemiddleware to version 1.3 ### python-neutronclient (3 bugs) [1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility [1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long [1281352 ] http://bugzilla.redhat.com/1281352 (NEW) Component: 
python-neutronclient Last change: 2015-11-12 Summary: Internal server error when running qos-bandwidth-limit- rule-update as a tenant Edit ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six ### python-openstackclient (5 bugs) [1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2015-04-16 Summary: Usage is not described accurately for 99% of openstack baremetal [1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2015-04-28 Summary: `openstack ip floating delete` fails if we specify IP address as input [1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2015-06-13 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp [1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2015-06-04 Summary: Add --user to project list command to filter projects by user [1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2015-07-10 Summary: appdirs requirement ### python-oslo-config (2 bugs) [1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2015-08-28 Summary: oslo_config != oslo.config [1282093 ] http://bugzilla.redhat.com/1282093 (NEW) Component: python-oslo-config Last change: 2015-11-14 Summary: please rebase oslo.log to 1.12.0 ### rdo-manager (49 bugs) [1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment [1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable [1269657 ] http://bugzilla.redhat.com/1269657 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support configuration of default subnet pools [1264526 ] http://bugzilla.redhat.com/1264526 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Deployment of Undercloud [1273574 ] http://bugzilla.redhat.com/1273574 (ASSIGNED) Component: rdo-manager Last change: 2015-10-22 Summary: rdo-manager liberty, delete node is failing [1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built [1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS. 
[1274060 ] http://bugzilla.redhat.com/1274060 (NEW) Component: rdo-manager Last change: 2015-10-23 Summary: [SELinux][RHEL7] openstack-ironic-inspector- dnsmasq.service fails to start with SELinux enabled [1269655 ] http://bugzilla.redhat.com/1269655 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support deploying VPNaaS [1271336 ] http://bugzilla.redhat.com/1271336 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Enable configuration of OVS ARP Responder [1269890 ] http://bugzilla.redhat.com/1269890 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support IPv6 [1270818 ] http://bugzilla.redhat.com/1270818 (NEW) Component: rdo-manager Last change: 2015-10-20 Summary: Two ironic-inspector processes are running on the undercloud, breaking the introspection [1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles [1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP) [1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment [1270370 ] http://bugzilla.redhat.com/1270370 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: [RDO-Manager] bulk introspection moving the nodes from available to manageable too quickly [getting: NodeLocked:] [1269002 ] http://bugzilla.redhat.com/1269002 (ASSIGNED) Component: rdo-manager Last change: 2015-10-14 Summary: instack-undercloud: overcloud HA deployment fails - the rabbitmq doesn't run on the controllers. [1271232 ] http://bugzilla.redhat.com/1271232 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: tempest_lib.exceptions.Conflict: An object with that identifier already exists [1270805 ] http://bugzilla.redhat.com/1270805 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: Glance client returning 'Expected endpoint' [1271335 ] http://bugzilla.redhat.com/1271335 (NEW) Component: rdo-manager Last change: 2015-11-09 Summary: [RFE] Support explicit configuration of L2 population [1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start [1271317 ] http://bugzilla.redhat.com/1271317 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: instack-virt-setup fails: error Running install- packages install [1272376 ] http://bugzilla.redhat.com/1272376 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: Duplicate nova hypervisors after rebooting compute nodes [1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone [1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection [1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. 
There are not enough hosts available., Code: 500" [1272180 ] http://bugzilla.redhat.com/1272180 (ASSIGNED) Component: rdo-manager Last change: 2015-11-13 Summary: Horizon doesn't load when deploying without pacemaker [1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: rfe: support Keystone HTTPD [1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Read bit set for others for Openstack services directories in /etc [1273121 ] http://bugzilla.redhat.com/1273121 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: openstack help returns errors [1270910 ] http://bugzilla.redhat.com/1270910 (ASSIGNED) Component: rdo-manager Last change: 2015-10-15 Summary: IP address from external subnet gets assigned to br-ex when using default single-nic-vlans templates [1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree" [1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon [1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack [1272167 ] http://bugzilla.redhat.com/1272167 (NEW) Component: rdo-manager Last change: 2015-11-16 Summary: [RFE] Support enabling the port security extension [1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment. [1269622 ] http://bugzilla.redhat.com/1269622 (NEW) Component: rdo-manager Last change: 2015-11-16 Summary: [RFE] support override of API and RPC worker counts [1271289 ] http://bugzilla.redhat.com/1271289 (NEW) Component: rdo-manager Last change: 2015-11-18 Summary: overcloud-novacompute stuck in spawning state [1269894 ] http://bugzilla.redhat.com/1269894 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Add creation of demo tenant, network and installation of demo images [1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure [1269661 ] http://bugzilla.redhat.com/1269661 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Supporting SR-IOV enabled deployments [1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url" [1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images [1273541 ] http://bugzilla.redhat.com/1273541 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: RDO-Manager needs epel.repo enabled (otherwise undercloud deployment fails.) 
[1271726 ] http://bugzilla.redhat.com/1271726 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: 1 of the overcloud VMs (nova) is stack in spawning state [1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device- mapper* [1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci [1273680 ] http://bugzilla.redhat.com/1273680 (ASSIGNED) Component: rdo-manager Last change: 2015-10-21 Summary: HA overcloud with network isolation deployment fails [1276097 ] http://bugzilla.redhat.com/1276097 (NEW) Component: rdo-manager Last change: 2015-10-31 Summary: dnsmasq-dhcp: DHCPDISCOVER no address available ### rdo-manager-cli (6 bugs) [1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step [1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the ouptut of openstack management plan show --long command is not readable [1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value [1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL) [1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command [1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling ### rdopkg (1 bug) [1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download ### RFEs (3 bugs) [1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot [1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool [1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: RFEs Last change: 2015-04-30 Summary: IPMI driver for Ironic should support RAID for operating system/root parition ### tempest (1 bug) [1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo- manager ## Fixed bugs This is a list of "fixed" bugs by component. A "fixed" bug is fixed state MODIFIED, POST, ON_QA and has been fixed. You can help out by testing the fix to make sure it works as intended. 
(200 bugs) ### diskimage-builder (1 bug) [1228761 ] http://bugzilla.redhat.com/1228761 (MODIFIED) Component: diskimage-builder Last change: 2015-09-23 Summary: DIB_YUM_REPO_CONF points to two files and that breaks imagebuilding ### distribution (6 bugs) [1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack- neutron-*aas [1265690 ] http://bugzilla.redhat.com/1265690 (ON_QA) Component: distribution Last change: 2015-09-28 Summary: Update python-networkx to 1.10 [1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: update el6 icehouse kombu packages for improved performance [1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api [1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr [1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO ### instack-undercloud (2 bugs) [1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six" [1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined ### openstack-ceilometer (8 bugs) [1265708 ] http://bugzilla.redhat.com/1265708 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-05 Summary: Ceilometer requires pymongo>=3.0.2 [1214928 ] http://bugzilla.redhat.com/1214928 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-15 Summary: package ceilometermiddleware missing [1265721 ] http://bugzilla.redhat.com/1265721 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-15 Summary: FIle /etc/ceilometer/meters.yaml missing [1263839 ] http://bugzilla.redhat.com/1263839 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-05 Summary: openstack-ceilometer should requires python-oslo-policy in kilo [1265746 ] http://bugzilla.redhat.com/1265746 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-05 Summary: Options 'disable_non_metric_meters' and 'meter_definitions_cfg_file' are missing from ceilometer.conf [1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency [1271002 ] http://bugzilla.redhat.com/1271002 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-15 Summary: Ceilometer dbsync failing during HA deployment [1265818 ] http://bugzilla.redhat.com/1265818 (MODIFIED) Component: openstack-ceilometer Last change: 2015-11-05 Summary: ceilometer polling agent does not start ### openstack-cinder (5 bugs) [1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0] [1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder [1081022 ] 
http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO) [994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo] [1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume) ### openstack-glance (5 bugs) [1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests [1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue [1278962 ] http://bugzilla.redhat.com/1278962 (ON_QA) Component: openstack-glance Last change: 2015-11-13 Summary: python-cryptography requires pyasn1>=0.1.8 but only 0.1.6 is available in Centos [1268146 ] http://bugzilla.redhat.com/1268146 (ON_QA) Component: openstack-glance Last change: 2015-10-02 Summary: openstack-glance-registry will not start: missing systemd dependency [1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files ### openstack-heat (3 bugs) [1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack- heat [1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE [1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: missing dependency in Heat delorean build ### openstack-horizon (1 bug) [1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing ### openstack-ironic-discoverd (1 bug) [1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-keystone (1 bug) [1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-11-17 Summary: file templated catalogs do not work in protocol v3 ### openstack-neutron (15 bugs) [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2015-10-26 Summary: neutron should not specify signing_dir in neutron- dist.conf [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials 
[1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. neutron-db-manage trying to import systemd [1281920 ] http://bugzilla.redhat.com/1281920 (POST) Component: openstack-neutron Last change: 2015-11-16 Summary: neutron-server will not start: fails with pbr version issue [1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev [1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack- neutron-openvswitch installed [1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service [1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id' [1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini [1270325 ] http://bugzilla.redhat.com/1270325 (MODIFIED) Component: openstack-neutron Last change: 2015-10-19 Summary: neutron-ovs-cleanup fails to start with bad path to ovs plugin configuration ### openstack-nova (5 bugs) [1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2014-06-03 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail [1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2015-05-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all [1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2015-05-05 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options [1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2015-04-14 Summary: openstack-nova-compute fails to start because python- psutil is missing after installing with packstack [958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence. ### openstack-packstack (62 bugs) [1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db. 
[1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error [1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/ manifests/192.168.122.82_api_nova.pp:41:3 [976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-10-07 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local) [1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed [1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled [964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user [1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Change packstack to use openstack-puppet-modules [1269158 ] http://bugzilla.redhat.com/1269158 (POST) Component: openstack-packstack Last change: 2015-10-19 Summary: Sahara configuration should be affected by heat availability (broken by default right now) [1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: openstack-packstack Last change: 2015-07-29 Summary: Cannot start nova-network on juno - Centos7 [1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher [1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email [1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils [1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes [1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error [958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails [1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. 
[1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack [1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in [1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning [979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules [1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack icehouse doesn't install anything because of repo [1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non- localhost connections to 'guest' user [1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5 [957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios [995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: support setting up apache to serve keystone requests [1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1 [1259354 ] http://bugzilla.redhat.com/1259354 (MODIFIED) Component: openstack-packstack Last change: 2015-11-10 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error [990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rdo release RPM not installed on all fedora hosts [1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required [1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"? 
[1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre [1284978 ] http://bugzilla.redhat.com/1284978 (MODIFIED) Component: openstack-packstack Last change: 2015-11-25 Summary: packstack --allione fails on applying prescript.pp due to new hiera package [1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed [1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5 [1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl [1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp [1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value [1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack requires 2 runs to install ceilometer [1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available [1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface" [1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive [1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2014-04-23 Summary: packstack configures br-ex to use gateway ip [1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/N ova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[ net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required [1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added [1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond [1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance [1265661 ] http://bugzilla.redhat.com/1265661 (POST) Component: openstack-packstack Last change: 2015-09-23 Summary: Packstack does not install Sahara services (RDO Liberty) [1119920 ] 
http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-10-23 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7 [974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) Component: openstack-packstack Last change: 2015-11-12 Summary: please give greater control over use of EPEL [1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl [1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default [1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent [1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface [1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL [1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32 [991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted [1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7) [1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels [1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution ### openstack-puppet-modules (19 bugs) [1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: cinder modules require glance installed [1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables- services [1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure" [1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support [1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: explicit check for pymongo is incorrect [1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors [1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation [1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD 
configuration breaks GSSAPI [1270957 ] http://bugzilla.redhat.com/1270957 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-10-13 Summary: Undercloud install fails on Error: Could not find class ::ironic::inspector for instack on node instack [1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by packstack for demo tenant is wrongly marked as external [1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: Could not find command 'restorecon' [1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator [1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno [1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6 [1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: netns.py syntax error [1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance [1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance [1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig /network-scripts/ifcfg-br-{int,tun} [1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp does not ensure iptables-services package installation ### openstack-sahara (1 bug) [1268235 ] http://bugzilla.redhat.com/1268235 (MODIFIED) Component: openstack-sahara Last change: 2015-10-02 Summary: rootwrap filter not included in Sahara RPM ### openstack-selinux (13 bugs) [1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7) [1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2015-06-23 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicO ps fails to launch instance w/ selinux enforcing [1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications [1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors [1219406 ] http://bugzilla.redhat.com/1219406 (MODIFIED) Component: openstack-selinux Last change: 2015-11-06 Summary: Glance over nfs fails due to selinux [1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp [1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances, [1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: 
openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service [1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?" [1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux policies set for neutron-dhcp-agent [1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later [1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access [1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors ### openstack-swift (1 bug) [997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages ### openstack-tripleo-heat-templates (2 bugs) [1235508 ] http://bugzilla.redhat.com/1235508 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-09-29 Summary: Package update does not take puppet managed packages into account [1272572 ] http://bugzilla.redhat.com/1272572 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-11-22 Summary: Error: Unable to retrieve volume limit information when accessing System Defaults in Horizon ### openstack-trove (3 bugs) [1278608 ] http://bugzilla.redhat.com/1278608 (MODIFIED) Component: openstack-trove Last change: 2015-11-06 Summary: trove-api fails to start [1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies [1219069 ] http://bugzilla.redhat.com/1219069 (POST) Component: openstack-trove Last change: 2015-11-05 Summary: trove-guestagent systemd unit file uses incorrect path for guest_info ### openstack-tuskar (1 bug) [1222718 ] http://bugzilla.redhat.com/1222718 (ON_QA) Component: openstack-tuskar Last change: 2015-07-06 Summary: MySQL Column is Too Small for Heat Template ### openstack-tuskar-ui (3 bugs) [1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails [1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py [1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path ### openstack-utils (2 bugs) [1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: update openstack-status for rdo-manager [1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status as admin falsely shows zero instances ### python-cinderclient (1 bug) [1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: 
python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run ### python-django-horizon (3 bugs) [1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack- dashboard/static/dashboard/ [1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in openstack-dashboard.conf to show CSS content [1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA) Component: python-django-horizon Last change: 2015-06-24 Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one ### python-glanceclient (2 bugs) [1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock [1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch ### python-heatclient (3 bugs) [1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr [1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO [1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed ### python-keystoneclient (3 bugs) [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs [1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion [971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO] ### python-neutronclient (3 bugs) [1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info() [1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient [1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request ### python-openstackclient (1 bug) [1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2015-03-02 Summary: Rebase python-openstackclient to version 1.0.0 ### python-oslo-config (1 bug) [1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2015-06-04 Summary: oslo.config >=1.2.1 is required for trove-manage ### python-pecan (1 bug) [1265365 ] http://bugzilla.redhat.com/1265365 (MODIFIED) Component: python-pecan Last change: 2015-10-05 Summary: Neutron missing pecan dependency ### python-swiftclient (1 bug) [1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation ### python-tuskarclient (2 
bugs) [1209395 ] http://bugzilla.redhat.com/1209395 (POST) Component: python-tuskarclient Last change: 2015-06-04 Summary: `tuskar help` is missing a description next to plan- templates [1209431 ] http://bugzilla.redhat.com/1209431 (POST) Component: python-tuskarclient Last change: 2015-06-18 Summary: creating a tuskar plan with the exact name gives the user a traceback ### rdo-manager (8 bugs) [1212351 ] http://bugzilla.redhat.com/1212351 (POST) Component: rdo-manager Last change: 2015-06-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command [1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-15 Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails [1270033 ] http://bugzilla.redhat.com/1270033 (POST) Component: rdo-manager Last change: 2015-10-14 Summary: [RDO-Manager] Node inspection fails when changing the default 'inspection_iprange' value in undecloud.conf. [1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED) Component: rdo-manager Last change: 2015-05-25 Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable [1271433 ] http://bugzilla.redhat.com/1271433 (MODIFIED) Component: rdo-manager Last change: 2015-10-20 Summary: Horizon fails to load [1251267 ] http://bugzilla.redhat.com/1251267 (POST) Component: rdo-manager Last change: 2015-08-12 Summary: Overcloud deployment fails for unspecified reason [1268990 ] http://bugzilla.redhat.com/1268990 (POST) Component: rdo-manager Last change: 2015-10-07 Summary: missing from docs Build images fails without : export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo" [1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED) Component: rdo-manager Last change: 2015-11-04 Summary: rdo-manager: fail to discover nodes with "instack- ironic-deployment --discover-nodes": ERROR: Data pre- processing failed ### rdo-manager-cli (10 bugs) [1273197 ] http://bugzilla.redhat.com/1273197 (POST) Component: rdo-manager-cli Last change: 2015-10-20 Summary: VXLAN should be default neutron network type [1233429 ] http://bugzilla.redhat.com/1233429 (POST) Component: rdo-manager-cli Last change: 2015-06-20 Summary: Lack of consistency in specifying plan argument for openstack overcloud commands [1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED) Component: rdo-manager-cli Last change: 2015-08-03 Summary: Node show of unified CLI has bad formatting [1229912 ] http://bugzilla.redhat.com/1229912 (POST) Component: rdo-manager-cli Last change: 2015-06-10 Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once) . [1219053 ] http://bugzilla.redhat.com/1219053 (POST) Component: rdo-manager-cli Last change: 2015-06-18 Summary: "list" command doesn't display nodes in some cases [1211190 ] http://bugzilla.redhat.com/1211190 (POST) Component: rdo-manager-cli Last change: 2015-06-04 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI [1230265 ] http://bugzilla.redhat.com/1230265 (POST) Component: rdo-manager-cli Last change: 2015-06-26 Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated. 
[1278972 ] http://bugzilla.redhat.com/1278972 (POST) Component: rdo-manager-cli Last change: 2015-11-08 Summary: rdo-manager liberty delorean dib failing w/ "No module named passlib.utils" [1232838 ] http://bugzilla.redhat.com/1232838 (POST) Component: rdo-manager-cli Last change: 2015-09-04 Summary: OSC plugin isn't saving plan configuration values [1212367 ] http://bugzilla.redhat.com/1212367 (POST) Component: rdo-manager-cli Last change: 2015-06-16 Summary: Ensure proper nodes states after enroll and before deployment ### rdopkg (1 bug) [1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA) Component: rdopkg Last change: 2015-08-06 Summary: python-manilaclient is missing from kilo RDO repository Thanks, Chandan Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgordon at redhat.com Wed Nov 25 13:51:31 2015 From: sgordon at redhat.com (Steve Gordon) Date: Wed, 25 Nov 2015 08:51:31 -0500 (EST) Subject: [Rdo-list] How to enable all extension in Kilo(rdo deploy) version? In-Reply-To: <201511251725221535546@heetian.com> References: <201511251725221535546@heetian.com> Message-ID: <606642598.24538594.1448459491682.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "quan hping" > To: "sgordon" > > Hi! > Recently I have read the page > :http://developer.openstack.org/api-ref-compute-v2.1.html > "API v2.1 must enable all extensions all the time. It uses > micro-version headers to expose any additional functionality." > How to enable all extension in Kilo(rdo deploy) version or how to know > all extension are enabled ? > Thanks! Please direct RDO inquiries to rdo-list at redhat.com. All extensions listed/exposed by the API endpoint are enabled, there is no concept of individually enabling/disabling extensions that I am aware of - if they are there then they are enabled. -Steve > Extension list on my Openstack environment. > [root at controller ~]# openstack extension list|cut -c 1-160|more > +-----------------------------------------------+---------------------------------+----------------------------------------------------------------------------- > | Name | Alias > | | Description > +-----------------------------------------------+---------------------------------+----------------------------------------------------------------------------- > | OpenStack S3 API | s3tokens > | | OpenStack S3 API. > | OpenStack Keystone Endpoint Filter API | OS-EP-FILTER > | | OpenStack Keystone Endpoint Filter API. > | OpenStack Revoke API | OS-REVOKE > | | OpenStack revoked token reporting mechanism. > | OpenStack Federation APIs | OS-FEDERATION > | | OpenStack Identity Providers Mechanism. > | OpenStack Keystone Admin | OS-KSADM > | | OpenStack extensions to Keystone v2.0 API > | enabling Administrative Operations > | OpenStack Simple Certificate API | OS-SIMPLE-CERT > | | OpenStack simple certificate retrieval extension > | OpenStack OAUTH1 API | OS-OAUTH1 > | | OpenStack OAuth 1.0a Delegated Auth Mechanism. > | OpenStack EC2 API | OS-EC2 > | | OpenStack EC2 Credentials backend. > | Multinic | NMN > | | Multiple network support. > | DiskConfig | OS-DCF > | | Disk Management Extension. > | ExtendedAvailabilityZone | OS-EXT-AZ > | | Extended Availability Zone support. > | ImageSize | OS-EXT-IMG-SIZE > | | Adds image size to image listings. > | ExtendedIps | OS-EXT-IPS > | | Adds type parameter to the ip list. > | ExtendedIpsMac | OS-EXT-IPS-MAC > | | Adds mac address parameter to the ip list. 
> | ExtendedServerAttributes | OS-EXT-SRV-ATTR > | | Extended Server Attributes support. > | ExtendedStatus | OS-EXT-STS > | | Extended Status support. > | ExtendedVIFNet | OS-EXT-VIF-NET > | | Adds network id parameter to the virtual interface > | list. > | FlavorDisabled | OS-FLV-DISABLED > | | Support to show the disabled status of a flavor. > | FlavorExtraData | OS-FLV-EXT-DATA > | | Provide additional data for flavors. > | SchedulerHints | OS-SCH-HNT > | | Pass arbitrary key/value pairs to the scheduler. > | ServerUsage | OS-SRV-USG > | | Adds launched_at and terminated_at on Servers. > | AdminActions | os-admin-actions > | | Enable admin-only server actions > ... > > > quan_hping at heetian.com > -- Steve Gordon, Sr. Technical Product Manager, Red Hat Enterprise Linux OpenStack Platform From rbowen at redhat.com Wed Nov 25 14:13:04 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 25 Nov 2015 09:13:04 -0500 Subject: [Rdo-list] RDO Day @ FOSDEM Message-ID: <5655C1F0.4010609@redhat.com> I'd like to remind you that we have planned a full day of RDO at FOSDEM, in conjunction with the CentOS Dojo. This means that we have just slightly over 2 months to plan, and many of us will be taking time off during much of that. So we need to get on it. So, a reminder - the information about the event is here: https://www.rdoproject.org/blog/2015/11/rdo-community-day-fosdem/ In particular, if you're planning to be there, we need to have talk/discussion suggestions, submitted in the Google Form at http://goo.gl/forms/oDjI2BpCtm So far we have only two submissions. Remember that when you submit a discussion idea, you're not necessarily saying that you'll present for an hour, but, rather, that you're willing to be on hand to facilitate the discussion. Next week, I'll probably start bugging individual ones of you to take sessions, if we don't have more by then. Thanks! --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From me at coolsvap.net Thu Nov 26 07:03:55 2015 From: me at coolsvap.net (Swapnil Kulkarni) Date: Thu, 26 Nov 2015 12:33:55 +0530 Subject: [Rdo-list] Ceilometer-alarm is failing with ceilometer-common package version mismatch Message-ID: While building kolla images the ceilometer-alarm package is failing with error trace [1] [1] http://paste.openstack.org/show/480079/ Best Regards, Swapnil Kulkarni irc : coolsvap -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Thu Nov 26 16:15:34 2015 From: apevec at gmail.com (Alan Pevec) Date: Thu, 26 Nov 2015 17:15:34 +0100 Subject: [Rdo-list] Ceilometer-alarm is failing with ceilometer-common package version mismatch In-Reply-To: References: Message-ID: 2015-11-26 8:03 GMT+01:00 Swapnil Kulkarni : > While building kolla images the ceilometer-alarm package is failing with > error trace [1] -alarm subpackage is gone in Mitaka https://review.gerrithub.io/253189 you need to switch to Aodh Cheers, Alan From me at coolsvap.net Thu Nov 26 16:41:26 2015 From: me at coolsvap.net (Swapnil Kulkarni) Date: Thu, 26 Nov 2015 22:11:26 +0530 Subject: [Rdo-list] Ceilometer-alarm is failing with ceilometer-common package version mismatch In-Reply-To: References: Message-ID: Hi Alan, I got the update :) on #rdo today Thanks for the info again. 
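For anyone else tripping over this while building Mitaka images: at the package level the swap is roughly the sketch below. The openstack-aodh-* names are my assumption based on the usual RDO naming scheme, so please verify them against the delorean repo you actually build from.

# drop the retired ceilometer alarm service and pull in Aodh instead
# (package names assumed from RDO naming conventions -- check `yum search aodh` first)
yum remove -y openstack-ceilometer-alarm
yum install -y openstack-aodh-api openstack-aodh-evaluator \
    openstack-aodh-notifier openstack-aodh-listener

The alarm handling itself moves over to the aodh services, so any kolla image or config that still references ceilometer-alarm-evaluator/notifier will need the matching rename as well.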
Best Regards, Swapnil Kulkarni irc : coolsvap On Thu, Nov 26, 2015 at 9:45 PM, Alan Pevec wrote: > 2015-11-26 8:03 GMT+01:00 Swapnil Kulkarni : > > While building kolla images the ceilometer-alarm package is failing with > > error trace [1] > > -alarm subpackage is gone in Mitaka https://review.gerrithub.io/253189 > you need to switch to Aodh > > Cheers, > Alan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsydor at gmail.com Sat Nov 28 16:18:47 2015 From: bsydor at gmail.com (Bohdan Sydor) Date: Sat, 28 Nov 2015 16:18:47 +0000 Subject: [Rdo-list] Kilo packstack fails when CONFIG_MARIADB_INSTALL=n Message-ID: Hello, I'm trying to install Kilo with Packstack. I have a running Galera cluster, so I don't want Packstack to install it. I set CONFIG_MARIADB_INSTALL=n. The Mariadb service is reachable on the VIP with the root credentials as specified in CONFIG_MARIADB_PW. Unfortunately, when I run Packstack it fails on _mariadb.pp manifest: Error: Puppet::Parser::AST::Resource failed with error ArgumentError: Could not find declared class ::remote::db at /var/tmp/packstack/6fbf13295dee4623b9f2e3386bfd2b58/manifests/192.168.159.112_mariadb.pp:11 on node ctl0.mysite. I can find the class remote::db at /usr/share/openstack-puppet/modules/remote/manifests/mysql.pp though. I'm not sure what I'm missing. Any hints highly appreciated. -- Thanks, Bohdan -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsydor at gmail.com Sun Nov 29 11:11:18 2015 From: bsydor at gmail.com (Bohdan Sydor) Date: Sun, 29 Nov 2015 11:11:18 +0000 Subject: [Rdo-list] Kilo packstack fails when CONFIG_MARIADB_INSTALL=n In-Reply-To: References: Message-ID: On Sun, Nov 29, 2015 at 4:11 AM Mohammed Arafa wrote: > would your objective be met if you used rdo-manager instead? > > I don't know. I haven't considered using RDO Manager in this case. As a workaround I modified the file ./packstack/puppet/templates/mariadb_noinstall.pp: Removed the line: class { '::remote::db': } and replaced with the content of the original remote::db class. It seems like there's a problem with the path to puppet modules. -- Regards, Bohdan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Sun Nov 29 03:11:20 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Sat, 28 Nov 2015 22:11:20 -0500 Subject: [Rdo-list] Kilo packstack fails when CONFIG_MARIADB_INSTALL=n In-Reply-To: References: Message-ID: would your objective be met if you used rdo-manager instead? On Sat, Nov 28, 2015 at 11:18 AM, Bohdan Sydor wrote: > Hello, > > I'm trying to install Kilo with Packstack. I have a running Galera > cluster, so I don't want Packstack to install it. I > set CONFIG_MARIADB_INSTALL=n. > The Mariadb service is reachable on the VIP with the root credentials as > specified in CONFIG_MARIADB_PW. > > Unfortunately, when I run Packstack it fails on _mariadb.pp manifest: > > Error: Puppet::Parser::AST::Resource failed with error ArgumentError: > Could not find declared class ::remote::db at > /var/tmp/packstack/6fbf13295dee4623b9f2e3386bfd2b58/manifests/192.168.159.112_mariadb.pp:11 > on node ctl0.mysite. > > I can find the class remote::db at > /usr/share/openstack-puppet/modules/remote/manifests/mysql.pp though. > > I'm not sure what I'm missing. > > Any hints highly appreciated. 
> > > -- > > Thanks, > > Bohdan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Sun Nov 29 18:02:11 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Sun, 29 Nov 2015 13:02:11 -0500 Subject: [Rdo-list] [rdo-manager] doc typo? environment setup - baremetal Message-ID: i was reading this page https://repos.fedorapeople.org/repos/openstack-m/docs/internal/master/environments/baremetal.html#minimum-system-requirements and i discovered it stated it needed a total of 3 physical servers, then looking at the diagram, it shows 2 compute nodes to bring up to a total of 3 physical servers. i have managed to setup up a minimum environment with 1 controller and 1 compute so i guess this is a fyi or bug report thanks --- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Mon Nov 30 04:30:21 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Sun, 29 Nov 2015 23:30:21 -0500 Subject: [Rdo-list] [rdo-manager] tuskar and tempest status Message-ID: hello i recall being told recently that either tuskar and/or tempest was not ready for rdo-manager and liberty. i was wondering what the status was of these 2 components also, what is the status of the horizon replacement. will that make it to mitaka? -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Mon Nov 30 11:18:15 2015 From: apevec at gmail.com (Alan Pevec) Date: Mon, 30 Nov 2015 12:18:15 +0100 Subject: [Rdo-list] [openstack-dev] [Magnum] Liberty RPMs for RDO In-Reply-To: <1448877286.2624.24.camel@cern.ch> References: <1448877286.2624.24.camel@cern.ch> Message-ID: Hi Mathieu, 2015-11-30 10:54 GMT+01:00 Mathieu Velten : > Hi, > > Let me first introduce myself : I am currently working at CERN to help > evaluate and deploy Magnum. > > In this regard Ricardo recently sends an email regarding Puppet > modules, this one is about RPMs of Magnum for CentOS with RDO. Nice, looking forward to review it! > You can find here a repository containing the source and binary RPMs > for magnum and python-magnumclient. > http://linuxsoft.cern.ch/internal/repos/magnum7-testing/ This one is 403 ? > The version 1.0.0.0b2.dev4 is the Magnum Liberty release and the > 1.1.0.0-5 version is the Mitaka M1 release using Liberty dependencies > (one client commit regarding keystone auth and one server commit > regarding oslo.config have been reverted). > > Let me know how I can contribute the spec files to somewhere more > suitable. 
Let's discuss this on rdo-list (CCed) Cheers, Alan From hguemar at fedoraproject.org Mon Nov 30 15:00:03 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 30 Nov 2015 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20151130150003.3629760A3FC6@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2015-12-02 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Packaging ](https://etherpad.openstack.org/p/RDO-Packaging) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From dradez at redhat.com Mon Nov 30 16:15:23 2015 From: dradez at redhat.com (Dan Radez) Date: Mon, 30 Nov 2015 11:15:23 -0500 Subject: [Rdo-list] [apex] OPNFV Project Apex Update In-Reply-To: <564F8D58.1080704@redhat.com> References: <564F8D58.1080704@redhat.com> Message-ID: <565C761B.6060902@redhat.com> Just an update on the documentation and the builds. We consumed a patch that distributes docs a little cleaner. You can now find docs on artifacts.opnfv.org I've updated the links on the wiki page to point to artifacts.opnfv: http://wiki.opnfv.org/apex We also got an updated build of OpenDaylight that appears to be bad. Watch the daily build for an updated build that's working. let me know if you have any questions Dan On 11/20/2015 04:15 PM, Dan Radez wrote: > OPNFV Apex is a project that is working towards using RDO Project's RDO > Manager installation tool to deploy OpenStack and OpenDaylight according > to the Requirements and Standards that are defined by the OPNFV Project. > > In the past couple weeks we have merged a large number of patches > working towards our second release, Bramaputra, due in February. > > More work and patching will be come before the Bramaputra release, > though we have come to a point that many people have been waiting for: > an installation document! Currently there is one out standing patch that > is comprised of mostly the installation doc itself: > https://gerrit.opnfv.org/gerrit/#/c/3455/ > > The install doc will be able to be viewed via OPNFV's gitweb once that > patch merges. > https://gerrit.opnfv.org/gerrit/gitweb?p=apex.git;a=blob;f=docs/src/installation-instructions.rst > Alternatively you can view in on my github: > https://github.com/radez/apex/blob/master/docs/src/installation-instructions.rst > > Our daily builds are uploaded to artifacts.opnfv.org. Last night's > build, that the installation documentation can be used with, is > available at: > http://artifacts.opnfv.org/apex//opnfv-2015-11-20_03-00-46.iso > or > http://artifacts.opnfv.org/apex//opnfv-apex-2.2-20151120030046.noarch.rpm > > The iso includes CentOS 7 and the rpm. Alternatively, The rpm can be > installed onto a Virtualization Host install of CentOS 7. Just a note, > the baremetal deployment has not been fully tested or documented yet. I > would recommend starting with the Virtualized deployment. > > We will be adding information and links to the Apex page on the OPNFV > Wiki page: https://wiki.opnfv.org/apex as we collect and publish more. > The installation instructions link on the wiki page is the same included > above and is currently broken. It will link properly once the above > patch merges which should happen in the next day or two. 
>
> If you have questions or need help with an opnfv-apex deployment please
> ask on the opnfv-users at list.opnfv.org list. We'll be watching this list
> and happy to help you get started if needed.
>
> Dan Radez
> Sr Software Engineer
> Red Hat Inc.
> freenode: radez
>

From mkkang at isi.edu  Mon Nov 30 19:37:27 2015
From: mkkang at isi.edu (Mikyung Kang)
Date: Mon, 30 Nov 2015 11:37:27 -0800 (PST)
Subject: [Rdo-list] [RDO-Manager] undercloud install - hiera
In-Reply-To: <6798170.1870.1448911900893.JavaMail.mkang@guest246.east.isi.edu>
Message-ID: <25206544.1873.1448912247326.JavaMail.mkang@guest246.east.isi.edu>

Hello,

One week ago I could install the undercloud and deploy the overcloud successfully. I'm now trying to install the undercloud again from a clean CentOS 7.1 OS, using a different network interface and a different IP range, but I get the error below. I didn't get this before.

...
++ iptables -t nat -N BOOTSTACK_MASQ_NEW
++ NETWORK=192.3.2.0/24
++ iptables -t nat -A BOOTSTACK_MASQ_NEW -s 192.3.2.0/24 -d 192.168.122.1 -j RETURN
++ iptables -t nat -A BOOTSTACK_MASQ_NEW -s 192.3.2.0/24 '!' -d 192.3.2.0/24 -j MASQUERADE
++ iptables -t nat -A POSTROUTING -s 192.3.2.0/24 -o eth0 -j MASQUERADE
++ iptables -t nat -I POSTROUTING -j BOOTSTACK_MASQ_NEW
++ iptables -t nat -F BOOTSTACK_MASQ
iptables: No chain/target/match by that name.
++ true
++ iptables -t nat -D POSTROUTING -j BOOTSTACK_MASQ
iptables v1.4.21: Couldn't load target `BOOTSTACK_MASQ':No such file or directory

Try `iptables -h' or 'iptables --help' for more information.
++ true
++ iptables -t nat -X BOOTSTACK_MASQ
iptables: No chain/target/match by that name.
++ true
++ iptables -t nat -E BOOTSTACK_MASQ_NEW BOOTSTACK_MASQ
++ iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited
+ iptables-save
dib-run-parts Mon Nov 30 14:32:04 EST 2015 80-seedstack-masquerade completed
dib-run-parts Mon Nov 30 14:32:04 EST 2015 Running /usr/libexec/os-refresh-config/post-configure.d/98-undercloud-setup
+ OK_FILE=/opt/stack/.undercloud-setup
+ '[' -f /opt/stack/.undercloud-setup ']'
+ source /root/tripleo-undercloud-passwords
+++ sudo hiera admin_password
Failed to start Hiera: RuntimeError: Config file /etc/puppetlabs/code/hiera.yaml not found
++ UNDERCLOUD_ADMIN_PASSWORD=
[2015-11-30 14:32:04,503] (os-refresh-config) [ERROR] during post-configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/post-configure.d']' returned non-zero exit status 1]

[2015-11-30 14:32:04,503] (os-refresh-config) [ERROR] Aborting...
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 562, in install
    _run_orc(instack_env)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 494, in _run_orc
    _run_live_command(args, instack_env, 'os-refresh-config')
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 325, in _run_live_command
    raise RuntimeError('%s failed. See log for details.' % name)
RuntimeError: os-refresh-config failed. See log for details.
Command 'instack-install-undercloud' returned non-zero exit status 1
...
I just followed the RDO steps as follows:

sudo useradd stack
sudo passwd stack
echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
sudo chmod 0440 /etc/sudoers.d/stack
su - stack
sudo hostnamectl set-hostname gpu6.east.isi.edu
sudo hostnamectl set-hostname --transient gpu6.east.isi.edu
sudo vim /etc/hosts
sudo yum -y upgrade
sudo yum -y install epel-release
sudo yum install -y http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm
sudo yum install -y python-tripleoclient
cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
openstack undercloud install

Could you please help me to resolve this?

Thanks,
Mikyung

From mkkang at isi.edu  Mon Nov 30 19:37:22 2015
From: mkkang at isi.edu (Mikyung Kang)
Date: Mon, 30 Nov 2015 11:37:22 -0800 (PST)
Subject: [Rdo-list] [RDO-Manager] deploy
In-Reply-To: <15751834.2267.1447795954089.JavaMail.mkang@guest246.east.isi.edu>
Message-ID: <19203334.1872.1448912240432.JavaMail.mkang@guest246.east.isi.edu>

The RDO deployment now works fine without any problem. It turned out our OS image was broken and could not access iPXE. All the RDO steps work fine without any modification.

[stack at gpu6 ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks           |
+--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
| c71fdb92-1af1-4630-85ae-aabcc63aa812 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.8 |
| 7999b4a2-80f5-449a-915c-9db29ad0297b | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.9 |
+--------------------------------------+-------------------------+--------+------------+-------------+--------------------+

Thanks,
Mikyung

----- Original Message -----
From: "Mikyung Kang" 
To: "Dan Sneddon" 
Cc: rdo-list at redhat.com
Sent: Tuesday, November 17, 2015 4:32:38 PM
Subject: Re: [Rdo-list] [RDO-Manager] deploy

Hi Dan,

Thanks for the description. As you described, iptables is running and the MAC addresses for the overcloud nodes are added as DROP rules properly. Only one interface is attached to the provisioning network. But the overcloud nodes still load the agent.kernel/ramdisk images, not deploy_kernel/ramdisk. Should I disable the DHCP server on the other machine and set up a new DHCP server on the undercloud node instead?

This is my log:

[stack at gpu6 ~]$ openstack baremetal introspection bulk start
Setting available nodes to manageable...
Starting introspection of node: ffe9edca-fa5e-45bf-97df-f49a2cce0c92
Starting introspection of node: 1dc404db-0352-4355-ba64-67fae456f12a
Waiting for introspection to finish...
Introspection for UUID ffe9edca-fa5e-45bf-97df-f49a2cce0c92 finished successfully.
Introspection for UUID 1dc404db-0352-4355-ba64-67fae456f12a finished successfully.
Setting manageable nodes to available...
Node ffe9edca-fa5e-45bf-97df-f49a2cce0c92 has been set to available.
Node 1dc404db-0352-4355-ba64-67fae456f12a has been set to available.
Introspection completed.
[stack at gpu6 ~]$ ironic node-list
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| ffe9edca-fa5e-45bf-97df-f49a2cce0c92 | None | None          | power off   | available          | False       |
| 1dc404db-0352-4355-ba64-67fae456f12a | None | None          | power off   | available          | False       |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+

[stack at gpu6 ~]$ openstack baremetal introspection bulk status
+--------------------------------------+----------+-------+
| Node UUID                            | Finished | Error |
+--------------------------------------+----------+-------+
| ffe9edca-fa5e-45bf-97df-f49a2cce0c92 | True     | None  |
| 1dc404db-0352-4355-ba64-67fae456f12a | True     | None  |
+--------------------------------------+----------+-------+

Chain ironic-inspector (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             MAC 00:9C:02:A7:EA:36
DROP       all  --  anywhere             anywhere             MAC 00:9C:02:A5:4A:DA
ACCEPT     all  --  anywhere             anywhere

[stack at gpu6 ~]$ openstack overcloud deploy --templates
Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates

(hangs here....?)

[root at gpu6 tftpboot]# nova list
+--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks           |
+--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
| 9a0c3ba9-5502-4bd7-a7f1-c2109200c19e | overcloud-controller-0  | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.9 |
| 0e740a33-fbca-4690-a938-980fbe623223 | overcloud-novacompute-0 | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.8 |
+--------------------------------------+-------------------------+--------+------------+-------------+--------------------+

[root at gpu6 tftpboot]# ls -al /httpboot/ffe9edca-fa5e-45bf-97df-f49a2cce0c92/
total 92524
drwxr-xr-x. 2 ironic ironic       87 Nov 17 16:17 .
drwxr-xr-x. 5 ironic ironic     4096 Nov 17 16:17 ..
-rw-r--r--. 1 ironic ironic      956 Nov 17 16:17 config
-rw-r--r--. 3 ironic ironic  5029328 Nov  6 14:44 deploy_kernel
-rw-r--r--. 3 ironic ironic 50630736 Nov  6 14:44 deploy_ramdisk
-rw-r--r--. 3 ironic ironic  5029328 Nov  6 14:44 kernel
-rw-r--r--. 3 ironic ironic 34038813 Nov  6 14:44 ramdisk

[root at gpu6 tftpboot]# ls -al /httpboot/1dc404db-0352-4355-ba64-67fae456f12a/
total 92524
drwxr-xr-x. 2 ironic ironic       87 Nov 17 16:17 .
drwxr-xr-x. 5 ironic ironic     4096 Nov 17 16:17 ..
-rw-r--r--. 1 ironic ironic      956 Nov 17 16:17 config
-rw-r--r--. 3 ironic ironic  5029328 Nov  6 14:44 deploy_kernel
-rw-r--r--. 3 ironic ironic 50630736 Nov  6 14:44 deploy_ramdisk
-rw-r--r--. 3 ironic ironic  5029328 Nov  6 14:44 kernel
-rw-r--r--. 3 ironic ironic 34038813 Nov  6 14:44 ramdisk

[root at gpu6 tftpboot]# cat /httpboot/ffe9edca-fa5e-45bf-97df-f49a2cce0c92/config
#!ipxe

dhcp

goto deploy

:deploy
kernel http://192.0.2.1:8088/ffe9edca-fa5e-45bf-97df-f49a2cce0c92/deploy_kernel selinux=0 disk=cciss/c0d0,sda,hda,vda iscsi_target_iqn=iqn.2008-10.org.openstack:ffe9edca-fa5e-45bf-97df-f49a2cce0c92 deployment_id=ffe9edca-fa5e-45bf-97df-f49a2cce0c92 deployment_key=LIUI374KDMT55F8ATYY56BIDFWY0RRA1 ironic_api_url=http://192.0.2.1:6385 troubleshoot=0 text nofb nomodeset vga=normal boot_option=local ip=${ip}:${next-server}:${gateway}:${netmask} BOOTIF=${mac} ipa-api-url=http://192.0.2.1:6385 ipa-driver-name=pxe_ipmitool coreos.configdrive=0
initrd http://192.0.2.1:8088/ffe9edca-fa5e-45bf-97df-f49a2cce0c92/deploy_ramdisk
boot

:boot_partition
kernel http://192.0.2.1:8088/ffe9edca-fa5e-45bf-97df-f49a2cce0c92/kernel root={{ ROOT }} ro text nofb nomodeset vga=normal
initrd http://192.0.2.1:8088/ffe9edca-fa5e-45bf-97df-f49a2cce0c92/ramdisk
boot

:boot_whole_disk
kernel chain.c32
append mbr:{{ DISK_IDENTIFIER }}
boot

[root at gpu6 tftpboot]# ironic node-list
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
| ffe9edca-fa5e-45bf-97df-f49a2cce0c92 | None | 0e740a33-fbca-4690-a938-980fbe623223 | power on    | wait call-back     | False       |
| 1dc404db-0352-4355-ba64-67fae456f12a | None | 9a0c3ba9-5502-4bd7-a7f1-c2109200c19e | power on    | wait call-back     | False       |
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+

Thanks,
Mikyung

----- Original Message -----
From: "Dan Sneddon" 
To: "Mikyung Kang" , rdo-list at redhat.com
Sent: Tuesday, November 17, 2015 3:58:37 PM
Subject: Re: [Rdo-list] [RDO-Manager] deploy

On 11/17/2015 12:32 PM, Mikyung Kang wrote:
> Hello,
>
> I'm trying the RDO-Manager Liberty version on CentOS 7.1.
> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html
>
> After adding /tftpboot/pxelinux.cfg/default [using IPA] as follows, everything is OK up to the introspection step (error=None, finished=True).
>
> [root at test tftpboot]# cat pxelinux.cfg/default   (10.0.1.6 = undercloud IP)
> default introspect
> label introspect
> kernel agent.kernel
> append initrd=agent.ramdisk ipa-inspection-callback-url=http://10.0.1.6:5050/v1/continue systemd.journald.forward_to_console=yes
> ipappend 3
>
> But when deploying 1 controller and 1 compute, those systems couldn't boot from the right deploy images.
>
> I can see two instances are spawned (1 controller-node instance and 1 compute-node instance) based on the default heat template. Then the provisioning state changes from available to deploying. During this deploying step, I can see the deploy images/config are put into each instance's UUID directory under /httpboot/. And then the provisioning state changes from [deploying] to [wait call-back]. Even though ipmitool powers on the systems, they can't find the deploy images.
>
> Actually, I have another DHCP server on a different machine. It includes the RDO testbeds' MAC and IP addresses, so I set the RDO testbeds' next-server to the RDO undercloud IP in dhcpd.conf.
> Then, the overcloud nodes could boot agent.kernel/ramdisk from the undercloud's /tftpboot properly. But I don't know how the overcloud nodes can get the deploy/overcloud images.
>
> If the above pxelinux.cfg/default is left as-is on the undercloud, the agent kernel/ramdisk is loaded again instead of the deploy image, the deploying step can't proceed any further, and it ends in a timeout error. If that default file is removed, the system is unable to locate a tftp configuration. How can I make the controller/compute boot from the right deploy images? Should I set up something for httpboot/iPXE?
>
> Thanks,
> Mikyung
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

What is supposed to happen when introspection completes is that the undercloud will add the MAC address of the newly-discovered system to iptables in order to block DHCP requests from reaching ironic-discovery's dnsmasq. If that doesn't happen, then you get a loop where the discovery image boots instead of the deploy image.

Check your iptables and make sure that you see the MAC addresses added to the "discovery" chain, like this:

Chain discovery (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             MAC 00:21:BA:17:0D:2B
DROP       all  --  anywhere             anywhere             MAC 00:3C:A6:BB:68:FC
DROP       all  --  anywhere             anywhere             MAC 00:92:5D:AE:62:37

Also, make sure that iptables is running, and that you don't have more than one interface attached to the provisioning network on the overcloud nodes. If you do, there is a workaround, but it's cleanest to just make sure you have only one interface attached to the provisioning network.

-- 
Dan Sneddon         |  Principal OpenStack Engineer
dsneddon at redhat.com |  redhat.com/openstack
650.254.4025        |  dsneddon:irc   @dxs:twitter

From rbowen at redhat.com  Mon Nov 30 20:25:36 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Mon, 30 Nov 2015 15:25:36 -0500
Subject: [Rdo-list] RDO Community Day @ FOSDEM - CFP deadline approaching
Message-ID: <565CB0C0.7030804@redhat.com>

With many people taking a lot of time off towards the end of the year, we would like to finalize the schedule for the RDO Community Day @ FOSDEM as soon as possible. If you would like to propose a session for the event, please do so by the end of this week. You can send in your proposal on the Google Form at http://goo.gl/forms/oDjI2BpCtm

The event is now also listed on https://fosdem.org/2016/fringe/

-- 
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From rbowen at redhat.com  Mon Nov 30 21:43:43 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Mon, 30 Nov 2015 16:43:43 -0500
Subject: [Rdo-list] OpenStack Meetups, week of November 30th
Message-ID: <565CC30F.5080709@redhat.com>

The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/Events

If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered.
--Rich

* Monday November 30 in Mountain View, CA, US: OpenStack & Beyond Podcast - Predictions for 2016 - http://www.meetup.com/Cloud-Online-Meetup/events/227108687/

* Monday November 30 in Tel Aviv-Yafo, IL: [ONLINE] OpenStack & Beyond Ep 6 - OpenStack Predictions for 2016 - http://www.meetup.com/OpenStack-Israel/events/227108478/

* Tuesday December 01 in Amsterdam, NL: Fuga Public OpenStack Cloud Training - http://www.meetup.com/Web-en-Cloud-Hosting-Meetup-for-everyone/events/226808157/

* Tuesday December 01 in Bangalore, IN: Open Stack & OVSDB - Demystified By Anil Vishnoi - http://www.meetup.com/Bangalore-SDN-and-NFV-meetup/events/227027691/

* Tuesday December 01 in Melbourne, AU: Containers on OpenStack and using Ansible to manage OpenStack - Melbourne - http://www.meetup.com/HP-Helion-Australia/events/225659514/

* Wednesday December 02 in Budapest, HU: OpenStack 2015 december - http://www.meetup.com/OpenStack-Hungary-Meetup-Group/events/226673589/

* Wednesday December 02 in Houten, NL: OpenStack Roadshow - http://www.meetup.com/CRI-Service-Kennissessies/events/226831442/

* Wednesday December 02 in München, DE: OpenStack Munich - User stories Meetup - http://www.meetup.com/OpenStack-Munich/events/226702968/

* Wednesday December 02 in Amsterdam, NL: Openstack & Ceph is back! Hosted by DMC - http://www.meetup.com/Openstack-Amsterdam/events/226707668/

* Thursday December 03 in Hadley, MA, US: Discuss Meetup and Intro to OpenStack - http://www.meetup.com/OpenStack-Western-MA/events/226547151/

* Thursday December 03 in Melbourne, AU: Containers on OpenStack and using Ansible to manage OpenStack - Sydney - http://www.meetup.com/HP-Helion-Australia/events/226954627/

* Thursday December 03 in Raleigh, NC, US: Meetup + Eat Pizza + Talk "OpenStack" for Agile Clouds - http://www.meetup.com/Raleigh-Triange-Building-Agile-Clouds-with-OpenStack/events/226279871/

* Friday December 04 in Tokyo, JP: Japan OpenStack User Group, 24th study session - http://www.meetup.com/Japan-OpenStack-User-Group/events/227046129/

-- 
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From trown at redhat.com  Mon Nov 30 22:11:43 2015
From: trown at redhat.com (John Trowbridge)
Date: Mon, 30 Nov 2015 17:11:43 -0500
Subject: [Rdo-list] [RDO-Manager] undercloud install - hiera
In-Reply-To: <25206544.1873.1448912247326.JavaMail.mkang@guest246.east.isi.edu>
References: <25206544.1873.1448912247326.JavaMail.mkang@guest246.east.isi.edu>
Message-ID: <565CC99F.7090901@redhat.com>

On 11/30/2015 02:37 PM, Mikyung Kang wrote:
> Hello,
>
> One week ago, I could install undercloud and deploy overcloud successfully.
> I'm trying to install undercloud from clean CentOS7.1 OS again using the other network interface and the other IP range, but I got this error. I didn't get this before.
>
> ...
> ++ iptables -t nat -N BOOTSTACK_MASQ_NEW
> ++ NETWORK=192.3.2.0/24
> ++ iptables -t nat -A BOOTSTACK_MASQ_NEW -s 192.3.2.0/24 -d 192.168.122.1 -j RETURN
> ++ iptables -t nat -A BOOTSTACK_MASQ_NEW -s 192.3.2.0/24 '!' -d 192.3.2.0/24 -j MASQUERADE
> ++ iptables -t nat -A POSTROUTING -s 192.3.2.0/24 -o eth0 -j MASQUERADE
> ++ iptables -t nat -I POSTROUTING -j BOOTSTACK_MASQ_NEW
> ++ iptables -t nat -F BOOTSTACK_MASQ
> iptables: No chain/target/match by that name.
> ++ true
> ++ iptables -t nat -D POSTROUTING -j BOOTSTACK_MASQ
> iptables v1.4.21: Couldn't load target `BOOTSTACK_MASQ':No such file or directory
>
> Try `iptables -h' or 'iptables --help' for more information.
> ++ true
> ++ iptables -t nat -X BOOTSTACK_MASQ
> iptables: No chain/target/match by that name.
> ++ true
> ++ iptables -t nat -E BOOTSTACK_MASQ_NEW BOOTSTACK_MASQ
> ++ iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited
> + iptables-save
> dib-run-parts Mon Nov 30 14:32:04 EST 2015 80-seedstack-masquerade completed
> dib-run-parts Mon Nov 30 14:32:04 EST 2015 Running /usr/libexec/os-refresh-config/post-configure.d/98-undercloud-setup
> + OK_FILE=/opt/stack/.undercloud-setup
> + '[' -f /opt/stack/.undercloud-setup ']'
> + source /root/tripleo-undercloud-passwords
> +++ sudo hiera admin_password
> Failed to start Hiera: RuntimeError: Config file /etc/puppetlabs/code/hiera.yaml not found

This is caused by an update to the hiera package in EPEL.

> ++ UNDERCLOUD_ADMIN_PASSWORD=
> [2015-11-30 14:32:04,503] (os-refresh-config) [ERROR] during post-configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/post-configure.d']' returned non-zero exit status 1]
>
> [2015-11-30 14:32:04,503] (os-refresh-config) [ERROR] Aborting...
> Traceback (most recent call last):
>   File "", line 1, in
>   File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 562, in install
>     _run_orc(instack_env)
>   File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 494, in _run_orc
>     _run_live_command(args, instack_env, 'os-refresh-config')
>   File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 325, in _run_live_command
>     raise RuntimeError('%s failed. See log for details.' % name)
> RuntimeError: os-refresh-config failed. See log for details.
> Command 'instack-install-undercloud' returned non-zero exit status 1
> ...
>
> I just followed the RDO steps as follows:
>
> sudo useradd stack
> sudo passwd stack
> echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
> sudo chmod 0440 /etc/sudoers.d/stack
> su - stack
> sudo hostnamectl set-hostname gpu6.east.isi.edu
> sudo hostnamectl set-hostname --transient gpu6.east.isi.edu
> sudo vim /etc/hosts
> sudo yum -y upgrade
> sudo yum -y install epel-release

I think we actually do not need EPEL. Could you try without the above step?

> sudo yum install -y http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm
> sudo yum install -y python-tripleoclient
> cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
> openstack undercloud install
>
>
> Could you please help me to resolve this?
>
> Thanks,
> Mikyung
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

From trown at redhat.com  Mon Nov 30 22:15:00 2015
From: trown at redhat.com (John Trowbridge)
Date: Mon, 30 Nov 2015 17:15:00 -0500
Subject: [Rdo-list] [rdo-manager] tuskar and tempest status
In-Reply-To: 
References: 
Message-ID: <565CCA64.6060106@redhat.com>

On 11/29/2015 11:30 PM, Mohammed Arafa wrote:
> Hello,
>
> I recall being told recently that Tuskar and/or Tempest was not
> ready for rdo-manager and Liberty. I was wondering what the status of
> these two components is.

Tempest is in the Liberty repos. We use it for validation in the CI. Tuskar is still a WIP in upstream TripleO.

> Also, what is the status of the Horizon replacement? Will that make it
> into Mitaka?

What is the Horizon replacement? Do you mean the web GUI for Tuskar? If so, I think some form of it will land in Mitaka.
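For anyone who wants to double-check what is actually published, a quick way to list the Tempest packages that the Liberty repo carries is a yum query along these lines; this is only a sketch, and it assumes the repo id "openstack-liberty" that the rdo-release-liberty RPM normally configures plus the package name openstack-tempest, so adjust both if your setup differs:

# List Tempest-related packages available from the RDO Liberty repo only.
yum --disablerepo='*' --enablerepo='openstack-liberty' list available 2>/dev/null | grep -i tempest

# Show details for the Tempest package itself (package name assumed).
yum info openstack-tempest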
>
>
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

From gcerami at redhat.com  Mon Nov 30 22:38:30 2015
From: gcerami at redhat.com (Gabriele Cerami)
Date: Mon, 30 Nov 2015 23:38:30 +0100
Subject: [Rdo-list] openstack puppet modules package from new repositories
Message-ID: <1448923110.3176.32.camel@redhat.com>

Hi,

For a couple of months now, the CI jobs in
https://ci.centos.org/view/Openstack%20Puppet%20Modules/
have been syncing the repositories in
https://github.com/rdo-puppet-modules/
from the upstream repositories, re-running acceptance tests (and launching other tests) inside CentOS/RDO environments on each change that comes in from any of the upstream modules (git and gerrit).

Not all of the tests are ready, but we'd like to add one final test at the end of the flow that launches Packstack with a package built from each individual repository. We already have a job here
https://ci.centos.org/view/Openstack%20Puppet%20Modules/job/opm-ci-midstream-package/
(a bit neglected now) that tries to do that, but we are missing a working spec file. In the past we used a draft spec file that successfully created a valid but useless package (the file paths are all wrong), just as a proof of concept:
https://github.com/rdo-puppet-modules/gate/blob/master/specs/openstack-puppet-modules.spec

Can someone take a look at this spec, suggest the necessary modifications, and help push forward the adoption of the multiple repositories as sources for the official package?

Thanks.
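For reference, the target layout is whatever the current official package produces; something like the following sketch can be used to compare a rebuilt candidate against it. It assumes the package name openstack-puppet-modules and the usual /usr/share/openstack-puppet/modules install root, both of which should be verified on an actual RDO system:

# Inspect the currently shipped package so the new spec can mirror its paths.
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE}\n' openstack-puppet-modules
rpm -ql openstack-puppet-modules | grep '/usr/share/openstack-puppet/modules' | head -n 20

# After building a candidate RPM from the draft spec, compare its file list
# against the shipped package (build prerequisites and sources assumed in place).
rpmbuild -ba openstack-puppet-modules.spec
rpm -qlp ~/rpmbuild/RPMS/noarch/openstack-puppet-modules-*.noarch.rpm | head -n 20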