From kchamart at redhat.com Mon Sep 1 06:24:32 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 1 Sep 2014 11:54:32 +0530 Subject: [Rdo-list] M3 test day In-Reply-To: <20140828155146.GA29741@hattop.hq.kanerai.net> References: <53FE2090.2010101@redhat.com> <20140828032747.GB7583@tesla.redhat.com> <20140828155146.GA29741@hattop.hq.kanerai.net> Message-ID: <20140901062432.GC6950@tesla.redhat.com> On Thu, Aug 28, 2014 at 11:51:46AM -0400, Dusty Mabe wrote: > On Thu, Aug 28, 2014 at 08:57:47AM +0530, Kashyap Chamarthy wrote: > > On Wed, Aug 27, 2014 at 02:16:48PM -0400, Rich Bowen wrote: > > > We've been asked by the Fedora Cloud folks if they can help us do a test day > > > for the M3 packages. M3 release date is September 4 - > > > https://wiki.openstack.org/wiki/Juno_Release_Schedule - so could we > > > tentatively look at the week of the 22nd to do a test day? > > > > Sounds good to me. Just that we'd need to ensure packagers are aware of this > > date too. > > How about, tentatively, Thursday September 25th? Sure (Rich, that works for you too, right?). If anyone else has objections, I'm sure they'll raise them here. PS: /me will be mostly away/sporadically available that week (till the first week of Oct) due to some prior planned commitments. -- /kashyap From benjamin.ernst.lipp at cern.ch Tue Sep 2 16:11:32 2014 From: benjamin.ernst.lipp at cern.ch (Benjamin Lipp) Date: Tue, 2 Sep 2014 18:11:32 +0200 Subject: [Rdo-list] =?utf-8?b?4oCcR2V0IGludm9sdmVk4oCdIHBhZ2Ugb24gb3BlbnN0?= =?utf-8?q?ack=2Eredhat=2Ecom?= Message-ID: <5405EC34.6010305@cern.ch> Hi, I just had a hard time finding out whether there are any plans to integrate OpenStack Trove into Packstack, and wanted to give you some feedback on the website(s) of Packstack. In the end, I found everything (bug tracker, Gerrit, etc.) on https://wiki.openstack.org/wiki/Packstack You might want to update https://openstack.redhat.com/Get_involved with this information (I just checked, I don't have edit rights). 
On https://github.com/stackforge/packstack a link to https://wiki.openstack.org/wiki/Packstack or https://openstack.redhat.com/Get_involved might be useful, because right now someone ending up on this GitHub repository will hit a dead end. The two pages https://wiki.openstack.org/wiki/Packstack and https://openstack.redhat.com/Get_involved also mention different IRC channels. Kind regards, Benjamin From sgordon at redhat.com Mon Sep 1 15:37:34 2014 From: sgordon at redhat.com (Steve Gordon) Date: Mon, 1 Sep 2014 11:37:34 -0400 (EDT) Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image In-Reply-To: <53FDB5AE.1050907@karan.org> References: <53FDB5AE.1050907@karan.org> Message-ID: <878732901.1223632.1409585854897.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Karanbir Singh" > To: Rdo-list at redhat.com > > hi > > I've just pushed a GenericCloud image, that will become the gold > standard to build all variants and environment-specific images from. > Requesting people to help test this image: > > http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 > ( ~ 922 MB) Hi Karanbir, Noticed a query via twitter [1] - what's the reason for the size blow-out on these? The equivalent RHEL 7 qcow2 is 415 MB and the RHEL 6 one is 323 MB. Thanks, Steve [1] https://twitter.com/scott_lowe/status/506288406549131265 > http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz > ( 261 MB) > > Sha256's; > > 3c049c21c19fb194cefdddbac2e4eb6a82664c043c7f2c7261bbeb32ec64023f > CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 > 4a16ca316d075b30e8fdc36946ebfd76c44b6882747a6e0c0e2a47a8885323b1 > CentOS-7-x86_64-GenericCloud-20140826_02.qcow2.xz > > please note: these images contain unsigned content ( cloud-init and > cloud-utils-* ), and are therefore unsuitable for use beyond validation > on your environment. 
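[Editor's sketch: the published digests above can be verified with sha256sum before testing. The snippet below demonstrates the pattern against a throwaway file — for the real check, point `image` at the downloaded .qcow2 and `expected` at the matching digest from the announcement.]

```shell
# Sketch: verify a downloaded image against its published SHA-256 digest.
# A throwaway temp file stands in for the .qcow2 here; substitute the
# real image path and the matching digest from the announcement above.
image=$(mktemp)
printf 'hello\n' > "$image"
expected=5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03

actual=$(sha256sum "$image" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH: got $actual" >&2
    exit 1
fi
rm -f "$image"
```

The same pattern applies to the .xz variant; verify the compressed file first and decompress only after it matches.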
> > regards, > > -- > Karanbir Singh > +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh > GnuPG Key : http://www.karan.org/publickey.asc > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -- Steve Gordon, RHCE Sr. Technical Product Manager, Red Hat Enterprise Linux OpenStack Platform From lars at redhat.com Mon Sep 1 17:53:23 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Mon, 1 Sep 2014 13:53:23 -0400 Subject: [Rdo-list] Getting the CentOS-7 fix's rolled into rdo In-Reply-To: <5401A72E.6050604@karan.org> References: <5401A72E.6050604@karan.org> Message-ID: <20140901175322.GE7913@redhat.com> On Sat, Aug 30, 2014 at 11:27:58AM +0100, Karanbir Singh wrote: > hi guys, > > Is there a timeline on when we can expect the CentOS-7 workarounds to > get rolled into packstack/rdo itself ? Is there a list of these workarounds somewhere I can take a look at? -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From rbowen at redhat.com Tue Sep 2 13:09:34 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 02 Sep 2014 09:09:34 -0400 Subject: [Rdo-list] Getting the CentOS-7 fix's rolled into rdo In-Reply-To: <20140901175322.GE7913@redhat.com> References: <5401A72E.6050604@karan.org> <20140901175322.GE7913@redhat.com> Message-ID: <5405C18E.8080806@redhat.com> On 09/01/2014 01:53 PM, Lars Kellogg-Stedman wrote: > On Sat, Aug 30, 2014 at 11:27:58AM +0100, Karanbir Singh wrote: >> hi guys, >> >> Is there a timeline on when we can expect the CentOS-7 workarounds to >> get rolled into packstack/rdo itself ? > Is there a list of these workarounds somewhere I can take a look at? 
There are several of them listed at http://drbacchus.com/rdo-on-centos-7 Also, from my note last week: https://bugzilla.redhat.com/show_bug.cgi?id=1117035 has been (as far as I understand) fixed upstream for a couple of weeks. And the other stuff (enumerated in Gael's comments at https://bugzilla.redhat.com/show_bug.cgi?id=1117035#c4 ) is in various states of readiness. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rbowen at redhat.com Tue Sep 2 13:14:22 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 02 Sep 2014 09:14:22 -0400 Subject: [Rdo-list] =?utf-8?b?4oCcR2V0IGludm9sdmVk4oCdIHBhZ2Ugb24gb3BlbnN0?= =?utf-8?q?ack=2Eredhat=2Ecom?= In-Reply-To: <5405EC34.6010305@cern.ch> References: <5405EC34.6010305@cern.ch> Message-ID: <5405C2AE.4090209@redhat.com> On 09/02/2014 12:11 PM, Benjamin Lipp wrote: > Hi, > > I just had a hard time finding out if there are any plans on integrating > OpenStack Trove in Packstack and wanted to give you some feedback on the > website(s) of Packstack. > > Finally, I found everything like Bugtracker, Gerrit, ? on > https://wiki.openstack.org/wiki/Packstack > > Maybe you might want to update https://openstack.redhat.com/Get_involved > with this information (I just checked, I don't have edit rights). The page was set as protected, and I've fixed that. You should be able to edit now if you want. > On https://github.com/stackforge/packstack a link to > https://wiki.openstack.org/wiki/Packstack or > https://openstack.redhat.com/Get_involved might be useful, because right > now someone ending up on this Github repository will be lost on a dead end. > > You mention different IRC channels on > https://wiki.openstack.org/wiki/Packstack and > https://openstack.redhat.com/Get_involved as well. > I'll try to get these changes made today if you don't beat me to it. 
-- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rbowen at redhat.com Tue Sep 2 13:28:08 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 02 Sep 2014 09:28:08 -0400 Subject: [Rdo-list] [Rdo-newsletter] RDO Community Newsletter, September 2014 Message-ID: <5405C5E8.8000402@redhat.com> Thanks again for being part of the RDO community! Here's some of what's going on in the community. Hangouts Later this week, Lars Kellogg-Stedman will be doing a hangout on Heat, the OpenStack orchestration manager. He'll be teaching us how to deploy things using Heat. That presentation will happen this Friday, September 5th, 10am Eastern US time, and you can watch it live on YouTube at https://plus.google.com/events/c9u4sjn7ksb8jrmma7vd25aok94 and, if the time isn't convenient for you, you can watch it at that same address after the fact. Drop by #rdo-hangout on Freenode IRC for questions and discussion during the event, or come to #rdo any time. You can watch any past RDO hangouts at http://openstack.redhat.com/Hangouts Meetups Almost every day, there are several OpenStack meetups happening, somewhere in the world. I've started sending a weekly list of upcoming meetups to the RDO-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list - so that you can be aware of what nearby events you can attend. The RDO events page - http://openstack.redhat.com/Events - is being updated each week with the upcoming meetups where we think RDO enthusiasts might be present. Please update that page with events and meetups that you'll be attending. If you do go to an OpenStack meetup, please let me know. Take pictures. Write a blog post. Tell us about it so that we can spread the word. There's a ton of information online about OpenStack, but nothing quite measures up to sitting down with other people and learning what they're doing with OpenStack, and helping one another solve problems. 
Conferences Be sure you have these events on your calendar, particularly if you're in Europe. A few days ago the schedule for the OpenStack summit was published, at https://openstacksummitnovember2014paris.sched.org/ and it looks like a great schedule. I'm particularly looking forward to hearing Zane and Steven talk about Heat - http://sched.co/1qeMMfz - and hearing Julien, Eoghan and Dina talk about Ceilometer - http://sched.co/1lu0c8Y The OpenStack Summit will be in Paris, November 3 - 7. This is *the* event for OpenStack, and if you only go to one event in a year, it should be this one. Everyone in the OpenStack ecosystem will be there for three days of technical content, and three days of the OpenStack Kilo developer summit, where the next release of OpenStack, code-named Kilo, will be discussed. RDO will be at OpenStack Summit in strength, so this is the place to be to find out about RDO, as well as to learn what the next iteration of OpenStack is going to look like. You can register for the OpenStack Summit at http://openstack.org/summit/ See you in Paris! LinuxCon Europe will be held October 13 - 15, in Dusseldorf, Germany. RDO will be there, as will various other of our friends in the OpenStack ecosystem. It'll be a great opportunity to learn more about OpenStack - there's a ton of OpenStack content under the 'CloudOpen' umbrella at the event. You can see the schedule for that part of the conference http://lccoelce14.sched.org/type/cloudopen And it's a great opportunity to see RDO in action. We'll be demoing RDO on CentOS7, and we'll have people on hand to answer your questions and show you around. A few weeks ago we were in Chicago for LinuxCon North America. Thanks to all of you who dropped by to see us either at the Red Hat booth or at the OpenStack booth. It was great talking with you about how you're using OpenStack in your organizations. 
Blog posts If you're on the rdo-list mailing list, you'll have noticed that I've started posting weekly roundups of the blog posts from RDO engineers and other RDO enthusiasts. You can find the most recent editions of this on the RDO blog: Week of August 11th: http://openstack.redhat.com/forum/discussion/978/blog-roundup-august-11-17-2014 Week of August 18th: http://openstack.redhat.com/forum/discussion/979/rdo-blog-roundup-week-of-august-18 You can best keep up with these by subscribing to the rdo-list mailing list (See information below) or by following the RDO blog at http://openstack.redhat.com/blog/ In Closing ... Once again, you can always keep up to date in a variety of ways: * Follow us on Twitter - http://twitter.com/rdocommunity * Google+ - http://tm3.org/rdogplus * Facebook - http://facebook.com/rdocommunity - Yes, we're on Facebook too now! * rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list * This newsletter - http://www.redhat.com/mailman/listinfo/rdo-newsletter * RDO Q&A - http://ask.openstack.org/ * IRC - #rdo on irc.freenode.net Thanks again for being part of the RDO community! -- Rich Bowen, OpenStack Community Liaison rbowen at redhat.com http://openstack.redhat.com _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From rdo-info at redhat.com Tue Sep 2 14:24:32 2014 From: rdo-info at redhat.com (RDO Forum) Date: Tue, 2 Sep 2014 14:24:32 +0000 Subject: [Rdo-list] [RDO] RDO Blog roundup, week of August 25 Message-ID: <0000014836c0b59f-3a8907ca-74e7-48d3-9e45-81cbf1ea811c-000000@email.amazonses.com> rbowen started a discussion. RDO Blog roundup, week of August 25 --- Follow the link below to check it out: https://openstack.redhat.com/forum/discussion/981/rdo-blog-roundup-week-of-august-25 Have a great day! 
From rbowen at redhat.com Tue Sep 2 15:08:21 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 02 Sep 2014 11:08:21 -0400 Subject: [Rdo-list] Meetups in the coming week (Sep 2, 2014) Message-ID: <5405DD65.5030502@redhat.com> The following are the meetups I'm aware of in the coming week where RDO enthusiasts will be gathering. If you know of others, please do add them to http://openstack.redhat.com/Events If you attend any of these meetups, please take pictures, and send me some. If you blog about the events (and you should), please send me that, too. * Openstack Amsterdam September Meetup & Openstack 101, Wednesday September 3, Openstack & Ceph User Group, Amsterdam - http://www.meetup.com/Openstack-Amsterdam/events/202482492/ * Introduction to Ceph by Sheldon Mustard, at the Manchester Linux Openstack and Ceph Usergroup. Thursday, September 4 - http://www.meetup.com/Manchester-Linux-Openstack-and-Ceph-Usergroup/events/203930912/ * Deploying things with Heat, Friday, September 5th, Google Hangout - https://plus.google.com/events/c9u4sjn7ksb8jrmma7vd25aok94 * Explore RHEL 7 With Techgrills, Sunday, September 7th, Dehli - http://www.meetup.com/iShare-By-Techgrills/events/201146652/ * Red Hat :: Software Developer Meetup, Wednesday, September 10th, Helsinki - http://www.meetup.com/RedHatFinland/events/182796782/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From dustymabe at gmail.com Tue Sep 2 19:41:24 2014 From: dustymabe at gmail.com (Dusty Mabe) Date: Tue, 2 Sep 2014 15:41:24 -0400 Subject: [Rdo-list] M3 test day In-Reply-To: <20140901062432.GC6950@tesla.redhat.com> References: <53FE2090.2010101@redhat.com> <20140828032747.GB7583@tesla.redhat.com> <20140828155146.GA29741@hattop.hq.kanerai.net> <20140901062432.GC6950@tesla.redhat.com> Message-ID: <20140902194124.GA29669@hattop.hq.kanerai.net> On Mon, Sep 01, 2014 at 11:54:32AM +0530, Kashyap Chamarthy wrote: > On Thu, Aug 28, 2014 at 11:51:46AM -0400, Dusty Mabe 
wrote: > > On Thu, Aug 28, 2014 at 08:57:47AM +0530, Kashyap Chamarthy wrote: > > > On Wed, Aug 27, 2014 at 02:16:48PM -0400, Rich Bowen wrote: > > > > We've been asked by the Fedora Cloud folks if they can help us do a test day > > > > for the M3 packages. M3 release date is September 4 - > > > > https://wiki.openstack.org/wiki/Juno_Release_Schedule - so could we > > > > tentatively look at the week of the 22nd to do a test day? > > > > > > Sounds good me. Just that we'd to ensure packagers are aware of this > > > date too. > > > > How about, tentatively, Thursday September 25th? > > Sure (Rich, that works for you too, right?). If anyone else has > objections I'm sure they'll raise here. > > PS: /me will be mostly away/sporadically available that week (till first > week of Oct) due to some prior planned commitments. Looks like I have a conflict on the 25th. Sorry this just came up. 22-24th work for me if we elect to change it from the 25th. - Dusty From rbowen at redhat.com Tue Sep 2 20:01:12 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 02 Sep 2014 16:01:12 -0400 Subject: [Rdo-list] M3 test day In-Reply-To: <20140902194124.GA29669@hattop.hq.kanerai.net> References: <53FE2090.2010101@redhat.com> <20140828032747.GB7583@tesla.redhat.com> <20140828155146.GA29741@hattop.hq.kanerai.net> <20140901062432.GC6950@tesla.redhat.com> <20140902194124.GA29669@hattop.hq.kanerai.net> Message-ID: <54062208.8090402@redhat.com> On 09/02/2014 03:41 PM, Dusty Mabe wrote: > On Mon, Sep 01, 2014 at 11:54:32AM +0530, Kashyap Chamarthy wrote: >> On Thu, Aug 28, 2014 at 11:51:46AM -0400, Dusty Mabe wrote: >>> On Thu, Aug 28, 2014 at 08:57:47AM +0530, Kashyap Chamarthy wrote: >>>> On Wed, Aug 27, 2014 at 02:16:48PM -0400, Rich Bowen wrote: >>>>> We've been asked by the Fedora Cloud folks if they can help us do a test day >>>>> for the M3 packages. 
M3 release date is September 4 - >>>>> https://wiki.openstack.org/wiki/Juno_Release_Schedule - so could we >>>>> tentatively look at the week of the 22nd to do a test day? >>>> Sounds good me. Just that we'd to ensure packagers are aware of this >>>> date too. >>> How about, tentatively, Thursday September 25th? >> Sure (Rich, that works for you too, right?). If anyone else has >> objections I'm sure they'll raise here. >> >> PS: /me will be mostly away/sporadically available that week (till first >> week of Oct) due to some prior planned commitments. > Looks like I have a conflict on the 25th. Sorry this just came up. > 22-24th work for me if we elect to change it from the 25th. Tentatively, that's fine, but we still haven't heard anything from the packagers and QE folks, so I'm not yet certain. Anyone else want to pipe up as to interest in helping out with a test day on any of these dates? -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From whayutin at redhat.com Wed Sep 3 19:25:41 2014 From: whayutin at redhat.com (whayutin) Date: Wed, 03 Sep 2014 15:25:41 -0400 Subject: [Rdo-list] Getting the CentOS-7 fix's rolled into rdo In-Reply-To: <5401A72E.6050604@karan.org> References: <5401A72E.6050604@karan.org> Message-ID: <1409772341.2983.17.camel@localhost.localdomain> On Sat, 2014-08-30 at 11:27 +0100, Karanbir Singh wrote: > hi guys, > > Is there a timeline on when we can expect the CentOS-7 workarounds to > get rolled into packstack/rdo itself ? > Prior to running the packstack installer we run the following three workarounds. https://github.com/redhat-openstack/khaleesi/blob/master/workarounds/workarounds-pre-run-packstack.yml#L41 code is here: https://github.com/redhat-openstack/khaleesi/tree/master/roles/workarounds Install is here.. https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-icehouse-production-centos-70-aio-packstack-neutron-gre-rabbitmq/ CI notes/docs here.. 
https://prod-rdojenkins.rhcloud.com/ Thanks From rbowen at redhat.com Fri Sep 5 13:41:27 2014 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 05 Sep 2014 09:41:27 -0400 Subject: [Rdo-list] Reminder: Heat Google hangout in 30 minutes Message-ID: <5409BD87.9050002@redhat.com> Just a quick reminder that Lars Kellogg-Stedman will be giving a Google Hangout about Heat in about 20 minutes at https://plus.google.com/events/c9u4sjn7ksb8jrmma7vd25aok94 If you miss it, you can watch it at that same location later. Questions and discussion will be on the #rdo-hangout channel on Freenode during the event. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rdo-info at redhat.com Fri Sep 5 14:46:49 2014 From: rdo-info at redhat.com (RDO Forum) Date: Fri, 5 Sep 2014 14:46:49 +0000 Subject: [Rdo-list] [RDO] ICYMI: Deploying things with Heat (Hangout) Message-ID: <0000014846483136-f7abd29f-f26a-4923-8ecc-1a7f93e382df-000000@email.amazonses.com> rbowen started a discussion. ICYMI: Deploying things with Heat (Hangout) --- Follow the link below to check it out: https://openstack.redhat.com/forum/discussion/982/icymi-deploying-things-with-heat-hangout Have a great day! From apevec at gmail.com Fri Sep 5 16:13:41 2014 From: apevec at gmail.com (Alan Pevec) Date: Fri, 5 Sep 2014 18:13:41 +0200 Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image (GRE Issue) In-Reply-To: References: <53FDB5AE.1050907@karan.org> <53FDD156.8080409@karan.org> Message-ID: ...snip... > dhcp-option=26,1454 > > It forces any newly created VM's (Ubuntu, F20) MTU to be set to 1454. > The deployed CentOS 7 VM had MTU 1500, which was verified via a load > without an ssh keypair, with a post-creation script assigning a password to > user "centos". I believe the current image would have problems on > GRE systems. 
This issue was fixed in NetworkManager-0.9.9.1-25.git20140326.4dba720.el7_0 which was imported in git.c.o: https://git.centos.org/commit/rpms!NetworkManager/69d374bc89058971025b19e668497d335a166290 Karanbir, is image build not including updates? Cheers, Alan From herrold at owlriver.com Fri Sep 5 17:21:33 2014 From: herrold at owlriver.com (R P Herrold) Date: Fri, 5 Sep 2014 13:21:33 -0400 (EDT) Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image (GRE Issue) In-Reply-To: References: <53FDB5AE.1050907@karan.org> <53FDD156.8080409@karan.org> Message-ID: On Fri, 5 Sep 2014, Alan Pevec wrote: > Karanbir, is image build not including updates? I see such a SRPM on the centos mirror, from my local sub-copy: -rw-r--r-- 1 admin administ 3368994 Jul 10 04:54 ../centos/centos-7/7.0.1406/SRPMS/updates/Source/SPackages/NetworkManager-0.9.9.1-25.git20140326.4dba720.el7_0.src.rpm checking if the patch is included -- Russ herrold From herrold at owlriver.com Fri Sep 5 17:28:28 2014 From: herrold at owlriver.com (R P Herrold) Date: Fri, 5 Sep 2014 13:28:28 -0400 (EDT) Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image (GRE Issue) In-Reply-To: References: <53FDB5AE.1050907@karan.org> <53FDD156.8080409@karan.org> Message-ID: On Fri, 5 Sep 2014, R P Herrold wrote: > On Fri, 5 Sep 2014, Alan Pevec wrote: > > > Karanbir, is image build not including updates? > I see such a SRPM on the centos mirror, from my local > sub-copy: which contains the relevant patch [herrold at centos-6 NetworkManager]$ rpm -qlp --nogpg \ NetworkManager-0.9.9.1-25.git20140326.4dba720.el7_0.src.rpm | \ grep 0022-rh1112020-crash-reading-bridge-sysctl.patch 0022-rh1112020-crash-reading-bridge-sysctl.patch As to the creation of the image in question, I cannot find the process on the centos git, so, answering the question of: were updates applied? will need to await KB ... 
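[Editor's sketch: alongside grepping the SRPM for the patch, the question "were updates applied?" comes down to comparing the release field of the NetworkManager build inside the image against the el7_0 update. The snippet below is a simplified illustration using the two NVRs quoted in this thread as example inputs; it only compares the leading integer of the release field, whereas real tooling should use rpmdev-vercmp or rpm's own version comparison.]

```shell
# Compare the NetworkManager build found in the image against the build
# that carries the fix. Simplified: only the leading integer of the
# release field is compared (fine for these two NVRs, not in general).
in_image="NetworkManager-0.9.9.1-13.git20140326.4dba720.el7"
fixed="NetworkManager-0.9.9.1-25.git20140326.4dba720.el7_0"

# rel NVR -> leading integer of the release field ("13", "25", ...)
rel() { echo "$1" | awk -F- '{print $NF}' | cut -d. -f1; }

if [ "$(rel "$in_image")" -ge "$(rel "$fixed")" ]; then
    echo "image has the fixed build"
else
    echo "image predates the fixed build"
fi
```

With these inputs the sketch prints "image predates the fixed build", i.e. the image would still need the update applied.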
-- Russ herrold From apevec at gmail.com Fri Sep 5 22:33:30 2014 From: apevec at gmail.com (Alan Pevec) Date: Sat, 6 Sep 2014 00:33:30 +0200 Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image (GRE Issue) In-Reply-To: References: <53FDB5AE.1050907@karan.org> <53FDD156.8080409@karan.org> Message-ID: > 0022-rh1112020-crash-reading-bridge-sysctl.patch relevant patch for MTU issue is 0023-rh1093231-mtu-fix.patch > As to the creation of the image in question, I cannot find the > process on the centos git, so, answering the question of: > were updates applied? > will need to await KB ... Yeah, I expect KB will document kickstart and the build process somewhere. In the meantime I've inspected CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 and it contains unpatched NetworkManager-0.9.9.1-13.git20140326.4dba720.el7 Cheers, Alan From mail-lists at karan.org Fri Sep 5 23:38:32 2014 From: mail-lists at karan.org (Karanbir Singh) Date: Sat, 06 Sep 2014 00:38:32 +0100 Subject: [Rdo-list] Request for testing: CentOS-7-x86_64 Generic Cloud image (GRE Issue) In-Reply-To: References: <53FDB5AE.1050907@karan.org> <53FDD156.8080409@karan.org> Message-ID: <540A4978.6060509@karan.org> On 09/05/2014 11:33 PM, Alan Pevec wrote: >> 0022-rh1112020-crash-reading-bridge-sysctl.patch > > relevant patch for MTU issue is 0023-rh1093231-mtu-fix.patch > >> As to the creation of the image in question, I cannot find the >> process on the centos git, so, answering the question of: >> were updates applied? >> will need to await KB ... > > Yeah, I expect KB will document kickstart and the build process somewhere. > In the meantime I've inspected > CentOS-7-x86_64-GenericCloud-20140826_02.qcow2 and it contains > unpatched NetworkManager-0.9.9.1-13.git20140326.4dba720.el7 > I hope to deliver a 'set' of images, one that represents state as on date of release for CentOS-7, and another that will include all updates from that point till date-of-build. 
Ideally moving to a state where we get updated images weekly. The default symlink will point at the latest build, with updates applied. People who know what they are doing can then just grab whatever point in time they desire. A new set of images is coming this Monday; I'll post URLs and sha sums once they are live. If we can get one final round of testing around them, we can then move to release. -- Karanbir Singh +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh GnuPG Key : http://www.karan.org/publickey.asc From ichi.sara at gmail.com Mon Sep 8 08:26:48 2014 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Mon, 8 Sep 2014 10:26:48 +0200 Subject: [Rdo-list] nova-docker Message-ID: Hey guys, I have installed an all-in-one openstack vm using packstack. Now I'm trying to install docker and get nova to use it. I'm following this tutorial: https://wiki.openstack.org/wiki/Docker. When I try the docker search or the docker pull samalba/hipache commands it gives me this error: 2014/09/08 12:02:07 Get https://index.docker.io/v1/repositories/samalba/hipache/images: dial tcp 162.242.195.84:443: connection timed out. I think it's because of the proxy. Does anyone have any idea where I can set the proxy variable for docker to use? Thanks in advance. Sara -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Mon Sep 8 10:29:48 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 8 Sep 2014 15:59:48 +0530 Subject: [Rdo-list] nova-docker In-Reply-To: References: Message-ID: <20140908102948.GE14391@tesla.pnq.redhat.com> On Mon, Sep 08, 2014 at 10:26:48AM +0200, ICHIBA Sara wrote: > Hey guys, > > I have installed an all-in-one openstack vm using packstack. Now I'm > trying to install docker and get nova to use it. > > I'm following this tutorial: https://wiki.openstack.org/wiki/Docker. 
When I > try the docker search or the docker pull samalba/hipache commandes it > gives me this error :2014/09/08 12:02:07 Get > https://index.docker.io/v1/repositories/samalba/hipache/images: dial > tcp 162.242.195.84:443: connection timed out . > > I think it's because of the proxy. Have anywone any idea on where I > can set the proxy variable for the docker to use it? I'm not a Docker user, but just a related comment: Lars has written[1] a couple of blogposts related to Nova/Docker and Heat, where he talks[2] about configuring Docker to listen on a TCP socket: [1] http://blog.oddbit.com/2014/08/28/novadocker-and-environment-var/ [2] http://blog.oddbit.com/2014/08/30/docker-plugin-for-openstack-he/ Hope that helps. -- /kashyap From ichi.sara at gmail.com Mon Sep 8 11:11:21 2014 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Mon, 8 Sep 2014 13:11:21 +0200 Subject: [Rdo-list] nova-docker In-Reply-To: <20140908102948.GE14391@tesla.pnq.redhat.com> References: <20140908102948.GE14391@tesla.pnq.redhat.com> Message-ID: Thank you Kashyap for your response, in order to configure docker to use the proxy , I found that we should set the proxy's variables into the file /etc/sysconfig/docker export HTTP_PROXY="http://:" export HTTPS_PROXY="http://:" once i did this , the two commands worked just fine. 2014-09-08 12:29 GMT+02:00 Kashyap Chamarthy : > On Mon, Sep 08, 2014 at 10:26:48AM +0200, ICHIBA Sara wrote: > > Hey guys, > > > > I have installed an all-in-one openstack vm using packstack. Now i'm > > trying to install docker and get nova use it. > > > > I'm following this tuto https://wiki.openstack.org/wiki/Docker. When I > > try the docker search or the docker pull samalba/hipache commandes it > > gives me this error :2014/09/08 12:02:07 Get > > https://index.docker.io/v1/repositories/samalba/hipache/images: dial > > tcp 162.242.195.84:443: connection timed out . > > > > I think it's because of the proxy. 
Have anywone any idea on where I > > can set the proxy variable for the docker to use it? > > I'm not a Docker user, but just a related comment: Lars has written[1] a > couple of blogposts related to Nova/Docker and Heat, where he talks[2] > about configuring Docker to listen on a TCP socket: > > > [1] http://blog.oddbit.com/2014/08/28/novadocker-and-environment-var/ > [2] http://blog.oddbit.com/2014/08/30/docker-plugin-for-openstack-he/ > > > Hope that helps. > > -- > /kashyap > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ichi.sara at gmail.com Mon Sep 8 11:46:03 2014 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Mon, 8 Sep 2014 13:46:03 +0200 Subject: [Rdo-list] nova-docker In-Reply-To: References: <20140908102948.GE14391@tesla.pnq.redhat.com> Message-ID: hello again, Now I can launch instances in nova using the docker images I uploaded. The problem is that none of this containers is working. In the console it's said that the instances are runing but when I try to access the console I got those messages displayed . If do have any idea that may help me , i'm ready to try it. By the way, when I start normal vms and not containers , the VM starts normally without problem and I can ping it and do everything. [image: Images int?gr?es 1] 2014-09-08 13:11 GMT+02:00 ICHIBA Sara : > Thank you Kashyap for your response, > > in order to configure docker to use the proxy , I found that we should set > the proxy's variables into the file /etc/sysconfig/docker > > export HTTP_PROXY="http://:" > export HTTPS_PROXY="http://:" > > > once i did this , the two commands worked just fine. > > 2014-09-08 12:29 GMT+02:00 Kashyap Chamarthy : > >> On Mon, Sep 08, 2014 at 10:26:48AM +0200, ICHIBA Sara wrote: >> > Hey guys, >> > >> > I have installed an all-in-one openstack vm using packstack. Now i'm >> > trying to install docker and get nova use it. >> > >> > I'm following this tuto https://wiki.openstack.org/wiki/Docker. 
When I >> > try the docker search or the docker pull samalba/hipache commandes it >> > gives me this error :2014/09/08 12:02:07 Get >> > https://index.docker.io/v1/repositories/samalba/hipache/images: dial >> > tcp 162.242.195.84:443: connection timed out . >> > >> > I think it's because of the proxy. Have anywone any idea on where I >> > can set the proxy variable for the docker to use it? >> >> I'm not a Docker user, but just a related comment: Lars has written[1] a >> couple of blogposts related to Nova/Docker and Heat, where he talks[2] >> about configuring Docker to listen on a TCP socket: >> >> >> [1] http://blog.oddbit.com/2014/08/28/novadocker-and-environment-var/ >> [2] http://blog.oddbit.com/2014/08/30/docker-plugin-for-openstack-he/ >> >> >> Hope that helps. >> >> -- >> /kashyap >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 21190 bytes Desc: not available URL: From ichi.sara at gmail.com Mon Sep 8 11:49:53 2014 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Mon, 8 Sep 2014 13:49:53 +0200 Subject: [Rdo-list] nova-docker In-Reply-To: References: <20140908102948.GE14391@tesla.pnq.redhat.com> Message-ID: 2014-09-08 13:11 GMT+02:00 ICHIBA Sara : > Thank you Kashyap for your response, > > in order to configure docker to use the proxy , I found that we should set > the proxy's variables into the file /etc/sysconfig/docker > > export HTTP_PROXY="http://:" > export HTTPS_PROXY="http://:" > > > once i did this , the two commands worked just fine. > > 2014-09-08 12:29 GMT+02:00 Kashyap Chamarthy : > >> On Mon, Sep 08, 2014 at 10:26:48AM +0200, ICHIBA Sara wrote: >> > Hey guys, >> > >> > I have installed an all-in-one openstack vm using packstack. Now i'm >> > trying to install docker and get nova use it. >> > >> > I'm following this tuto https://wiki.openstack.org/wiki/Docker. 
When I >> > try the docker search or the docker pull samalba/hipache commandes it >> > gives me this error :2014/09/08 12:02:07 Get >> > https://index.docker.io/v1/repositories/samalba/hipache/images: dial >> > tcp 162.242.195.84:443: connection timed out . >> > >> > I think it's because of the proxy. Have anywone any idea on where I >> > can set the proxy variable for the docker to use it? >> >> I'm not a Docker user, but just a related comment: Lars has written[1] a >> couple of blogposts related to Nova/Docker and Heat, where he talks[2] >> about configuring Docker to listen on a TCP socket: >> >> >> [1] http://blog.oddbit.com/2014/08/28/novadocker-and-environment-var/ >> [2] http://blog.oddbit.com/2014/08/30/docker-plugin-for-openstack-he/ >> >> >> Hope that helps. >> >> -- >> /kashyap >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: boot_container_failed.PNG Type: image/png Size: 14136 bytes Desc: not available URL: From rdo-info at redhat.com Mon Sep 8 15:58:43 2014 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 8 Sep 2014 15:58:43 +0000 Subject: [Rdo-list] [RDO] RDO Blog roundup, week of September 1 Message-ID: <0000014855fd1908-6f3e2301-aa56-428e-a461-5cb864229b14-000000@email.amazonses.com> rbowen started a discussion. RDO Blog roundup, week of September 1 --- Follow the link below to check it out: https://openstack.redhat.com/forum/discussion/983/rdo-blog-roundup-week-of-september-1 Have a great day! From lars at redhat.com Mon Sep 8 16:10:55 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Mon, 8 Sep 2014 12:10:55 -0400 Subject: [Rdo-list] nova-docker In-Reply-To: References: <20140908102948.GE14391@tesla.pnq.redhat.com> Message-ID: <20140908161055.GA30170@redhat.com> On Mon, Sep 08, 2014 at 01:46:03PM +0200, ICHIBA Sara wrote: > problem is that none of this containers is working. 
In the console it's > said that the instances are runing but when I try to access the console I > got those messages displayed... nova-docker does not have support for remote consoles at this time (since the remote console support is built around VNC access, and a docker container does not have a console that can be accessed using this mechanism). In theory the "nova console-log" output may provide a log of stdout from the Docker container (I have not tested this myself, and I don't have a nova-docker environment handy at the moment). -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From rbowen at redhat.com Mon Sep 8 16:41:28 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 08 Sep 2014 12:41:28 -0400 Subject: [Rdo-list] Fwd: [Openstack] Docs Bug Squash Day tomorrow, 9/9/14 In-Reply-To: References: Message-ID: <540DDC38.6010408@redhat.com> Tomorrow is the OpenStack documentation bug-squashing day. See below for details. If you have any interest in documentation, this is a great way to get more deeply involved in the OpenStack project. --Rich -------- Original Message -------- Subject: [Openstack] Docs Bug Squash Day tomorrow, 9/9/14 Date: Mon, 8 Sep 2014 09:51:41 -0500 From: Anne Gentle To: openstack-docs at lists.openstack.org , openstack at lists.openstack.org Hi all, We're planning a docs bug squash day tomorrow round-the-planet. With over 650 doc bugs there's plenty to go around. Last time we squashed over 100 bugs. Refer to https://wiki.openstack.org/wiki/Documentation/BugDay for details. We have documentarians standing by in #openstack-doc. Sometimes just finding a bug to work on is a challenge. Search for Confirmed or Triaged, and make sure Nobody is assigned, then go to town! openstack-manuals:
http://is.gd/r07MXm openstack-api-site: http://is.gd/Icn3h9 As a reminder, triaging bugs is super helpful. If you know the fix, but can't set up the docs environment for any reason, please just comment in the bug itself. Search for New bugs and triage those first. Thanks -- looking forward to tomorrow (cleaning up still in anticipation)! Anne -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From ichi.sara at gmail.com Mon Sep 8 17:17:20 2014 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Mon, 8 Sep 2014 19:17:20 +0200 Subject: [Rdo-list] nova-docker In-Reply-To: <20140908161055.GA30170@redhat.com> References: <20140908102948.GE14391@tesla.pnq.redhat.com> <20140908161055.GA30170@redhat.com> Message-ID: Actually nova is not taking the docker driver into account. Now that I've disabled the KVM hypervisor (which I forgot to do earlier), the containers fail to run even though I installed docker and configured nova to use its driver as the guide [1] dictates. [1]: https://wiki.openstack.org/wiki/Docker 2014-09-08 18:10 GMT+02:00 Lars Kellogg-Stedman : > On Mon, Sep 08, 2014 at 01:46:03PM +0200, ICHIBA Sara wrote: > > problem is that none of this containers is working. In the console it's > > said that the instances are runing but when I try to access the console I > > got those messages displayed... > > nova-docker does not have support for remote consoles at this time > (since the remote console support is built around VNC access, and a > docker container does not have console that can be accessed using this > mechanism).
> > In theory the "nova console-log" output may provide a log of stdout > from the Docker container (I have not tested this myself, and I don't > have a nova-docker environment handy at the moment). > > -- > Lars Kellogg-Stedman | larsks @ > {freenode,twitter,github} > Cloud Engineering / OpenStack | http://blog.oddbit.com/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at redhat.com Mon Sep 8 18:22:47 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Mon, 8 Sep 2014 14:22:47 -0400 Subject: [Rdo-list] nova-docker In-Reply-To: References: <20140908102948.GE14391@tesla.pnq.redhat.com> <20140908161055.GA30170@redhat.com> Message-ID: <20140908182247.GB30170@redhat.com> On Mon, Sep 08, 2014 at 07:17:20PM +0200, ICHIBA Sara wrote: > Actually nova is not taking into account the docker driver. Now that I've > disabled the KVM hypervisor (which i forgot to do earlier) the containers > fail to run even if I installed docker and configured nova to use its > driver as the guide [1] dictates. What does "failed to run" mean? What sort of errors do you see in the nova compute log (possibly /var/log/nova/nova-compute.log)? Have you verified that Docker is working correctly on these systems? -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From rbowen at redhat.com Mon Sep 8 18:39:48 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 08 Sep 2014 14:39:48 -0400 Subject: [Rdo-list] Meetups in the coming week (Sep 8, 2014) Message-ID: <540DF7F4.6010507@redhat.com> The following are the meetups I'm aware of in the coming week where RDO enthusiasts will be gathering. 
If you know of others, please do add them to http://openstack.redhat.com/Events If you attend any of these meetups, please take pictures, and send me some. If you blog about the events (and you should), please send me that, too. * Red Hat :: Software Developer Meetup, Wednesday, September 10th, Helsinki - http://www.meetup.com/RedHatFinland/events/182796782/ * Openstack and Open Throttle: a multi-vendor Openstack event at K1 Speed, Thursday, September 11, Santa Clara - http://www.meetup.com/San-Francisco-Silicon-Valley-OpenStack-Meetup/events/204925472 * OpenStack India Meetup, Saturday September 13, Pune - http://www.meetup.com/Indian-OpenStack-User-Group/events/200883082/ * OpenStack WorkShop by UnitedStack, Tuesday, September 16th, Shanghai - http://www.meetup.com/China-OpenStack-User-Group/events/203178112/ (The folks at UnitedStack don't actually use RDO, but they're running OpenStack on CentOS, so they have many overlapping interests.) -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ichi.sara at gmail.com Tue Sep 9 07:30:16 2014 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Tue, 9 Sep 2014 09:30:16 +0200 Subject: [Rdo-list] nova-docker In-Reply-To: <20140908182247.GB30170@redhat.com> References: <20140908102948.GE14391@tesla.pnq.redhat.com> <20140908161055.GA30170@redhat.com> <20140908182247.GB30170@redhat.com> Message-ID: It failed because loading docker's driver failed. I think this is my fault: I installed the driver as root, so it was installed in the wrong directory, and nova is looking for it in another directory. 2014-09-08 20:22 GMT+02:00 Lars Kellogg-Stedman : > On Mon, Sep 08, 2014 at 07:17:20PM +0200, ICHIBA Sara wrote: > > Actually nova is not taking into account the docker driver.
Now that I've > > disabled the KVM hypervisor (which i forgot to do earlier) the containers > > fail to run even if I installed docker and configured nova to use its > > driver as the guide [1] dictates. > > What does "failed to run" mean? What sort of errors do you see in the > nova compute log (possibly /var/log/nova/nova-compute.log)? Have you > verified that Docker is working correctly on these systems? > > -- > Lars Kellogg-Stedman | larsks @ > {freenode,twitter,github} > Cloud Engineering / OpenStack | http://blog.oddbit.com/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rasaposha at gmail.com Tue Sep 9 12:49:56 2014 From: rasaposha at gmail.com (Rasanjaya Subasinghe) Date: Tue, 9 Sep 2014 18:19:56 +0530 Subject: [Rdo-list] icehouse ldap integration Message-ID: <3A561BE1-5D7C-42B2-928F-6DB924964A5E@gmail.com> Hi, I tried to configure OpenStack Icehouse with LDAP and everything goes well except for a neutron issue; this is the issue that appears in the server.log file of the neutron service. Can you guide me on this matter? Thanks for the help. Cheers, Rasanjaya -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: Screen Shot 2014-09-09 at 6.04.30 PM.png Type: image/png Size: 802463 bytes Desc: not available URL: From kchamart at redhat.com Tue Sep 9 13:57:59 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 9 Sep 2014 19:27:59 +0530 Subject: [Rdo-list] icehouse ldap integration In-Reply-To: <3A561BE1-5D7C-42B2-928F-6DB924964A5E@gmail.com> References: <3A561BE1-5D7C-42B2-928F-6DB924964A5E@gmail.com> Message-ID: <20140909135759.GM14391@tesla.pnq.redhat.com> On Tue, Sep 09, 2014 at 06:19:56PM +0530, Rasanjaya Subasinghe wrote: > > Hi, > I tried to configure openstack ice house with LDAP and all things are > goes well execp neutron issue, this is the issue which appears on the > server.log file of neutron service. > > Can you guide me for this matter? thanks for the help. This information you've provided is not sufficient to give any meaningful response. At a minimum, if anyone is to help you diagnose your issue, you need to provide: - Describe in more detail what you mean by "configure openstack ice house with LDAP". - What is the test you're trying to perform? An exact reproducer would be very useful. - What is the exact error message you see? Contextual logs/errors from Keystone/Nova. - Exact versions of Keystone, and other relevant packages. - What OS? Fedora? CentOS? Something else? - Probably, provide config files for /etc/keystone/keystone.conf and relevant Neutron config files (preferably uploaded somewhere in *plain text*). -- /kashyap From kchamart at redhat.com Wed Sep 10 06:07:55 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 10 Sep 2014 11:37:55 +0530 Subject: [Rdo-list] Fwd: icehouse ldap integration In-Reply-To: References: <20140909135759.GM14391@tesla.pnq.redhat.com> Message-ID: <20140910060755.GA10920@tesla.redhat.com> [Please don't drop the list for technical threads like this.]
On Tue, Sep 09, 2014 at 08:09:51PM +0530, Rasanjaya Subasinghe wrote: > > Hi Kashyap, > Its Centos6.5 and control and 3 compute node setup in-house cloud and without LDAP keystone settings(driver=keystone.identity.backends.ldap.Identity) everything working fine. those are, > 1.Instance spawn perfectly, > 2.live migration work perfectly. > then try to configure keystone with LDAP driver gives that error on neutron server.log. > 3.This setup up is tested on without ml2 and even ml2 test end with same issue. > I will attached the LDAP file and neutron file. > *keystone version 0.9.0 > > below shows the neutron error show on compute.log I'm afraid you'd have to provide some kind of reproducer for anyone to be able to take a look at this, as you have multiple machines in play. -- /kashyap From rasaposha at gmail.com Wed Sep 10 06:22:32 2014 From: rasaposha at gmail.com (Rasanjaya Subasinghe) Date: Wed, 10 Sep 2014 11:52:32 +0530 Subject: [Rdo-list] icehouse ldap integration In-Reply-To: References: <20140909135759.GM14391@tesla.pnq.redhat.com> Message-ID: On Sep 9, 2014, at 8:09 PM, Rasanjaya Subasinghe wrote: > > Hi Kashyap, > Its Centos6.5 and control and 3 compute node setup in-house cloud and without LDAP keystone settings(driver=keystone.identity.backends.ldap.Identity) everything working fine. those are, > 1.Instance spawn perfectly, > 2.live migration work perfectly. > then try to configure keystone with LDAP driver gives that error on neutron server.log. > 3.This setup up is tested on without ml2 and even ml2 test end with same issue. > I will attached the LDAP file and neutron file.
> *keystone version 0.9.0 > > > > below shows the neutron error show on compute.log > > > > cheers, > thanks > Begin forwarded message: > >> From: Kashyap Chamarthy >> Subject: Re: [Rdo-list] icehouse ldap integration >> Date: September 9, 2014 at 7:27:59 PM GMT+5:30 >> To: Rasanjaya Subasinghe >> Cc: rdo-list at redhat.com >> >> On Tue, Sep 09, 2014 at 06:19:56PM +0530, Rasanjaya Subasinghe wrote: >>> >>> Hi, >>> I tried to configure openstack ice house with LDAP and all things are >>> goes well execp neutron issue, this is the issue which appears on the >>> server.log file of neutron service. >>> >>> Can you guide me for this matter? thanks for the help. >> >> This information you've provided is not sufficient to give any >> meaningful response. >> >> At a minimum, if anyone have to help you diagnose your issue, you need >> to provide: >> >> - Describe in more detail what you mean by "configure >> openstack ice house with LDAP". >> - What is the test you're trying to perform? An exact reproducer would >> be very useful. >> - What is the exact error message you see? Contextual logs/errors from >> Keystone/Nova. >> - Exact versions of Keystone, and other relevant packages. >> - What OS? Fedora? CentOS? Something else? >> - Probably, provide config files for /etc/keystone/keystone.conf and >> relevant Neutron config files (preferably uploaded somewhere in >> *plain text*). >> >> >> -- >> /kashyap > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rasaposha at gmail.com Wed Sep 10 06:26:59 2014 From: rasaposha at gmail.com (Rasanjaya Subasinghe) Date: Wed, 10 Sep 2014 11:56:59 +0530 Subject: [Rdo-list] icehouse ldap integration In-Reply-To: References: <20140909135759.GM14391@tesla.pnq.redhat.com> Message-ID: Hi, Sorry for the inconvenience sir,I herewith attached the keystone.conf,neutron.conf and LDAP ldif file. 
It's CentOS 6.5, with a control node and 3 compute nodes in an in-house cloud; without the LDAP keystone setting (driver=keystone.identity.backends.ldap.Identity) everything works fine: 1. Instances spawn perfectly. 2. Live migration works perfectly. Then configuring keystone with the LDAP driver gives that error in the neutron server.log. 3. This setup was tested without ml2, and even the ml2 test ends with the same issue. I have attached the LDAP file and the neutron file. *keystone version 0.9.0 Below is the neutron error shown in compute.log On Wed, Sep 10, 2014 at 11:52 AM, Rasanjaya Subasinghe wrote: > > On Sep 9, 2014, at 8:09 PM, Rasanjaya Subasinghe > wrote: > > > Hi Kashyap, > Its Centos6.5 and control and 3 compute node setup in-house cloud and > without LDAP keystone settings( > driver=keystone.identity.backends.ldap.Identity) everything working fine. > those are, > 1.Instance spawn perfectly, > 2.live migration work perfectly. > then try to configure keystone with LDAP driver gives that error on > neutron server.log. > 3.This setup up is tested on without ml2 and even ml2 test end > with same issue. > I will attached the LDAP file and neutron file.
> > At a minimum, if anyone have to help you diagnose your issue, you need > to provide: > > - Describe in more detail what you mean by "configure > openstack ice house with LDAP". > - What is the test you're trying to perform? An exact reproducer would > be very useful. > - What is the exact error message you see? Contextual logs/errors from > Keystone/Nova. > - Exact versions of Keystone, and other relevant packages. > - What OS? Fedora? CentOS? Something else? > - Probably, provide config files for /etc/keystone/keystone.conf and > relevant Neutron config files (preferably uploaded somewhere in > *plain text*). > > > -- > /kashyap > > > > -- Rasanjaya Subasinghe -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: keystone.conf Type: application/octet-stream Size: 40030 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: neutron.conf Type: application/octet-stream Size: 19131 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: staging.ldif Type: application/octet-stream Size: 5212 bytes Desc: not available URL: From rasaposha at gmail.com Wed Sep 10 06:32:01 2014 From: rasaposha at gmail.com (Rasanjaya Subasinghe) Date: Wed, 10 Sep 2014 12:02:01 +0530 Subject: [Rdo-list] icehouse ldap integration In-Reply-To: References: <20140909135759.GM14391@tesla.pnq.redhat.com> Message-ID: Hi Kashyap, this is the configuration i have made for integrate with LDAP, 1. 
keystone.conf url = ldap://192.168.16.100 user = cn=admin,dc=example,dc=org password = 123 suffix = dc=example,dc=org user_tree_dn = ou=Users,dc=example,dc=org user_objectclass = inetOrgPerson user_id_attribute = cn user_name_attribute = cn user_pass_attribute = userPassword user_enabled_emulation = True user_enabled_emulation_dn = cn=enabled_users,ou=Users,dc=example,dc=org user_allow_create = False user_allow_update = False user_allow_delete = False tenant_tree_dn = ou=Groups,dc=example,dc=org tenant_objectclass = groupOfNames tenant_id_attribute = cn #tenant_domain_id_attribute = businessCategory #tenant_domain_id_attribute = cn tenant_member_attribute = member tenant_name_attribute = cn tenant_domain_id_attribute = None tenant_allow_create = False tenant_allow_update = False tenant_allow_delete = False role_tree_dn = ou=Roles,dc=example,dc=org role_objectclass = organizationalRole role_member_attribute = roleOccupant role_id_attribute = cn role_name_attribute = cn role_allow_create = False role_allow_update = False role_allow_delete = False *2.neutron.conf* [DEFAULT] # Print more verbose output (set logging level to INFO instead of default WARNING level). # verbose = True verbose = True # Print debugging output (set logging level to DEBUG instead of default WARNING level). # debug = False debug = True # Where to store Neutron state files. This directory must be writable by the # user executing the agent. 
# state_path = /var/lib/neutron # Where to store lock files # lock_path = $state_path/lock # log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s # log_date_format = %Y-%m-%d %H:%M:%S # use_syslog -> syslog # log_file and log_dir -> log_dir/log_file # (not log_file) and log_dir -> log_dir/{binary_name}.log # use_stderr -> stderr # (not user_stderr) and (not log_file) -> stdout # publish_errors -> notification system # use_syslog = False use_syslog = False # syslog_log_facility = LOG_USER # use_stderr = False # log_file = # log_dir = log_dir =/var/log/neutron # publish_errors = False # Address to bind the API server to # bind_host = 0.0.0.0 bind_host = 0.0.0.0 # Port the bind the API server to # bind_port = 9696 bind_port = 9696 # Path to the extensions. Note that this can be a colon-separated list of # paths. For example: # api_extensions_path = extensions:/path/to/more/extensions:/even/more/extensions # The __path__ of neutron.extensions is appended to this, so if your # extensions are in there you don't need to specify them here # api_extensions_path = # (StrOpt) Neutron core plugin entrypoint to be loaded from the # neutron.core_plugins namespace. See setup.cfg for the entrypoint names of the # plugins included in the neutron source distribution. For compatibility with # previous versions, the class name of a plugin can be specified instead of its # entrypoint name. # # core_plugin = core_plugin =neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 # Example: core_plugin = ml2 # (ListOpt) List of service plugin entrypoints to be loaded from the # neutron.service_plugins namespace. See setup.cfg for the entrypoint names of # the plugins included in the neutron source distribution. For compatibility # with previous versions, the class name of a plugin can be specified instead # of its entrypoint name. 
# # service_plugins = service_plugins =neutron.services.firewall.fwaas_plugin.FirewallPlugin # Example: service_plugins = router,firewall,lbaas,vpnaas,metering # Paste configuration file # api_paste_config = /usr/share/neutron/api-paste.ini # The strategy to be used for auth. # Supported values are 'keystone'(default), 'noauth'. # auth_strategy = noauth auth_strategy = keystone # Base MAC address. The first 3 octets will remain unchanged. If the # 4h octet is not 00, it will also be used. The others will be # randomly generated. # 3 octet # base_mac = fa:16:3e:00:00:00 base_mac = fa:16:3e:00:00:00 # 4 octet # base_mac = fa:16:3e:4f:00:00 # Maximum amount of retries to generate a unique MAC address # mac_generation_retries = 16 mac_generation_retries = 16 # DHCP Lease duration (in seconds) # dhcp_lease_duration = 86400 dhcp_lease_duration = 86400 # Allow sending resource operation notification to DHCP agent # dhcp_agent_notification = True # Enable or disable bulk create/update/delete operations # allow_bulk = True allow_bulk = True # Enable or disable pagination # allow_pagination = False allow_pagination = False # Enable or disable sorting # allow_sorting = False allow_sorting = False # Enable or disable overlapping IPs for subnets # Attention: the following parameter MUST be set to False if Neutron is # being used in conjunction with nova security groups # allow_overlapping_ips = True allow_overlapping_ips = True # Ensure that configured gateway is on subnet # force_gateway_on_subnet = False # RPC configuration options. Defined in rpc __init__ # The messaging module to use, defaults to kombu. # rpc_backend = neutron.openstack.common.rpc.impl_kombu rpc_backend = neutron.openstack.common.rpc.impl_kombu # Size of RPC thread pool # rpc_thread_pool_size = 64 # Size of RPC connection pool # rpc_conn_pool_size = 30 # Seconds to wait for a response from call or multicall # rpc_response_timeout = 60 # Seconds to wait before a cast expires (TTL). 
Only supported by impl_zmq. # rpc_cast_timeout = 30 # Modules of exceptions that are permitted to be recreated # upon receiving exception data from an rpc call. # allowed_rpc_exception_modules = neutron.openstack.common.exception, nova.exception # AMQP exchange to connect to if using RabbitMQ or QPID # control_exchange = neutron control_exchange = neutron # If passed, use a fake RabbitMQ provider # fake_rabbit = False # Configuration options if sending notifications via kombu rpc (these are # the defaults) # SSL version to use (valid only if SSL enabled) # kombu_ssl_version = # SSL key file (valid only if SSL enabled) # kombu_ssl_keyfile = # SSL cert file (valid only if SSL enabled) # kombu_ssl_certfile = # SSL certification authority file (valid only if SSL enabled) # kombu_ssl_ca_certs = # IP address of the RabbitMQ installation # rabbit_host = localhost rabbit_host = 192.168.32.20 # Password of the RabbitMQ server # rabbit_password = guest rabbit_password = guest # Port where RabbitMQ server is running/listening # rabbit_port = 5672 rabbit_port = 5672 # RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, host2:5672) # rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port' # rabbit_hosts = localhost:5672 rabbit_hosts = 192.168.32.20:5672 # User ID used for RabbitMQ connections # rabbit_userid = guest rabbit_userid = guest # Location of a virtual RabbitMQ installation. # rabbit_virtual_host = / rabbit_virtual_host = / # Maximum retries with trying to connect to RabbitMQ # (the default of 0 implies an infinite retry count) # rabbit_max_retries = 0 # RabbitMQ connection retry interval # rabbit_retry_interval = 1 # Use HA queues in RabbitMQ (x-ha-policy: all). You need to # wipe RabbitMQ database when changing this option. 
(boolean value) # rabbit_ha_queues = false rabbit_ha_queues = False # QPID # rpc_backend=neutron.openstack.common.rpc.impl_qpid # Qpid broker hostname # qpid_hostname = localhost # Qpid broker port # qpid_port = 5672 # Qpid single or HA cluster (host:port pairs i.e: host1:5672, host2:5672) # qpid_hosts is defaulted to '$qpid_hostname:$qpid_port' # qpid_hosts = localhost:5672 # Username for qpid connection # qpid_username = '' # Password for qpid connection # qpid_password = '' # Space separated list of SASL mechanisms to use for auth # qpid_sasl_mechanisms = '' # Seconds between connection keepalive heartbeats # qpid_heartbeat = 60 # Transport to use, either 'tcp' or 'ssl' # qpid_protocol = tcp # Disable Nagle algorithm # qpid_tcp_nodelay = True # ZMQ # rpc_backend=neutron.openstack.common.rpc.impl_zmq # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. # The "host" option should point or resolve to this address. # rpc_zmq_bind_address = * # ============ Notification System Options ===================== # Notifications can be sent when network/subnet/port are created, updated or deleted. # There are three methods of sending notifications: logging (via the # log_file directive), rpc (via a message queue) and # noop (no notifications sent, the default) # Notification_driver can be defined multiple times # Do nothing driver # notification_driver = neutron.openstack.common.notifier.no_op_notifier # Logging driver # notification_driver = neutron.openstack.common.notifier.log_notifier # RPC driver. # notification_driver = neutron.openstack.common.notifier.rpc_notifier # default_notification_level is used to form actual topic name(s) or to set logging level # default_notification_level = INFO # default_publisher_id is a part of the notification payload # host = myhost.com # default_publisher_id = $host # Defined in rpc_notifier, can be comma separated values. 
# The actual topic names will be %s.%(default_notification_level)s # notification_topics = notifications # Default maximum number of items returned in a single response, # value == infinite and value < 0 means no max limit, and value must # be greater than 0. If the number of items requested is greater than # pagination_max_limit, server will just return pagination_max_limit # of number of items. # pagination_max_limit = -1 # Maximum number of DNS nameservers per subnet # max_dns_nameservers = 5 # Maximum number of host routes per subnet # max_subnet_host_routes = 20 # Maximum number of fixed ips per port # max_fixed_ips_per_port = 5 # =========== items for agent management extension ============= # Seconds to regard the agent as down; should be at least twice # report_interval, to be sure the agent is down for good # agent_down_time = 75 agent_down_time = 75 # =========== end of items for agent management extension ===== # =========== items for agent scheduler extension ============= # Driver to use for scheduling network to DHCP agent # network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler # Driver to use for scheduling router to a default L3 agent # router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler # Driver to use for scheduling a loadbalancer pool to an lbaas agent # loadbalancer_pool_scheduler_driver = neutron.services.loadbalancer.agent_scheduler.ChanceScheduler # Allow auto scheduling networks to DHCP agent. It will schedule non-hosted # networks to first DHCP agent which sends get_active_networks message to # neutron server # network_auto_schedule = True # Allow auto scheduling routers to L3 agent. It will schedule non-hosted # routers to first L3 agent which sends sync_routers message to neutron server # router_auto_schedule = True # Number of DHCP agents scheduled to host a network. 
This enables redundant # DHCP agents for configured networks. # dhcp_agents_per_network = 1 dhcp_agents_per_network = 1 # =========== end of items for agent scheduler extension ===== # =========== WSGI parameters related to the API server ============== # Number of separate worker processes to spawn. The default, 0, runs the # worker thread in the current process. Greater than 0 launches that number of # child processes as workers. The parent process manages them. # api_workers = 0 api_workers = 0 # Number of separate RPC worker processes to spawn. The default, 0, runs the # worker thread in the current process. Greater than 0 launches that number of # child processes as RPC workers. The parent process manages them. # This feature is experimental until issues are addressed and testing has been # enabled for various plugins for compatibility. # rpc_workers = 0 # Sets the value of TCP_KEEPIDLE in seconds to use for each server socket when # starting API server. Not supported on OS X. # tcp_keepidle = 600 # Number of seconds to keep retrying to listen # retry_until_window = 30 # Number of backlog requests to configure the socket with. # backlog = 4096 # Max header line to accommodate large tokens # max_header_line = 16384 # Enable SSL on the API server # use_ssl = False use_ssl = False # Certificate file to use when starting API server securely # ssl_cert_file = /path/to/certfile # Private key file to use when starting API server securely # ssl_key_file = /path/to/keyfile # CA certificate file to use when starting API server securely to # verify connecting clients. This is an optional parameter only required if # API clients need to authenticate to the API server using SSL certificates # signed by a trusted CA # ssl_ca_file = /path/to/cafile # ======== end of WSGI parameters related to the API server ========== # ======== neutron nova interactions ========== # Send notification to nova when port status is active. 
# notify_nova_on_port_status_changes = False
notify_nova_on_port_status_changes = True

# Send notifications to nova when port data (fixed_ips/floatingips) change
# so nova can update it's cache.
# notify_nova_on_port_data_changes = False
notify_nova_on_port_data_changes = True

# URL for connection to nova (Only supports one nova region currently).
# nova_url = http://127.0.0.1:8774/v2
nova_url = http://192.168.32.20:8774/v2

# Name of nova region to use. Useful if keystone manages more than one region
# nova_region_name =
nova_region_name = RegionOne

# Username for connection to nova in admin context
# nova_admin_username =
nova_admin_username = nova

# The uuid of the admin nova tenant
# nova_admin_tenant_id =
nova_admin_tenant_id = d3e2355e31b449cca9dd57fa5073ec2f

# Password for connection to nova in admin context.
# nova_admin_password =
nova_admin_password = secret

# Authorization URL for connection to nova in admin context.
# nova_admin_auth_url =
nova_admin_auth_url = http://192.168.32.20:35357/v2.0

# Number of seconds between sending events to nova if there are any events to send
# send_events_interval = 2
send_events_interval = 2

# ======== end of neutron nova interactions ==========
rabbit_use_ssl=False

[quotas]
# Default driver to use for quota checks
# quota_driver = neutron.db.quota_db.DbQuotaDriver

# Resource name(s) that are supported in quota features
# quota_items = network,subnet,port

# Default number of resource allowed per tenant. A negative value means
# unlimited.
# default_quota = -1

# Number of networks allowed per tenant. A negative value means unlimited.
# quota_network = 10

# Number of subnets allowed per tenant. A negative value means unlimited.
# quota_subnet = 10

# Number of ports allowed per tenant. A negative value means unlimited.
# quota_port = 50

# Number of security groups allowed per tenant. A negative value means
# unlimited.
# quota_security_group = 10

# Number of security group rules allowed per tenant. A negative value means
# unlimited.
# quota_security_group_rule = 100

# Number of vips allowed per tenant. A negative value means unlimited.
# quota_vip = 10

# Number of pools allowed per tenant. A negative value means unlimited.
# quota_pool = 10

# Number of pool members allowed per tenant. A negative value means unlimited.
# The default is unlimited because a member is not a real resource consumer
# on Openstack. However, on back-end, a member is a resource consumer
# and that is the reason why quota is possible.
# quota_member = -1

# Number of health monitors allowed per tenant. A negative value means
# unlimited.
# The default is unlimited because a health monitor is not a real resource
# consumer on Openstack. However, on back-end, a member is a resource consumer
# and that is the reason why quota is possible.
# quota_health_monitors = -1

# Number of routers allowed per tenant. A negative value means unlimited.
# quota_router = 10

# Number of floating IPs allowed per tenant. A negative value means unlimited.
# quota_floatingip = 50

[agent]
# Use "sudo neutron-rootwrap /etc/neutron/rootwrap.conf" to use the real
# root filter facility.
# Change to "sudo" to skip the filtering and just run the command directly
# root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf

# =========== items for agent management extension =============
# seconds between nodes reporting state to server; should be less than
# agent_down_time, best if it is half or less than agent_down_time
# report_interval = 30
report_interval = 30
# =========== end of items for agent management extension =====

[keystone_authtoken]
# auth_host = 127.0.0.1
auth_host = 192.168.32.20
# auth_port = 35357
auth_port = 35357
# auth_protocol = http
auth_protocol = http
# admin_tenant_name = %SERVICE_TENANT_NAME%
admin_tenant_name = services
# admin_user = %SERVICE_USER%
admin_user = neutron
# admin_password = %SERVICE_PASSWORD%
admin_password = secret
auth_uri=http://192.168.32.20:5000/

[database]
# This line MUST be changed to actually run the plugin.
# Example:
# connection = mysql://root:pass@127.0.0.1:3306/neutron
connection = mysql://neutron:secret@192.168.32.20/ovs_neutron
# Replace 127.0.0.1 above with the IP address of the database used by the
# main neutron server. (Leave it as is if the database runs on this host.)
# connection = sqlite://

# The SQLAlchemy connection string used to connect to the slave database
# slave_connection =

# Database reconnection retry times - in event connectivity is lost
# set to -1 implies an infinite retry count
# max_retries = 10
max_retries = 10

# Database reconnection interval in seconds - if the initial connection to the
# database fails
# retry_interval = 10
retry_interval = 10

# Minimum number of SQL connections to keep open in a pool
# min_pool_size = 1

# Maximum number of SQL connections to keep open in a pool
# max_pool_size = 10

# Timeout in seconds before idle sql connections are reaped
# idle_timeout = 3600
idle_timeout = 3600

# If set, use this value for max_overflow with sqlalchemy
# max_overflow = 20

# Verbosity of SQL debugging information. 0=None, 100=Everything
# connection_debug = 0

# Add python stack traces to SQL as comment strings
# connection_trace = False

# If set, use this value for pool_timeout with sqlalchemy
# pool_timeout = 10

[service_providers]
# Specify service providers (drivers) for advanced services like loadbalancer, VPN, Firewall.
# Must be in form:
# service_provider=<service_type>:<name>:<driver>[:default]
# List of allowed service types includes LOADBALANCER, FIREWALL, VPN
# Combination of <service type> and <name> must be unique; <driver> must also be unique
# This is multiline option, example for default provider:
# service_provider=LOADBALANCER:name:lbaas_plugin_driver_path:default
# example of non-default provider:
# service_provider=FIREWALL:name2:firewall_driver_path
# --- Reference implementations ---
# service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
# In order to activate Radware's lbaas driver you need to uncomment the next line.
# If you want to keep the HA Proxy as the default lbaas driver, remove the attribute default from the line below.
# Otherwise comment the HA Proxy line
# service_provider = LOADBALANCER:Radware:neutron.services.loadbalancer.drivers.radware.driver.LoadBalancerDriver:default
# uncomment the following line to make the 'netscaler' LBaaS provider available.
# service_provider=LOADBALANCER:NetScaler:neutron.services.loadbalancer.drivers.netscaler.netscaler_driver.NetScalerPluginDriver
# Uncomment the following line (and comment out the OpenSwan VPN line) to enable Cisco's VPN driver.
# service_provider=VPN:cisco:neutron.services.vpn.service_drivers.cisco_ipsec.CiscoCsrIPsecVPNDriver:default
# Uncomment the line below to use Embrane heleos as Load Balancer service provider.
# service_provider=LOADBALANCER:Embrane:neutron.services.loadbalancer.drivers.embrane.driver.EmbraneLbaas:default

*3. LDIF file for OpenLDAP*

# extended LDIF
#
# LDAPv3
# base with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# example.org
dn: dc=example,dc=org
objectClass: top
objectClass: dcObject
objectClass: organization
o: example Inc
dc: example

# Groups, example.org
dn: ou=Groups,dc=example,dc=org
ou: Groups
objectClass: organizationalUnit

# Users, example.org
dn: ou=Users,dc=example,dc=org
ou: users
objectClass: organizationalUnit

# Roles, example.org
dn: ou=Roles,dc=example,dc=org
ou: roles
objectClass: organizationalUnit

# admin, Users, example.org
dn: cn=admin,ou=Users,dc=example,dc=org
cn: admin
objectClass: inetOrgPerson
objectClass: top
sn: admin
uid: admin
userPassword: secret

# demo, Users, example.org
dn: cn=demo,ou=Users,dc=example,dc=org
cn: demo
objectClass: inetOrgPerson
objectClass: top
sn: demo
uid: demo
userPassword: demo

# cinder, Users, example.org
dn: cn=cinder,ou=Users,dc=example,dc=org
cn: cinder
objectClass: inetOrgPerson
objectClass: top
sn: cinder
uid: cinder
userPassword: secret

# glance, Users, example.org
dn: cn=glance,ou=Users,dc=example,dc=org
cn: glance
objectClass: inetOrgPerson
objectClass: top
sn: glance
uid: glance
userPassword: secret

# nova, Users, example.org
dn: cn=nova,ou=Users,dc=example,dc=org
cn: nova
objectClass: inetOrgPerson
objectClass: top
sn: nova
uid: nova
userPassword: secret

# neutron, Users, example.org
dn: cn=neutron,ou=Users,dc=example,dc=org
cn: neutron
objectClass: inetOrgPerson
objectClass: top
sn: neutron
uid: neutron
userPassword: secret

# enabled_users, Users, example.org
dn: cn=enabled_users,ou=Users,dc=example,dc=org
cn: enabled_users
member: cn=admin,ou=Users,dc=example,dc=org
member: cn=demo,ou=Users,dc=example,dc=org
member: cn=nova,ou=Users,dc=example,dc=org
member: cn=glance,ou=Users,dc=example,dc=org
member: cn=cinder,ou=Users,dc=example,dc=org
member: cn=neutron,ou=Users,dc=example,dc=org
objectClass: groupOfNames

# demo, Groups, example.org
dn: cn=demo,ou=Groups,dc=example,dc=org
cn: demo
objectClass: groupOfNames
member: cn=admin,ou=Users,dc=example,dc=org
member: cn=demo,ou=Users,dc=example,dc=org
member: cn=nova,ou=Users,dc=example,dc=org
member: cn=glance,ou=Users,dc=example,dc=org
member: cn=cinder,ou=Users,dc=example,dc=org
member: cn=neutron,ou=Users,dc=example,dc=org

# Member, demo, Groups, example.org
dn: cn=Member,cn=demo,ou=Groups,dc=example,dc=org
cn: member
description: Role associated with openstack users
objectClass: organizationalRole
roleOccupant: cn=demo,ou=Users,dc=example,dc=org

# admin, demo, Groups, example.org
dn: cn=admin,cn=demo,ou=Groups,dc=example,dc=org
cn: admin
description: Role associated with openstack users
objectClass: organizationalRole
roleOccupant: cn=admin,ou=Users,dc=example,dc=org
roleOccupant: cn=nova,ou=Users,dc=example,dc=org
roleOccupant: cn=glance,ou=Users,dc=example,dc=org
roleOccupant: cn=cinder,ou=Users,dc=example,dc=org
roleOccupant: cn=neutron,ou=Users,dc=example,dc=org

# services, Groups, example.org
dn: cn=services,ou=Groups,dc=example,dc=org
cn: services
objectClass: groupOfNames
member: cn=admin,ou=Users,dc=example,dc=org
member: cn=demo,ou=Users,dc=example,dc=org
member: cn=nova,ou=Users,dc=example,dc=org
member: cn=glance,ou=Users,dc=example,dc=org
member: cn=cinder,ou=Users,dc=example,dc=org
member: cn=neutron,ou=Users,dc=example,dc=org

# admin, services, Groups, example.org
dn: cn=admin,cn=services,ou=Groups,dc=example,dc=org
cn: admin
description: Role associated with openstack users
objectClass: organizationalRole
roleOccupant: cn=admin,ou=Users,dc=example,dc=org
roleOccupant: cn=nova,ou=Users,dc=example,dc=org
roleOccupant: cn=glance,ou=Users,dc=example,dc=org
roleOccupant: cn=cinder,ou=Users,dc=example,dc=org
roleOccupant: cn=neutron,ou=Users,dc=example,dc=org

# admin, Groups, example.org
dn: cn=admin,ou=Groups,dc=example,dc=org
cn: admin
objectClass: groupOfNames
member: cn=admin,ou=Users,dc=example,dc=org
member: cn=demo,ou=Users,dc=example,dc=org
member: cn=nova,ou=Users,dc=example,dc=org
member: cn=glance,ou=Users,dc=example,dc=org
member: cn=cinder,ou=Users,dc=example,dc=org
member: cn=neutron,ou=Users,dc=example,dc=org

# admin, admin, Groups, example.org
dn: cn=admin,cn=admin,ou=Groups,dc=example,dc=org
cn: admin
description: Role associated with openstack users
objectClass: organizationalRole
roleOccupant: cn=admin,ou=Users,dc=example,dc=org
roleOccupant: cn=nova,ou=Users,dc=example,dc=org
roleOccupant: cn=glance,ou=Users,dc=example,dc=org
roleOccupant: cn=cinder,ou=Users,dc=example,dc=org
roleOccupant: cn=neutron,ou=Users,dc=example,dc=org

# Member, Roles, example.org
dn: cn=Member,ou=Roles,dc=example,dc=org
cn: member
description: Role associated with openstack users
objectClass: organizationalRole
roleOccupant: cn=demo,ou=Users,dc=example,dc=org

# admin, Roles, example.org
dn: cn=admin,ou=Roles,dc=example,dc=org
cn: admin
description: Role associated with openstack users
objectClass: organizationalRole
roleOccupant: cn=admin,ou=Users,dc=example,dc=org
roleOccupant: cn=nova,ou=Users,dc=example,dc=org
roleOccupant: cn=glance,ou=Users,dc=example,dc=org
roleOccupant: cn=cinder,ou=Users,dc=example,dc=org
roleOccupant: cn=neutron,ou=Users,dc=example,dc=org

On Wed, Sep 10, 2014 at 11:56 AM, Rasanjaya Subasinghe wrote:
>
> Hi,
> Sorry for the inconvenience, sir. I herewith attached the
> keystone.conf, neutron.conf and LDAP ldif file.
> It's CentOS 6.5, a control node and 3 compute nodes, in-house cloud, and
> without the LDAP keystone setting
> (driver=keystone.identity.backends.ldap.Identity) everything works fine:
> 1. instances spawn perfectly,
> 2. live migration works perfectly.
> Then trying to configure keystone with the LDAP driver gives that error in
> the neutron server.log.
> 3. This setup was tested without ml2, and even the ml2 test ends
> with the same issue.
> I will attach the LDAP file and neutron file.
> *keystone version 0.9.0
>
> Below is the neutron error shown in compute.log.
>
> On Wed, Sep 10, 2014 at 11:52 AM, Rasanjaya Subasinghe <rasaposha at gmail.com> wrote:
>
>> On Sep 9, 2014, at 8:09 PM, Rasanjaya Subasinghe wrote:
>>
>> Hi Kashyap,
>> It's CentOS 6.5, a control node and 3 compute nodes, in-house cloud, and
>> without the LDAP keystone setting
>> (driver=keystone.identity.backends.ldap.Identity) everything works fine:
>> 1. instances spawn perfectly,
>> 2. live migration works perfectly.
>> Then trying to configure keystone with the LDAP driver gives that error in
>> the neutron server.log.
>> 3. This setup was tested without ml2, and even the ml2 test ends
>> with the same issue.
>> I will attach the LDAP file and neutron file.
>> *keystone version 0.9.0
>>
>> Below is the neutron error shown in compute.log.
>>
>> cheers,
>> thanks
>>
>> Begin forwarded message:
>>
>> *From: *Kashyap Chamarthy
>> *Subject: **Re: [Rdo-list] icehouse ldap integration*
>> *Date: *September 9, 2014 at 7:27:59 PM GMT+5:30
>> *To: *Rasanjaya Subasinghe
>> *Cc: *rdo-list at redhat.com
>>
>> On Tue, Sep 09, 2014 at 06:19:56PM +0530, Rasanjaya Subasinghe wrote:
>>
>> Hi,
>> I tried to configure OpenStack Icehouse with LDAP and everything goes
>> well except a neutron issue; this is the issue which appears in the
>> server.log file of the neutron service.
>>
>> Can you guide me on this matter? Thanks for the help.
>>
>> The information you've provided is not sufficient to give any
>> meaningful response.
>>
>> At a minimum, if anyone is to help you diagnose your issue, you need
>> to provide:
>>
>> - Describe in more detail what you mean by "configure
>> OpenStack Icehouse with LDAP".
>> - What is the test you're trying to perform? An exact reproducer would
>> be very useful.
>> - What is the exact error message you see? Contextual logs/errors from
>> Keystone/Nova.
>> - Exact versions of Keystone, and other relevant packages.
>> - What OS? Fedora? CentOS? Something else?
>> - Probably, provide config files for /etc/keystone/keystone.conf and
>> relevant Neutron config files (preferably uploaded somewhere in
>> *plain text*).
>>
>> --
>> /kashyap
>
> --
> Rasanjaya Subasinghe

--
Rasanjaya Subasinghe

From rasaposha at gmail.com Wed Sep 10 06:33:04 2014
From: rasaposha at gmail.com (Rasanjaya Subasinghe)
Date: Wed, 10 Sep 2014 12:03:04 +0530
Subject: [Rdo-list] icehouse ldap integration
In-Reply-To:
References: <20140909135759.GM14391@tesla.pnq.redhat.com>
Message-ID:

Hi sir,
I will provide more details to reproduce the issue.
cheers

On Wed, Sep 10, 2014 at 12:02 PM, Rasanjaya Subasinghe wrote:

> Hi Kashyap,
>
> this is the configuration I have made to integrate with LDAP:
>
> 1. keystone.conf
>
> url = ldap://192.168.16.100
> user = cn=admin,dc=example,dc=org
> password = 123
> suffix = dc=example,dc=org
>
> user_tree_dn = ou=Users,dc=example,dc=org
> user_objectclass = inetOrgPerson
> user_id_attribute = cn
> user_name_attribute = cn
> user_pass_attribute = userPassword
> user_enabled_emulation = True
> user_enabled_emulation_dn = cn=enabled_users,ou=Users,dc=example,dc=org
> user_allow_create = False
> user_allow_update = False
> user_allow_delete = False
>
> tenant_tree_dn = ou=Groups,dc=example,dc=org
> tenant_objectclass = groupOfNames
> tenant_id_attribute = cn
> #tenant_domain_id_attribute = businessCategory
> #tenant_domain_id_attribute = cn
> tenant_member_attribute = member
> tenant_name_attribute = cn
> tenant_domain_id_attribute = None
> tenant_allow_create = False
> tenant_allow_update = False
> tenant_allow_delete = False
>
> role_tree_dn = ou=Roles,dc=example,dc=org
> role_objectclass = organizationalRole
> role_member_attribute = roleOccupant
> role_id_attribute = cn
> role_name_attribute = cn
> role_allow_create = False
> role_allow_update = False
> role_allow_delete = False
>
> *2. neutron.conf*
>
> [DEFAULT]
> # Print more verbose output (set logging level to INFO instead of default
> # WARNING level).
> # verbose = True
> verbose = True
>
> # Print debugging output (set logging level to DEBUG instead of default
> # WARNING level).
> # debug = False
> debug = True
>
> # Where to store Neutron state files. This directory must be writable by the
> # user executing the agent.
> # state_path = /var/lib/neutron > > # Where to store lock files > # lock_path = $state_path/lock > > # log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s > # log_date_format = %Y-%m-%d %H:%M:%S > > # use_syslog -> syslog > # log_file and log_dir -> log_dir/log_file > # (not log_file) and log_dir -> log_dir/{binary_name}.log > # use_stderr -> stderr > # (not user_stderr) and (not log_file) -> stdout > # publish_errors -> notification system > > # use_syslog = False > use_syslog = False > # syslog_log_facility = LOG_USER > > # use_stderr = False > # log_file = > # log_dir = > log_dir =/var/log/neutron > > # publish_errors = False > > # Address to bind the API server to > # bind_host = 0.0.0.0 > bind_host = 0.0.0.0 > > # Port the bind the API server to > # bind_port = 9696 > bind_port = 9696 > > # Path to the extensions. Note that this can be a colon-separated list of > # paths. For example: > # api_extensions_path = > extensions:/path/to/more/extensions:/even/more/extensions > # The __path__ of neutron.extensions is appended to this, so if your > # extensions are in there you don't need to specify them here > # api_extensions_path = > > # (StrOpt) Neutron core plugin entrypoint to be loaded from the > # neutron.core_plugins namespace. See setup.cfg for the entrypoint names > of the > # plugins included in the neutron source distribution. For compatibility > with > # previous versions, the class name of a plugin can be specified instead > of its > # entrypoint name. > # > # core_plugin = > core_plugin > =neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 > # Example: core_plugin = ml2 > > # (ListOpt) List of service plugin entrypoints to be loaded from the > # neutron.service_plugins namespace. See setup.cfg for the entrypoint > names of > # the plugins included in the neutron source distribution. For > compatibility > # with previous versions, the class name of a plugin can be specified > instead > # of its entrypoint name. 
> # > # service_plugins = > service_plugins =neutron.services.firewall.fwaas_plugin.FirewallPlugin > # Example: service_plugins = router,firewall,lbaas,vpnaas,metering > > # Paste configuration file > # api_paste_config = /usr/share/neutron/api-paste.ini > > # The strategy to be used for auth. > # Supported values are 'keystone'(default), 'noauth'. > # auth_strategy = noauth > auth_strategy = keystone > > # Base MAC address. The first 3 octets will remain unchanged. If the > # 4h octet is not 00, it will also be used. The others will be > # randomly generated. > # 3 octet > # base_mac = fa:16:3e:00:00:00 > base_mac = fa:16:3e:00:00:00 > # 4 octet > # base_mac = fa:16:3e:4f:00:00 > > # Maximum amount of retries to generate a unique MAC address > # mac_generation_retries = 16 > mac_generation_retries = 16 > > # DHCP Lease duration (in seconds) > # dhcp_lease_duration = 86400 > dhcp_lease_duration = 86400 > > # Allow sending resource operation notification to DHCP agent > # dhcp_agent_notification = True > > # Enable or disable bulk create/update/delete operations > # allow_bulk = True > allow_bulk = True > # Enable or disable pagination > # allow_pagination = False > allow_pagination = False > # Enable or disable sorting > # allow_sorting = False > allow_sorting = False > # Enable or disable overlapping IPs for subnets > # Attention: the following parameter MUST be set to False if Neutron is > # being used in conjunction with nova security groups > # allow_overlapping_ips = True > allow_overlapping_ips = True > # Ensure that configured gateway is on subnet > # force_gateway_on_subnet = False > > > # RPC configuration options. Defined in rpc __init__ > # The messaging module to use, defaults to kombu. 
> # rpc_backend = neutron.openstack.common.rpc.impl_kombu > rpc_backend = neutron.openstack.common.rpc.impl_kombu > # Size of RPC thread pool > # rpc_thread_pool_size = 64 > # Size of RPC connection pool > # rpc_conn_pool_size = 30 > # Seconds to wait for a response from call or multicall > # rpc_response_timeout = 60 > # Seconds to wait before a cast expires (TTL). Only supported by impl_zmq. > # rpc_cast_timeout = 30 > # Modules of exceptions that are permitted to be recreated > # upon receiving exception data from an rpc call. > # allowed_rpc_exception_modules = neutron.openstack.common.exception, > nova.exception > # AMQP exchange to connect to if using RabbitMQ or QPID > # control_exchange = neutron > control_exchange = neutron > > # If passed, use a fake RabbitMQ provider > # fake_rabbit = False > > # Configuration options if sending notifications via kombu rpc (these are > # the defaults) > # SSL version to use (valid only if SSL enabled) > # kombu_ssl_version = > # SSL key file (valid only if SSL enabled) > # kombu_ssl_keyfile = > # SSL cert file (valid only if SSL enabled) > # kombu_ssl_certfile = > # SSL certification authority file (valid only if SSL enabled) > # kombu_ssl_ca_certs = > # IP address of the RabbitMQ installation > # rabbit_host = localhost > rabbit_host = 192.168.32.20 > # Password of the RabbitMQ server > # rabbit_password = guest > rabbit_password = guest > # Port where RabbitMQ server is running/listening > # rabbit_port = 5672 > rabbit_port = 5672 > # RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, > host2:5672) > # rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port' > # rabbit_hosts = localhost:5672 > rabbit_hosts = 192.168.32.20:5672 > # User ID used for RabbitMQ connections > # rabbit_userid = guest > rabbit_userid = guest > # Location of a virtual RabbitMQ installation. 
> # rabbit_virtual_host = / > rabbit_virtual_host = / > # Maximum retries with trying to connect to RabbitMQ > # (the default of 0 implies an infinite retry count) > # rabbit_max_retries = 0 > # RabbitMQ connection retry interval > # rabbit_retry_interval = 1 > # Use HA queues in RabbitMQ (x-ha-policy: all). You need to > # wipe RabbitMQ database when changing this option. (boolean value) > # rabbit_ha_queues = false > rabbit_ha_queues = False > > # QPID > # rpc_backend=neutron.openstack.common.rpc.impl_qpid > # Qpid broker hostname > # qpid_hostname = localhost > # Qpid broker port > # qpid_port = 5672 > # Qpid single or HA cluster (host:port pairs i.e: host1:5672, host2:5672) > # qpid_hosts is defaulted to '$qpid_hostname:$qpid_port' > # qpid_hosts = localhost:5672 > # Username for qpid connection > # qpid_username = '' > # Password for qpid connection > # qpid_password = '' > # Space separated list of SASL mechanisms to use for auth > # qpid_sasl_mechanisms = '' > # Seconds between connection keepalive heartbeats > # qpid_heartbeat = 60 > # Transport to use, either 'tcp' or 'ssl' > # qpid_protocol = tcp > # Disable Nagle algorithm > # qpid_tcp_nodelay = True > > # ZMQ > # rpc_backend=neutron.openstack.common.rpc.impl_zmq > # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or > IP. > # The "host" option should point or resolve to this address. > # rpc_zmq_bind_address = * > > # ============ Notification System Options ===================== > > # Notifications can be sent when network/subnet/port are created, updated > or deleted. 
> # There are three methods of sending notifications: logging (via the > # log_file directive), rpc (via a message queue) and > # noop (no notifications sent, the default) > > # Notification_driver can be defined multiple times > # Do nothing driver > # notification_driver = neutron.openstack.common.notifier.no_op_notifier > # Logging driver > # notification_driver = neutron.openstack.common.notifier.log_notifier > # RPC driver. > # notification_driver = neutron.openstack.common.notifier.rpc_notifier > > # default_notification_level is used to form actual topic name(s) or to > set logging level > # default_notification_level = INFO > > # default_publisher_id is a part of the notification payload > # host = myhost.com > # default_publisher_id = $host > > # Defined in rpc_notifier, can be comma separated values. > # The actual topic names will be %s.%(default_notification_level)s > # notification_topics = notifications > > # Default maximum number of items returned in a single response, > # value == infinite and value < 0 means no max limit, and value must > # be greater than 0. If the number of items requested is greater than > # pagination_max_limit, server will just return pagination_max_limit > # of number of items. 
> # pagination_max_limit = -1 > > # Maximum number of DNS nameservers per subnet > # max_dns_nameservers = 5 > > # Maximum number of host routes per subnet > # max_subnet_host_routes = 20 > > # Maximum number of fixed ips per port > # max_fixed_ips_per_port = 5 > > # =========== items for agent management extension ============= > # Seconds to regard the agent as down; should be at least twice > # report_interval, to be sure the agent is down for good > # agent_down_time = 75 > agent_down_time = 75 > # =========== end of items for agent management extension ===== > > # =========== items for agent scheduler extension ============= > # Driver to use for scheduling network to DHCP agent > # network_scheduler_driver = > neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler > # Driver to use for scheduling router to a default L3 agent > # router_scheduler_driver = > neutron.scheduler.l3_agent_scheduler.ChanceScheduler > router_scheduler_driver = > neutron.scheduler.l3_agent_scheduler.ChanceScheduler > # Driver to use for scheduling a loadbalancer pool to an lbaas agent > # loadbalancer_pool_scheduler_driver = > neutron.services.loadbalancer.agent_scheduler.ChanceScheduler > > # Allow auto scheduling networks to DHCP agent. It will schedule non-hosted > # networks to first DHCP agent which sends get_active_networks message to > # neutron server > # network_auto_schedule = True > > # Allow auto scheduling routers to L3 agent. It will schedule non-hosted > # routers to first L3 agent which sends sync_routers message to neutron > server > # router_auto_schedule = True > > # Number of DHCP agents scheduled to host a network. This enables redundant > # DHCP agents for configured networks. > # dhcp_agents_per_network = 1 > dhcp_agents_per_network = 1 > > # =========== end of items for agent scheduler extension ===== > > # =========== WSGI parameters related to the API server ============== > # Number of separate worker processes to spawn. 
The default, 0, runs the > # worker thread in the current process. Greater than 0 launches that > number of > # child processes as workers. The parent process manages them. > # api_workers = 0 > api_workers = 0 > > # Number of separate RPC worker processes to spawn. The default, 0, runs > the > # worker thread in the current process. Greater than 0 launches that > number of > # child processes as RPC workers. The parent process manages them. > # This feature is experimental until issues are addressed and testing has > been > # enabled for various plugins for compatibility. > # rpc_workers = 0 > > # Sets the value of TCP_KEEPIDLE in seconds to use for each server socket > when > # starting API server. Not supported on OS X. > # tcp_keepidle = 600 > > # Number of seconds to keep retrying to listen > # retry_until_window = 30 > > # Number of backlog requests to configure the socket with. > # backlog = 4096 > > # Max header line to accommodate large tokens > # max_header_line = 16384 > > # Enable SSL on the API server > # use_ssl = False > use_ssl = False > > # Certificate file to use when starting API server securely > # ssl_cert_file = /path/to/certfile > > # Private key file to use when starting API server securely > # ssl_key_file = /path/to/keyfile > > # CA certificate file to use when starting API server securely to > # verify connecting clients. This is an optional parameter only required if > # API clients need to authenticate to the API server using SSL certificates > # signed by a trusted CA > # ssl_ca_file = /path/to/cafile > # ======== end of WSGI parameters related to the API server ========== > > > # ======== neutron nova interactions ========== > # Send notification to nova when port status is active. > # notify_nova_on_port_status_changes = False > notify_nova_on_port_status_changes = True > > # Send notifications to nova when port data (fixed_ips/floatingips) change > # so nova can update it's cache. 
> # notify_nova_on_port_data_changes = False > notify_nova_on_port_data_changes = True > > # URL for connection to nova (Only supports one nova region currently). > # nova_url = http://127.0.0.1:8774/v2 > nova_url = http://192.168.32.20:8774/v2 > > # Name of nova region to use. Useful if keystone manages more than one > region > # nova_region_name = > nova_region_name =RegionOne > > # Username for connection to nova in admin context > # nova_admin_username = > nova_admin_username =nova > > # The uuid of the admin nova tenant > # nova_admin_tenant_id = > nova_admin_tenant_id =d3e2355e31b449cca9dd57fa5073ec2f > > # Password for connection to nova in admin context. > # nova_admin_password = > nova_admin_password =secret > > # Authorization URL for connection to nova in admin context. > # nova_admin_auth_url = > nova_admin_auth_url =http://192.168.32.20:35357/v2.0 > > # Number of seconds between sending events to nova if there are any events > to send > # send_events_interval = 2 > send_events_interval = 2 > > # ======== end of neutron nova interactions ========== > rabbit_use_ssl=False > > [quotas] > # Default driver to use for quota checks > # quota_driver = neutron.db.quota_db.DbQuotaDriver > > # Resource name(s) that are supported in quota features > # quota_items = network,subnet,port > > # Default number of resource allowed per tenant. A negative value means > # unlimited. > # default_quota = -1 > > # Number of networks allowed per tenant. A negative value means unlimited. > # quota_network = 10 > > # Number of subnets allowed per tenant. A negative value means unlimited. > # quota_subnet = 10 > > # Number of ports allowed per tenant. A negative value means unlimited. > # quota_port = 50 > > # Number of security groups allowed per tenant. A negative value means > # unlimited. > # quota_security_group = 10 > > # Number of security group rules allowed per tenant. A negative value means > # unlimited. 
> # quota_security_group_rule = 100 > > # Number of vips allowed per tenant. A negative value means unlimited. > # quota_vip = 10 > > # Number of pools allowed per tenant. A negative value means unlimited. > # quota_pool = 10 > > # Number of pool members allowed per tenant. A negative value means > unlimited. > # The default is unlimited because a member is not a real resource consumer > # on Openstack. However, on back-end, a member is a resource consumer > # and that is the reason why quota is possible. > # quota_member = -1 > > # Number of health monitors allowed per tenant. A negative value means > # unlimited. > # The default is unlimited because a health monitor is not a real resource > # consumer on Openstack. However, on back-end, a member is a resource > consumer > # and that is the reason why quota is possible. > # quota_health_monitors = -1 > > # Number of routers allowed per tenant. A negative value means unlimited. > # quota_router = 10 > > # Number of floating IPs allowed per tenant. A negative value means > unlimited. > # quota_floatingip = 50 > > [agent] > # Use "sudo neutron-rootwrap /etc/neutron/rootwrap.conf" to use the real > # root filter facility. 
> # Change to "sudo" to skip the filtering and just run the command directly > # root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf > root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf > > # =========== items for agent management extension ============= > # seconds between nodes reporting state to server; should be less than > # agent_down_time, best if it is half or less than agent_down_time > # report_interval = 30 > report_interval = 30 > > # =========== end of items for agent management extension ===== > > [keystone_authtoken] > # auth_host = 127.0.0.1 > auth_host = 192.168.32.20 > # auth_port = 35357 > auth_port = 35357 > # auth_protocol = http > auth_protocol = http > # admin_tenant_name = %SERVICE_TENANT_NAME% > admin_tenant_name = services > # admin_user = %SERVICE_USER% > admin_user = neutron > # admin_password = %SERVICE_PASSWORD% > admin_password = secret > auth_uri=http://192.168.32.20:5000/ > > [database] > # This line MUST be changed to actually run the plugin. > # Example: > # connection = mysql://root:pass at 127.0.0.1:3306/neutron > connection = mysql://neutron:secret at 192.168.32.20/ovs_neutron > # Replace 127.0.0.1 above with the IP address of the database used by the > # main neutron server. (Leave it as is if the database runs on this host.)
> # connection = sqlite:// > > # The SQLAlchemy connection string used to connect to the slave database > # slave_connection = > > # Database reconnection retry times - in event connectivity is lost > # set to -1 implies an infinite retry count > # max_retries = 10 > max_retries = 10 > > # Database reconnection interval in seconds - if the initial connection to > the > # database fails > # retry_interval = 10 > retry_interval = 10 > > # Minimum number of SQL connections to keep open in a pool > # min_pool_size = 1 > > # Maximum number of SQL connections to keep open in a pool > # max_pool_size = 10 > > # Timeout in seconds before idle sql connections are reaped > # idle_timeout = 3600 > idle_timeout = 3600 > > # If set, use this value for max_overflow with sqlalchemy > # max_overflow = 20 > > # Verbosity of SQL debugging information. 0=None, 100=Everything > # connection_debug = 0 > > # Add python stack traces to SQL as comment strings > # connection_trace = False > > # If set, use this value for pool_timeout with sqlalchemy > # pool_timeout = 10 > > [service_providers] > # Specify service providers (drivers) for advanced services like > loadbalancer, VPN, Firewall. > # Must be in form: > # service_provider=<service_type>:<name>:<driver>[:default] > # List of allowed service types includes LOADBALANCER, FIREWALL, VPN > # Combination of <service type> and <name> must be unique; <driver> must > also be unique > # This is a multiline option, example for default provider: > # service_provider=LOADBALANCER:name:lbaas_plugin_driver_path:default > # example of non-default provider: > # service_provider=FIREWALL:name2:firewall_driver_path > # --- Reference implementations --- > # service_provider = > LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default > > service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default > # In order to activate Radware's lbaas driver you need to uncomment the > next line.
> # If you want to keep the HA Proxy as the default lbaas driver, remove the > attribute default from the line below. > # Otherwise comment the HA Proxy line > # service_provider = > LOADBALANCER:Radware:neutron.services.loadbalancer.drivers.radware.driver.LoadBalancerDriver:default > # uncomment the following line to make the 'netscaler' LBaaS provider > available. > # > service_provider=LOADBALANCER:NetScaler:neutron.services.loadbalancer.drivers.netscaler.netscaler_driver.NetScalerPluginDriver > # Uncomment the following line (and comment out the OpenSwan VPN line) to > enable Cisco's VPN driver. > # > service_provider=VPN:cisco:neutron.services.vpn.service_drivers.cisco_ipsec.CiscoCsrIPsecVPNDriver:default > # Uncomment the line below to use Embrane heleos as Load Balancer service > provider. > # > service_provider=LOADBALANCER:Embrane:neutron.services.loadbalancer.drivers.embrane.driver.EmbraneLbaas:default > > *3. LDIF file for OpenLDAP* > # extended LDIF > # > # LDAPv3 > # base <dc=example,dc=org> with scope subtree > # filter: (objectclass=*) > # requesting: ALL > # > > # example.org > dn: dc=example,dc=org > objectClass: top > objectClass: dcObject > objectClass: organization > o: example Inc > dc: example > > # Groups, example.org > dn: ou=Groups,dc=example,dc=org > ou: Groups > objectClass: organizationalUnit > > # Users, example.org > dn: ou=Users,dc=example,dc=org > ou: users > objectClass: organizationalUnit > > # Roles, example.org > dn: ou=Roles,dc=example,dc=org > ou: roles > objectClass: organizationalUnit > > # admin, Users, example.org > dn: cn=admin,ou=Users,dc=example,dc=org > cn: admin > objectClass: inetOrgPerson > objectClass: top > sn: admin > uid: admin > userPassword: secret > > # demo, Users, example.org > dn: cn=demo,ou=Users,dc=example,dc=org > cn: demo > objectClass: inetOrgPerson > objectClass: top > sn: demo > uid: demo > userPassword: demo > > # cinder, Users, example.org > dn: cn=cinder,ou=Users,dc=example,dc=org > cn: cinder > objectClass: 
inetOrgPerson > objectClass: top > sn: cinder > uid: cinder > userPassword: secret > > # glance, Users, example.org > dn: cn=glance,ou=Users,dc=example,dc=org > cn: glance > objectClass: inetOrgPerson > objectClass: top > sn: glance > uid: glance > userPassword: secret > > # nova, Users, example.org > dn: cn=nova,ou=Users,dc=example,dc=org > cn: nova > objectClass: inetOrgPerson > objectClass: top > sn: nova > uid: nova > userPassword: secret > > # neutron, Users, example.org > dn: cn=neutron,ou=Users,dc=example,dc=org > cn: neutron > objectClass: inetOrgPerson > objectClass: top > sn: neutron > uid: neutron > userPassword: secret > > # enabled_users, Users, example.org > dn: cn=enabled_users,ou=Users,dc=example,dc=org > cn: enabled_users > member: cn=admin,ou=Users,dc=example,dc=org > member: cn=demo,ou=Users,dc=example,dc=org > member: cn=nova,ou=Users,dc=example,dc=org > member: cn=glance,ou=Users,dc=example,dc=org > member: cn=cinder,ou=Users,dc=example,dc=org > member: cn=neutron,ou=Users,dc=example,dc=org > objectClass: groupOfNames > > # demo, Groups, example.org > dn: cn=demo,ou=Groups,dc=example,dc=org > cn: demo > objectClass: groupOfNames > member: cn=admin,ou=Users,dc=example,dc=org > member: cn=demo,ou=Users,dc=example,dc=org > member: cn=nova,ou=Users,dc=example,dc=org > member: cn=glance,ou=Users,dc=example,dc=org > member: cn=cinder,ou=Users,dc=example,dc=org > member: cn=neutron,ou=Users,dc=example,dc=org > > > # Member, demo, Groups, example.org > dn: cn=Member,cn=demo,ou=Groups,dc=example,dc=org > cn: member > description: Role associated with openstack users > objectClass: organizationalRole > roleOccupant: cn=demo,ou=Users,dc=example,dc=org > > # admin, demo, Groups, example.org > dn: cn=admin,cn=demo,ou=Groups,dc=example,dc=org > cn: admin > description: Role associated with openstack users > objectClass: organizationalRole > roleOccupant: cn=admin,ou=Users,dc=example,dc=org > roleOccupant: cn=nova,ou=Users,dc=example,dc=org > roleOccupant: 
cn=glance,ou=Users,dc=example,dc=org > roleOccupant: cn=cinder,ou=Users,dc=example,dc=org > roleOccupant: cn=neutron,ou=Users,dc=example,dc=org > > > # services, Groups, example.org > dn: cn=services,ou=Groups,dc=example,dc=org > cn: services > objectClass: groupOfNames > member: cn=admin,ou=Users,dc=example,dc=org > member: cn=demo,ou=Users,dc=example,dc=org > member: cn=nova,ou=Users,dc=example,dc=org > member: cn=glance,ou=Users,dc=example,dc=org > member: cn=cinder,ou=Users,dc=example,dc=org > member: cn=neutron,ou=Users,dc=example,dc=org > > # admin, services, Groups, example.org > dn: cn=admin,cn=services,ou=Groups,dc=example,dc=org > cn: admin > description: Role associated with openstack users > objectClass: organizationalRole > roleOccupant: cn=admin,ou=Users,dc=example,dc=org > roleOccupant: cn=nova,ou=Users,dc=example,dc=org > roleOccupant: cn=glance,ou=Users,dc=example,dc=org > roleOccupant: cn=cinder,ou=Users,dc=example,dc=org > roleOccupant: cn=neutron,ou=Users,dc=example,dc=org > > # admin, Groups, example.org > dn: cn=admin,ou=Groups,dc=example,dc=org > cn: admin > objectClass: groupOfNames > member: cn=admin,ou=Users,dc=example,dc=org > member: cn=demo,ou=Users,dc=example,dc=org > member: cn=nova,ou=Users,dc=example,dc=org > member: cn=glance,ou=Users,dc=example,dc=org > member: cn=cinder,ou=Users,dc=example,dc=org > member: cn=neutron,ou=Users,dc=example,dc=org > > # admin, admin, Groups, example.org > dn: cn=admin,cn=admin,ou=Groups,dc=example,dc=org > cn: admin > description: Role associated with openstack users > objectClass: organizationalRole > roleOccupant: cn=admin,ou=Users,dc=example,dc=org > roleOccupant: cn=nova,ou=Users,dc=example,dc=org > roleOccupant: cn=glance,ou=Users,dc=example,dc=org > roleOccupant: cn=cinder,ou=Users,dc=example,dc=org > roleOccupant: cn=neutron,ou=Users,dc=example,dc=org > > # Member, Roles, example.org > dn: cn=Member,ou=Roles,dc=example,dc=org > cn: member > description: Role associated with openstack users > 
objectClass: organizationalRole > roleOccupant: cn=demo,ou=Users,dc=example,dc=org > > # admin, Roles, example.org > dn: cn=admin,ou=Roles,dc=example,dc=org > cn: admin > description: Role associated with openstack users > objectClass: organizationalRole > roleOccupant: cn=admin,ou=Users,dc=example,dc=org > roleOccupant: cn=nova,ou=Users,dc=example,dc=org > roleOccupant: cn=glance,ou=Users,dc=example,dc=org > roleOccupant: cn=cinder,ou=Users,dc=example,dc=org > roleOccupant: cn=neutron,ou=Users,dc=example,dc=org > > > On Wed, Sep 10, 2014 at 11:56 AM, Rasanjaya Subasinghe < > rasaposha at gmail.com> wrote: > >> >> Hi, >> Sorry for the inconvenience, sir. I have attached the >> keystone.conf, neutron.conf and LDAP LDIF file. >> It's a CentOS 6.5 in-house cloud with one control node and 3 compute nodes, and >> without the LDAP keystone setting ( >> driver=keystone.identity.backends.ldap.Identity) everything works >> fine: >> 1. Instances spawn perfectly, >> 2. live migration works perfectly. >> Then configuring keystone with the LDAP driver gives that error in >> neutron's server.log. >> 3. This setup was tested without ml2, and even the ml2 test ended >> with the same issue. >> I have attached the LDAP file and neutron file. >> *keystone version 0.9.0 >> >> >> >> >> >> Below is the neutron error shown in compute.log. >> >> On Wed, Sep 10, 2014 at 11:52 AM, Rasanjaya Subasinghe < >> rasaposha at gmail.com> wrote: >> >>> >>> On Sep 9, 2014, at 8:09 PM, Rasanjaya Subasinghe >>> wrote: >>> >>> >>> Hi Kashyap, >>> It's a CentOS 6.5 in-house cloud with one control node and 3 compute nodes, and >>> without the LDAP keystone setting ( >>> driver=keystone.identity.backends.ldap.Identity) everything works >>> fine: >>> 1. Instances spawn perfectly, >>> 2. live migration works perfectly. >>> Then configuring keystone with the LDAP driver gives that error in >>> neutron's server.log. >>> 3. This setup was tested without ml2, and even the ml2 test ended >>> with the same issue.
>>> I have attached the LDAP file and neutron file. >>> *keystone version 0.9.0 >>> >>> >>> >>> Below is the neutron error shown in compute.log. >>> >>> >>> >>> cheers, >>> thanks >>> Begin forwarded message: >>> >>> *From: *Kashyap Chamarthy >>> *Subject: **Re: [Rdo-list] icehouse ldap integration* >>> *Date: *September 9, 2014 at 7:27:59 PM GMT+5:30 >>> *To: *Rasanjaya Subasinghe >>> *Cc: *rdo-list at redhat.com >>> >>> On Tue, Sep 09, 2014 at 06:19:56PM +0530, Rasanjaya Subasinghe wrote: >>> >>> >>> Hi, >>> I tried to configure OpenStack Icehouse with LDAP and everything goes >>> well except a neutron issue; this is the issue that appears in the >>> server.log file of the neutron service. >>> >>> Can you guide me for this matter? Thanks for the help. >>> >>> >>> The information you've provided is not sufficient to give any >>> meaningful response. >>> >>> At a minimum, if anyone is to help you diagnose your issue, you need >>> to provide: >>> >>> - Describe in more detail what you mean by "configure >>> OpenStack Icehouse with LDAP". >>> - What is the test you're trying to perform? An exact reproducer would >>> be very useful. >>> - What is the exact error message you see? Contextual logs/errors from >>> Keystone/Nova. >>> - Exact versions of Keystone, and other relevant packages. >>> - What OS? Fedora? CentOS? Something else? >>> - Probably, provide config files for /etc/keystone/keystone.conf and >>> relevant Neutron config files (preferably uploaded somewhere in >>> *plain text*). >>> >>> >>> -- >>> /kashyap >>> >>> >>> >>> >> >> >> -- >> Rasanjaya Subasinghe >> > > > > -- > Rasanjaya Subasinghe > -- Rasanjaya Subasinghe -------------- next part -------------- An HTML attachment was scrubbed... 
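[Editorial note] The keystone.conf LDAP settings quoted in this thread map user names onto the LDIF tree shown above. The sketch below is *not* Keystone's actual LDAP identity driver; it is a minimal, self-contained illustration of how `user_id_attribute`, `user_tree_dn`, and `user_enabled_emulation_dn` combine, with the `cn=enabled_users` member list hand-copied from the LDIF:

```python
# Illustrative sketch only -- not Keystone's real LDAP identity backend.
# Shows how the quoted keystone.conf settings resolve a user name to a DN,
# and how user_enabled_emulation = True treats membership in the
# enabled_users group as the user's "enabled" flag.

USER_TREE_DN = "ou=Users,dc=example,dc=org"      # user_tree_dn
USER_ID_ATTRIBUTE = "cn"                         # user_id_attribute
ENABLED_EMULATION_DN = "cn=enabled_users,ou=Users,dc=example,dc=org"

# Hand-copied from the LDIF entry cn=enabled_users,ou=Users,dc=example,dc=org
ENABLED_USERS_MEMBERS = [
    "cn=admin,ou=Users,dc=example,dc=org",
    "cn=demo,ou=Users,dc=example,dc=org",
    "cn=nova,ou=Users,dc=example,dc=org",
    "cn=glance,ou=Users,dc=example,dc=org",
    "cn=cinder,ou=Users,dc=example,dc=org",
    "cn=neutron,ou=Users,dc=example,dc=org",
]


def user_dn(name: str) -> str:
    """Build the DN the driver would look up: <id_attr>=<name>,<user_tree_dn>."""
    return f"{USER_ID_ATTRIBUTE}={name},{USER_TREE_DN}"


def is_enabled(name: str) -> bool:
    """With user_enabled_emulation, a user is enabled iff their DN is a
    'member' value of the user_enabled_emulation_dn group entry."""
    return user_dn(name) in ENABLED_USERS_MEMBERS


print(user_dn("neutron"))     # cn=neutron,ou=Users,dc=example,dc=org
print(is_enabled("neutron"))  # True
```

A quick sanity check along these lines (does each service user's DN, built from `user_id_attribute` and `user_tree_dn`, exactly match a `member:` value of the enabled-users group?) can rule out the case where a service account such as `neutron` is treated as disabled, since a disabled service user would break `keystone_authtoken` authentication for that service.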
URL: From rasaposha at gmail.com Wed Sep 10 07:53:25 2014 From: rasaposha at gmail.com (Rasanjaya Subasinghe) Date: Wed, 10 Sep 2014 13:23:25 +0530 Subject: [Rdo-list] icehouse ldap integration In-Reply-To: References: <20140909135759.GM14391@tesla.pnq.redhat.com> Message-ID: Hi, Any luck sir.. cheers On Sep 10, 2014, at 12:03 PM, Rasanjaya Subasinghe wrote: > Hi sir, > > I will provide more details for reproduce the issue. > > cheers > > On Wed, Sep 10, 2014 at 12:02 PM, Rasanjaya Subasinghe wrote: > Hi Kashyap, > > this is the configuration i have made for integrate with LDAP, > > 1. keystone.conf > > url = ldap://192.168.16.100 > user = cn=admin,dc=example,dc=org > password = 123 > suffix = dc=example,dc=org > > user_tree_dn = ou=Users,dc=example,dc=org > user_objectclass = inetOrgPerson > user_id_attribute = cn > user_name_attribute = cn > user_pass_attribute = userPassword > user_enabled_emulation = True > user_enabled_emulation_dn = cn=enabled_users,ou=Users,dc=example,dc=org > user_allow_create = False > user_allow_update = False > user_allow_delete = False > > tenant_tree_dn = ou=Groups,dc=example,dc=org > tenant_objectclass = groupOfNames > tenant_id_attribute = cn > #tenant_domain_id_attribute = businessCategory > #tenant_domain_id_attribute = cn > tenant_member_attribute = member > tenant_name_attribute = cn > tenant_domain_id_attribute = None > tenant_allow_create = False > tenant_allow_update = False > tenant_allow_delete = False > > > role_tree_dn = ou=Roles,dc=example,dc=org > role_objectclass = organizationalRole > role_member_attribute = roleOccupant > role_id_attribute = cn > role_name_attribute = cn > role_allow_create = False > role_allow_update = False > role_allow_delete = False > > 2.neutron.conf > > [DEFAULT] > # Print more verbose output (set logging level to INFO instead of default WARNING level). > # verbose = True > verbose = True > > # Print debugging output (set logging level to DEBUG instead of default WARNING level). 
> # debug = False > debug = True > > # Where to store Neutron state files. This directory must be writable by the > # user executing the agent. > # state_path = /var/lib/neutron > > # Where to store lock files > # lock_path = $state_path/lock > > # log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s > # log_date_format = %Y-%m-%d %H:%M:%S > > # use_syslog -> syslog > # log_file and log_dir -> log_dir/log_file > # (not log_file) and log_dir -> log_dir/{binary_name}.log > # use_stderr -> stderr > # (not use_stderr) and (not log_file) -> stdout > # publish_errors -> notification system > > # use_syslog = False > use_syslog = False > # syslog_log_facility = LOG_USER > > # use_stderr = False > # log_file = > # log_dir = > log_dir =/var/log/neutron > > # publish_errors = False > > # Address to bind the API server to > # bind_host = 0.0.0.0 > bind_host = 0.0.0.0 > > # Port to bind the API server to > # bind_port = 9696 > bind_port = 9696 > > # Path to the extensions. Note that this can be a colon-separated list of > # paths. For example: > # api_extensions_path = extensions:/path/to/more/extensions:/even/more/extensions > # The __path__ of neutron.extensions is appended to this, so if your > # extensions are in there you don't need to specify them here > # api_extensions_path = > > # (StrOpt) Neutron core plugin entrypoint to be loaded from the > # neutron.core_plugins namespace. See setup.cfg for the entrypoint names of the > # plugins included in the neutron source distribution. For compatibility with > # previous versions, the class name of a plugin can be specified instead of its > # entrypoint name. > # > # core_plugin = > core_plugin =neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 > # Example: core_plugin = ml2 > > # (ListOpt) List of service plugin entrypoints to be loaded from the > # neutron.service_plugins namespace. See setup.cfg for the entrypoint names of > # the plugins included in the neutron source distribution. 
For compatibility > # with previous versions, the class name of a plugin can be specified instead > # of its entrypoint name. > # > # service_plugins = > service_plugins =neutron.services.firewall.fwaas_plugin.FirewallPlugin > # Example: service_plugins = router,firewall,lbaas,vpnaas,metering > > # Paste configuration file > # api_paste_config = /usr/share/neutron/api-paste.ini > > # The strategy to be used for auth. > # Supported values are 'keystone'(default), 'noauth'. > # auth_strategy = noauth > auth_strategy = keystone > > # Base MAC address. The first 3 octets will remain unchanged. If the > # 4th octet is not 00, it will also be used. The others will be > # randomly generated. > # 3 octet > # base_mac = fa:16:3e:00:00:00 > base_mac = fa:16:3e:00:00:00 > # 4 octet > # base_mac = fa:16:3e:4f:00:00 > > # Maximum amount of retries to generate a unique MAC address > # mac_generation_retries = 16 > mac_generation_retries = 16 > > # DHCP Lease duration (in seconds) > # dhcp_lease_duration = 86400 > dhcp_lease_duration = 86400 > > # Allow sending resource operation notification to DHCP agent > # dhcp_agent_notification = True > > # Enable or disable bulk create/update/delete operations > # allow_bulk = True > allow_bulk = True > # Enable or disable pagination > # allow_pagination = False > allow_pagination = False > # Enable or disable sorting > # allow_sorting = False > allow_sorting = False > # Enable or disable overlapping IPs for subnets > # Attention: the following parameter MUST be set to False if Neutron is > # being used in conjunction with nova security groups > # allow_overlapping_ips = True > allow_overlapping_ips = True > # Ensure that configured gateway is on subnet > # force_gateway_on_subnet = False > > > # RPC configuration options. Defined in rpc __init__ > # The messaging module to use, defaults to kombu. 
> # rpc_backend = neutron.openstack.common.rpc.impl_kombu > rpc_backend = neutron.openstack.common.rpc.impl_kombu > # Size of RPC thread pool > # rpc_thread_pool_size = 64 > # Size of RPC connection pool > # rpc_conn_pool_size = 30 > # Seconds to wait for a response from call or multicall > # rpc_response_timeout = 60 > # Seconds to wait before a cast expires (TTL). Only supported by impl_zmq. > # rpc_cast_timeout = 30 > # Modules of exceptions that are permitted to be recreated > # upon receiving exception data from an rpc call. > # allowed_rpc_exception_modules = neutron.openstack.common.exception, nova.exception > # AMQP exchange to connect to if using RabbitMQ or QPID > # control_exchange = neutron > control_exchange = neutron > > # If passed, use a fake RabbitMQ provider > # fake_rabbit = False > > # Configuration options if sending notifications via kombu rpc (these are > # the defaults) > # SSL version to use (valid only if SSL enabled) > # kombu_ssl_version = > # SSL key file (valid only if SSL enabled) > # kombu_ssl_keyfile = > # SSL cert file (valid only if SSL enabled) > # kombu_ssl_certfile = > # SSL certification authority file (valid only if SSL enabled) > # kombu_ssl_ca_certs = > # IP address of the RabbitMQ installation > # rabbit_host = localhost > rabbit_host = 192.168.32.20 > # Password of the RabbitMQ server > # rabbit_password = guest > rabbit_password = guest > # Port where RabbitMQ server is running/listening > # rabbit_port = 5672 > rabbit_port = 5672 > # RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, host2:5672) > # rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port' > # rabbit_hosts = localhost:5672 > rabbit_hosts = 192.168.32.20:5672 > # User ID used for RabbitMQ connections > # rabbit_userid = guest > rabbit_userid = guest > # Location of a virtual RabbitMQ installation. 
> # rabbit_virtual_host = / > rabbit_virtual_host = / > # Maximum retries with trying to connect to RabbitMQ > # (the default of 0 implies an infinite retry count) > # rabbit_max_retries = 0 > # RabbitMQ connection retry interval > # rabbit_retry_interval = 1 > # Use HA queues in RabbitMQ (x-ha-policy: all). You need to > # wipe RabbitMQ database when changing this option. (boolean value) > # rabbit_ha_queues = false > rabbit_ha_queues = False > > # QPID > # rpc_backend=neutron.openstack.common.rpc.impl_qpid > # Qpid broker hostname > # qpid_hostname = localhost > # Qpid broker port > # qpid_port = 5672 > # Qpid single or HA cluster (host:port pairs i.e: host1:5672, host2:5672) > # qpid_hosts is defaulted to '$qpid_hostname:$qpid_port' > # qpid_hosts = localhost:5672 > # Username for qpid connection > # qpid_username = '' > # Password for qpid connection > # qpid_password = '' > # Space separated list of SASL mechanisms to use for auth > # qpid_sasl_mechanisms = '' > # Seconds between connection keepalive heartbeats > # qpid_heartbeat = 60 > # Transport to use, either 'tcp' or 'ssl' > # qpid_protocol = tcp > # Disable Nagle algorithm > # qpid_tcp_nodelay = True > > # ZMQ > # rpc_backend=neutron.openstack.common.rpc.impl_zmq > # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. > # The "host" option should point or resolve to this address. > # rpc_zmq_bind_address = * > > # ============ Notification System Options ===================== > > # Notifications can be sent when network/subnet/port are created, updated or deleted. 
> # There are three methods of sending notifications: logging (via the > # log_file directive), rpc (via a message queue) and > # noop (no notifications sent, the default) > > # Notification_driver can be defined multiple times > # Do nothing driver > # notification_driver = neutron.openstack.common.notifier.no_op_notifier > # Logging driver > # notification_driver = neutron.openstack.common.notifier.log_notifier > # RPC driver. > # notification_driver = neutron.openstack.common.notifier.rpc_notifier > > # default_notification_level is used to form actual topic name(s) or to set logging level > # default_notification_level = INFO > > # default_publisher_id is a part of the notification payload > # host = myhost.com > # default_publisher_id = $host > > # Defined in rpc_notifier, can be comma separated values. > # The actual topic names will be %s.%(default_notification_level)s > # notification_topics = notifications > > # Default maximum number of items returned in a single response, > # value == infinite and value < 0 means no max limit, and value must > # be greater than 0. If the number of items requested is greater than > # pagination_max_limit, server will just return pagination_max_limit > # of number of items. 
> # pagination_max_limit = -1 > > # Maximum number of DNS nameservers per subnet > # max_dns_nameservers = 5 > > # Maximum number of host routes per subnet > # max_subnet_host_routes = 20 > > # Maximum number of fixed ips per port > # max_fixed_ips_per_port = 5 > > # =========== items for agent management extension ============= > # Seconds to regard the agent as down; should be at least twice > # report_interval, to be sure the agent is down for good > # agent_down_time = 75 > agent_down_time = 75 > # =========== end of items for agent management extension ===== > > # =========== items for agent scheduler extension ============= > # Driver to use for scheduling network to DHCP agent > # network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler > # Driver to use for scheduling router to a default L3 agent > # router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler > router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler > # Driver to use for scheduling a loadbalancer pool to an lbaas agent > # loadbalancer_pool_scheduler_driver = neutron.services.loadbalancer.agent_scheduler.ChanceScheduler > > # Allow auto scheduling networks to DHCP agent. It will schedule non-hosted > # networks to first DHCP agent which sends get_active_networks message to > # neutron server > # network_auto_schedule = True > > # Allow auto scheduling routers to L3 agent. It will schedule non-hosted > # routers to first L3 agent which sends sync_routers message to neutron server > # router_auto_schedule = True > > # Number of DHCP agents scheduled to host a network. This enables redundant > # DHCP agents for configured networks. > # dhcp_agents_per_network = 1 > dhcp_agents_per_network = 1 > > # =========== end of items for agent scheduler extension ===== > > # =========== WSGI parameters related to the API server ============== > # Number of separate worker processes to spawn. 
The default, 0, runs the > # worker thread in the current process. Greater than 0 launches that number of > # child processes as workers. The parent process manages them. > # api_workers = 0 > api_workers = 0 > > # Number of separate RPC worker processes to spawn. The default, 0, runs the > # worker thread in the current process. Greater than 0 launches that number of > # child processes as RPC workers. The parent process manages them. > # This feature is experimental until issues are addressed and testing has been > # enabled for various plugins for compatibility. > # rpc_workers = 0 > > # Sets the value of TCP_KEEPIDLE in seconds to use for each server socket when > # starting API server. Not supported on OS X. > # tcp_keepidle = 600 > > # Number of seconds to keep retrying to listen > # retry_until_window = 30 > > # Number of backlog requests to configure the socket with. > # backlog = 4096 > > # Max header line to accommodate large tokens > # max_header_line = 16384 > > # Enable SSL on the API server > # use_ssl = False > use_ssl = False > > # Certificate file to use when starting API server securely > # ssl_cert_file = /path/to/certfile > > # Private key file to use when starting API server securely > # ssl_key_file = /path/to/keyfile > > # CA certificate file to use when starting API server securely to > # verify connecting clients. This is an optional parameter only required if > # API clients need to authenticate to the API server using SSL certificates > # signed by a trusted CA > # ssl_ca_file = /path/to/cafile > # ======== end of WSGI parameters related to the API server ========== > > > # ======== neutron nova interactions ========== > # Send notification to nova when port status is active. > # notify_nova_on_port_status_changes = False > notify_nova_on_port_status_changes = True > > # Send notifications to nova when port data (fixed_ips/floatingips) change > # so nova can update its cache. 
> # notify_nova_on_port_data_changes = False > notify_nova_on_port_data_changes = True > > # URL for connection to nova (Only supports one nova region currently). > # nova_url = http://127.0.0.1:8774/v2 > nova_url = http://192.168.32.20:8774/v2 > > # Name of nova region to use. Useful if keystone manages more than one region > # nova_region_name = > nova_region_name =RegionOne > > # Username for connection to nova in admin context > # nova_admin_username = > nova_admin_username =nova > > # The uuid of the admin nova tenant > # nova_admin_tenant_id = > nova_admin_tenant_id =d3e2355e31b449cca9dd57fa5073ec2f > > # Password for connection to nova in admin context. > # nova_admin_password = > nova_admin_password =secret > > # Authorization URL for connection to nova in admin context. > # nova_admin_auth_url = > nova_admin_auth_url =http://192.168.32.20:35357/v2.0 > > # Number of seconds between sending events to nova if there are any events to send > # send_events_interval = 2 > send_events_interval = 2 > > # ======== end of neutron nova interactions ========== > rabbit_use_ssl=False > > [quotas] > # Default driver to use for quota checks > # quota_driver = neutron.db.quota_db.DbQuotaDriver > > # Resource name(s) that are supported in quota features > # quota_items = network,subnet,port > > # Default number of resource allowed per tenant. A negative value means > # unlimited. > # default_quota = -1 > > # Number of networks allowed per tenant. A negative value means unlimited. > # quota_network = 10 > > # Number of subnets allowed per tenant. A negative value means unlimited. > # quota_subnet = 10 > > # Number of ports allowed per tenant. A negative value means unlimited. > # quota_port = 50 > > # Number of security groups allowed per tenant. A negative value means > # unlimited. > # quota_security_group = 10 > > # Number of security group rules allowed per tenant. A negative value means > # unlimited. 
> # quota_security_group_rule = 100 > > # Number of vips allowed per tenant. A negative value means unlimited. > # quota_vip = 10 > > # Number of pools allowed per tenant. A negative value means unlimited. > # quota_pool = 10 > > # Number of pool members allowed per tenant. A negative value means unlimited. > # The default is unlimited because a member is not a real resource consumer > # on OpenStack. However, on back-end, a member is a resource consumer > # and that is the reason why quota is possible. > # quota_member = -1 > > # Number of health monitors allowed per tenant. A negative value means > # unlimited. > # The default is unlimited because a health monitor is not a real resource > # consumer on OpenStack. However, on back-end, a member is a resource consumer > # and that is the reason why quota is possible. > # quota_health_monitors = -1 > > # Number of routers allowed per tenant. A negative value means unlimited. > # quota_router = 10 > > # Number of floating IPs allowed per tenant. A negative value means unlimited. > # quota_floatingip = 50 > > [agent] > # Use "sudo neutron-rootwrap /etc/neutron/rootwrap.conf" to use the real > # root filter facility. 
> # Change to "sudo" to skip the filtering and just run the command directly > # root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf > root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf > > # =========== items for agent management extension ============= > # seconds between nodes reporting state to server; should be less than > # agent_down_time, best if it is half or less than agent_down_time > # report_interval = 30 > report_interval = 30 > > # =========== end of items for agent management extension ===== > > [keystone_authtoken] > # auth_host = 127.0.0.1 > auth_host = 192.168.32.20 > # auth_port = 35357 > auth_port = 35357 > # auth_protocol = http > auth_protocol = http > # admin_tenant_name = %SERVICE_TENANT_NAME% > admin_tenant_name = services > # admin_user = %SERVICE_USER% > admin_user = neutron > # admin_password = %SERVICE_PASSWORD% > admin_password = secret > auth_uri=http://192.168.32.20:5000/ > > [database] > # This line MUST be changed to actually run the plugin. > # Example: > # connection = mysql://root:pass at 127.0.0.1:3306/neutron > connection = mysql://neutron:secret at 192.168.32.20/ovs_neutron > # Replace 127.0.0.1 above with the IP address of the database used by the > # main neutron server. (Leave it as is if the database runs on this host.) 
> # connection = sqlite:// > > # The SQLAlchemy connection string used to connect to the slave database > # slave_connection = > > # Database reconnection retry times - in event connectivity is lost > # set to -1 implies an infinite retry count > # max_retries = 10 > max_retries = 10 > > # Database reconnection interval in seconds - if the initial connection to the > # database fails > # retry_interval = 10 > retry_interval = 10 > > # Minimum number of SQL connections to keep open in a pool > # min_pool_size = 1 > > # Maximum number of SQL connections to keep open in a pool > # max_pool_size = 10 > > # Timeout in seconds before idle sql connections are reaped > # idle_timeout = 3600 > idle_timeout = 3600 > > # If set, use this value for max_overflow with sqlalchemy > # max_overflow = 20 > > # Verbosity of SQL debugging information. 0=None, 100=Everything > # connection_debug = 0 > > # Add python stack traces to SQL as comment strings > # connection_trace = False > > # If set, use this value for pool_timeout with sqlalchemy > # pool_timeout = 10 > > [service_providers] > # Specify service providers (drivers) for advanced services like loadbalancer, VPN, Firewall. > # Must be in form: > # service_provider=<service_type>:<name>:<driver>[:default] > # List of allowed service types includes LOADBALANCER, FIREWALL, VPN > # Combination of <service type> and <name> must be unique; <driver> must also be unique > # This is a multiline option, example for default provider: > # service_provider=LOADBALANCER:name:lbaas_plugin_driver_path:default > # example of non-default provider: > # service_provider=FIREWALL:name2:firewall_driver_path > # --- Reference implementations --- > # service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default > service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default > # In order to activate Radware's lbaas driver you need to uncomment the next line. 
> # If you want to keep the HA Proxy as the default lbaas driver, remove the attribute default from the line below. > # Otherwise comment the HA Proxy line > # service_provider = LOADBALANCER:Radware:neutron.services.loadbalancer.drivers.radware.driver.LoadBalancerDriver:default > # uncomment the following line to make the 'netscaler' LBaaS provider available. > # service_provider=LOADBALANCER:NetScaler:neutron.services.loadbalancer.drivers.netscaler.netscaler_driver.NetScalerPluginDriver > # Uncomment the following line (and comment out the OpenSwan VPN line) to enable Cisco's VPN driver. > # service_provider=VPN:cisco:neutron.services.vpn.service_drivers.cisco_ipsec.CiscoCsrIPsecVPNDriver:default > # Uncomment the line below to use Embrane heleos as Load Balancer service provider. > # service_provider=LOADBALANCER:Embrane:neutron.services.loadbalancer.drivers.embrane.driver.EmbraneLbaas:default > > 3. LDIF file for OpenLDAP > # extended LDIF > # > # LDAPv3 > # base <dc=example,dc=org> with scope subtree > # filter: (objectclass=*) > # requesting: ALL > # > > # example.org > dn: dc=example,dc=org > objectClass: top > objectClass: dcObject > objectClass: organization > o: example Inc > dc: example > > # Groups, example.org > dn: ou=Groups,dc=example,dc=org > ou: Groups > objectClass: organizationalUnit > > # Users, example.org > dn: ou=Users,dc=example,dc=org > ou: users > objectClass: organizationalUnit > > # Roles, example.org > dn: ou=Roles,dc=example,dc=org > ou: roles > objectClass: organizationalUnit > > # admin, Users, example.org > dn: cn=admin,ou=Users,dc=example,dc=org > cn: admin > objectClass: inetOrgPerson > objectClass: top > sn: admin > uid: admin > userPassword: secret > > # demo, Users, example.org > dn: cn=demo,ou=Users,dc=example,dc=org > cn: demo > objectClass: inetOrgPerson > objectClass: top > sn: demo > uid: demo > userPassword: demo > > # cinder, Users, example.org > dn: cn=cinder,ou=Users,dc=example,dc=org > cn: cinder > objectClass: inetOrgPerson >
objectClass: top > sn: cinder > uid: cinder > userPassword: secret > > # glance, Users, example.org > dn: cn=glance,ou=Users,dc=example,dc=org > cn: glance > objectClass: inetOrgPerson > objectClass: top > sn: glance > uid: glance > userPassword: secret > > # nova, Users, example.org > dn: cn=nova,ou=Users,dc=example,dc=org > cn: nova > objectClass: inetOrgPerson > objectClass: top > sn: nova > uid: nova > userPassword: secret > > # neutron, Users, example.org > dn: cn=neutron,ou=Users,dc=example,dc=org > cn: neutron > objectClass: inetOrgPerson > objectClass: top > sn: neutron > uid: neutron > userPassword: secret > > # enabled_users, Users, example.org > dn: cn=enabled_users,ou=Users,dc=example,dc=org > cn: enabled_users > member: cn=admin,ou=Users,dc=example,dc=org > member: cn=demo,ou=Users,dc=example,dc=org > member: cn=nova,ou=Users,dc=example,dc=org > member: cn=glance,ou=Users,dc=example,dc=org > member: cn=cinder,ou=Users,dc=example,dc=org > member: cn=neutron,ou=Users,dc=example,dc=org > objectClass: groupOfNames > > # demo, Groups, example.org > dn: cn=demo,ou=Groups,dc=example,dc=org > cn: demo > objectClass: groupOfNames > member: cn=admin,ou=Users,dc=example,dc=org > member: cn=demo,ou=Users,dc=example,dc=org > member: cn=nova,ou=Users,dc=example,dc=org > member: cn=glance,ou=Users,dc=example,dc=org > member: cn=cinder,ou=Users,dc=example,dc=org > member: cn=neutron,ou=Users,dc=example,dc=org > > > # Member, demo, Groups, example.org > dn: cn=Member,cn=demo,ou=Groups,dc=example,dc=org > cn: member > description: Role associated with openstack users > objectClass: organizationalRole > roleOccupant: cn=demo,ou=Users,dc=example,dc=org > > # admin, demo, Groups, example.org > dn: cn=admin,cn=demo,ou=Groups,dc=example,dc=org > cn: admin > description: Role associated with openstack users > objectClass: organizationalRole > roleOccupant: cn=admin,ou=Users,dc=example,dc=org > roleOccupant: cn=nova,ou=Users,dc=example,dc=org > roleOccupant: 
cn=glance,ou=Users,dc=example,dc=org > roleOccupant: cn=cinder,ou=Users,dc=example,dc=org > roleOccupant: cn=neutron,ou=Users,dc=example,dc=org > > > # services, Groups, example.org > dn: cn=services,ou=Groups,dc=example,dc=org > cn: services > objectClass: groupOfNames > member: cn=admin,ou=Users,dc=example,dc=org > member: cn=demo,ou=Users,dc=example,dc=org > member: cn=nova,ou=Users,dc=example,dc=org > member: cn=glance,ou=Users,dc=example,dc=org > member: cn=cinder,ou=Users,dc=example,dc=org > member: cn=neutron,ou=Users,dc=example,dc=org > > # admin, services, Groups, example.org > dn: cn=admin,cn=services,ou=Groups,dc=example,dc=org > cn: admin > description: Role associated with openstack users > objectClass: organizationalRole > roleOccupant: cn=admin,ou=Users,dc=example,dc=org > roleOccupant: cn=nova,ou=Users,dc=example,dc=org > roleOccupant: cn=glance,ou=Users,dc=example,dc=org > roleOccupant: cn=cinder,ou=Users,dc=example,dc=org > roleOccupant: cn=neutron,ou=Users,dc=example,dc=org > > # admin, Groups, example.org > dn: cn=admin,ou=Groups,dc=example,dc=org > cn: admin > objectClass: groupOfNames > member: cn=admin,ou=Users,dc=example,dc=org > member: cn=demo,ou=Users,dc=example,dc=org > member: cn=nova,ou=Users,dc=example,dc=org > member: cn=glance,ou=Users,dc=example,dc=org > member: cn=cinder,ou=Users,dc=example,dc=org > member: cn=neutron,ou=Users,dc=example,dc=org > > # admin, admin, Groups, example.org > dn: cn=admin,cn=admin,ou=Groups,dc=example,dc=org > cn: admin > description: Role associated with openstack users > objectClass: organizationalRole > roleOccupant: cn=admin,ou=Users,dc=example,dc=org > roleOccupant: cn=nova,ou=Users,dc=example,dc=org > roleOccupant: cn=glance,ou=Users,dc=example,dc=org > roleOccupant: cn=cinder,ou=Users,dc=example,dc=org > roleOccupant: cn=neutron,ou=Users,dc=example,dc=org > > # Member, Roles, example.org > dn: cn=Member,ou=Roles,dc=example,dc=org > cn: member > description: Role associated with openstack users > 
objectClass: organizationalRole > roleOccupant: cn=demo,ou=Users,dc=example,dc=org > > # admin, Roles, example.org > dn: cn=admin,ou=Roles,dc=example,dc=org > cn: admin > description: Role associated with openstack users > objectClass: organizationalRole > roleOccupant: cn=admin,ou=Users,dc=example,dc=org > roleOccupant: cn=nova,ou=Users,dc=example,dc=org > roleOccupant: cn=glance,ou=Users,dc=example,dc=org > roleOccupant: cn=cinder,ou=Users,dc=example,dc=org > roleOccupant: cn=neutron,ou=Users,dc=example,dc=org > > > On Wed, Sep 10, 2014 at 11:56 AM, Rasanjaya Subasinghe wrote: >> > Hi, > Sorry for the inconvenience, sir. I have attached the keystone.conf, neutron.conf and LDAP ldif file. > It's a CentOS 6.5 in-house cloud with one control node and 3 compute nodes, and without the LDAP keystone setting (driver=keystone.identity.backends.ldap.Identity) everything works fine: > 1. Instances spawn perfectly, > 2. live migration works perfectly. > Then configuring keystone with the LDAP driver gives that error in the neutron server.log. > 3. This setup was tested without ml2, and even the ml2 test ended with the same issue. > I have attached the LDAP file and neutron file. > *keystone version 0.9.0 > > > > > > Below shows the neutron error seen in compute.log > > On Wed, Sep 10, 2014 at 11:52 AM, Rasanjaya Subasinghe wrote: > > On Sep 9, 2014, at 8:09 PM, Rasanjaya Subasinghe wrote: > >> >> Hi Kashyap, >> It's a CentOS 6.5 in-house cloud with one control node and 3 compute nodes, and without the LDAP keystone setting (driver=keystone.identity.backends.ldap.Identity) everything works fine: >> 1. Instances spawn perfectly, >> 2. live migration works perfectly. >> Then configuring keystone with the LDAP driver gives that error in the neutron server.log. >> 3. This setup was tested without ml2, and even the ml2 test ended with the same issue. >> I have attached the LDAP file and neutron file.
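[Editorial aside] As a cross-check of the LDIF in item 3 above, here is a minimal stand-alone sketch — not keystone code, with the DN strings copied from that LDIF — of how role assignment resolves under the keystone LDAP settings posted in this thread, where a tenant is a groupOfNames under ou=Groups and a role is an organizationalRole entry beneath it whose roleOccupant attribute lists user DNs:

```python
# Stand-alone sketch (not keystone code): how the entries in the LDIF above
# resolve under tenant_member_attribute = member and
# role_member_attribute = roleOccupant.

# member values of the "services" tenant (cn=services,ou=Groups,...):
services_members = {
    "cn=admin,ou=Users,dc=example,dc=org",
    "cn=demo,ou=Users,dc=example,dc=org",
    "cn=nova,ou=Users,dc=example,dc=org",
    "cn=glance,ou=Users,dc=example,dc=org",
    "cn=cinder,ou=Users,dc=example,dc=org",
    "cn=neutron,ou=Users,dc=example,dc=org",
}

# roleOccupant values of the "admin" role under that tenant
# (cn=admin,cn=services,ou=Groups,...) -- note demo is absent:
services_admin_occupants = {
    "cn=admin,ou=Users,dc=example,dc=org",
    "cn=nova,ou=Users,dc=example,dc=org",
    "cn=glance,ou=Users,dc=example,dc=org",
    "cn=cinder,ou=Users,dc=example,dc=org",
    "cn=neutron,ou=Users,dc=example,dc=org",
}

def has_role_on_tenant(user_cn):
    """True only if the user is both a tenant member and a role occupant."""
    dn = "cn=%s,ou=Users,dc=example,dc=org" % user_cn
    return dn in services_members and dn in services_admin_occupants

print(has_role_on_tenant("nova"))  # True: nova holds admin on services
print(has_role_on_tenant("demo"))  # False: member, but no role occupancy
```

Per this LDIF the nova user does occupy the admin role under the services tenant, which is worth confirming before looking for the 401's cause elsewhere (e.g. in the tenant id that neutron presents to nova).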
>> *keystone version 0.9.0 >> >> >> >> Below shows the neutron error seen in compute.log >> >> >> >> cheers, >> thanks >> Begin forwarded message: >> >>> From: Kashyap Chamarthy >>> Subject: Re: [Rdo-list] icehouse ldap integration >>> Date: September 9, 2014 at 7:27:59 PM GMT+5:30 >>> To: Rasanjaya Subasinghe >>> Cc: rdo-list at redhat.com >>> >>> On Tue, Sep 09, 2014 at 06:19:56PM +0530, Rasanjaya Subasinghe wrote: >>>> >>>> Hi, >>>> I tried to configure OpenStack Icehouse with LDAP and all things >>>> go well except the neutron issue; this is the issue which appears in the >>>> server.log file of the neutron service. >>>> >>>> Can you guide me on this matter? Thanks for the help. >>> >>> This information you've provided is not sufficient to give any >>> meaningful response. >>> >>> At a minimum, if anyone is to help you diagnose your issue, you need >>> to provide: >>> >>> - Describe in more detail what you mean by "configure >>> OpenStack Icehouse with LDAP". >>> - What is the test you're trying to perform? An exact reproducer would >>> be very useful. >>> - What is the exact error message you see? Contextual logs/errors from >>> Keystone/Nova. >>> - Exact versions of Keystone, and other relevant packages. >>> - What OS? Fedora? CentOS? Something else? >>> - Probably, provide config files for /etc/keystone/keystone.conf and >>> relevant Neutron config files (preferably uploaded somewhere in >>> *plain text*). >>> >>> >>> -- >>> /kashyap >> > > > > > -- > Rasanjaya Subasinghe > > > > -- > Rasanjaya Subasinghe > > > > -- > Rasanjaya Subasinghe From rasaposha at gmail.com Wed Sep 10 13:35:59 2014 From: rasaposha at gmail.com (Rasanjaya Subasinghe) Date: Wed, 10 Sep 2014 19:05:59 +0530 Subject: [Rdo-list] icehouse ldap integration In-Reply-To: References: <20140909135759.GM14391@tesla.pnq.redhat.com> Message-ID: Hi sir, this is the debug server.log and it appears when spawning an instance.
2014-09-10 18:52:03.656 16952 DEBUG urllib3.connectionpool [-] "POST /v2.0/tokens HTTP/1.1" 401 133 _make_request /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295 2014-09-10 18:52:03.657 16952 ERROR neutron.notifiers.nova [-] Failed to notify nova on events: [{'status': 'completed', 'tag': u'4fbbddbf-ff50-45c0-b5af-3f92e9b81f68', 'name': 'network-vif-plugged', 'server_uuid': u'0d5c1932-cc2b-42e3-95da-2fe04e33b570'}] 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova Traceback (most recent call last): 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/neutron/notifiers/nova.py", line 221, in send_events 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova batched_events) 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/novaclient/v1_1/contrib/server_external_events.py", line 39, in create 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova return_raw=True) 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/novaclient/base.py", line 152, in _create 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova _resp, body = self.api.client.post(url, body=body) 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 312, in post 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova return self._cs_request(url, 'POST', **kwargs) 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 275, in _cs_request 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova self.authenticate() 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 408, in authenticate 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova auth_url = self._v2_auth(auth_url) 2014-09-10 18:52:03.657 16952 TRACE 
neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 495, in _v2_auth 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova return self._authenticate(url, body) 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 508, in _authenticate 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova **kwargs) 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 268, in _time_request 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova resp, body = self.request(url, method, **kwargs) 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 262, in request 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova raise exceptions.from_response(resp, body, url, method) 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova Unauthorized: User nova is unauthorized for tenant d3e2355e31b449cca9dd57fa5073ec2f (HTTP 401) 2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova 2014-09-10 18:52:04.892 16952 DEBUG neutron.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-a460d750-d0b6-49d Rasanjaya Subasinghe Dev/Ops Engineer, WSO2 Inc. Mobile: +94772250358 E-Mail: rasanjaya at wso2.com On Sep 10, 2014, at 1:23 PM, Rasanjaya Subasinghe wrote: > Hi, > Any luck, sir? > > cheers > > On Sep 10, 2014, at 12:03 PM, Rasanjaya Subasinghe wrote: > >> Hi sir, >> >> I will provide more details to reproduce the issue. >> >> cheers >> >> On Wed, Sep 10, 2014 at 12:02 PM, Rasanjaya Subasinghe wrote: >> Hi Kashyap, >> >> this is the configuration I have made to integrate with LDAP: >> >> 1.
keystone.conf >> >> url = ldap://192.168.16.100 >> user = cn=admin,dc=example,dc=org >> password = 123 >> suffix = dc=example,dc=org >> >> user_tree_dn = ou=Users,dc=example,dc=org >> user_objectclass = inetOrgPerson >> user_id_attribute = cn >> user_name_attribute = cn >> user_pass_attribute = userPassword >> user_enabled_emulation = True >> user_enabled_emulation_dn = cn=enabled_users,ou=Users,dc=example,dc=org >> user_allow_create = False >> user_allow_update = False >> user_allow_delete = False >> >> tenant_tree_dn = ou=Groups,dc=example,dc=org >> tenant_objectclass = groupOfNames >> tenant_id_attribute = cn >> #tenant_domain_id_attribute = businessCategory >> #tenant_domain_id_attribute = cn >> tenant_member_attribute = member >> tenant_name_attribute = cn >> tenant_domain_id_attribute = None >> tenant_allow_create = False >> tenant_allow_update = False >> tenant_allow_delete = False >> >> >> role_tree_dn = ou=Roles,dc=example,dc=org >> role_objectclass = organizationalRole >> role_member_attribute = roleOccupant >> role_id_attribute = cn >> role_name_attribute = cn >> role_allow_create = False >> role_allow_update = False >> role_allow_delete = False >> >> 2.neutron.conf >> >> [DEFAULT] >> # Print more verbose output (set logging level to INFO instead of default WARNING level). >> # verbose = True >> verbose = True >> >> # Print debugging output (set logging level to DEBUG instead of default WARNING level). >> # debug = False >> debug = True >> >> # Where to store Neutron state files. This directory must be writable by the >> # user executing the agent. 
>> # state_path = /var/lib/neutron >> >> # Where to store lock files >> # lock_path = $state_path/lock >> >> # log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s >> # log_date_format = %Y-%m-%d %H:%M:%S >> >> # use_syslog -> syslog >> # log_file and log_dir -> log_dir/log_file >> # (not log_file) and log_dir -> log_dir/{binary_name}.log >> # use_stderr -> stderr >> # (not user_stderr) and (not log_file) -> stdout >> # publish_errors -> notification system >> >> # use_syslog = False >> use_syslog = False >> # syslog_log_facility = LOG_USER >> >> # use_stderr = False >> # log_file = >> # log_dir = >> log_dir =/var/log/neutron >> >> # publish_errors = False >> >> # Address to bind the API server to >> # bind_host = 0.0.0.0 >> bind_host = 0.0.0.0 >> >> # Port the bind the API server to >> # bind_port = 9696 >> bind_port = 9696 >> >> # Path to the extensions. Note that this can be a colon-separated list of >> # paths. For example: >> # api_extensions_path = extensions:/path/to/more/extensions:/even/more/extensions >> # The __path__ of neutron.extensions is appended to this, so if your >> # extensions are in there you don't need to specify them here >> # api_extensions_path = >> >> # (StrOpt) Neutron core plugin entrypoint to be loaded from the >> # neutron.core_plugins namespace. See setup.cfg for the entrypoint names of the >> # plugins included in the neutron source distribution. For compatibility with >> # previous versions, the class name of a plugin can be specified instead of its >> # entrypoint name. >> # >> # core_plugin = >> core_plugin =neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 >> # Example: core_plugin = ml2 >> >> # (ListOpt) List of service plugin entrypoints to be loaded from the >> # neutron.service_plugins namespace. See setup.cfg for the entrypoint names of >> # the plugins included in the neutron source distribution. 
For compatibility >> # with previous versions, the class name of a plugin can be specified instead >> # of its entrypoint name. >> # >> # service_plugins = >> service_plugins =neutron.services.firewall.fwaas_plugin.FirewallPlugin >> # Example: service_plugins = router,firewall,lbaas,vpnaas,metering >> >> # Paste configuration file >> # api_paste_config = /usr/share/neutron/api-paste.ini >> >> # The strategy to be used for auth. >> # Supported values are 'keystone'(default), 'noauth'. >> # auth_strategy = noauth >> auth_strategy = keystone >> >> # Base MAC address. The first 3 octets will remain unchanged. If the >> # 4h octet is not 00, it will also be used. The others will be >> # randomly generated. >> # 3 octet >> # base_mac = fa:16:3e:00:00:00 >> base_mac = fa:16:3e:00:00:00 >> # 4 octet >> # base_mac = fa:16:3e:4f:00:00 >> >> # Maximum amount of retries to generate a unique MAC address >> # mac_generation_retries = 16 >> mac_generation_retries = 16 >> >> # DHCP Lease duration (in seconds) >> # dhcp_lease_duration = 86400 >> dhcp_lease_duration = 86400 >> >> # Allow sending resource operation notification to DHCP agent >> # dhcp_agent_notification = True >> >> # Enable or disable bulk create/update/delete operations >> # allow_bulk = True >> allow_bulk = True >> # Enable or disable pagination >> # allow_pagination = False >> allow_pagination = False >> # Enable or disable sorting >> # allow_sorting = False >> allow_sorting = False >> # Enable or disable overlapping IPs for subnets >> # Attention: the following parameter MUST be set to False if Neutron is >> # being used in conjunction with nova security groups >> # allow_overlapping_ips = True >> allow_overlapping_ips = True >> # Ensure that configured gateway is on subnet >> # force_gateway_on_subnet = False >> >> >> # RPC configuration options. Defined in rpc __init__ >> # The messaging module to use, defaults to kombu. 
>> # rpc_backend = neutron.openstack.common.rpc.impl_kombu >> rpc_backend = neutron.openstack.common.rpc.impl_kombu >> # Size of RPC thread pool >> # rpc_thread_pool_size = 64 >> # Size of RPC connection pool >> # rpc_conn_pool_size = 30 >> # Seconds to wait for a response from call or multicall >> # rpc_response_timeout = 60 >> # Seconds to wait before a cast expires (TTL). Only supported by impl_zmq. >> # rpc_cast_timeout = 30 >> # Modules of exceptions that are permitted to be recreated >> # upon receiving exception data from an rpc call. >> # allowed_rpc_exception_modules = neutron.openstack.common.exception, nova.exception >> # AMQP exchange to connect to if using RabbitMQ or QPID >> # control_exchange = neutron >> control_exchange = neutron >> >> # If passed, use a fake RabbitMQ provider >> # fake_rabbit = False >> >> # Configuration options if sending notifications via kombu rpc (these are >> # the defaults) >> # SSL version to use (valid only if SSL enabled) >> # kombu_ssl_version = >> # SSL key file (valid only if SSL enabled) >> # kombu_ssl_keyfile = >> # SSL cert file (valid only if SSL enabled) >> # kombu_ssl_certfile = >> # SSL certification authority file (valid only if SSL enabled) >> # kombu_ssl_ca_certs = >> # IP address of the RabbitMQ installation >> # rabbit_host = localhost >> rabbit_host = 192.168.32.20 >> # Password of the RabbitMQ server >> # rabbit_password = guest >> rabbit_password = guest >> # Port where RabbitMQ server is running/listening >> # rabbit_port = 5672 >> rabbit_port = 5672 >> # RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, host2:5672) >> # rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port' >> # rabbit_hosts = localhost:5672 >> rabbit_hosts = 192.168.32.20:5672 >> # User ID used for RabbitMQ connections >> # rabbit_userid = guest >> rabbit_userid = guest >> # Location of a virtual RabbitMQ installation. 
>> # rabbit_virtual_host = / >> rabbit_virtual_host = / >> # Maximum retries with trying to connect to RabbitMQ >> # (the default of 0 implies an infinite retry count) >> # rabbit_max_retries = 0 >> # RabbitMQ connection retry interval >> # rabbit_retry_interval = 1 >> # Use HA queues in RabbitMQ (x-ha-policy: all). You need to >> # wipe RabbitMQ database when changing this option. (boolean value) >> # rabbit_ha_queues = false >> rabbit_ha_queues = False >> >> # QPID >> # rpc_backend=neutron.openstack.common.rpc.impl_qpid >> # Qpid broker hostname >> # qpid_hostname = localhost >> # Qpid broker port >> # qpid_port = 5672 >> # Qpid single or HA cluster (host:port pairs i.e: host1:5672, host2:5672) >> # qpid_hosts is defaulted to '$qpid_hostname:$qpid_port' >> # qpid_hosts = localhost:5672 >> # Username for qpid connection >> # qpid_username = '' >> # Password for qpid connection >> # qpid_password = '' >> # Space separated list of SASL mechanisms to use for auth >> # qpid_sasl_mechanisms = '' >> # Seconds between connection keepalive heartbeats >> # qpid_heartbeat = 60 >> # Transport to use, either 'tcp' or 'ssl' >> # qpid_protocol = tcp >> # Disable Nagle algorithm >> # qpid_tcp_nodelay = True >> >> # ZMQ >> # rpc_backend=neutron.openstack.common.rpc.impl_zmq >> # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. >> # The "host" option should point or resolve to this address. >> # rpc_zmq_bind_address = * >> >> # ============ Notification System Options ===================== >> >> # Notifications can be sent when network/subnet/port are created, updated or deleted. 
>> # There are three methods of sending notifications: logging (via the >> # log_file directive), rpc (via a message queue) and >> # noop (no notifications sent, the default) >> >> # Notification_driver can be defined multiple times >> # Do nothing driver >> # notification_driver = neutron.openstack.common.notifier.no_op_notifier >> # Logging driver >> # notification_driver = neutron.openstack.common.notifier.log_notifier >> # RPC driver. >> # notification_driver = neutron.openstack.common.notifier.rpc_notifier >> >> # default_notification_level is used to form actual topic name(s) or to set logging level >> # default_notification_level = INFO >> >> # default_publisher_id is a part of the notification payload >> # host = myhost.com >> # default_publisher_id = $host >> >> # Defined in rpc_notifier, can be comma separated values. >> # The actual topic names will be %s.%(default_notification_level)s >> # notification_topics = notifications >> >> # Default maximum number of items returned in a single response, >> # value == infinite and value < 0 means no max limit, and value must >> # be greater than 0. If the number of items requested is greater than >> # pagination_max_limit, server will just return pagination_max_limit >> # of number of items. 
>> # pagination_max_limit = -1 >> >> # Maximum number of DNS nameservers per subnet >> # max_dns_nameservers = 5 >> >> # Maximum number of host routes per subnet >> # max_subnet_host_routes = 20 >> >> # Maximum number of fixed ips per port >> # max_fixed_ips_per_port = 5 >> >> # =========== items for agent management extension ============= >> # Seconds to regard the agent as down; should be at least twice >> # report_interval, to be sure the agent is down for good >> # agent_down_time = 75 >> agent_down_time = 75 >> # =========== end of items for agent management extension ===== >> >> # =========== items for agent scheduler extension ============= >> # Driver to use for scheduling network to DHCP agent >> # network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler >> # Driver to use for scheduling router to a default L3 agent >> # router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler >> router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler >> # Driver to use for scheduling a loadbalancer pool to an lbaas agent >> # loadbalancer_pool_scheduler_driver = neutron.services.loadbalancer.agent_scheduler.ChanceScheduler >> >> # Allow auto scheduling networks to DHCP agent. It will schedule non-hosted >> # networks to first DHCP agent which sends get_active_networks message to >> # neutron server >> # network_auto_schedule = True >> >> # Allow auto scheduling routers to L3 agent. It will schedule non-hosted >> # routers to first L3 agent which sends sync_routers message to neutron server >> # router_auto_schedule = True >> >> # Number of DHCP agents scheduled to host a network. This enables redundant >> # DHCP agents for configured networks. >> # dhcp_agents_per_network = 1 >> dhcp_agents_per_network = 1 >> >> # =========== end of items for agent scheduler extension ===== >> >> # =========== WSGI parameters related to the API server ============== >> # Number of separate worker processes to spawn. 
The default, 0, runs the >> # worker thread in the current process. Greater than 0 launches that number of >> # child processes as workers. The parent process manages them. >> # api_workers = 0 >> api_workers = 0 >> >> # Number of separate RPC worker processes to spawn. The default, 0, runs the >> # worker thread in the current process. Greater than 0 launches that number of >> # child processes as RPC workers. The parent process manages them. >> # This feature is experimental until issues are addressed and testing has been >> # enabled for various plugins for compatibility. >> # rpc_workers = 0 >> >> # Sets the value of TCP_KEEPIDLE in seconds to use for each server socket when >> # starting API server. Not supported on OS X. >> # tcp_keepidle = 600 >> >> # Number of seconds to keep retrying to listen >> # retry_until_window = 30 >> >> # Number of backlog requests to configure the socket with. >> # backlog = 4096 >> >> # Max header line to accommodate large tokens >> # max_header_line = 16384 >> >> # Enable SSL on the API server >> # use_ssl = False >> use_ssl = False >> >> # Certificate file to use when starting API server securely >> # ssl_cert_file = /path/to/certfile >> >> # Private key file to use when starting API server securely >> # ssl_key_file = /path/to/keyfile >> >> # CA certificate file to use when starting API server securely to >> # verify connecting clients. This is an optional parameter only required if >> # API clients need to authenticate to the API server using SSL certificates >> # signed by a trusted CA >> # ssl_ca_file = /path/to/cafile >> # ======== end of WSGI parameters related to the API server ========== >> >> >> # ======== neutron nova interactions ========== >> # Send notification to nova when port status is active. >> # notify_nova_on_port_status_changes = False >> notify_nova_on_port_status_changes = True >> >> # Send notifications to nova when port data (fixed_ips/floatingips) change >> # so nova can update it's cache. 
>> # notify_nova_on_port_data_changes = False >> notify_nova_on_port_data_changes = True >> >> # URL for connection to nova (Only supports one nova region currently). >> # nova_url = http://127.0.0.1:8774/v2 >> nova_url = http://192.168.32.20:8774/v2 >> >> # Name of nova region to use. Useful if keystone manages more than one region >> # nova_region_name = >> nova_region_name =RegionOne >> >> # Username for connection to nova in admin context >> # nova_admin_username = >> nova_admin_username =nova >> >> # The uuid of the admin nova tenant >> # nova_admin_tenant_id = >> nova_admin_tenant_id =d3e2355e31b449cca9dd57fa5073ec2f >> >> # Password for connection to nova in admin context. >> # nova_admin_password = >> nova_admin_password =secret >> >> # Authorization URL for connection to nova in admin context. >> # nova_admin_auth_url = >> nova_admin_auth_url =http://192.168.32.20:35357/v2.0 >> >> # Number of seconds between sending events to nova if there are any events to send >> # send_events_interval = 2 >> send_events_interval = 2 >> >> # ======== end of neutron nova interactions ========== >> rabbit_use_ssl=False >> >> [quotas] >> # Default driver to use for quota checks >> # quota_driver = neutron.db.quota_db.DbQuotaDriver >> >> # Resource name(s) that are supported in quota features >> # quota_items = network,subnet,port >> >> # Default number of resource allowed per tenant. A negative value means >> # unlimited. >> # default_quota = -1 >> >> # Number of networks allowed per tenant. A negative value means unlimited. >> # quota_network = 10 >> >> # Number of subnets allowed per tenant. A negative value means unlimited. >> # quota_subnet = 10 >> >> # Number of ports allowed per tenant. A negative value means unlimited. >> # quota_port = 50 >> >> # Number of security groups allowed per tenant. A negative value means >> # unlimited. >> # quota_security_group = 10 >> >> # Number of security group rules allowed per tenant. A negative value means >> # unlimited. 
>> # quota_security_group_rule = 100 >> >> # Number of vips allowed per tenant. A negative value means unlimited. >> # quota_vip = 10 >> >> # Number of pools allowed per tenant. A negative value means unlimited. >> # quota_pool = 10 >> >> # Number of pool members allowed per tenant. A negative value means unlimited. >> # The default is unlimited because a member is not a real resource consumer >> # on Openstack. However, on back-end, a member is a resource consumer >> # and that is the reason why quota is possible. >> # quota_member = -1 >> >> # Number of health monitors allowed per tenant. A negative value means >> # unlimited. >> # The default is unlimited because a health monitor is not a real resource >> # consumer on Openstack. However, on back-end, a member is a resource consumer >> # and that is the reason why quota is possible. >> # quota_health_monitors = -1 >> >> # Number of routers allowed per tenant. A negative value means unlimited. >> # quota_router = 10 >> >> # Number of floating IPs allowed per tenant. A negative value means unlimited. >> # quota_floatingip = 50 >> >> [agent] >> # Use "sudo neutron-rootwrap /etc/neutron/rootwrap.conf" to use the real >> # root filter facility. 
>> # Change to "sudo" to skip the filtering and just run the comand directly >> # root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf >> root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf >> >> # =========== items for agent management extension ============= >> # seconds between nodes reporting state to server; should be less than >> # agent_down_time, best if it is half or less than agent_down_time >> # report_interval = 30 >> report_interval = 30 >> >> # =========== end of items for agent management extension ===== >> >> [keystone_authtoken] >> # auth_host = 127.0.0.1 >> auth_host = 192.168.32.20 >> # auth_port = 35357 >> auth_port = 35357 >> # auth_protocol = http >> auth_protocol = http >> # admin_tenant_name = %SERVICE_TENANT_NAME% >> admin_tenant_name = services >> # admin_user = %SERVICE_USER% >> admin_user = neutron >> # admin_password = %SERVICE_PASSWORD% >> admin_password = secret >> auth_uri=http://192.168.32.20:5000/ >> >> [database] >> # This line MUST be changed to actually run the plugin. >> # Example: >> # connection = mysql://root:pass@127.0.0.1:3306/neutron >> connection = mysql://neutron:secret@192.168.32.20/ovs_neutron >> # Replace 127.0.0.1 above with the IP address of the database used by the >> # main neutron server. (Leave it as is if the database runs on this host.)
>> # connection = sqlite://
>>
>> # The SQLAlchemy connection string used to connect to the slave database
>> # slave_connection =
>>
>> # Database reconnection retry times - in event connectivity is lost
>> # set to -1 implies an infinite retry count
>> # max_retries = 10
>> max_retries = 10
>>
>> # Database reconnection interval in seconds - if the initial connection to the
>> # database fails
>> # retry_interval = 10
>> retry_interval = 10
>>
>> # Minimum number of SQL connections to keep open in a pool
>> # min_pool_size = 1
>>
>> # Maximum number of SQL connections to keep open in a pool
>> # max_pool_size = 10
>>
>> # Timeout in seconds before idle SQL connections are reaped
>> # idle_timeout = 3600
>> idle_timeout = 3600
>>
>> # If set, use this value for max_overflow with sqlalchemy
>> # max_overflow = 20
>>
>> # Verbosity of SQL debugging information. 0=None, 100=Everything
>> # connection_debug = 0
>>
>> # Add python stack traces to SQL as comment strings
>> # connection_trace = False
>>
>> # If set, use this value for pool_timeout with sqlalchemy
>> # pool_timeout = 10
>>
>> [service_providers]
>> # Specify service providers (drivers) for advanced services like loadbalancer, VPN, Firewall.
>> # Must be in form:
>> # service_provider=<service_type>:<name>:<driver>[:default]
>> # List of allowed service types includes LOADBALANCER, FIREWALL, VPN
>> # Combination of <service type> and <name> must be unique; <driver> must also be unique
>> # This is a multiline option. Example for the default provider:
>> # service_provider=LOADBALANCER:name:lbaas_plugin_driver_path:default
>> # Example of a non-default provider:
>> # service_provider=FIREWALL:name2:firewall_driver_path
>> # --- Reference implementations ---
>> # service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
>> service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
>> # In order to activate Radware's LBaaS driver you need to uncomment the next line.
>> # If you want to keep HAProxy as the default LBaaS driver, remove the ":default" attribute from the line below;
>> # otherwise comment out the HAProxy line.
>> # service_provider = LOADBALANCER:Radware:neutron.services.loadbalancer.drivers.radware.driver.LoadBalancerDriver:default
>> # Uncomment the following line to make the 'netscaler' LBaaS provider available.
>> # service_provider=LOADBALANCER:NetScaler:neutron.services.loadbalancer.drivers.netscaler.netscaler_driver.NetScalerPluginDriver
>> # Uncomment the following line (and comment out the OpenSwan VPN line) to enable Cisco's VPN driver.
>> # service_provider=VPN:cisco:neutron.services.vpn.service_drivers.cisco_ipsec.CiscoCsrIPsecVPNDriver:default
>> # Uncomment the line below to use Embrane heleos as the load balancer service provider.
>> # service_provider=LOADBALANCER:Embrane:neutron.services.loadbalancer.drivers.embrane.driver.EmbraneLbaas:default
>>
>> 3. LDIF file for OpenLDAP
>>
>> # extended LDIF
>> #
>> # LDAPv3
>> # base <dc=example,dc=org> with scope subtree
>> # filter: (objectclass=*)
>> # requesting: ALL
>> #
>>
>> # example.org
>> dn: dc=example,dc=org
>> objectClass: top
>> objectClass: dcObject
>> objectClass: organization
>> o: example Inc
>> dc: example
>>
>> # Groups, example.org
>> dn: ou=Groups,dc=example,dc=org
>> ou: Groups
>> objectClass: organizationalUnit
>>
>> # Users, example.org
>> dn: ou=Users,dc=example,dc=org
>> ou: users
>> objectClass: organizationalUnit
>>
>> # Roles, example.org
>> dn: ou=Roles,dc=example,dc=org
>> ou: roles
>> objectClass: organizationalUnit
>>
>> # admin, Users, example.org
>> dn: cn=admin,ou=Users,dc=example,dc=org
>> cn: admin
>> objectClass: inetOrgPerson
>> objectClass: top
>> sn: admin
>> uid: admin
>> userPassword: secret
>>
>> # demo, Users, example.org
>> dn: cn=demo,ou=Users,dc=example,dc=org
>> cn: demo
>> objectClass: inetOrgPerson
>> objectClass: top
>> sn: demo
>> uid: demo
>> userPassword: demo
>>
>> # cinder, Users, example.org
>> dn: cn=cinder,ou=Users,dc=example,dc=org
>> cn: cinder
>> objectClass: inetOrgPerson
>> objectClass: top
>> sn: cinder
>> uid: cinder
>> userPassword: secret
>>
>> # glance, Users, example.org
>> dn: cn=glance,ou=Users,dc=example,dc=org
>> cn: glance
>> objectClass: inetOrgPerson
>> objectClass: top
>> sn: glance
>> uid: glance
>> userPassword: secret
>>
>> # nova, Users, example.org
>> dn: cn=nova,ou=Users,dc=example,dc=org
>> cn: nova
>> objectClass: inetOrgPerson
>> objectClass: top
>> sn: nova
>> uid: nova
>> userPassword: secret
>>
>> # neutron, Users, example.org
>> dn: cn=neutron,ou=Users,dc=example,dc=org
>> cn: neutron
>> objectClass: inetOrgPerson
>> objectClass: top
>> sn: neutron
>> uid: neutron
>> userPassword: secret
>>
>> # enabled_users, Users, example.org
>> dn: cn=enabled_users,ou=Users,dc=example,dc=org
>> cn: enabled_users
>> member: cn=admin,ou=Users,dc=example,dc=org
>> member: cn=demo,ou=Users,dc=example,dc=org
>> member: cn=nova,ou=Users,dc=example,dc=org
>> member: cn=glance,ou=Users,dc=example,dc=org
>> member: cn=cinder,ou=Users,dc=example,dc=org
>> member: cn=neutron,ou=Users,dc=example,dc=org
>> objectClass: groupOfNames
>>
>> # demo, Groups, example.org
>> dn: cn=demo,ou=Groups,dc=example,dc=org
>> cn: demo
>> objectClass: groupOfNames
>> member: cn=admin,ou=Users,dc=example,dc=org
>> member: cn=demo,ou=Users,dc=example,dc=org
>> member: cn=nova,ou=Users,dc=example,dc=org
>> member: cn=glance,ou=Users,dc=example,dc=org
>> member: cn=cinder,ou=Users,dc=example,dc=org
>> member: cn=neutron,ou=Users,dc=example,dc=org
>>
>> # Member, demo, Groups, example.org
>> dn: cn=Member,cn=demo,ou=Groups,dc=example,dc=org
>> cn: member
>> description: Role associated with openstack users
>> objectClass: organizationalRole
>> roleOccupant: cn=demo,ou=Users,dc=example,dc=org
>>
>> # admin, demo, Groups, example.org
>> dn: cn=admin,cn=demo,ou=Groups,dc=example,dc=org
>> cn: admin
>> description: Role associated with openstack users
>> objectClass: organizationalRole
>> roleOccupant: cn=admin,ou=Users,dc=example,dc=org
>> roleOccupant: cn=nova,ou=Users,dc=example,dc=org
>> roleOccupant: cn=glance,ou=Users,dc=example,dc=org
>> roleOccupant: cn=cinder,ou=Users,dc=example,dc=org
>> roleOccupant: cn=neutron,ou=Users,dc=example,dc=org
>>
>> # services, Groups, example.org
>> dn: cn=services,ou=Groups,dc=example,dc=org
>> cn: services
>> objectClass: groupOfNames
>> member: cn=admin,ou=Users,dc=example,dc=org
>> member: cn=demo,ou=Users,dc=example,dc=org
>> member: cn=nova,ou=Users,dc=example,dc=org
>> member: cn=glance,ou=Users,dc=example,dc=org
>> member: cn=cinder,ou=Users,dc=example,dc=org
>> member: cn=neutron,ou=Users,dc=example,dc=org
>>
>> # admin, services, Groups, example.org
>> dn: cn=admin,cn=services,ou=Groups,dc=example,dc=org
>> cn: admin
>> description: Role associated with openstack users
>> objectClass: organizationalRole
>> roleOccupant: cn=admin,ou=Users,dc=example,dc=org
>> roleOccupant: cn=nova,ou=Users,dc=example,dc=org
>> roleOccupant: cn=glance,ou=Users,dc=example,dc=org
>> roleOccupant: cn=cinder,ou=Users,dc=example,dc=org
>> roleOccupant: cn=neutron,ou=Users,dc=example,dc=org
>>
>> # admin, Groups, example.org
>> dn: cn=admin,ou=Groups,dc=example,dc=org
>> cn: admin
>> objectClass: groupOfNames
>> member: cn=admin,ou=Users,dc=example,dc=org
>> member: cn=demo,ou=Users,dc=example,dc=org
>> member: cn=nova,ou=Users,dc=example,dc=org
>> member: cn=glance,ou=Users,dc=example,dc=org
>> member: cn=cinder,ou=Users,dc=example,dc=org
>> member: cn=neutron,ou=Users,dc=example,dc=org
>>
>> # admin, admin, Groups, example.org
>> dn: cn=admin,cn=admin,ou=Groups,dc=example,dc=org
>> cn: admin
>> description: Role associated with openstack users
>> objectClass: organizationalRole
>> roleOccupant: cn=admin,ou=Users,dc=example,dc=org
>> roleOccupant: cn=nova,ou=Users,dc=example,dc=org
>> roleOccupant: cn=glance,ou=Users,dc=example,dc=org
>> roleOccupant: cn=cinder,ou=Users,dc=example,dc=org
>> roleOccupant: cn=neutron,ou=Users,dc=example,dc=org
>>
>> # Member, Roles, example.org
>> dn: cn=Member,ou=Roles,dc=example,dc=org
>> cn: member
>> description: Role associated with openstack users
>> objectClass: organizationalRole
>> roleOccupant: cn=demo,ou=Users,dc=example,dc=org
>>
>> # admin, Roles, example.org
>> dn: cn=admin,ou=Roles,dc=example,dc=org
>> cn: admin
>> description: Role associated with openstack users
>> objectClass: organizationalRole
>> roleOccupant: cn=admin,ou=Users,dc=example,dc=org
>> roleOccupant: cn=nova,ou=Users,dc=example,dc=org
>> roleOccupant: cn=glance,ou=Users,dc=example,dc=org
>> roleOccupant: cn=cinder,ou=Users,dc=example,dc=org
>> roleOccupant: cn=neutron,ou=Users,dc=example,dc=org
>>
>> On Wed, Sep 10, 2014 at 11:56 AM, Rasanjaya Subasinghe wrote:
>>>
>> Hi,
>> Sorry for the inconvenience
>> sir. I have attached the keystone.conf, neutron.conf and the LDAP LDIF file.
>> It's a CentOS 6.5 in-house cloud with one controller and 3 compute nodes, and without the LDAP keystone setting (driver=keystone.identity.backends.ldap.Identity) everything works fine:
>> 1. Instances spawn perfectly.
>> 2. Live migration works perfectly.
>> Then, configuring keystone with the LDAP driver gives that error in neutron's server.log.
>> 3. This setup was tested without ML2, and even the ML2 test ends with the same issue.
>> I have attached the LDAP file and the neutron file.
>> * keystone version 0.9.0
>>
>> Below is the neutron error shown in compute.log.
>>
>> On Wed, Sep 10, 2014 at 11:52 AM, Rasanjaya Subasinghe wrote:
>>
>> On Sep 9, 2014, at 8:09 PM, Rasanjaya Subasinghe wrote:
>>
>>>
>>> Hi Kashyap,
>>> It's a CentOS 6.5 in-house cloud with one controller and 3 compute nodes, and without the LDAP keystone setting (driver=keystone.identity.backends.ldap.Identity) everything works fine:
>>> 1. Instances spawn perfectly.
>>> 2. Live migration works perfectly.
>>> Then, configuring keystone with the LDAP driver gives that error in neutron's server.log.
>>> 3. This setup was tested without ML2, and even the ML2 test ends with the same issue.
>>> I have attached the LDAP file and the neutron file.
>>> * keystone version 0.9.0
>>>
>>> Below is the neutron error shown in compute.log.
>>>
>>> cheers,
>>> thanks
>>> Begin forwarded message:
>>>
>>>> From: Kashyap Chamarthy
>>>> Subject: Re: [Rdo-list] icehouse ldap integration
>>>> Date: September 9, 2014 at 7:27:59 PM GMT+5:30
>>>> To: Rasanjaya Subasinghe
>>>> Cc: rdo-list at redhat.com
>>>>
>>>> On Tue, Sep 09, 2014 at 06:19:56PM +0530, Rasanjaya Subasinghe wrote:
>>>>>
>>>>> Hi,
>>>>> I tried to configure OpenStack Icehouse with LDAP, and everything
>>>>> goes well except a neutron issue; this is the issue which appears in the
>>>>> server.log file of the neutron service.
>>>>>
>>>>> Can you guide me on this matter?
>>>>> Thanks for the help.
>>>>
>>>> The information you've provided is not sufficient to give any
>>>> meaningful response.
>>>>
>>>> At a minimum, for anyone to help you diagnose your issue, you need
>>>> to provide:
>>>>
>>>> - A more detailed description of what you mean by "configure
>>>> OpenStack Icehouse with LDAP".
>>>> - What is the test you're trying to perform? An exact reproducer would
>>>> be very useful.
>>>> - What is the exact error message you see? Contextual logs/errors from
>>>> Keystone/Nova.
>>>> - Exact versions of Keystone and other relevant packages.
>>>> - What OS? Fedora? CentOS? Something else?
>>>> - Preferably, the config files /etc/keystone/keystone.conf and the
>>>> relevant Neutron config files (ideally uploaded somewhere in
>>>> *plain text*).
>>>>
>>>>
>>>> --
>>>> /kashyap
>>>
>>
>> --
>> Rasanjaya Subasinghe

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rbowen at redhat.com Wed Sep 10 14:54:23 2014
From: rbowen at redhat.com (Rich Bowen)
Date: Wed, 10 Sep 2014 10:54:23 -0400
Subject: [Rdo-list] Fwd: [OpenStack Marketing] Superuser Award Nominations Now Open!
In-Reply-To: <38D204C7-953D-42ED-86B4-52D2A54E0A47@openstack.org>
References: <38D204C7-953D-42ED-86B4-52D2A54E0A47@openstack.org>
Message-ID: <5410661F.9010701@redhat.com>

In case you haven't seen this, you should take a look at the email below. This is a great opportunity to show off what you're doing with OpenStack. And, of course, it's a chance for some shameless promotion of RDO, too!
If you're not familiar with the SuperUser awards, watch this video of the keynote from the last OpenStack summit: https://www.youtube.com/watch?v=YoP0erdr7No If you're doing something interesting/exciting/cool/unique with RDO in your organization, here's a chance to tell the world about it, and get on the stage (or at least your video in the keynote) at the OpenStack Summit in Paris in November. I'd be delighted to nominate you, if you're uncomfortable nominating yourself. Please send me a note about how you're using OpenStack to make your corner of the world a better place. The deadline is the end of next week, so please don't wait. Get your nominations in. -------- Original Message -------- Subject: [OpenStack Marketing] Superuser Award Nominations Now Open! Date: Tue, 9 Sep 2014 15:31:51 -0500 From: Allison Price To: marketing at lists.openstack.org We are now accepting submissions for the Superuser Awards! During the last Marketing call, we introduced the inaugural Superuser Awards that will celebrate the transformational work of OpenStack end users and operators. You can now nominate a team (your own, or from another organization) to be recognized for using OpenStack to improve their business while contributing meaningfully to the OpenStack community. The deadline for submissions is September 19, and winners of the award will be announced on stage at the OpenStack Summit in Paris, November 3-7. Please let me know if you have any questions, and you can also visit the Superuser Awards page to find additional details on the selection criteria and nomination form. Cheers, Allison Allison Price OpenStack Marketing allison at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
-------------- next part --------------
_______________________________________________
Marketing mailing list
Marketing at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/marketing

From david at zeromail.us Wed Sep 10 18:46:10 2014
From: david at zeromail.us (David S.)
Date: Thu, 11 Sep 2014 01:46:10 +0700
Subject: [Rdo-list] RDO - Icehouse boot from image or snapshot and create a new volume fails
Message-ID:

Dear List,

I'm confused about why launching an instance with the option to boot from an image and create a new volume always fails during installation. The new volume that gets created is detected as an "iso image" and not as a disk. I know that I can create a new volume and then attach it to an instance, but I think this could be a problem because it requires additional steps to make it work. Usually I do it like this:

1. Launch an instance, booting from an image and creating it as a volume.
2. Create a new volume, attach it to the new instance, and install the operating system onto that attached volume.
3. After the installation completes, terminate the instance without deleting the volume.
4. Launch a new instance and boot from the volume.

My OpenStack Icehouse runs on a single CentOS 6.5 x86_64 machine.

Why am I doing the steps above? I think I have 2 problems here:

1. The boot option doesn't change to disk after the operating system installation completes.
2. The disk created by launching an instance with "boot from image and create volume" is detected as the ISO/image itself, so we need to attach an additional volume (disk) to the instance.

If anything is wrong with my setup, please let me know. Thanks for your help.

Best regards,
David S.
------------------------------------------------
p. 087881216110
e. david at zeromail.us
w. http://blog.pnyet.web.id
-------------- next part --------------
An HTML attachment was scrubbed...
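[Editor's note: David's four-step workaround above maps, in API terms, onto Nova's block device mapping. "Boot from image and create a new volume" is expressed as a `block_device_mapping_v2` entry with `source_type=image` and `destination_type=volume`. A minimal sketch of that request fragment follows; the image UUID, flavor, and size are placeholder values, and this only builds the JSON shape — it does not talk to a cloud.]

```python
# Sketch of the block-device mapping that "boot from image (creates a new
# volume)" submits to Nova. The image UUID, flavor, and size below are
# placeholders, not values from the thread.
image_id = "11111111-2222-3333-4444-555555555555"

bdm = {
    "boot_index": 0,               # this device is the boot disk
    "uuid": image_id,              # source image copied into the new volume
    "source_type": "image",
    "destination_type": "volume",  # ask Cinder to create a new volume
    "volume_size": 10,             # GB
    "delete_on_termination": False,
}

server_request = {
    "server": {
        "name": "test-from-volume",
        "flavorRef": "2",
        "block_device_mapping_v2": [bdm],
    }
}
print(server_request)
```

When the behaviour described above strikes, comparing what the dashboard actually submits against this shape (boot_index 0, destination_type "volume") is a reasonable first debugging step.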
URL:

From elias.moreno.tec at gmail.com Thu Sep 11 01:21:30 2014
From: elias.moreno.tec at gmail.com (=?UTF-8?B?RWzDrWFzIERhdmlk?=)
Date: Wed, 10 Sep 2014 20:51:30 -0430
Subject: [Rdo-list] read only volumes when glusterfs fails
Message-ID:

Hello,

I'm seeing consistent behaviour in my OpenStack deployment (libvirt/KVM) with Cinder using GlusterFS, and I'm having trouble finding the real cause, or even whether it's abnormal at all.

I have configured Cinder to use GlusterFS as the storage backend. The volume is a replica 2 of 8 disks across 2 servers, and I have several Cinder-provided volumes attached to several instances. The problem is this: it's not uncommon for one of the Gluster servers to reboot suddenly due to power failures (an infrastructure problem that is unavoidable right now). When this happens, the instances start to see the attached volume as read-only, which forces me to hard reboot each instance so it can access the volume normally again.

Here are my doubts: the Gluster volume is created in such a way that no replica is on the same server as its master, so if I lose a server to hardware failure, the other is still usable. I don't really understand why the instances can't just use the replica brick when one of the servers reboots.

Also, why is the data still there, readable but not writable, after a GlusterFS failure? Is this a problem with my implementation? A configuration error on my part? Something known to OpenStack? A Cinder thing? libvirt? GlusterFS?

Having to hard reboot the instances is not a big issue right now, but I nevertheless want to understand what's happening and whether I can avoid this issue.

Some specifics:

GlusterFS version is 3.5
All systems are CentOS 6.5
OpenStack version is Icehouse, installed with packstack/RDO

Thanks in advance!

--
Elías David.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Yaniv.Kaul at emc.com Thu Sep 11 06:19:22 2014
From: Yaniv.Kaul at emc.com (Kaul, Yaniv)
Date: Thu, 11 Sep 2014 02:19:22 -0400
Subject: [Rdo-list] icehouse-devel branch of redhat-openstack/tempest
Message-ID: <648473255763364B961A02AC3BE1060D03C584D5C7@MX19A.corp.emc.com>

In https://github.com/redhat-openstack/tempest , it seems the only activity is on that branch. Is it still Icehouse-compatible? Can anyone enlighten me on what the changes are?

TIA,
Y.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Yaniv.Kaul at emc.com Thu Sep 11 09:17:14 2014
From: Yaniv.Kaul at emc.com (Kaul, Yaniv)
Date: Thu, 11 Sep 2014 05:17:14 -0400
Subject: [Rdo-list] icehouse-devel branch of redhat-openstack/tempest
Message-ID: <648473255763364B961A02AC3BE1060D03C584D647@MX19A.corp.emc.com>

And it seems to be a bit broken on my platform (6.5, Icehouse):

tools/config_tempest.py --create identity.uri http://10.103.234.141:5000/v2.0/ identity.admin_username admin identity.admin_password secret identity.admin_tenant_name admin

/usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
  _warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
Traceback (most recent call last):
  File "tools/config_tempest.py", line 31, in <module>
    from tempest.common import api_discovery
ImportError: No module named tempest.common

From: Kaul, Yaniv
Sent: Thursday, September 11, 2014 9:19 AM
To: rdo-list at redhat.com
Subject: icehouse-devel branch of redhat-openstack/tempest

In https://github.com/redhat-openstack/tempest , it seems the only activity is on that branch. Is it still Icehouse-compatible? Can anyone enlighten me on what the changes are?

TIA,
Y.
-------------- next part --------------
An HTML attachment was scrubbed...
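[Editor's note: for the `ImportError: No module named tempest.common` above, a likely cause (an assumption, not confirmed in this thread) is that `tools/config_tempest.py` is executed without the checkout root on Python's module search path, so the `tempest` package can't be found. A small sketch of the mechanism:]

```python
# Minimal illustration of why "python tools/config_tempest.py" can fail with
# "No module named tempest.common": Python resolves imports via sys.path, and
# the directory containing the tempest/ package must be on it. The checkout
# path below is a placeholder.
import sys

checkout_root = "/path/to/tempest"   # placeholder: your git checkout root
if checkout_root not in sys.path:
    sys.path.insert(0, checkout_root)

print(checkout_root in sys.path)
```

If that is indeed the cause, running `PYTHONPATH=$(pwd) python tools/config_tempest.py ...` from the checkout root, or installing tempest into the environment, would be the corresponding fix.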
URL: From jruzicka at redhat.com Thu Sep 11 11:49:00 2014 From: jruzicka at redhat.com (Jakub Ruzicka) Date: Thu, 11 Sep 2014 13:49:00 +0200 Subject: [Rdo-list] Openstack Horizon el6 status In-Reply-To: <248A2D277CB6E34992A0902A839C1E5D0101948519@CERNXCHG42.cern.ch> References: <248A2D277CB6E34992A0902A839C1E5D0101948519@CERNXCHG42.cern.ch> Message-ID: <54118C2C.3030300@redhat.com> Hello, python-django-horizon 2014.1.2-2.el6 and other .2 packages are now in final stage of CI, they will be pushed to public repos as soon as tests pass. As of el6 patches, we use common patches branch (from icehouse up) for all dists unless there is explicit reason not to. As Icehouse RDO is bound to Fedora 21, f21-patches patches branch is used. Thus I'd expect Horizon Icehouse patches to be here: https://github.com/redhat-openstack/horizon/commits/f21-patches Cheers Jakub On 27.8.2014 14:57, Jose Castro Leon wrote: > Hi, > We are following the horizon releases from the RDO repository and also from the github repository as well. > We have just realised that the package for icehouse-2 has not been released and that the branch that was used to track the redhat patches for el6 has been removed as well. > Could you please tell me the timeline for this package? Is there any other repository with the el6 patches? 
> Kind regards, > > Jose Castro Leon > CERN IT-OIS tel: +41.22.76.74272 > mob: +41.76.48.79222 > fax: +41.22.76.67955 > Office: 31-R-021 CH-1211 Geneve 23 > email: jose.castro.leon at cern.ch > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > From rbowen at redhat.com Thu Sep 11 13:33:39 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 11 Sep 2014 09:33:39 -0400 Subject: [Rdo-list] Announce: All 2014.1.2 updates now live Message-ID: <5411A4B3.6050102@redhat.com> Those of you who were waiting for the 2014.1.2 updates for EL6, they have emerged successfully from the CI testing process, and are now available at https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-6/?C=M;O=D As always, questions and concerns should come back to this list. We eagerly anticipate your feedback on these packages. --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rbowen at redhat.com Thu Sep 11 13:46:00 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 11 Sep 2014 09:46:00 -0400 Subject: [Rdo-list] Announce: All 2014.1.2 updates now live In-Reply-To: <5411A4B3.6050102@redhat.com> References: <5411A4B3.6050102@redhat.com> Message-ID: <5411A798.90002@redhat.com> On 09/11/2014 09:33 AM, Rich Bowen wrote: > Those of you who were waiting for the 2014.1.2 updates for EL6, they > have emerged successfully from the CI testing process, and are now > available at > https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-6/?C=M;O=D > > As always, questions and concerns should come back to this list. We > eagerly anticipate your feedback on these packages. > > For the sake of completeness, I should mention that Ceilometer wasn't updated in that batch, but is currently being tested and should be pushed out soon. 
--Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From juanfra.rodriguez.cardoso at gmail.com Thu Sep 11 14:29:09 2014 From: juanfra.rodriguez.cardoso at gmail.com (JuanFra Rodriguez Cardoso) Date: Thu, 11 Sep 2014 16:29:09 +0200 Subject: [Rdo-list] read only volumes when glusterfs fails In-Reply-To: References: Message-ID: Hi Elias: This Joe Julian's post may help you to solve that trouble: http://joejulian.name/blog/keeping-your-vms-from-going-read-only-when-encountering-a-ping-timeout-in-glusterfs/ Regards, --- JuanFra Rodriguez Cardoso 2014-09-11 3:21 GMT+02:00 El?as David : > Hello, > > I'm seeing a constant behaviour with my implementation of openstack > (libvirt/kvm) and cinder using glusterfs and I'm having troubles to find > the real cause or if it's something not normal at all. > > I have configured cinder to use glusterfs as storage backend, the volume > is a replica 2 of 8 disks in 2 servers and I have several volumes attached > to several instances provided by cinder. The problem is this, is not > uncommon that one of the gluster servers reboot suddenly due to power > failures (this is an infrastructure problem unavoidable right now), when > this happens the instances start to see the attached volume as read only > which force me to hard reboot the instance so it can access the volume > normally again. > > Here are my doubts, the gluster volume is created in such a way that not a > single replica is on the same server as the master, if I lose a server due > to hardware failure, the other is still usable so I don't really understand > why couldn't the instances just use the replica brick in case that one of > the servers reboots. > > Also, why the data is still there, can be read but can't be written to in > case of glusterfs failures? Is this a problem with my implementation? > configuration error on my part? something known to openstack? a cinder > thing? libvirt? glusterfs? 
> > Having to hard reboot the instances is not a big issue right now, but > nevertheless I want to understand what's happening and if I can avoid this > issue. > > Some specifics: > > GlusterFS version is 3.5 All systems are CentOS 6.5 Openstack version is > Icehouse installed with packstack/rdo > > Thanks in advance! > > -- > El?as David. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Thu Sep 11 14:47:34 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 11 Sep 2014 10:47:34 -0400 Subject: [Rdo-list] RDO Juno test days, September 25-26 Message-ID: <5411B606.4060302@redhat.com> tl,dr; Mark your calendar, September 25-26 RDO Juno M3 test day. As you're no doubt aware, OpenStack Juno Milestone 3 released a week ago today [1] and Juno is now in FeatureFreeze [2]. We're in the process of packaging and testing this stuff for RDO, and, as part of that, we'll be conducting test days, September 25-26, to exercise these packages. We would greatly appreciate your help in this testing process, as the more different environments these packages are subjected to, the greater the chances of ferreting out the places where it's going to break. WHERE: #rdo IRC channel on Freenode WHEN: All day, September 25-26, so that we can cover everyone's time zones WHAT: Over the coming days, we'll be documenting a number of test cases that can get people started, as well as details of how to report problems when you encounter them. We'll also document workarounds there, as we go along, so that you don't have to waste time on problems that have already been solved. That will be posted to this list real soon. 
[1] https://wiki.openstack.org/wiki/Juno_Release_Schedule [2] https://wiki.openstack.org/wiki/FeatureFreeze -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From Yaniv.Kaul at emc.com Thu Sep 11 14:55:35 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Thu, 11 Sep 2014 10:55:35 -0400 Subject: [Rdo-list] Announce: All 2014.1.2 updates now live In-Reply-To: <5411A4B3.6050102@redhat.com> References: <5411A4B3.6050102@redhat.com> Message-ID: <648473255763364B961A02AC3BE1060D03C584D735@MX19A.corp.emc.com> > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > Behalf Of Rich Bowen > Sent: Thursday, September 11, 2014 4:34 PM > To: rdo-list at redhat.com > Subject: [Rdo-list] Announce: All 2014.1.2 updates now live > > Those of you who were waiting for the 2014.1.2 updates for EL6, they have > emerged successfully from the CI testing process, and are now available at > https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel- > 6/?C=M;O=D > > As always, questions and concerns should come back to this list. We eagerly > anticipate your feedback on these packages. > > --Rich I wonder if it can explain the breakage in Tempest tests. I have not changed anything in my environment or deployment script, yet Tempest fails: /run_tempest.sh -N --serial 'tempest.api.volume*' /usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability. _warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning) Non-zero exit code (2) from test listing. 
----------------------------------------------------------------------
Ran 0 tests in 5.070s

>
> --
> Rich Bowen - rbowen at redhat.com
> OpenStack Community Liaison
> http://openstack.redhat.com/
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list

From dron at redhat.com Thu Sep 11 15:01:09 2014
From: dron at redhat.com (Dafna Ron)
Date: Thu, 11 Sep 2014 16:01:09 +0100
Subject: [Rdo-list] Announce: All 2014.1.2 updates now live
In-Reply-To: <648473255763364B961A02AC3BE1060D03C584D735@MX19A.corp.emc.com>
References: <5411A4B3.6050102@redhat.com> <648473255763364B961A02AC3BE1060D03C584D735@MX19A.corp.emc.com>
Message-ID: <5411B935.1000605@redhat.com>

Hi Yaniv,

I encountered this when doing Cinder upgrades, and found the same thing reported by users online:

https://ask.openstack.org/en/question/28335/you-should-rebuild-using-libgmp-5-to-avoid-timing-attack-vulnerability-_warnnot-using-mpz_powm_sec-you-should-rebuild-using-libgmp-5-to-avoid-timing/

There is a bug reported on this in Bugzilla as well:

https://bugzilla.redhat.com/show_bug.cgi?id=1123339

Adding Eric in case Tempest tests are breaking on RDO.

Thanks!
Dafna

On 09/11/2014 03:55 PM, Kaul, Yaniv wrote:
>> -----Original Message-----
>> From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On
>> Behalf Of Rich Bowen
>> Sent: Thursday, September 11, 2014 4:34 PM
>> To: rdo-list at redhat.com
>> Subject: [Rdo-list] Announce: All 2014.1.2 updates now live
>>
>> Those of you who were waiting for the 2014.1.2 updates for EL6, they have
>> emerged successfully from the CI testing process, and are now available at
>> https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-
>> 6/?C=M;O=D
>>
>> As always, questions and concerns should come back to this list. We eagerly
>> anticipate your feedback on these packages.
>> >> --Rich > I wonder if it can explain the breakage in Tempest tests. I have not changed anything in my environment or deployment script, yet Tempest fails: > > /run_tempest.sh -N --serial 'tempest.api.volume*' > /usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability. > _warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning) > Non-zero exit code (2) from test listing. > > ---------------------------------------------------------------------- > Ran 0 tests in 5.070s > > > >> -- >> Rich Bowen - rbowen at redhat.com >> OpenStack Community Liaison >> http://openstack.redhat.com/ >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -- Dafna Ron From Yaniv.Kaul at emc.com Thu Sep 11 15:03:58 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Thu, 11 Sep 2014 11:03:58 -0400 Subject: [Rdo-list] Announce: All 2014.1.2 updates now live In-Reply-To: <5411B935.1000605@redhat.com> References: <5411A4B3.6050102@redhat.com> <648473255763364B961A02AC3BE1060D03C584D735@MX19A.corp.emc.com> <5411B935.1000605@redhat.com> Message-ID: <648473255763364B961A02AC3BE1060D03C584D73E@MX19A.corp.emc.com> > -----Original Message----- > From: Dafna Ron [mailto:dron at redhat.com] > Sent: Thursday, September 11, 2014 6:01 PM > To: Kaul, Yaniv; Rich Bowen; rdo-list at redhat.com; eha >> Eric Harney > Subject: Re: [Rdo-list] Announce: All 2014.1.2 updates now live > > Hi Yaniv, > > I encountered this when doing cinder upgrades and found the same reported by > users on line: > > 
https://ask.openstack.org/en/question/28335/you-should-rebuild-using-libgmp- > 5-to-avoid-timing-attack-vulnerability-_warnnot-using-mpz_powm_sec-you- > should-rebuild-using-libgmp-5-to-avoid-timing/ The warning is not the problem. I've lived with it for months. "Non-zero exit code (2) from test listing." Is the issue. The advised workaround to downgrade some Python packages did not work for me (comment 12 @ https://bugs.launchpad.net/tempest/+bug/1277538 ) Y. > > There is a bug reported on this as well for this as well in bugzilla. > > https://bugzilla.redhat.com/show_bug.cgi?id=1123339 > > adding Eric if tempest tests are braking on RDO. > > Thanks! > Dafna > > > On 09/11/2014 03:55 PM, Kaul, Yaniv wrote: > >> -----Original Message----- > >> From: rdo-list-bounces at redhat.com > >> [mailto:rdo-list-bounces at redhat.com] On Behalf Of Rich Bowen > >> Sent: Thursday, September 11, 2014 4:34 PM > >> To: rdo-list at redhat.com > >> Subject: [Rdo-list] Announce: All 2014.1.2 updates now live > >> > >> Those of you who were waiting for the 2014.1.2 updates for EL6, they > >> have emerged successfully from the CI testing process, and are now > >> available at > >> https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epe > >> l- > >> 6/?C=M;O=D > >> > >> As always, questions and concerns should come back to this list. We > >> eagerly anticipate your feedback on these packages. > >> > >> --Rich > > I wonder if it can explain the breakage in Tempest tests. I have not changed > anything in my environment or deployment script, yet Tempest fails: > > > > /run_tempest.sh -N --serial 'tempest.api.volume*' > > /usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: > PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using > libgmp >= 5 to avoid timing attack vulnerability. > > _warn("Not using mpz_powm_sec. 
You should rebuild using libgmp >= > > 5 to avoid timing attack vulnerability.", PowmInsecureWarning) Non-zero exit > code (2) from test listing. > > > > ---------------------------------------------------------------------- > > Ran 0 tests in 5.070s > > > > > > > >> -- > >> Rich Bowen - rbowen at redhat.com > >> OpenStack Community Liaison > >> http://openstack.redhat.com/ > >> > >> _______________________________________________ > >> Rdo-list mailing list > >> Rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > -- > Dafna Ron From dron at redhat.com Thu Sep 11 15:18:03 2014 From: dron at redhat.com (Dafna Ron) Date: Thu, 11 Sep 2014 16:18:03 +0100 Subject: [Rdo-list] Announce: All 2014.1.2 updates now live In-Reply-To: <648473255763364B961A02AC3BE1060D03C584D73E@MX19A.corp.emc.com> References: <5411A4B3.6050102@redhat.com> <648473255763364B961A02AC3BE1060D03C584D735@MX19A.corp.emc.com> <5411B935.1000605@redhat.com> <648473255763364B961A02AC3BE1060D03C584D73E@MX19A.corp.emc.com> Message-ID: <5411BD2B.1020000@redhat.com> Adding David and Yaniv Eylon On 09/11/2014 04:03 PM, Kaul, Yaniv wrote: >> -----Original Message----- >> From: Dafna Ron [mailto:dron at redhat.com] >> Sent: Thursday, September 11, 2014 6:01 PM >> To: Kaul, Yaniv; Rich Bowen; rdo-list at redhat.com; eha >> Eric Harney >> Subject: Re: [Rdo-list] Announce: All 2014.1.2 updates now live >> >> Hi Yaniv, >> >> I encountered this when doing cinder upgrades and found the same reported by >> users on line: >> >> https://ask.openstack.org/en/question/28335/you-should-rebuild-using-libgmp- >> 5-to-avoid-timing-attack-vulnerability-_warnnot-using-mpz_powm_sec-you- >> should-rebuild-using-libgmp-5-to-avoid-timing/ > The warning is not the problem. I've lived with it for months. 
> > "Non-zero exit code (2) from test listing." > > Is the issue. > The advised workaround to downgrade some Python packages did not work for me (comment 12 @ https://bugs.launchpad.net/tempest/+bug/1277538 ) > Y. > > > >> There is a bug reported on this as well for this as well in bugzilla. >> >> https://bugzilla.redhat.com/show_bug.cgi?id=1123339 >> >> adding Eric if tempest tests are braking on RDO. >> >> Thanks! >> Dafna >> >> >> On 09/11/2014 03:55 PM, Kaul, Yaniv wrote: >>>> -----Original Message----- >>>> From: rdo-list-bounces at redhat.com >>>> [mailto:rdo-list-bounces at redhat.com] On Behalf Of Rich Bowen >>>> Sent: Thursday, September 11, 2014 4:34 PM >>>> To: rdo-list at redhat.com >>>> Subject: [Rdo-list] Announce: All 2014.1.2 updates now live >>>> >>>> Those of you who were waiting for the 2014.1.2 updates for EL6, they >>>> have emerged successfully from the CI testing process, and are now >>>> available at >>>> https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epe >>>> l- >>>> 6/?C=M;O=D >>>> >>>> As always, questions and concerns should come back to this list. We >>>> eagerly anticipate your feedback on these packages. >>>> >>>> --Rich >>> I wonder if it can explain the breakage in Tempest tests. I have not changed >> anything in my environment or deployment script, yet Tempest fails: >>> /run_tempest.sh -N --serial 'tempest.api.volume*' >>> /usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: >> PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using >> libgmp >= 5 to avoid timing attack vulnerability. >>> _warn("Not using mpz_powm_sec. You should rebuild using libgmp >= >>> 5 to avoid timing attack vulnerability.", PowmInsecureWarning) Non-zero exit >> code (2) from test listing. 
>>> ---------------------------------------------------------------------- >>> Ran 0 tests in 5.070s >>> >>> >>> >>>> -- >>>> Rich Bowen - rbowen at redhat.com >>>> OpenStack Community Liaison >>>> http://openstack.redhat.com/ >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >> >> -- >> Dafna Ron -- Dafna Ron From eharney at redhat.com Thu Sep 11 15:22:11 2014 From: eharney at redhat.com (Eric Harney) Date: Thu, 11 Sep 2014 11:22:11 -0400 Subject: [Rdo-list] Announce: All 2014.1.2 updates now live In-Reply-To: <5411BD2B.1020000@redhat.com> References: <5411A4B3.6050102@redhat.com> <648473255763364B961A02AC3BE1060D03C584D735@MX19A.corp.emc.com> <5411B935.1000605@redhat.com> <648473255763364B961A02AC3BE1060D03C584D73E@MX19A.corp.emc.com> <5411BD2B.1020000@redhat.com> Message-ID: <5411BE23.9080209@redhat.com> On 09/11/2014 11:18 AM, Dafna Ron wrote: > Adding David and Yaniv Eylon > > On 09/11/2014 04:03 PM, Kaul, Yaniv wrote: >>> -----Original Message----- >>> From: Dafna Ron [mailto:dron at redhat.com] >>> Sent: Thursday, September 11, 2014 6:01 PM >>> To: Kaul, Yaniv; Rich Bowen; rdo-list at redhat.com; eha >> Eric Harney >>> Subject: Re: [Rdo-list] Announce: All 2014.1.2 updates now live >>> >>> Hi Yaniv, >>> >>> I encountered this when doing cinder upgrades and found the same >>> reported by >>> users on line: >>> >>> https://ask.openstack.org/en/question/28335/you-should-rebuild-using-libgmp- >>> >>> 5-to-avoid-timing-attack-vulnerability-_warnnot-using-mpz_powm_sec-you- >>> should-rebuild-using-libgmp-5-to-avoid-timing/ >> The warning is not the problem. I've lived with it for months. >> >> "Non-zero exit code (2) from test listing." >> >> Is the issue. 
>> The advised workaround to downgrade some Python packages did not work >> for me (comment 12 @ https://bugs.launchpad.net/tempest/+bug/1277538 ) >> Y. >> >> >> >>> There is a bug reported on this as well for this as well in bugzilla. >>> >>> https://bugzilla.redhat.com/show_bug.cgi?id=1123339 >>> >>> adding Eric if tempest tests are braking on RDO. >>> >>> Thanks! >>> Dafna >>> >>> >>> On 09/11/2014 03:55 PM, Kaul, Yaniv wrote: >>>>> -----Original Message----- >>>>> From: rdo-list-bounces at redhat.com >>>>> [mailto:rdo-list-bounces at redhat.com] On Behalf Of Rich Bowen >>>>> Sent: Thursday, September 11, 2014 4:34 PM >>>>> To: rdo-list at redhat.com >>>>> Subject: [Rdo-list] Announce: All 2014.1.2 updates now live >>>>> >>>>> Those of you who were waiting for the 2014.1.2 updates for EL6, they >>>>> have emerged successfully from the CI testing process, and are now >>>>> available at >>>>> https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epe >>>>> l- >>>>> 6/?C=M;O=D >>>>> >>>>> As always, questions and concerns should come back to this list. We >>>>> eagerly anticipate your feedback on these packages. >>>>> >>>>> --Rich >>>> I wonder if it can explain the breakage in Tempest tests. I have not >>>> changed >>> anything in my environment or deployment script, yet Tempest fails: >>>> /run_tempest.sh -N --serial 'tempest.api.volume*' >>>> /usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: >>> PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using >>> libgmp >= 5 to avoid timing attack vulnerability. >>>> _warn("Not using mpz_powm_sec. You should rebuild using libgmp >= >>>> 5 to avoid timing attack vulnerability.", PowmInsecureWarning) >>>> Non-zero exit >>> code (2) from test listing. >>>> ---------------------------------------------------------------------- >>>> Ran 0 tests in 5.070s >>>> >>>> >>>> The "Non-zero exit code (2) from test listing." sounds like testr is failing within tempest itself. 
Tempest should have a log file (or the ability to generate one) which will tell you what error actually occurred here. This means that tempest failed to load up its own tests independently of the other OpenStack services, I believe. From Yaniv.Kaul at emc.com Thu Sep 11 15:32:24 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Thu, 11 Sep 2014 11:32:24 -0400 Subject: [Rdo-list] Announce: All 2014.1.2 updates now live In-Reply-To: <5411BE23.9080209@redhat.com> References: <5411A4B3.6050102@redhat.com> <648473255763364B961A02AC3BE1060D03C584D735@MX19A.corp.emc.com> <5411B935.1000605@redhat.com> <648473255763364B961A02AC3BE1060D03C584D73E@MX19A.corp.emc.com> <5411BD2B.1020000@redhat.com> <5411BE23.9080209@redhat.com> Message-ID: <648473255763364B961A02AC3BE1060D03C584D761@MX19A.corp.emc.com> > -----Original Message----- > From: Eric Harney [mailto:eharney at redhat.com] > Sent: Thursday, September 11, 2014 6:22 PM > To: dron at redhat.com; Kaul, Yaniv; Rich Bowen; rdo-list at redhat.com; David > Kranz; Yaniv Eylon > Subject: Re: [Rdo-list] Announce: All 2014.1.2 updates now live > > On 09/11/2014 11:18 AM, Dafna Ron wrote: > > Adding David and Yaniv Eylon > > > > On 09/11/2014 04:03 PM, Kaul, Yaniv wrote: > >>> -----Original Message----- > >>> From: Dafna Ron [mailto:dron at redhat.com] > >>> Sent: Thursday, September 11, 2014 6:01 PM > >>> To: Kaul, Yaniv; Rich Bowen; rdo-list at redhat.com; eha >> Eric Harney > >>> Subject: Re: [Rdo-list] Announce: All 2014.1.2 updates now live > >>> > >>> Hi Yaniv, > >>> > >>> I encountered this when doing cinder upgrades and found the same > >>> reported by users on line: > >>> > >>> https://ask.openstack.org/en/question/28335/you-should-rebuild-using > >>> -libgmp- > >>> > >>> 5-to-avoid-timing-attack-vulnerability-_warnnot-using-mpz_powm_sec-y > >>> ou- should-rebuild-using-libgmp-5-to-avoid-timing/ > >> The warning is not the problem. I've lived with it for months. > >> > >> "Non-zero exit code (2) from test listing." 
> >> > >> Is the issue. > >> The advised workaround to downgrade some Python packages did not work > >> for me (comment 12 @ https://bugs.launchpad.net/tempest/+bug/1277538 > >> ) Y. > >> > >> > >> > >>> There is a bug reported on this as well for this as well in bugzilla. > >>> > >>> https://bugzilla.redhat.com/show_bug.cgi?id=1123339 > >>> > >>> adding Eric if tempest tests are braking on RDO. > >>> > >>> Thanks! > >>> Dafna > >>> > >>> > >>> On 09/11/2014 03:55 PM, Kaul, Yaniv wrote: > >>>>> -----Original Message----- > >>>>> From: rdo-list-bounces at redhat.com > >>>>> [mailto:rdo-list-bounces at redhat.com] On Behalf Of Rich Bowen > >>>>> Sent: Thursday, September 11, 2014 4:34 PM > >>>>> To: rdo-list at redhat.com > >>>>> Subject: [Rdo-list] Announce: All 2014.1.2 updates now live > >>>>> > >>>>> Those of you who were waiting for the 2014.1.2 updates for EL6, > >>>>> they have emerged successfully from the CI testing process, and > >>>>> are now available at > >>>>> https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/ > >>>>> epe > >>>>> l- > >>>>> 6/?C=M;O=D > >>>>> > >>>>> As always, questions and concerns should come back to this list. > >>>>> We eagerly anticipate your feedback on these packages. > >>>>> > >>>>> --Rich > >>>> I wonder if it can explain the breakage in Tempest tests. I have > >>>> not changed > >>> anything in my environment or deployment script, yet Tempest fails: > >>>> /run_tempest.sh -N --serial 'tempest.api.volume*' > >>>> /usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: > >>> PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild > >>> using libgmp >= 5 to avoid timing attack vulnerability. > >>>> _warn("Not using mpz_powm_sec. You should rebuild using libgmp > >>>> >= > >>>> 5 to avoid timing attack vulnerability.", PowmInsecureWarning) > >>>> Non-zero exit > >>> code (2) from test listing. 
> >>>> ------------------------------------------------------------------- > >>>> --- > >>>> Ran 0 tests in 5.070s > >>>> > >>>> > >>>> > > The "Non-zero exit code (2) from test listing." sounds like testr is failing within > tempest itself. Tempest should have a log file (or the ability to generate one) > which will tell you what error actually occurred here. > > This means that tempest failed to load up its own tests independently of the > other OpenStack services, I believe. Nothing except DEBUG and INFO there. The last lines: 2014-09-11 18:25:45.375 103533 INFO requests.packages.urllib3.connectionpool [-] Starting new HTTP connection (1): 10.103.234.141 2014-09-11 18:25:45.608 103533 DEBUG requests.packages.urllib3.connectionpool [-] "GET /v2/df6db1f49c434f8da86ed97dc19028c5/images HTTP/1.1" 200 2182 _make_request /usr/lib/python2.6/site-packages/requests/packages/urllib3/connectionpool.py:362 2014-09-11 18:25:45.610 103533 DEBUG novaclient.client [-] RESP: [200] {'date': 'Thu, 11 Sep 2014 15:25:45 GMT', 'connection': 'keep-alive', 'content-type': 'application/json', 'content-length': '2182', 'x-compute-request-id': 'req-fc3d6138-bf9c-4141-b153-2bc1feb1ba2a'} RESP BODY: {"images": [{"id": "9961d022-a930-4bdb-882d-b75374f71fea", "links": [{"href": "http://10.103.234.141:8774/v2/df6db1f49c434f8da86ed97dc19028c5/images/9961d022-a930-4bdb-882d-b75374f71fea", "rel": "self"}, {"href": "http://10.103.234.141:8774/df6db1f49c434f8da86ed97dc19028c5/images/9961d022-a930-4bdb-882d-b75374f71fea", "rel": "bookmark"}, {"href": "http://10.103.234.141:9292/df6db1f49c434f8da86ed97dc19028c5/images/9961d022-a930-4bdb-882d-b75374f71fea", "type": "application/vnd.openstack.image", "rel": "alternate"}], "name": "Fedora 20 x86_64"}, {"id": "c764b332-c9a3-426b-961e-8e349b749158", "links": [{"href": "http://10.103.234.141:8774/v2/df6db1f49c434f8da86ed97dc19028c5/images/c764b332-c9a3-426b-961e-8e349b749158", "rel": "self"}, {"href": 
"http://10.103.234.141:8774/df6db1f49c434f8da86ed97dc19028c5/images/c764b332-c9a3-426b-961e-8e349b749158", "rel": "bookmark"}, {"href": "http://10.103.234.141:9292/df6db1f49c434f8da86ed97dc19028c5/images/c764b332-c9a3-426b-961e-8e349b749158", "type": "application/vnd.openstack.image", "rel": "alternate"}], "name": "cirros-0.3.1-x86_64-disk.img_alt"}, {"id": "2c4d3006-0c06-4e42-9067-b290afd6e704", "links": [{"href": "http://10.103.234.141:8774/v2/df6db1f49c434f8da86ed97dc19028c5/images/2c4d3006-0c06-4e42-9067-b290afd6e704", "rel": "self"}, {"href": "http://10.103.234.141:8774/df6db1f49c434f8da86ed97dc19028c5/images/2c4d3006-0c06-4e42-9067-b290afd6e704", "rel": "bookmark"}, {"href": "http://10.103.234.141:9292/df6db1f49c434f8da86ed97dc19028c5/images/2c4d3006-0c06-4e42-9067-b290afd6e704", "type": "application/vnd.openstack.image", "rel": "alternate"}], "name": "cirros-0.3.1-x86_64-disk.img"}, {"id": "6441f369-7322-4943-aa57-450c72650dc6", "links": [{"href": "http://10.103.234.141:8774/v2/df6db1f49c434f8da86ed97dc19028c5/images/6441f369-7322-4943-aa57-450c72650dc6", "rel": "self"}, {"href": "http://10.103.234.141:8774/df6db1f49c434f8da86ed97dc19028c5/images/6441f369-7322-4943-aa57-450c72650dc6", "rel": "bookmark"}, {"href": "http://10.103.234.141:9292/df6db1f49c434f8da8 6ed97dc19028c5/images/6441f369-7322-4943-aa57-450c72650dc6", "type": "application/vnd.openstack.image", "rel": "alternate"}], "name": "cirros"}]} http_log_resp /usr/lib/python2.6/site-packages/novaclient/client.py:187 Y. 
From Yaniv.Kaul at emc.com Thu Sep 11 15:40:13 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Thu, 11 Sep 2014 11:40:13 -0400 Subject: [Rdo-list] Announce: All 2014.1.2 updates now live In-Reply-To: <648473255763364B961A02AC3BE1060D03C584D761@MX19A.corp.emc.com> References: <5411A4B3.6050102@redhat.com> <648473255763364B961A02AC3BE1060D03C584D735@MX19A.corp.emc.com> <5411B935.1000605@redhat.com> <648473255763364B961A02AC3BE1060D03C584D73E@MX19A.corp.emc.com> <5411BD2B.1020000@redhat.com> <5411BE23.9080209@redhat.com> <648473255763364B961A02AC3BE1060D03C584D761@MX19A.corp.emc.com> Message-ID: <648473255763364B961A02AC3BE1060D03C584D765@MX19A.corp.emc.com> > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > Behalf Of Kaul, Yaniv > Sent: Thursday, September 11, 2014 6:32 PM > To: Eric Harney; dron at redhat.com; Rich Bowen; rdo-list at redhat.com; David > Kranz; Yaniv Eylon > Subject: Re: [Rdo-list] Announce: All 2014.1.2 updates now live > > > -----Original Message----- > > From: Eric Harney [mailto:eharney at redhat.com] > > Sent: Thursday, September 11, 2014 6:22 PM > > To: dron at redhat.com; Kaul, Yaniv; Rich Bowen; rdo-list at redhat.com; > > David Kranz; Yaniv Eylon > > Subject: Re: [Rdo-list] Announce: All 2014.1.2 updates now live > > > > On 09/11/2014 11:18 AM, Dafna Ron wrote: > > > Adding David and Yaniv Eylon > > > > > > On 09/11/2014 04:03 PM, Kaul, Yaniv wrote: > > >>> -----Original Message----- > > >>> From: Dafna Ron [mailto:dron at redhat.com] > > >>> Sent: Thursday, September 11, 2014 6:01 PM > > >>> To: Kaul, Yaniv; Rich Bowen; rdo-list at redhat.com; eha >> Eric > > >>> Harney > > >>> Subject: Re: [Rdo-list] Announce: All 2014.1.2 updates now live > > >>> > > >>> Hi Yaniv, > > >>> > > >>> I encountered this when doing cinder upgrades and found the same > > >>> reported by users on line: > > >>> > > >>> https://ask.openstack.org/en/question/28335/you-should-rebuild-usi > > >>> ng > 
> >>> -libgmp- > > >>> > > >>> 5-to-avoid-timing-attack-vulnerability-_warnnot-using-mpz_powm_sec > > >>> -y > > >>> ou- should-rebuild-using-libgmp-5-to-avoid-timing/ > > >> The warning is not the problem. I've lived with it for months. > > >> > > >> "Non-zero exit code (2) from test listing." > > >> > > >> Is the issue. > > >> The advised workaround to downgrade some Python packages did not > > >> work for me (comment 12 @ > > >> https://bugs.launchpad.net/tempest/+bug/1277538 > > >> ) Y. > > >> > > >> > > >> > > >>> There is a bug reported on this as well for this as well in bugzilla. > > >>> > > >>> https://bugzilla.redhat.com/show_bug.cgi?id=1123339 > > >>> > > >>> adding Eric if tempest tests are braking on RDO. > > >>> > > >>> Thanks! > > >>> Dafna > > >>> > > >>> > > >>> On 09/11/2014 03:55 PM, Kaul, Yaniv wrote: > > >>>>> -----Original Message----- > > >>>>> From: rdo-list-bounces at redhat.com > > >>>>> [mailto:rdo-list-bounces at redhat.com] On Behalf Of Rich Bowen > > >>>>> Sent: Thursday, September 11, 2014 4:34 PM > > >>>>> To: rdo-list at redhat.com > > >>>>> Subject: [Rdo-list] Announce: All 2014.1.2 updates now live > > >>>>> > > >>>>> Those of you who were waiting for the 2014.1.2 updates for EL6, > > >>>>> they have emerged successfully from the CI testing process, and > > >>>>> are now available at > > >>>>> https://repos.fedorapeople.org/repos/openstack/openstack-icehous > > >>>>> e/ > > >>>>> epe > > >>>>> l- > > >>>>> 6/?C=M;O=D > > >>>>> > > >>>>> As always, questions and concerns should come back to this list. > > >>>>> We eagerly anticipate your feedback on these packages. > > >>>>> > > >>>>> --Rich > > >>>> I wonder if it can explain the breakage in Tempest tests. 
I have > > >>>> not changed > > >>> anything in my environment or deployment script, yet Tempest fails: > > >>>> /run_tempest.sh -N --serial 'tempest.api.volume*' > > >>>> /usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: > > >>> PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild > > >>> using libgmp >= 5 to avoid timing attack vulnerability. > > >>>> _warn("Not using mpz_powm_sec. You should rebuild using > > >>>> libgmp > > >>>> >= > > >>>> 5 to avoid timing attack vulnerability.", PowmInsecureWarning) > > >>>> Non-zero exit > > >>> code (2) from test listing. > > >>>> ----------------------------------------------------------------- > > >>>> -- > > >>>> --- > > >>>> Ran 0 tests in 5.070s > > >>>> > > >>>> > > >>>> > > > > The "Non-zero exit code (2) from test listing." sounds like testr is > > failing within tempest itself. Tempest should have a log file (or the > > ability to generate one) which will tell you what error actually occurred here. > > > > This means that tempest failed to load up its own tests independently > > of the other OpenStack services, I believe. > > Nothing except DEBUG and INFO there. It's Testrepository somehow. https://bugs.launchpad.net/subunit/+bug/1278539 - which leads to https://bugs.launchpad.net/testrepository/+bug/1271133 ... Not sure what I need to update. Testrepository, subunit, testtools.... Yay. Y. 
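The opaque "Non-zero exit code (2) from test listing" failure mode discussed above can be reproduced in miniature. A minimal sketch follows — the paths and the module name are invented for the demo, and it does not touch tempest itself — showing how a test module that fails to import breaks test discovery/listing, and how running discovery by hand surfaces the real traceback:

```shell
# Minimal reproduction: a test module with a broken import makes test
# discovery fail, and running discovery directly prints the actual
# traceback instead of just a non-zero exit code from the listing step.
rm -rf /tmp/listing_demo && mkdir -p /tmp/listing_demo
printf 'import does_not_exist\n' > /tmp/listing_demo/test_broken.py
cd /tmp/listing_demo
python3 -m unittest discover 2>&1 | grep does_not_exist
# In a tempest checkout the analogous listing command (per .testr.conf)
# would be roughly:  python -m subunit.run discover -t ./ ./tempest --list
```

If discovery by hand succeeds but testr still refuses to list, the testrepository/subunit/testtools version mismatch in the linked Launchpad bugs is the more likely culprit.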
> > The last lines: > 2014-09-11 18:25:45.375 103533 INFO > requests.packages.urllib3.connectionpool [-] Starting new HTTP connection (1): > 10.103.234.141 > 2014-09-11 18:25:45.608 103533 DEBUG > requests.packages.urllib3.connectionpool [-] "GET > /v2/df6db1f49c434f8da86ed97dc19028c5/images HTTP/1.1" 200 2182 > _make_request /usr/lib/python2.6/site- > packages/requests/packages/urllib3/connectionpool.py:362 > 2014-09-11 18:25:45.610 103533 DEBUG novaclient.client [-] RESP: [200] > {'date': 'Thu, 11 Sep 2014 15:25:45 GMT', 'connection': 'keep-alive', 'content- > type': 'application/json', 'content-length': '2182', 'x-compute-request-id': 'req- > fc3d6138-bf9c-4141-b153-2bc1feb1ba2a'} > RESP BODY: {"images": [{"id": "9961d022-a930-4bdb-882d-b75374f71fea", > "links": [{"href": > "http://10.103.234.141:8774/v2/df6db1f49c434f8da86ed97dc19028c5/images/ > 9961d022-a930-4bdb-882d-b75374f71fea", "rel": "self"}, {"href": > "http://10.103.234.141:8774/df6db1f49c434f8da86ed97dc19028c5/images/996 > 1d022-a930-4bdb-882d-b75374f71fea", "rel": "bookmark"}, {"href": > "http://10.103.234.141:9292/df6db1f49c434f8da86ed97dc19028c5/images/996 > 1d022-a930-4bdb-882d-b75374f71fea", "type": > "application/vnd.openstack.image", "rel": "alternate"}], "name": "Fedora 20 > x86_64"}, {"id": "c764b332-c9a3-426b-961e-8e349b749158", "links": [{"href": > "http://10.103.234.141:8774/v2/df6db1f49c434f8da86ed97dc19028c5/images/ > c764b332-c9a3-426b-961e-8e349b749158", "rel": "self"}, {"href": > "http://10.103.234.141:8774/df6db1f49c434f8da86ed97dc19028c5/images/c76 > 4b332-c9a3-426b-961e-8e349b749158", "rel": "bookmark"}, {"href": > "http://10.103.234.141:9292/df6db1f49c434f8da86ed97dc19028c5/images/c76 > 4b332-c9a3-426b-961e! 
> -8e349b749158", "type": "application/vnd.openstack.image", "rel": > "alternate"}], "name": "cirros-0.3.1-x86_64-disk.img_alt"}, {"id": "2c4d3006- > 0c06-4e42-9067-b290afd6e704", "links": [{"href": > "http://10.103.234.141:8774/v2/df6db1f49c434f8da86ed97dc19028c5/images/ > 2c4d3006-0c06-4e42-9067-b290afd6e704", "rel": "self"}, {"href": > "http://10.103.234.141:8774/df6db1f49c434f8da86ed97dc19028c5/images/2c4 > d3006-0c06-4e42-9067-b290afd6e704", "rel": "bookmark"}, {"href": > "http://10.103.234.141:9292/df6db1f49c434f8da86ed97dc19028c5/images/2c4 > d3006-0c06-4e42-9067-b290afd6e704", "type": > "application/vnd.openstack.image", "rel": "alternate"}], "name": "cirros-0.3.1- > x86_64-disk.img"}, {"id": "6441f369-7322-4943-aa57-450c72650dc6", "links": > [{"href": > "http://10.103.234.141:8774/v2/df6db1f49c434f8da86ed97dc19028c5/images/ > 6441f369-7322-4943-aa57-450c72650dc6", "rel": "self"}, {"href": > "http://10.103.234.141:8774/df6db1f49c434f8da86ed97dc19028c5/images/644 > 1f369-7322-4943-aa57-450c72650dc6", "rel! > ": "bookmark"}, {"href": "http://10.103.234.141:9292/df6db1f49c434f8da > 8 > 6ed97dc19028c5/images/6441f369-7322-4943-aa57-450c72650dc6", "type": > "application/vnd.openstack.image", "rel": "alternate"}], "name": "cirros"}]} > http_log_resp /usr/lib/python2.6/site-packages/novaclient/client.py:187 > > Y. 
> > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From dkranz at redhat.com Thu Sep 11 16:03:26 2014 From: dkranz at redhat.com (David Kranz) Date: Thu, 11 Sep 2014 12:03:26 -0400 Subject: [Rdo-list] icehouse-devel branch of redhat-openstack/tempest In-Reply-To: <648473255763364B961A02AC3BE1060D03C584D647@MX19A.corp.emc.com> References: <648473255763364B961A02AC3BE1060D03C584D647@MX19A.corp.emc.com> Message-ID: <5411C7CE.2030103@redhat.com> On 09/11/2014 05:17 AM, Kaul, Yaniv wrote: > > And it seems to be a bit broken on my platform (6.5, IceHouse): > > tools/config_tempest.py --create identity.uri > http://10.103.234.141:5000/v2.0/ identity.admin_username admin > identity.admin_password secret identity.admin_tenant_name admin > > /usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: > PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using > libgmp >= 5 to avoid timing attack vulnerability. > > _warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 > to avoid timing attack vulnerability.", PowmInsecureWarning) > > Traceback (most recent call last): > > File "tools/config_tempest.py", line 31, in > > from tempest.common import api_discovery > > ImportError: No module named tempest.common > icehouse-devel was a branch being used prior to having complete end-to-end testing. The above traceback should be fixed now but we will soon merge icehouse-devel to the icehouse branch. We are creating an rpm for tempest which can be used instead of pulling from git and will be more stable. I'm not sure what is causing the warnings. -David > > *From:*Kaul, Yaniv > *Sent:* Thursday, September 11, 2014 9:19 AM > *To:* rdo-list at redhat.com > *Subject:* icehouse-devel branch of redhat-openstack/tempest > > In https://github.com/redhat-openstack/tempest , seems like the only > activity is on that branch. Is that still IceHouse-compatible? 
Can > anyone enlighten me on what the changes are? > > TIA, > > Y. > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From elias.moreno.tec at gmail.com Fri Sep 12 02:14:55 2014 From: elias.moreno.tec at gmail.com (=?UTF-8?B?RWzDrWFzIERhdmlk?=) Date: Thu, 11 Sep 2014 21:44:55 -0430 Subject: [Rdo-list] read only volumes when glusterfs fails In-Reply-To: References: Message-ID: Thank you JuanFra! I've read the link you posted; however, I'm not completely clear. As far as I know, in the case of a ping timeout, once the server is accessible again a reconnect is issued, the client starts talking to the server and lock tables will be updated; after that, operations will be carried out normally. Since the reconnect is happening and operation is not normal (before rw access -> after ro access) we go to the next point: ext4. My bricks are formatted with xfs; however, when the partition is mounted by cinder on the instance the partition was formatted as ext4. Given that the read-only thing seems to be ext4's way to deal with disk access getting lost, could I avoid this issue if I mount the ext4 partition with 'errors=continue'? What could happen then if I format this same partition as xfs for instance, do you know? 
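On the errors= question: a sketch of what changing the option would look like. The device and mount point below are assumptions (on the guest this would be the cinder-attached volume), and note that errors=continue trades read-only protection for availability — ext4 will keep writing to a filesystem it knows has seen errors — so it is a workaround rather than a fix:

```shell
# Assumed device and mount point. Remount with errors=continue so ext4
# logs I/O errors and carries on instead of flipping to read-only:
mount -o remount,errors=continue /dev/vdb /mnt/data

# Or persistently, as an /etc/fstab entry:
# /dev/vdb  /mnt/data  ext4  defaults,errors=continue  0  2

# The filesystem's current error behaviour can be checked with:
#   tune2fs -l /dev/vdb | grep -i 'errors behavior'
```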
On Thu, Sep 11, 2014 at 9:59 AM, JuanFra Rodriguez Cardoso < juanfra.rodriguez.cardoso at gmail.com> wrote: > Hi Elias: > > This Joe Julian's post may help you to solve that trouble: > > > http://joejulian.name/blog/keeping-your-vms-from-going-read-only-when-encountering-a-ping-timeout-in-glusterfs/ > > Regards, > --- > JuanFra Rodriguez Cardoso > > 2014-09-11 3:21 GMT+02:00 Elías David : >> Hello, >> >> I'm seeing a constant behaviour with my implementation of openstack >> (libvirt/kvm) and cinder using glusterfs and I'm having troubles to find >> the real cause or if it's something not normal at all. >> >> I have configured cinder to use glusterfs as storage backend, the volume >> is a replica 2 of 8 disks in 2 servers and I have several volumes attached >> to several instances provided by cinder. The problem is this, is not >> uncommon that one of the gluster servers reboot suddenly due to power >> failures (this is an infrastructure problem unavoidable right now), when >> this happens the instances start to see the attached volume as read only >> which force me to hard reboot the instance so it can access the volume >> normally again. >> >> Here are my doubts, the gluster volume is created in such a way that not >> a single replica is on the same server as the master, if I lose a server >> due to hardware failure, the other is still usable so I don't really >> understand why couldn't the instances just use the replica brick in case >> that one of the servers reboots. >> >> Also, why the data is still there, can be read but can't be written to in >> case of glusterfs failures? Is this a problem with my implementation? >> configuration error on my part? something known to openstack? a cinder >> thing? libvirt? glusterfs? >> >> Having to hard reboot the instances is not a big issue right now, but >> nevertheless I want to understand what's happening and if I can avoid this >> issue. 
>> >> Some specifics: >> >> GlusterFS version is 3.5 All systems are CentOS 6.5 Openstack version is >> Icehouse installed with packstack/rdo >> >> Thanks in advance! >> >> -- >> Elías David. >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> > -- Elías David. From juanfra.rodriguez.cardoso at gmail.com Fri Sep 12 12:34:31 2014 From: juanfra.rodriguez.cardoso at gmail.com (JuanFra Rodriguez Cardoso) Date: Fri, 12 Sep 2014 14:34:31 +0200 Subject: [Rdo-list] read only volumes when glusterfs fails In-Reply-To: References: Message-ID: Hi Elias: I'd give 'errors=continue' a try in your ext4 mount options. Regards, --- JuanFra Rodriguez Cardoso 2014-09-12 4:14 GMT+02:00 Elías David : > Thank you JuanFra! > > I've read the link you posted however I'm not completely clear. > > As far as I know, in the case of a ping timeout, once the server is > accessible again a reconnect is issued, the client starts talking to the > server and lock tables will be updated, after that, operations will be > carried out normally. > > Since the reconnect is happening and operation is not normal (before rw > access -> after ro access) we go to the next point: ext4. > > My bricks are formatted with xfs, however when the partition is mounted by > cinder on the instance the partition was formatted as ext4. Given that the > read-only thing seems to be ext4's way to deal with disk access getting > lost, could I avoid this issue if I mount the ext4 partition with > 'errors=continue'? What could happen then if I format this same partition as > xfs for instance, do you know? 
> > On Thu, Sep 11, 2014 at 9:59 AM, JuanFra Rodriguez Cardoso > wrote: >> >> Hi Elias: >> >> This Joe Julian's post may help you to solve that trouble: >> >> >> http://joejulian.name/blog/keeping-your-vms-from-going-read-only-when-encountering-a-ping-timeout-in-glusterfs/ >> >> Regards, >> --- >> JuanFra Rodriguez Cardoso >> >> 2014-09-11 3:21 GMT+02:00 Elías David : >>> >>> Hello, >>> >>> I'm seeing a constant behaviour with my implementation of openstack >>> (libvirt/kvm) and cinder using glusterfs and I'm having troubles to find the >>> real cause or if it's something not normal at all. >>> >>> I have configured cinder to use glusterfs as storage backend, the volume >>> is a replica 2 of 8 disks in 2 servers and I have several volumes attached >>> to several instances provided by cinder. The problem is this, is not >>> uncommon that one of the gluster servers reboot suddenly due to power >>> failures (this is an infrastructure problem unavoidable right now), when >>> this happens the instances start to see the attached volume as read only >>> which force me to hard reboot the instance so it can access the volume >>> normally again. >>> >>> Here are my doubts, the gluster volume is created in such a way that not >>> a single replica is on the same server as the master, if I lose a server due >>> to hardware failure, the other is still usable so I don't really understand >>> why couldn't the instances just use the replica brick in case that one of >>> the servers reboots. >>> >>> Also, why the data is still there, can be read but can't be written to in >>> case of glusterfs failures? Is this a problem with my implementation? >>> configuration error on my part? something known to openstack? a cinder >>> thing? libvirt? glusterfs? >>> >>> Having to hard reboot the instances is not a big issue right now, but >>> nevertheless I want to understand what's happening and if I can avoid this >>> issue.
>>> >>> Some specifics: >>> >>> GlusterFS version is 3.5 All systems are CentOS 6.5 Openstack version is >>> Icehouse installed with packstack/rdo >>> >>> Thanks in advance! >>> >>> >>> -- >>> Elías David. >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >> > > > > -- > Elías David. From rbowen at redhat.com Fri Sep 12 14:58:13 2014 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 12 Sep 2014 10:58:13 -0400 Subject: [Rdo-list] mysqld failure on --allinone, centos7 In-Reply-To: <53BBF488.5020501@redhat.com> References: <53BBF488.5020501@redhat.com> Message-ID: <54130A05.1010704@redhat.com> On 07/08/2014 09:39 AM, Rich Bowen wrote: > I'm running `packstack --allinone` on a fresh install of the new > CentOS7, and I'm getting a failure at: > > 192.168.0.176_mysql.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 192.168.0.176_mysql.pp > Error: Could not enable mysqld: > You will find full trace in log > /var/tmp/packstack/20140708-092703-ZMkytw/manifests/192.168.0.176_mysql.pp.log > Please check log file > /var/tmp/packstack/20140708-092703-ZMkytw/openstack-setup.log for more > information > I'm encountering this problem again. It appears that the very best way to work around it right now is to tell packstack that it's Fedora 20, rather than CentOS7. ie, replace contents of /etc/redhat-release with "Fedora release 20 (Heisenbug)" and rerun packstack.
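Spelled out as commands, that masquerade workaround looks like the sketch below. It is shown against a scratch file so it can be dry-run safely; on a real system the file is /etc/redhat-release, every step needs root, and the "CentOS Linux release" string here is only a stand-in for whatever the box actually contains — keep a copy so the original can be restored after packstack finishes:

```shell
# Scratch copy standing in for /etc/redhat-release.
release=./redhat-release.demo
echo 'CentOS Linux release 7.0.1406 (Core)' > "$release"   # stand-in contents
cp "$release" "$release.orig"                              # keep a restore copy

# Masquerade as Fedora 20 so packstack takes the Fedora code path.
echo 'Fedora release 20 (Heisenbug)' > "$release"
cat "$release"

# ...run `packstack --allinone` here, then put the real contents back:
cp "$release.orig" "$release"
cat "$release"
```

This is a stopgap, not a fix: lying to OS detection can mislead any other tool that reads the same file, so restoring the original contents immediately afterwards matters.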
I've been digging for tickets, and I've found a number of related ones: https://bugzilla.redhat.com/show_bug.cgi?id=1065977 https://bugzilla.redhat.com/show_bug.cgi?id=1066112 https://bugzilla.redhat.com/show_bug.cgi?id=1061152 https://bugzilla.redhat.com/show_bug.cgi?id=1061045 Possibly related https://bugzilla.redhat.com/show_bug.cgi?id=981116 (Closed) What I'm wondering is if these are in fact different facets of the same issue, or if they're really different things. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From gchamoul at redhat.com Fri Sep 12 15:17:50 2014 From: gchamoul at redhat.com (Gael Chamoulaud) Date: Fri, 12 Sep 2014 11:17:50 -0400 (EDT) Subject: [Rdo-list] =?utf-8?q?mysqld_failure_on_--allinone=2C_centos7?= In-Reply-To: <54130A05.1010704@redhat.com> References: <53BBF488.5020501@redhat.com> <54130A05.1010704@redhat.com> Message-ID: <292810143.21626450.1410535070870.JavaMail.zimbra@zmail12.collab.prod.int.phx2.redhat.com> Rich, The last openstack-puppet-modules build [1] fixes this. It should be available from RDO icehouse soon. [1] - https://kojipkgs.fedoraproject.org//work/tasks/4866/7564866/openstack-puppet-modules-2014.1-23.el7.noarch.rpm Cheers, Gaël.
Sent from my Android phone using TouchDown (www.nitrodesk.com) -----Original Message----- From: Rich Bowen [rbowen at redhat.com] Received: Friday, 12 Sep 2014, 4:58PM To: rdo-list at redhat.com Subject: Re: [Rdo-list] mysqld failure on --allinone, centos7 On 07/08/2014 09:39 AM, Rich Bowen wrote: > I'm running `packstack --allinone` on a fresh install of the new > CentOS7, and I'm getting a failure at: > > 192.168.0.176_mysql.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 192.168.0.176_mysql.pp > Error: Could not enable mysqld: > You will find full trace in log > /var/tmp/packstack/20140708-092703-ZMkytw/manifests/192.168.0.176_mysql.pp.log > Please check log file > /var/tmp/packstack/20140708-092703-ZMkytw/openstack-setup.log for more > information > I'm encountering this problem again. It appears that the very best way to work around it right now is to tell packstack that it's Fedora 20, rather than CentOS7. ie, replace contents of /etc/redhat-release with "Fedora release 20 (Heisenbug)" and rerun packstack. I've been digging for tickets, and I've found a number of related ones: https://bugzilla.redhat.com/show_bug.cgi?id=1065977 https://bugzilla.redhat.com/show_bug.cgi?id=1066112 https://bugzilla.redhat.com/show_bug.cgi?id=1061152 https://bugzilla.redhat.com/show_bug.cgi?id=1061045 Possibly related https://bugzilla.redhat.com/show_bug.cgi?id=981116 (Closed) What I'm wondering is if these are in fact different facets of the same issue, or if they're really different things. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rbowen at redhat.com Fri Sep 12 15:20:09 2014 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 12 Sep 2014 11:20:09 -0400 Subject: [Rdo-list] mysqld failure on --allinone, centos7 In-Reply-To: <292810143.21626450.1410535070870.JavaMail.zimbra@zmail12.collab.prod.int.phx2.redhat.com> References: <53BBF488.5020501@redhat.com> <54130A05.1010704@redhat.com> <292810143.21626450.1410535070870.JavaMail.zimbra@zmail12.collab.prod.int.phx2.redhat.com> Message-ID: <54130F29.1090003@redhat.com> On 09/12/2014 11:17 AM, Gael Chamoulaud wrote: > Rich, > > The last openstack-puppet-modules build [1] fixes this. > It should available from RDO icehouse soon. > > [1] - > https://kojipkgs.fedoraproject.org//work/tasks/4866/7564866/openstack-puppet-modules-2014.1-23.el7.noarch.rpm > > Thanks. My mistake. I've had this message in draft for a while and hadn't sent it because I knew you were working on that. Looks like I went ahead and sent it. --Rich > Cheers, > Gaël. > > > Sent from my Android phone using TouchDown (www.nitrodesk.com) > > > -----Original Message----- > From: Rich Bowen [rbowen at redhat.com] > Received: Friday, 12 Sep 2014, 4:58PM > To: rdo-list at redhat.com > Subject: Re: [Rdo-list] mysqld failure on --allinone, centos7 > > > > On 07/08/2014 09:39 AM, Rich Bowen wrote: > > I'm running `packstack --allinone` on a fresh install of the new > > CentOS7, and I'm getting a failure at: > > > > 192.168.0.176_mysql.pp: [ ERROR ] > > Applying Puppet manifests [ ERROR ] > > > > ERROR : Error appeared during Puppet run: 192.168.0.176_mysql.pp > > Error: Could not enable mysqld: > > You will find full trace in log > > > /var/tmp/packstack/20140708-092703-ZMkytw/manifests/192.168.0.176_mysql.pp.log > > Please check log file > > /var/tmp/packstack/20140708-092703-ZMkytw/openstack-setup.log for more > > information > > > > I'm encountering this problem again.
It appears that the very best way > to work around it right now is to tell packstack that it's Fedora 20, > rather than CentOS7. ie, replace contents of /etc/redhat-release with > "Fedora release 20 (Heisenbug)" and rerun packstack. > > I've been digging for tickets, and I've found a number of related ones: > > https://bugzilla.redhat.com/show_bug.cgi?id=1065977 > https://bugzilla.redhat.com/show_bug.cgi?id=1066112 > https://bugzilla.redhat.com/show_bug.cgi?id=1061152 > https://bugzilla.redhat.com/show_bug.cgi?id=1061045 > Possibly related https://bugzilla.redhat.com/show_bug.cgi?id=981116 > (Closed) > > What I'm wondering is if these are in fact different facets of the same > issue, or if they're really different things. > > > -- > Rich Bowen - rbowen at redhat.com > OpenStack Community Liaison > http://openstack.redhat.com/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From ichavero at redhat.com Fri Sep 12 18:14:29 2014 From: ichavero at redhat.com (Ivan Chavero) Date: Fri, 12 Sep 2014 14:14:29 -0400 (EDT) Subject: [Rdo-list] mysqld failure on --allinone, centos7 In-Reply-To: <54130A05.1010704@redhat.com> References: <53BBF488.5020501@redhat.com> <54130A05.1010704@redhat.com> Message-ID: <1744366902.37376141.1410545669163.JavaMail.zimbra@redhat.com> which packstack and openstack puppet modules versions are you using?
we recently added more centos support on the latest packages Cheers, Ivan ----- Original Message ----- > From: "Rich Bowen" > To: rdo-list at redhat.com > Sent: Friday, 12 September 2014 8:58:13 > Subject: Re: [Rdo-list] mysqld failure on --allinone, centos7 > > > On 07/08/2014 09:39 AM, Rich Bowen wrote: > > I'm running `packstack --allinone` on a fresh install of the new > > CentOS7, and I'm getting a failure at: > > > > 192.168.0.176_mysql.pp: [ ERROR ] > > Applying Puppet manifests [ ERROR ] > > > > ERROR : Error appeared during Puppet run: 192.168.0.176_mysql.pp > > Error: Could not enable mysqld: > > You will find full trace in log > > /var/tmp/packstack/20140708-092703-ZMkytw/manifests/192.168.0.176_mysql.pp.log > > Please check log file > > /var/tmp/packstack/20140708-092703-ZMkytw/openstack-setup.log for more > > information > > > > I'm encountering this problem again. It appears that the very best way > to work around it right now is to tell packstack that it's Fedora 20, > rather than CentOS7. ie, replace contents of /etc/redhat-release with > "Fedora release 20 (Heisenbug)" and rerun packstack. > > I've been digging for tickets, and I've found a number of related ones: > > https://bugzilla.redhat.com/show_bug.cgi?id=1065977 > https://bugzilla.redhat.com/show_bug.cgi?id=1066112 > https://bugzilla.redhat.com/show_bug.cgi?id=1061152 > https://bugzilla.redhat.com/show_bug.cgi?id=1061045 > Possibly related https://bugzilla.redhat.com/show_bug.cgi?id=981116 (Closed) > > What I'm wondering is if these are in fact different facets of the same > issue, or if they're really different things.
> > > -- > Rich Bowen - rbowen at redhat.com > OpenStack Community Liaison > http://openstack.redhat.com/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > From rbowen at redhat.com Fri Sep 12 18:18:52 2014 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 12 Sep 2014 14:18:52 -0400 Subject: [Rdo-list] mysqld failure on --allinone, centos7 In-Reply-To: <1744366902.37376141.1410545669163.JavaMail.zimbra@redhat.com> References: <53BBF488.5020501@redhat.com> <54130A05.1010704@redhat.com> <1744366902.37376141.1410545669163.JavaMail.zimbra@redhat.com> Message-ID: <5413390C.2060405@redhat.com> On 09/12/2014 02:14 PM, Ivan Chavero wrote: > which packstack and openstack puppet modules versions are you using? > we recently added more centos support on the latest packages Gaël mentioned in his response that the issue I'm experiencing will be resolved on CentOS7 real soon, so I'll just hang in there and try again in a few days. As I mentioned in my earlier response, I sent a message from my draft folder in error - I knew that Gaël was working on this fix.
But, for whatever it's worth: openstack-packstack-2014.1.1-0.28.dev1238.el7.noarch openstack-packstack-puppet-2014.1.1-0.28.dev1238.el7.noarch > Cheers, > Ivan > > ----- Original Message ----- >> From: "Rich Bowen" >> To: rdo-list at redhat.com >> Sent: Friday, 12 September 2014 8:58:13 >> Subject: Re: [Rdo-list] mysqld failure on --allinone, centos7 >> >> >> On 07/08/2014 09:39 AM, Rich Bowen wrote: >>> I'm running `packstack --allinone` on a fresh install of the new >>> CentOS7, and I'm getting a failure at: >>> >>> 192.168.0.176_mysql.pp: [ ERROR ] >>> Applying Puppet manifests [ ERROR ] >>> >>> ERROR : Error appeared during Puppet run: 192.168.0.176_mysql.pp >>> Error: Could not enable mysqld: >>> You will find full trace in log >>> /var/tmp/packstack/20140708-092703-ZMkytw/manifests/192.168.0.176_mysql.pp.log >>> Please check log file >>> /var/tmp/packstack/20140708-092703-ZMkytw/openstack-setup.log for more >>> information >>> >> I'm encountering this problem again. It appears that the very best way >> to work around it right now is to tell packstack that it's Fedora 20, >> rather than CentOS7. ie, replace contents of /etc/redhat-release with >> "Fedora release 20 (Heisenbug)" and rerun packstack. >> >> I've been digging for tickets, and I've found a number of related ones: >> >> https://bugzilla.redhat.com/show_bug.cgi?id=1065977 >> https://bugzilla.redhat.com/show_bug.cgi?id=1066112 >> https://bugzilla.redhat.com/show_bug.cgi?id=1061152 >> https://bugzilla.redhat.com/show_bug.cgi?id=1061045 >> Possibly related https://bugzilla.redhat.com/show_bug.cgi?id=981116 (Closed) >> >> What I'm wondering is if these are in fact different facets of the same >> issue, or if they're really different things.
>> >> -- >> Rich Bowen - rbowen at redhat.com >> OpenStack Community Liaison >> http://openstack.redhat.com/ >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From benjamin.ernst.lipp at cern.ch Sun Sep 14 14:51:09 2014 From: benjamin.ernst.lipp at cern.ch (Benjamin Lipp) Date: Sun, 14 Sep 2014 16:51:09 +0200 Subject: [Rdo-list] =?utf-8?b?4oCcR2V0IGludm9sdmVk4oCdIHBhZ2Ugb24gb3BlbnN0?= =?utf-8?q?ack=2Eredhat=2Ecom?= In-Reply-To: <5405C2AE.4090209@redhat.com> References: <5405EC34.6010305@cern.ch> <5405C2AE.4090209@redhat.com> Message-ID: <5415AB5D.6030203@cern.ch> Hi, On 02.09.2014 15:14, Rich Bowen wrote: > > On 09/02/2014 12:11 PM, Benjamin Lipp wrote: >> >> Finally, I found everything like Bugtracker, Gerrit, … on >> https://wiki.openstack.org/wiki/Packstack >> >> Maybe you might want to update https://openstack.redhat.com/Get_involved >> with this information (I just checked, I don't have edit rights). > > The page was set as protected, and I've fixed that. You should be able > to edit now if you want. Thanks, @Rich. I like the changes you made on the wiki page. I just made some additional changes on * https://openstack.redhat.com/Get_involved * https://openstack.redhat.com/Adding_new_content * https://openstack.redhat.com/Help:Editing Feel free to modify them if it's not like you want it to be. >> On https://github.com/stackforge/packstack a link to >> https://wiki.openstack.org/wiki/Packstack or >> https://openstack.redhat.com/Get_involved might be useful, because right >> now someone ending up on this Github repository will be lost on a dead >> end. >> >> You mention different IRC channels on >> https://wiki.openstack.org/wiki/Packstack and >> https://openstack.redhat.com/Get_involved as well.
>> > > I'll try to get these changes made today if you don't beat me to it. Thanks, I think it's much better now! Kind regards, Benjamin From benjamin.ernst.lipp at cern.ch Sun Sep 14 15:41:46 2014 From: benjamin.ernst.lipp at cern.ch (Benjamin Lipp) Date: Sun, 14 Sep 2014 17:41:46 +0200 Subject: [Rdo-list] Make obvious the forum moved to ask.openstack.org Message-ID: <5415B73A.8090301@cern.ch> Hi, the move of the forum has been decided almost a year ago, see [1]. I think it's time to adapt all the wiki pages accordingly so people don't waste time trying to find out how to post in this forum. I just adapted * https://openstack.redhat.com/Get_involved and * https://openstack.redhat.com/Frequently_Asked_Questions What is left, please take care of that: * https://openstack.redhat.com/Frequently_Asked_Questions : "Users of OpenStack on Fedora are welcome to participate in the Red Hat OpenStack community forums on openstack.redhat.com […]" What to do with this? Of course they can join ask.openstack, but it's not RDO to decide on that because ask.openstack is for everyone. Thus it would sound strange to say they are welcome to ask.openstack. * https://openstack.redhat.com/Main_Page : The main page is not editable, which is good, so please adapt the section "Introducing RDO". I propose to replace the current link to the old forum by [[Get involved#ask.openstack|forums on ask.openstack]]. * Maybe it would be a good thing to include a hint on top of every page of the old forum, excluding the pages belonging to the blog of course, like: "The forum has been moved to ask.openstack, see this post [1] and this wiki page [2] for more information".
Kind regards, Benjamin [1] https://openstack.redhat.com/forum/discussion/935/rdo-forum-moving-to-ask-openstack-org-and-the-path-forward/p1 [2] https://openstack.redhat.com/Get_involved#ask.openstack From sgordon at redhat.com Mon Sep 15 03:16:33 2014 From: sgordon at redhat.com (Steve Gordon) Date: Sun, 14 Sep 2014 23:16:33 -0400 (EDT) Subject: [Rdo-list] Issues with sysctl.conf settings on CentOS 6? In-Reply-To: <1639737601.13668351.1410750804552.JavaMail.zimbra@redhat.com> Message-ID: <432029146.13668674.1410750993453.JavaMail.zimbra@redhat.com> Hi all, Running packstack --allinone on a freshly installed and updated CentOS 6.5 system I encountered this error with sysctl.conf: """ Applying 192.168.122.152_neutron.pp 192.168.122.152_neutron.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.168.122.152_neutron.pp Error: sysctl -p /etc/sysctl.conf returned 255 instead of one of [0] You will find full trace in log /var/tmp/packstack/20140914-225857-OebbrQ/manifests/192.168.122.152_neutron.pp.log Please check log file /var/tmp/packstack/20140914-225857-OebbrQ/openstack-setup.log for more information """ Running `sysctl -p /etc/sysctl.conf` myself I receive: """ # sysctl -p /etc/sysctl.conf net.ipv4.ip_forward = 1 net.ipv4.conf.default.rp_filter = 1 net.ipv4.conf.default.accept_source_route = 0 kernel.sysrq = 0 kernel.core_uses_pid = 1 net.ipv4.tcp_syncookies = 1 error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key error: "net.bridge.bridge-nf-call-iptables" is an unknown key error: "net.bridge.bridge-nf-call-arptables" is an unknown key kernel.msgmnb = 65536 kernel.msgmax = 65536 kernel.shmmax = 68719476736 kernel.shmall = 4294967296 # echo $? 255 """ Removing the errant lines and re-running PackStack it just adds them back and thus fails for the same reason. I couldn't find another RDO bug covering this issue, has anyone else run into it? 
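One likely reason those three keys are "unknown": the net.bridge.* sysctls only exist while the kernel's bridge code is loaded, so `sysctl -p` fails on a host that has not loaded the bridge module yet. A diagnostic sketch follows; the module name is an assumption that varies by kernel (`bridge` on EL6, `br_netfilter` on later kernels), and this probe only reports state rather than changing anything:

```shell
# Report which of the bridge netfilter sysctls are currently visible.
# If they are missing, loading the module as root (`modprobe bridge` on
# EL6, `modprobe br_netfilter` on newer kernels) should make them appear,
# after which `sysctl -p /etc/sysctl.conf` can exit 0.
check_bridge_keys() {
    for key in net.bridge.bridge-nf-call-iptables \
               net.bridge.bridge-nf-call-ip6tables \
               net.bridge.bridge-nf-call-arptables; do
        if sysctl -n "$key" >/dev/null 2>&1; then
            echo "$key: present"
        else
            echo "$key: MISSING (bridge module not loaded?)"
        fi
    done
}
check_bridge_keys
```

This is only a way to see the symptom clearly; the durable fix is the packstack-side change tracked in the bugs referenced in this thread.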
Logs are attached to https://bugzilla.redhat.com/show_bug.cgi?id=1141608 Thanks, Steve From gchamoul at redhat.com Mon Sep 15 07:30:51 2014 From: gchamoul at redhat.com (=?iso-8859-1?Q?Ga=EBl?= Chamoulaud) Date: Mon, 15 Sep 2014 09:30:51 +0200 Subject: [Rdo-list] Issues with sysctl.conf settings on CentOS 6? In-Reply-To: <432029146.13668674.1410750993453.JavaMail.zimbra@redhat.com> References: <1639737601.13668351.1410750804552.JavaMail.zimbra@redhat.com> <432029146.13668674.1410750993453.JavaMail.zimbra@redhat.com> Message-ID: <20140915073051.GA9752@strider.cdg.redhat.com> On 14/Sep/2014 @ 23:16, Steve Gordon wrote: > Hi all, > > Running packstack --allinone on a freshly installed and updated CentOS 6.5 system I encountered this error with sysctl.conf: > > """ > Applying 192.168.122.152_neutron.pp > 192.168.122.152_neutron.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 192.168.122.152_neutron.pp > Error: sysctl -p /etc/sysctl.conf returned 255 instead of one of [0] > You will find full trace in log /var/tmp/packstack/20140914-225857-OebbrQ/manifests/192.168.122.152_neutron.pp.log > Please check log file /var/tmp/packstack/20140914-225857-OebbrQ/openstack-setup.log for more information > """ > > Running `sysctl -p /etc/sysctl.conf` myself I receive: > > """ > # sysctl -p /etc/sysctl.conf > net.ipv4.ip_forward = 1 > net.ipv4.conf.default.rp_filter = 1 > net.ipv4.conf.default.accept_source_route = 0 > kernel.sysrq = 0 > kernel.core_uses_pid = 1 > net.ipv4.tcp_syncookies = 1 > error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key > error: "net.bridge.bridge-nf-call-iptables" is an unknown key > error: "net.bridge.bridge-nf-call-arptables" is an unknown key > kernel.msgmnb = 65536 > kernel.msgmax = 65536 > kernel.shmmax = 68719476736 > kernel.shmall = 4294967296 > # echo $? > 255 > """ > > Removing the errant lines and re-running PackStack it just adds them back and thus fails for the same reason. 
I couldn't find another RDO bug covering this issue, has anyone else run into it? > > Logs are attached to https://bugzilla.redhat.com/show_bug.cgi?id=1141608 Hi Steve, That issue has been already fixed in packstack [1] and will be present in the next build for rdo-icehouse-epel6 asap. [1] - https://bugzilla.redhat.com/show_bug.cgi?id=1132129 Cheers, -- Gaël Chamoulaud Openstack IRC: strider/gchamoul (redhat), gchamoul (Freenode) GnuPG Key ID: 7F4B301 C75F 15C2 A7FD EBC3 7B2D CE41 0077 6A4B A7F4 B301 Freedom...Courage...Commitment...Accountability -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From rbowen at redhat.com Mon Sep 15 13:18:55 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 15 Sep 2014 09:18:55 -0400 Subject: [Rdo-list] Make obvious the forum moved to ask.openstack.org In-Reply-To: <5415B73A.8090301@cern.ch> References: <5415B73A.8090301@cern.ch> Message-ID: <5416E73F.7050100@redhat.com> Thanks for pointing these out. I thought I'd caught everything in the wiki already. --Rich On 09/14/2014 11:41 AM, Benjamin Lipp wrote: > Hi, > > the move of the forum has been decided almost a year ago, see [1]. I > think it's time to adapt all the wiki pages accordingly so people don't > waste time trying to find out how to post in this forum. > > I just adapted > * https://openstack.redhat.com/Get_involved and > * https://openstack.redhat.com/Frequently_Asked_Questions > > What is left, please take care of that: > * https://openstack.redhat.com/Frequently_Asked_Questions : > "Users of OpenStack on Fedora are welcome to participate in the Red Hat > OpenStack community forums on openstack.redhat.com […]" > What to do with this? Of course they can join ask.openstack, but it's > not RDO to decide on that because ask.openstack is for everyone. Thus it > would sound strange to say they are welcome to ask.openstack.
> > * https://openstack.redhat.com/Main_Page : > The main page is not editable, which is good, so please adapt the > section "Introducing RDO". I propose to replace the current link to the > old forum by [[Get involved#ask.openstack|forums on ask.openstack]]. > > * Maybe it would be a good thing to include a hint on top of every page > of the old forum, excluding the pages belonging to the blog of course, like: > "The forum has been moved to ask.openstack, see this post [1] and this > wiki page [2] for more information". > > > Kind regards, > Benjamin > > > [1] > https://openstack.redhat.com/forum/discussion/935/rdo-forum-moving-to-ask-openstack-org-and-the-path-forward/p1 > [2] https://openstack.redhat.com/Get_involved#ask.openstack > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rbowen at redhat.com Mon Sep 15 13:31:33 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 15 Sep 2014 09:31:33 -0400 Subject: [Rdo-list] =?utf-8?b?4oCcR2V0IGludm9sdmVk4oCdIHBhZ2Ugb24gb3BlbnN0?= =?utf-8?q?ack=2Eredhat=2Ecom?= In-Reply-To: <5415AB5D.6030203@cern.ch> References: <5405EC34.6010305@cern.ch> <5405C2AE.4090209@redhat.com> <5415AB5D.6030203@cern.ch> Message-ID: <5416EA35.9070506@redhat.com> Thanks, Benjamin, it's hugely appreciated. --Rich On 09/14/2014 10:51 AM, Benjamin Lipp wrote: > Hi, > > On 02.09.2014 15:14, Rich Bowen wrote: >> On 09/02/2014 12:11 PM, Benjamin Lipp wrote: >>> Finally, I found everything like Bugtracker, Gerrit, … on >>> https://wiki.openstack.org/wiki/Packstack >>> >>> Maybe you might want to update https://openstack.redhat.com/Get_involved >>> with this information (I just checked, I don't have edit rights). >> The page was set as protected, and I've fixed that. You should be able >> to edit now if you want. > > Thanks, @Rich.
I like the changes you made on the wiki page. I just made > some additional changes on > * https://openstack.redhat.com/Get_involved > * https://openstack.redhat.com/Adding_new_content > * https://openstack.redhat.com/Help:Editing > > Feel free to modify them if it's not like you want it to be. > > >>> On https://github.com/stackforge/packstack a link to >>> https://wiki.openstack.org/wiki/Packstack or >>> https://openstack.redhat.com/Get_involved might be useful, because right >>> now someone ending up on this Github repository will be lost on a dead >>> end. >>> >>> You mention different IRC channels on >>> https://wiki.openstack.org/wiki/Packstack and >>> https://openstack.redhat.com/Get_involved as well. >>> >> I'll try to get these changes made today if you don't beat me to it. > Thanks, I think it's much better now! > > Kind regards, > Benjamin > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rdo-info at redhat.com Mon Sep 15 16:19:47 2014 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 15 Sep 2014 16:19:47 +0000 Subject: [Rdo-list] [RDO] Blog Roundup, week of September 8, 2014 Message-ID: <000001487a1ce545-0cce6af3-ec83-472c-8c54-017f10d4ceb8-000000@email.amazonses.com> rbowen started a discussion. Blog Roundup, week of September 8, 2014 --- Follow the link below to check it out: https://openstack.redhat.com/forum/discussion/984/blog-roundup-week-of-september-8-2014 Have a great day! From rbowen at redhat.com Mon Sep 15 19:27:56 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 15 Sep 2014 15:27:56 -0400 Subject: [Rdo-list] RDO-related Meetups in the coming week (September 15, 2014) Message-ID: <54173DBC.8000106@redhat.com> The following are the meetups I'm aware of in the coming week where RDO enthusiasts will be gathering. 
If you know of others, please do add them to http://openstack.redhat.com/Events If you attend any of these meetups, please take pictures, and send me some. If you blog about the events (and you should), please send me that, too. * OpenStack WorkShop by UnitedStack, Tuesday, September 16th, Shanghai - http://www.meetup.com/China-OpenStack-User-Group/events/203178112/ * Red Hat User Group inaugural happy hour, Wednesday, September 17th, Seattle - http://www.meetup.com/Seattle-Red-Hat-Enterprise-Linux-User-Group/ * All about Open Source with The Fedora Project's Robyn Bergeron, Wednesday, September 17th, Tempe AZ - http://www.meetup.com/PhoenixRedHatSoftwareUserGroup/events/202336622/ (Ok, so, nothing to do with RDO or OpenStack, but Robyn is well worth going to hear if you're in the area.) * Taking the Long View: How the Oslo Program Reduces Technical Debt, Thursday, September 18th, Atlanta, GA - http://www.meetup.com/openstack-atlanta/events/199573312/ * OpenStack Conference Benelux, Friday, September 19th, Bossum - http://www.meetup.com/Openstack-Netherlands/events/181447772/ * Tech Day - Rochester, NY - RHCI & JBoss Middleware, Tuesday, September 23, West Henrietta, NY - http://www.meetup.com/RedHatTechDay/events/204676922/ (Will include discussion of using CloudForms to orchestrate self-service provisioning of virtual machines to private and public clouds based on technologies like Openstack) -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From whayutin at redhat.com Mon Sep 15 23:45:21 2014 From: whayutin at redhat.com (whayutin) Date: Mon, 15 Sep 2014 19:45:21 -0400 Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) Message-ID: <1410824721.2971.2.camel@localhost.localdomain> Greetings, You should now have success installing RDO Icehouse on CentOS-7.0 using the quickstart instructions.
https://openstack.redhat.com/Quickstart Thank you :) From gilles at redhat.com Tue Sep 16 01:54:23 2014 From: gilles at redhat.com (Gilles Dubreuil) Date: Tue, 16 Sep 2014 11:54:23 +1000 Subject: [Rdo-list] Getting the CentOS-7 fix's rolled into rdo In-Reply-To: <1409772341.2983.17.camel@localhost.localdomain> References: <5401A72E.6050604@karan.org> <1409772341.2983.17.camel@localhost.localdomain> Message-ID: <5417984F.1080404@redhat.com> On 04/09/14 05:25, whayutin wrote: > On Sat, 2014-08-30 at 11:27 +0100, Karanbir Singh wrote: >> hi guys, >> >> Is there a timeline on when we can expect the CentOS-7 workarounds to >> get rolled into packstack/rdo itself ? >> > > > Prior to running the packstack installer we run > the following three workarounds. > https://github.com/redhat-openstack/khaleesi/blob/master/workarounds/workarounds-pre-run-packstack.yml#L41 > code is here: > https://github.com/redhat-openstack/khaleesi/tree/master/roles/workarounds > > > Install is here.. > https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-icehouse-production-centos-70-aio-packstack-neutron-gre-rabbitmq/ > > CI notes/docs here.. > https://prod-rdojenkins.rhcloud.com/ > > Thanks > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > Hi, A quick update to mention a working installation with latest Centos7 for RDO/Icehouse [1] of Packstack All-In-One. 
The updates involve [2] and [3] [1] https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-7/ [2] openstack-packstack-puppet-2014.1.1-0.28.dev1238.el7.noarch.rpm [3] openstack-puppet-modules-2014.1-23.el7.src.rpm Regards, Gilles From Yaniv.Kaul at emc.com Tue Sep 16 08:08:43 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Tue, 16 Sep 2014 04:08:43 -0400 Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) In-Reply-To: <1410824721.2971.2.camel@localhost.localdomain> References: <1410824721.2971.2.camel@localhost.localdomain> Message-ID: <648473255763364B961A02AC3BE1060D03C59F43CF@MX19A.corp.emc.com> > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > Behalf Of whayutin > Sent: Tuesday, September 16, 2014 2:45 AM > To: rdo-list at redhat.com > Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) > > Greetings, > > You should now have success installing RDO Icehouse on CentOS-7.0 using the > quickstart instructions. Failed, with the new RPM (and without it, so perhaps it's something else): ... Copying Puppet modules and manifests [ DONE ] Applying 10.103.234.139_prescript.pp 10.103.234.139_prescript.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 10.103.234.139_prescript.pp Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-selinux' returned 1: Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch (openstack-icehouse) You will find full trace in log /var/tmp/packstack/20140916-095516-ANJtSb/manifests/10.103.234.139_prescript.pp.log Please check log file /var/tmp/packstack/20140916-095516-ANJtSb/openstack-setup.log for more information Additional information: * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* Did not create a cinder volume group, one already existed * File /root/keystonerc_admin has been created on OpenStack client host 10.103.234.139. To use the command line tools you need to source the file. * To access the OpenStack Dashboard browse to http://10.103.234.139/dashboard . Please, find your login credentials stored in the keystonerc_admin in your home directory. The error in the log above points to: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-selinux' returned 1: Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch (openstack-icehouse) You will find full trace in log /var/tmp/packstack/20140916-095516-ANJtSb/manifests/10.103.234.139_prescript.pp.log > > https://openstack.redhat.com/Quickstart > > Thank you :) > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From Yaniv.Kaul at emc.com Tue Sep 16 08:31:04 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Tue, 16 Sep 2014 04:31:04 -0400 Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) In-Reply-To: <648473255763364B961A02AC3BE1060D03C59F43CF@MX19A.corp.emc.com> References: <1410824721.2971.2.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F43CF@MX19A.corp.emc.com> Message-ID: <648473255763364B961A02AC3BE1060D03C59F43DA@MX19A.corp.emc.com> > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > Behalf Of Kaul, Yaniv > Sent: Tuesday, September 16, 2014 11:09 AM > To: rdo-list at redhat.com > Subject: Re: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > -----Original Message----- > > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] > > On Behalf Of whayutin > > Sent: Tuesday, September 16, 2014 2:45 AM > > To: rdo-list at redhat.com > > Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > > Greetings, > > > > You should now have success installing RDO Icehouse on CentOS-7.0 
> > using the quickstart instructions. > > Failed, with the new RPM (and without it, so perhaps it's something else): > ... > Copying Puppet modules and manifests [ DONE ] > Applying 10.103.234.139_prescript.pp > 10.103.234.139_prescript.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 10.103.234.139_prescript.pp > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-selinux' returned > 1: Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch (openstack- > icehouse) You will find full trace in log /var/tmp/packstack/20140916-095516- > ANJtSb/manifests/10.103.234.139_prescript.pp.log > Please check log file /var/tmp/packstack/20140916-095516-ANJtSb/openstack- > setup.log for more information > > Additional information: > * Time synchronization installation was skipped. Please note that > unsynchronized time on server instances might be problem for some OpenStack > components. > * Did not create a cinder volume group, one already existed > * File /root/keystonerc_admin has been created on OpenStack client host > 10.103.234.139. To use the command line tools you need to source the file. > * To access the OpenStack Dashboard browse to > http://10.103.234.139/dashboard . > Please, find your login credentials stored in the keystonerc_admin in your home > directory. 
> > > The error in the log above points to: > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-selinux' returned > 1: Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch (openstack- > icehouse) You will find full trace in log /var/tmp/packstack/20140916-095516- > ANJtSb/manifests/10.103.234.139_prescript.pp.log And manually trying: Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch (openstack-icehouse) Requires: selinux-policy-targeted >= 3.12.1-153.el7_0.10 Installed: selinux-policy-targeted-3.12.1-153.el7.noarch (@centos7-x86_64) selinux-policy-targeted = 3.12.1-153.el7 Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch (openstack-icehouse) Requires: selinux-policy-base >= 3.12.1-153.el7_0.10 Installed: selinux-policy-targeted-3.12.1-153.el7.noarch (@centos7-x86_64) selinux-policy-base = 3.12.1-153.el7 Available: selinux-policy-minimum-3.12.1-153.el7.noarch (centos7-x86_64) selinux-policy-base = 3.12.1-153.el7 Available: selinux-policy-mls-3.12.1-153.el7.noarch (centos7-x86_64) selinux-policy-base = 3.12.1-153.el7 I manually found and installed the required packages. Perhaps it was not propagated to mirrors yet (as I failed to download it from multiple mirrors). It continued, then failed on: 10.103.234.139_nova.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 10.103.234.139_nova.pp Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-nova-compute' returned 1: Error: Package: gnutls-utils-3.1.18-8.el7.x86_64 (centos7-x86_64) I'll check my repos. Y. 
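(For anyone decoding the failed dependency by hand: the install fails because the installed selinux-policy-targeted, 3.12.1-153.el7, orders older than the required 3.12.1-153.el7_0.10. A rough sketch of that ordering check with `sort -V` follows; this is an approximation only, since `rpmdev-vercmp` from the rpmdevtools package implements the real RPM comparison. The version strings are copied from the error above.)

```shell
# Approximate RPM version ordering with sort -V (illustrative sketch only;
# rpmdev-vercmp is the authoritative comparator for RPM version strings).
installed="3.12.1-153.el7"
required="3.12.1-153.el7_0.10"
oldest=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -n1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$required" ]; then
    echo "need >= $required, have $installed"
fi
```

Here sort -V puts 3.12.1-153.el7 first, i.e. the installed policy really is older than what openstack-selinux requires, which matches the resolver error.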
> > > > > https://openstack.redhat.com/Quickstart > > > > Thank you :) > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From whayutin at redhat.com Tue Sep 16 11:52:42 2014 From: whayutin at redhat.com (whayutin) Date: Tue, 16 Sep 2014 07:52:42 -0400 Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) In-Reply-To: <648473255763364B961A02AC3BE1060D03C59F43DA@MX19A.corp.emc.com> References: <1410824721.2971.2.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F43CF@MX19A.corp.emc.com> <648473255763364B961A02AC3BE1060D03C59F43DA@MX19A.corp.emc.com> Message-ID: <1410868362.2753.3.camel@localhost.localdomain> On Tue, 2014-09-16 at 04:31 -0400, Kaul, Yaniv wrote: > > -----Original Message----- > > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > > Behalf Of Kaul, Yaniv > > Sent: Tuesday, September 16, 2014 11:09 AM > > To: rdo-list at redhat.com > > Subject: Re: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > > > -----Original Message----- > > > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] > > > On Behalf Of whayutin > > > Sent: Tuesday, September 16, 2014 2:45 AM > > > To: rdo-list at redhat.com > > > Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > > > > Greetings, > > > > > > You should now have success installing RDO Icehouse on CentOS-7.0 > > > using the quickstart instructions. > > > > Failed, with the new RPM (and without it, so perhaps it's something else): > > ... 
> > Copying Puppet modules and manifests [ DONE ] > > Applying 10.103.234.139_prescript.pp > > 10.103.234.139_prescript.pp: [ ERROR ] > > Applying Puppet manifests [ ERROR ] > > > > ERROR : Error appeared during Puppet run: 10.103.234.139_prescript.pp > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-selinux' returned > > 1: Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch (openstack- > > icehouse) You will find full trace in log /var/tmp/packstack/20140916-095516- > > ANJtSb/manifests/10.103.234.139_prescript.pp.log > > Please check log file /var/tmp/packstack/20140916-095516-ANJtSb/openstack- > > setup.log for more information > > > > Additional information: > > * Time synchronization installation was skipped. Please note that > > unsynchronized time on server instances might be problem for some OpenStack > > components. > > * Did not create a cinder volume group, one already existed > > * File /root/keystonerc_admin has been created on OpenStack client host > > 10.103.234.139. To use the command line tools you need to source the file. > > * To access the OpenStack Dashboard browse to > > http://10.103.234.139/dashboard . > > Please, find your login credentials stored in the keystonerc_admin in your home > > directory. 
> > > > > > The error in the log above points to: > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-selinux' returned > > 1: Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch (openstack- > > icehouse) You will find full trace in log /var/tmp/packstack/20140916-095516- > > ANJtSb/manifests/10.103.234.139_prescript.pp.log > > And manually trying: > Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch (openstack-icehouse) > Requires: selinux-policy-targeted >= 3.12.1-153.el7_0.10 > Installed: selinux-policy-targeted-3.12.1-153.el7.noarch (@centos7-x86_64) > selinux-policy-targeted = 3.12.1-153.el7 > Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch (openstack-icehouse) > Requires: selinux-policy-base >= 3.12.1-153.el7_0.10 > Installed: selinux-policy-targeted-3.12.1-153.el7.noarch (@centos7-x86_64) > selinux-policy-base = 3.12.1-153.el7 > Available: selinux-policy-minimum-3.12.1-153.el7.noarch (centos7-x86_64) > selinux-policy-base = 3.12.1-153.el7 > Available: selinux-policy-mls-3.12.1-153.el7.noarch (centos7-x86_64) > selinux-policy-base = 3.12.1-153.el7 > > > I manually found and installed the required packages. Perhaps it was not propagated to mirrors yet (as I failed to download it from multiple mirrors). > It continued, then failed on: > 10.103.234.139_nova.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 10.103.234.139_nova.pp > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-nova-compute' returned 1: Error: Package: gnutls-utils-3.1.18-8.el7.x86_64 (centos7-x86_64) > > I'll check my repos. > Y. Ya.. Please check your repos, my CentOS-7.0 install is laying down openstack-selinux-0.5.15-1.el7ost.noarch Yaniv, are you installing the rdo-release rpm? It looks like your install is pulling the openstack-selinux rpm from the CentOS yum repository itself. 
Take a look at the quickstart for RDO You can also take a look at our installs on CentOS-7.0 https://prod-rdojenkins.rhcloud.com/ Notice the /etc/yum.repos.d/ directory on the controller, available in the log file. Let me know if you still have any issues. Thanks! > > > > > > > > > https://openstack.redhat.com/Quickstart > > > > > > Thank you :) > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From Yaniv.Kaul at emc.com Tue Sep 16 11:59:34 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Tue, 16 Sep 2014 07:59:34 -0400 Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) In-Reply-To: <1410868362.2753.3.camel@localhost.localdomain> References: <1410824721.2971.2.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F43CF@MX19A.corp.emc.com> <648473255763364B961A02AC3BE1060D03C59F43DA@MX19A.corp.emc.com> <1410868362.2753.3.camel@localhost.localdomain> Message-ID: <648473255763364B961A02AC3BE1060D03C59F4460@MX19A.corp.emc.com> > -----Original Message----- > From: whayutin [mailto:whayutin at redhat.com] > Sent: Tuesday, September 16, 2014 2:53 PM > To: Kaul, Yaniv > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] CentOS-7.0 and RDO (all clear) > > On Tue, 2014-09-16 at 04:31 -0400, Kaul, Yaniv wrote: > > > -----Original Message----- > > > From: rdo-list-bounces at redhat.com > > > [mailto:rdo-list-bounces at redhat.com] On Behalf Of Kaul, Yaniv > > > Sent: Tuesday, September 16, 2014 11:09 AM > > > To: rdo-list at redhat.com > > > Subject: Re: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > > > > > -----Original 
Message----- > > > > From: rdo-list-bounces at redhat.com > > > > [mailto:rdo-list-bounces at redhat.com] > > > > On Behalf Of whayutin > > > > Sent: Tuesday, September 16, 2014 2:45 AM > > > > To: rdo-list at redhat.com > > > > Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > > > > > > Greetings, > > > > > > > > You should now have success installing RDO Icehouse on CentOS-7.0 > > > > using the quickstart instructions. > > > > > > Failed, with the new RPM (and without it, so perhaps it's something else): > > > ... > > > Copying Puppet modules and manifests [ DONE ] > > > Applying 10.103.234.139_prescript.pp > > > 10.103.234.139_prescript.pp: [ ERROR ] > > > Applying Puppet manifests [ ERROR ] > > > > > > ERROR : Error appeared during Puppet run: > > > 10.103.234.139_prescript.pp > > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install > > > openstack-selinux' returned > > > 1: Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch > > > (openstack- > > > icehouse) You will find full trace in log > > > /var/tmp/packstack/20140916-095516- > > > ANJtSb/manifests/10.103.234.139_prescript.pp.log > > > Please check log file > > > /var/tmp/packstack/20140916-095516-ANJtSb/openstack- > > > setup.log for more information > > > > > > Additional information: > > > * Time synchronization installation was skipped. Please note that > > > unsynchronized time on server instances might be problem for some > > > OpenStack components. > > > * Did not create a cinder volume group, one already existed > > > * File /root/keystonerc_admin has been created on OpenStack client > > > host 10.103.234.139. To use the command line tools you need to source the > file. > > > * To access the OpenStack Dashboard browse to > > > http://10.103.234.139/dashboard . > > > Please, find your login credentials stored in the keystonerc_admin > > > in your home directory. 
> > > > > > > > > The error in the log above points to: > > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install > > > openstack-selinux' returned > > > 1: Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch > > > (openstack- > > > icehouse) You will find full trace in log > > > /var/tmp/packstack/20140916-095516- > > > ANJtSb/manifests/10.103.234.139_prescript.pp.log > > > > And manually trying: > > Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch (openstack- > icehouse) > > Requires: selinux-policy-targeted >= 3.12.1-153.el7_0.10 > > Installed: selinux-policy-targeted-3.12.1-153.el7.noarch (@centos7- > x86_64) > > selinux-policy-targeted = 3.12.1-153.el7 > > Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch (openstack- > icehouse) > > Requires: selinux-policy-base >= 3.12.1-153.el7_0.10 > > Installed: selinux-policy-targeted-3.12.1-153.el7.noarch (@centos7- > x86_64) > > selinux-policy-base = 3.12.1-153.el7 > > Available: selinux-policy-minimum-3.12.1-153.el7.noarch (centos7- > x86_64) > > selinux-policy-base = 3.12.1-153.el7 > > Available: selinux-policy-mls-3.12.1-153.el7.noarch (centos7-x86_64) > > selinux-policy-base = 3.12.1-153.el7 > > > > > > I manually found and installed the required packages. Perhaps it was not > propagated to mirrors yet (as I failed to download it from multiple mirrors). > > It continued, then failed on: > > 10.103.234.139_nova.pp: [ ERROR ] > > Applying Puppet manifests [ ERROR ] > > > > ERROR : Error appeared during Puppet run: 10.103.234.139_nova.pp > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install > > openstack-nova-compute' returned 1: Error: Package: > > gnutls-utils-3.1.18-8.el7.x86_64 (centos7-x86_64) > > > > I'll check my repos. > > Y. > > Ya.. Please check your repos, my CentOS-7.0 install is laying down openstack- > selinux-0.5.15-1.el7ost.noarch > > Yaniv, are you installing the rdo-release rpm? 
It looks like your install is pulling > the openstack-selinux rpm from the CentOS yum repository itself. Indeed, I'm installing the RDO release RPM before that. [root at lgdrm403 ~]# yum repolist Loaded plugins: fastestmirror, priorities, rhnplugin This system is receiving updates from RHN Classic or Red Hat Satellite. Loading mirror speeds from cached hostfile * epel: fedora-epel.mirror.iweb.com 17 packages excluded due to repository priority protections repo id repo name status centos7-x86_64 CentOS 7 (x86_64) 8,461+4 epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 5,605+13 openstack-icehouse OpenStack Icehouse Repository 1,131+245 !xio_custom7 XtremIO custom7 repository 40 repolist: 15,237 > > Take a look at the quickstart for RDO > You can also take a look at our installs on CentOS-7.0 https://prod- > rdojenkins.rhcloud.com/ I liked the comment 'These jobs use known workarounds to complete..' What am I supposed to do with the content @ https://github.com/redhat-openstack/khaleesi/tree/master/workarounds/ ? Y. > > Notice the /etc/yum.repos.d/ directory on the controller, available in the log > file. > > Let me know if you still have any issues. > Thanks! 
> > > > > > > > > > > > > > > > > > https://openstack.redhat.com/Quickstart > > > > > > > > Thank you :) > > > > > > > > _______________________________________________ > > > > Rdo-list mailing list > > > > Rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > From whayutin at redhat.com Tue Sep 16 12:11:26 2014 From: whayutin at redhat.com (whayutin) Date: Tue, 16 Sep 2014 08:11:26 -0400 Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) In-Reply-To: <648473255763364B961A02AC3BE1060D03C59F4460@MX19A.corp.emc.com> References: <1410824721.2971.2.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F43CF@MX19A.corp.emc.com> <648473255763364B961A02AC3BE1060D03C59F43DA@MX19A.corp.emc.com> <1410868362.2753.3.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F4460@MX19A.corp.emc.com> Message-ID: <1410869486.2753.6.camel@localhost.localdomain> On Tue, 2014-09-16 at 07:59 -0400, Kaul, Yaniv wrote: > > -----Original Message----- > > From: whayutin [mailto:whayutin at redhat.com] > > Sent: Tuesday, September 16, 2014 2:53 PM > > To: Kaul, Yaniv > > Cc: rdo-list at redhat.com > > Subject: Re: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > > On Tue, 2014-09-16 at 04:31 -0400, Kaul, Yaniv wrote: > > > > -----Original Message----- > > > > From: rdo-list-bounces at redhat.com > > > > [mailto:rdo-list-bounces at redhat.com] On Behalf Of Kaul, Yaniv > > > > Sent: Tuesday, September 16, 2014 11:09 AM > > > > To: rdo-list at redhat.com > > > > Subject: Re: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > > > > > > > -----Original Message----- > > > > > From: rdo-list-bounces at redhat.com > > > > > 
[mailto:rdo-list-bounces at redhat.com] > > > > > On Behalf Of whayutin > > > > > Sent: Tuesday, September 16, 2014 2:45 AM > > > > > To: rdo-list at redhat.com > > > > > Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > > > > > > > > Greetings, > > > > > > > > > > You should now have success installing RDO Icehouse on CentOS-7.0 > > > > > using the quickstart instructions. > > > > > > > > Failed, with the new RPM (and without it, so perhaps it's something else): > > > > ... > > > > Copying Puppet modules and manifests [ DONE ] > > > > Applying 10.103.234.139_prescript.pp > > > > 10.103.234.139_prescript.pp: [ ERROR ] > > > > Applying Puppet manifests [ ERROR ] > > > > > > > > ERROR : Error appeared during Puppet run: > > > > 10.103.234.139_prescript.pp > > > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install > > > > openstack-selinux' returned > > > > 1: Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch > > > > (openstack- > > > > icehouse) You will find full trace in log > > > > /var/tmp/packstack/20140916-095516- > > > > ANJtSb/manifests/10.103.234.139_prescript.pp.log > > > > Please check log file > > > > /var/tmp/packstack/20140916-095516-ANJtSb/openstack- > > > > setup.log for more information > > > > > > > > Additional information: > > > > * Time synchronization installation was skipped. Please note that > > > > unsynchronized time on server instances might be problem for some > > > > OpenStack components. > > > > * Did not create a cinder volume group, one already existed > > > > * File /root/keystonerc_admin has been created on OpenStack client > > > > host 10.103.234.139. To use the command line tools you need to source the > > file. > > > > * To access the OpenStack Dashboard browse to > > > > http://10.103.234.139/dashboard . > > > > Please, find your login credentials stored in the keystonerc_admin > > > > in your home directory. 
> > > > > > > > > > > > The error in the log above points to: > > > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install > > > > openstack-selinux' returned > > > > 1: Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch > > > > (openstack- > > > > icehouse) You will find full trace in log > > > > /var/tmp/packstack/20140916-095516- > > > > ANJtSb/manifests/10.103.234.139_prescript.pp.log > > > > > > And manually trying: > > > Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch (openstack- > > icehouse) > > > Requires: selinux-policy-targeted >= 3.12.1-153.el7_0.10 > > > Installed: selinux-policy-targeted-3.12.1-153.el7.noarch (@centos7- > > x86_64) > > > selinux-policy-targeted = 3.12.1-153.el7 > > > Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch (openstack- > > icehouse) > > > Requires: selinux-policy-base >= 3.12.1-153.el7_0.10 > > > Installed: selinux-policy-targeted-3.12.1-153.el7.noarch (@centos7- > > x86_64) > > > selinux-policy-base = 3.12.1-153.el7 > > > Available: selinux-policy-minimum-3.12.1-153.el7.noarch (centos7- > > x86_64) > > > selinux-policy-base = 3.12.1-153.el7 > > > Available: selinux-policy-mls-3.12.1-153.el7.noarch (centos7-x86_64) > > > selinux-policy-base = 3.12.1-153.el7 > > > > > > > > > I manually found and installed the required packages. Perhaps it was not > > propagated to mirrors yet (as I failed to download it from multiple mirrors). > > > It continued, then failed on: > > > 10.103.234.139_nova.pp: [ ERROR ] > > > Applying Puppet manifests [ ERROR ] > > > > > > ERROR : Error appeared during Puppet run: 10.103.234.139_nova.pp > > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install > > > openstack-nova-compute' returned 1: Error: Package: > > > gnutls-utils-3.1.18-8.el7.x86_64 (centos7-x86_64) > > > > > > I'll check my repos. > > > Y. > > > > Ya.. 
Please check your repos, my CentOS-7.0 install is laying down openstack- > > selinux-0.5.15-1.el7ost.noarch > > > > Yaniv, are you installing the rdo-release rpm? It looks like your install is pulling > > the openstack-selinux rpm from the CentOS yum repository itself. > > Indeed, I'm installing the RDO release RPM before that. > [root at lgdrm403 ~]# yum repolist > Loaded plugins: fastestmirror, priorities, rhnplugin > This system is receiving updates from RHN Classic or Red Hat Satellite. > Loading mirror speeds from cached hostfile > * epel: fedora-epel.mirror.iweb.com > 17 packages excluded due to repository priority protections > repo id repo name status > centos7-x86_64 CentOS 7 (x86_64) 8,461+4 > epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 5,605+13 > openstack-icehouse OpenStack Icehouse Repository 1,131+245 > !xio_custom7 XtremIO custom7 repository 40 > repolist: 15,237 > > > > > Take a look at the quickstart for RDO > > You can also take a look at our installs on CentOS-7.0 https://prod- > > rdojenkins.rhcloud.com/ > > I liked the comment 'These jobs use known workarounds to complete..' > What am I supposed to do with the content @ https://github.com/redhat-openstack/khaleesi/tree/master/workarounds/ ? > Y. Contribute of course! All the workarounds have been disabled. https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-icehouse-production-centos-70-aio-packstack-neutron-gre-rabbitmq/56/consoleFull 03:02:18 workaround: 03:02:18 iptables_install: false 03:02:18 centos7_release: false 03:02:18 mysql_centos7: false 03:02:18 messagebus_centos7: false > > > > > Notice the /etc/yum.repos.d/ directory on the controller, available in the log > > file. > > > > Let me know if you still have any issues. > > Thanks! 
> > > > > > > > > > > > > > > > > > > > > > > > > > > https://openstack.redhat.com/Quickstart > > > > > > > > > > Thank you :) > > > > > > > > > > _______________________________________________ > > > > > Rdo-list mailing list > > > > > Rdo-list at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > _______________________________________________ > > > > Rdo-list mailing list > > > > Rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > From pmyers at redhat.com Tue Sep 16 12:11:55 2014 From: pmyers at redhat.com (Perry Myers) Date: Tue, 16 Sep 2014 08:11:55 -0400 Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) In-Reply-To: <1410868362.2753.3.camel@localhost.localdomain> References: <1410824721.2971.2.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F43CF@MX19A.corp.emc.com> <648473255763364B961A02AC3BE1060D03C59F43DA@MX19A.corp.emc.com> <1410868362.2753.3.camel@localhost.localdomain> Message-ID: <5418290B.4020900@redhat.com> > Ya.. Please check your repos, my CentOS-7.0 install is laying down > openstack-selinux-0.5.15-1.el7ost.noarch > > Yaniv, are you installing the rdo-release rpm? It looks like your > install is pulling the openstack-selinux rpm from the CentOS yum > repository itself. Just curious... why would there be an openstack-selinux RPM in the core CentOS yum repos? That package isn't part of core RHEL, only RDO and RHEL OSP, so it shouldn't show up in CentOS, should it? 
Perry From kbsingh at redhat.com Tue Sep 16 12:13:50 2014 From: kbsingh at redhat.com (Karanbir Singh) Date: Tue, 16 Sep 2014 13:13:50 +0100 Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) In-Reply-To: <5418290B.4020900@redhat.com> References: <1410824721.2971.2.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F43CF@MX19A.corp.emc.com> <648473255763364B961A02AC3BE1060D03C59F43DA@MX19A.corp.emc.com> <1410868362.2753.3.camel@localhost.localdomain> <5418290B.4020900@redhat.com> Message-ID: <5418297E.1080200@redhat.com> On 09/16/2014 01:11 PM, Perry Myers wrote: >> Ya.. Please check your repos, my CentOS-7.0 install is laying down >> openstack-selinux-0.5.15-1.el7ost.noarch >> >> Yaniv, are you installing the rdo-release rpm? It looks like your >> install is pulling the openstack-selinux rpm from the CentOS yum >> repository itself. > > Just curious... why would there be an openstack-selinux RPM in the core > CentOS yum repos? > > That package isn't part of core RHEL, only RDO and RHEL OSP, so it > shouldn't show up in CentOS, should it? Just checked, there isn't an openstack rpm in the centos base repos (or the support repos even), can we get a "rpm -qi openstack-selinux" from this machine? -- Karanbir Singh, The CentOS Project, London, UK RH Ext. 
8274455 | DID: 0044 207 009 4455 From Yaniv.Kaul at emc.com Tue Sep 16 12:30:36 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Tue, 16 Sep 2014 08:30:36 -0400 Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) In-Reply-To: <5418297E.1080200@redhat.com> References: <1410824721.2971.2.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F43CF@MX19A.corp.emc.com> <648473255763364B961A02AC3BE1060D03C59F43DA@MX19A.corp.emc.com> <1410868362.2753.3.camel@localhost.localdomain> <5418290B.4020900@redhat.com> <5418297E.1080200@redhat.com> Message-ID: <648473255763364B961A02AC3BE1060D03C59F4475@MX19A.corp.emc.com> > -----Original Message----- > From: Karanbir Singh [mailto:kbsingh at redhat.com] > Sent: Tuesday, September 16, 2014 3:14 PM > To: Perry Myers; whayutin at redhat.com; Kaul, Yaniv > Cc: rdo-list at redhat.com; Alan Pevec > Subject: Re: [Rdo-list] CentOS-7.0 and RDO (all clear) > > On 09/16/2014 01:11 PM, Perry Myers wrote: > >> Ya.. Please check your repos, my CentOS-7.0 install is laying down > >> openstack-selinux-0.5.15-1.el7ost.noarch > >> > >> Yaniv, are you installing the rdo-release rpm? It looks like your > >> install is pulling the openstack-selinux rpm from the CentOS yum > >> repository itself. > > > > Just curious... why would there be an openstack-selinux RPM in the > > core CentOS yum repos? > > > > That package isn't part of core RHEL, only RDO and RHEL OSP, so it > > shouldn't show up in CentOS, should it? > > Just checked, there isnt a openstack rpm in the centos base repos ( or the > support repos even ), can we get a "rpm -qi openstack-selinux" from this > machine ? 
[root at lgdrm403 ~]# rpm -qi openstack-selinux Name : openstack-selinux Version : 0.5.15 Release : 1.el7ost Architecture: noarch Install Date: Tue 16 Sep 2014 11:19:21 AM IDT Group : System Environment/Base Size : 70405 License : GPLv2 Signature : RSA/SHA1, Sat 13 Sep 2014 12:47:02 AM IDT, Key ID e50be6ab0e4fbd28 Source RPM : openstack-selinux-0.5.15-1.el7ost.src.rpm Build Date : Tue 09 Sep 2014 05:25:27 PM IDT Build Host : x86-017.build.eng.bos.redhat.com Relocations : (not relocatable) Packager : Red Hat, Inc. Vendor : Red Hat, Inc. URL : https://github.com/redhat-openstack/openstack-selinux Summary : SELinux Policies for OpenStack Description : SELinux policy modules for use with OpenStack > > > -- > Karanbir Singh, The CentOS Project, London, UK RH Ext. 8274455 | DID: 0044 > 207 009 4455 From kbsingh at redhat.com Tue Sep 16 12:36:32 2014 From: kbsingh at redhat.com (Karanbir Singh) Date: Tue, 16 Sep 2014 13:36:32 +0100 Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) In-Reply-To: <648473255763364B961A02AC3BE1060D03C59F4475@MX19A.corp.emc.com> References: <1410824721.2971.2.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F43CF@MX19A.corp.emc.com> <648473255763364B961A02AC3BE1060D03C59F43DA@MX19A.corp.emc.com> <1410868362.2753.3.camel@localhost.localdomain> <5418290B.4020900@redhat.com> <5418297E.1080200@redhat.com> <648473255763364B961A02AC3BE1060D03C59F4475@MX19A.corp.emc.com> Message-ID: <54182ED0.7070705@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 09/16/2014 01:30 PM, Kaul, Yaniv wrote: > > [root at lgdrm403 ~]# rpm -qi openstack-selinux Name : > openstack-selinux Version : 0.5.15 Release : 1.el7ost > Architecture: noarch Install Date: Tue 16 Sep 2014 11:19:21 AM IDT > Group : System Environment/Base Size : 70405 License > : GPLv2 Signature : RSA/SHA1, Sat 13 Sep 2014 12:47:02 AM IDT, > Key ID e50be6ab0e4fbd28 This is the RDO-Icehouse release key .. not a CentOS package. 
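(For reference, the signing key can be read straight off the Signature line in that rpm -qi output and matched against the imported gpg-pubkey pseudo-packages, which are named after the last 8 hex digits of the key ID. A small sketch parsing the quoted line follows; the siginfo text is copied from the message above, and the rpm -q check at the end is what you would run on the box itself.)

```shell
# Pull the signing key ID out of an `rpm -qi` Signature line (sample text
# copied from the message above).
siginfo='Signature   : RSA/SHA1, Sat 13 Sep 2014 12:47:02 AM IDT, Key ID e50be6ab0e4fbd28'
keyid=${siginfo##*Key ID }                   # strip everything up to "Key ID "
short=$(printf '%s' "$keyid" | tail -c 8)    # last 8 hex digits name the gpg-pubkey package
echo "key $keyid (short id $short)"
# On the machine itself, this would show which imported key that is:
#   rpm -q "gpg-pubkey-$short"
```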
- -- Karanbir Singh, The CentOS Project, London, UK RH Ext. 8274455 | DID: 0044 207 009 4455 -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.14 (GNU/Linux) iQEcBAEBAgAGBQJUGC7QAAoJEI3Oi2Mx7xbtoDcH/iDCHIpaPCPT3vPHgihAb5ST Dog6H29hJeTGYkuIB+MnIf8MkR4r/gW6UuzNhdYFEwT+Ui3+0IvXVV2Y6nN1bN1Y A+FtBPJFGdyyMTtFXH9Ylu65J2VfAqC6aDIl5COznZh1pLunzl2olG95Js4LidE2 v7V4YjW9ReJaLrnleIH4RYS8Ft/atU2LUWrhOafohVT0+alrPz7W59YKCZQ7/soz iLdtaHaIfi9+tVGq5Wp/07Xdaq4GxbaAQS5AyisOsSAGw8ITnXHy00ySkxDsL3xZ jSauOUsRZ/8WrbbIkdfrvqsBwe1JtSG/GizC1jCXLreiqEFNn74cN77jkTqq3vo= =MP7q -----END PGP SIGNATURE----- From apevec at gmail.com Tue Sep 16 12:39:11 2014 From: apevec at gmail.com (Alan Pevec) Date: Tue, 16 Sep 2014 14:39:11 +0200 Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) In-Reply-To: <648473255763364B961A02AC3BE1060D03C59F4460@MX19A.corp.emc.com> References: <1410824721.2971.2.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F43CF@MX19A.corp.emc.com> <648473255763364B961A02AC3BE1060D03C59F43DA@MX19A.corp.emc.com> <1410868362.2753.3.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F4460@MX19A.corp.emc.com> Message-ID: >> > Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch (openstack- >> icehouse) >> > Requires: selinux-policy-targeted >= 3.12.1-153.el7_0.10 >> > Installed: selinux-policy-targeted-3.12.1-153.el7.noarch (@centos7- >> x86_64) >> > I'll check my repos. > [root at lgdrm403 ~]# yum repolist > 17 packages excluded due to repository priority protections ^ just curious: where and why do you have yum priorities set? > repo id repo name status > centos7-x86_64 CentOS 7 (x86_64) 8,461+4 > epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 5,605+13 > openstack-icehouse OpenStack Icehouse Repository 1,131+245 > !xio_custom7 XtremIO custom7 repository 40 You are missing centos updates, which is where the updated selinux-policy should be; centos extras is also required for epel7. 
On my centos7 box:
repo id            repo name                                       status
!base/7/x86_64     CentOS-7 - Base                                 8,465
!epel/x86_64       Extra Packages for Enterprise Linux 7 - x86_64  5,618
!extras/7/x86_64   CentOS-7 - Extras                               44
!updates/7/x86_64  CentOS-7 - Updates                              774

Cheers,
Alan

From Yaniv.Kaul at emc.com  Tue Sep 16 12:48:45 2014
From: Yaniv.Kaul at emc.com (Kaul, Yaniv)
Date: Tue, 16 Sep 2014 08:48:45 -0400
Subject: [Rdo-list] CentOS-7.0 and RDO (all clear)
In-Reply-To: 
References: <1410824721.2971.2.camel@localhost.localdomain>
	<648473255763364B961A02AC3BE1060D03C59F43CF@MX19A.corp.emc.com>
	<648473255763364B961A02AC3BE1060D03C59F43DA@MX19A.corp.emc.com>
	<1410868362.2753.3.camel@localhost.localdomain>
	<648473255763364B961A02AC3BE1060D03C59F4460@MX19A.corp.emc.com>
Message-ID: <648473255763364B961A02AC3BE1060D03C59F4487@MX19A.corp.emc.com>

> -----Original Message-----
> From: Alan Pevec [mailto:apevec at gmail.com]
> Sent: Tuesday, September 16, 2014 3:39 PM
> To: Kaul, Yaniv
> Cc: whayutin at redhat.com; rdo-list at redhat.com
> Subject: Re: [Rdo-list] CentOS-7.0 and RDO (all clear)
>
> >> > Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch
> >> > (openstack-
> >> icehouse)
> >> > Requires: selinux-policy-targeted >= 3.12.1-153.el7_0.10
> >> > Installed: selinux-policy-targeted-3.12.1-153.el7.noarch
> >> > (@centos7-
> >> x86_64)
>
> >> > I'll check my repos.
>
> > [root at lgdrm403 ~]# yum repolist
>
> > 17 packages excluded due to repository priority protections
> ^ just curious: where and why do you have yum priorities set?

It was written in the Quickstart at some point to use it... I'll remove it
from my scripts.
Y.
> > repo id                  repo name                                       status
> > centos7-x86_64           CentOS 7 (x86_64)                               8,461+4
> > epel/x86_64              Extra Packages for Enterprise Linux 7 - x86_64  5,605+13
> > openstack-icehouse       OpenStack Icehouse Repository                   1,131+245
> > !xio_custom7             XtremIO custom7 repository                      40
> You are missing centos updates which is where updated selinux-policy should
> be, also centos extras is required for epel7.  On my centos7 box:
> repo id            repo name                                       status
> !base/7/x86_64     CentOS-7 - Base                                 8,465
> !epel/x86_64       Extra Packages for Enterprise Linux 7 - x86_64  5,618
> !extras/7/x86_64   CentOS-7 - Extras                               44
> !updates/7/x86_64  CentOS-7 - Updates                              774
>
> Cheers,
> Alan

From thefossgeek at gmail.com  Tue Sep 16 13:05:42 2014
From: thefossgeek at gmail.com (foss geek)
Date: Tue, 16 Sep 2014 18:35:42 +0530
Subject: [Rdo-list] Error: sysctl -p /etc/sysctl.conf returned 255 instead
	of one of [0]
Message-ID: 

Dear All,

I am trying to install openstack with vCenter on CentOS 6.5.

I am getting the error below:

SequenceError: Error appeared during Puppet run: 10.10.2.2_neutron.pp
Error: sysctl -p /etc/sysctl.conf returned 255 instead of one of [0]
You will find full trace in log
/var/tmp/packstack/20140916-191749-_fyhX5/manifests/10.10.2.2_neutron.pp.log

2014-09-16 19:21:57::INFO::shell::81::root:: [10.10.2.2] Executing script:
rm -rf /var/tmp/packstack/9d8561bfd26c44b392473e9f0e601bbb

Here is the detailed error message and answer file:

2014-09-16 19:21:57::INFO::shell::81::root:: [10.10.2.2] Executing script:
rm -rf /var/tmp/packstack/9d8561bfd26c44b392473e9f0e601bbb
[root at vCenter-OS-Dev ~]# tail -n 30 /var/tmp/packstack/20140916-191749-_fyhX5/openstack-setup.log
rpm -q --whatprovides puppet || yum install -y puppet
rpm -q --whatprovides openssh-clients || yum install -y openssh-clients
rpm -q --whatprovides tar || yum install -y tar
rpm -q --whatprovides nc || yum install -y nc
rpm -q --whatprovides rubygem-json || yum install -y rubygem-json
2014-09-16 19:18:13::INFO::shell::81::root::
[localhost] Executing script:
cd /usr/lib/python2.6/site-packages/packstack/puppet
cd /var/tmp/packstack/20140916-191749-_fyhX5/manifests
tar --dereference -cpzf - ../manifests | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root at 10.10.2.2 tar -C /var/tmp/packstack/9d8561bfd26c44b392473e9f0e601bbb -xpzf -
cd /usr/share/openstack-puppet/modules
tar --dereference -cpzf - apache ceilometer certmonger cinder concat firewall glance heat horizon inifile keystone memcached mongodb mysql neutron nova nssdb openstack packstack qpid rabbitmq remote rsync ssh stdlib swift sysctl tempest vcsrepo vlan vswitch xinetd | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root at 10.10.2.2 tar -C /var/tmp/packstack/9d8561bfd26c44b392473e9f0e601bbb/modules -xpzf -
2014-09-16 19:21:57::ERROR::run_setup::921::root:: Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 916, in main
    _main(confFile)
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 605, in _main
    runSequences()
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 584, in runSequences
    controller.runAllSequences()
  File "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", line 68, in runAllSequences
    sequence.run(config=self.CONF, messages=self.MESSAGES)
  File "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", line 98, in run
    step.run(config=config, messages=messages)
  File "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", line 44, in run
    raise SequenceError(str(ex))
SequenceError: Error appeared during Puppet run: 10.10.2.2_neutron.pp
Error: sysctl -p /etc/sysctl.conf returned 255 instead of one of [0]
You will find full trace in log
/var/tmp/packstack/20140916-191749-_fyhX5/manifests/10.10.2.2_neutron.pp.log

2014-09-16 19:21:57::INFO::shell::81::root:: [10.10.2.2] Executing script:
rm -rf
/var/tmp/packstack/9d8561bfd26c44b392473e9f0e601bbb # cat all-in-one.conf [general] # Path to a Public key to install on servers. If a usable key has not # been installed on the remote servers the user will be prompted for a # password and this key will be installed so the password will not be # required again CONFIG_SSH_KEY= # Set to 'y' if you would like Packstack to install MySQL CONFIG_MYSQL_INSTALL=y # Set to 'y' if you would like Packstack to install OpenStack Image # Service (Glance) CONFIG_GLANCE_INSTALL=y # Set to 'y' if you would like Packstack to install OpenStack Block # Storage (Cinder) CONFIG_CINDER_INSTALL=y # Set to 'y' if you would like Packstack to install OpenStack Compute # (Nova) CONFIG_NOVA_INSTALL=y # Set to 'y' if you would like Packstack to install OpenStack # Networking (Neutron). Otherwise Nova Network will be used. CONFIG_NEUTRON_INSTALL=y # Set to 'y' if you would like Packstack to install OpenStack # Dashboard (Horizon) CONFIG_HORIZON_INSTALL=y # Set to 'y' if you would like Packstack to install OpenStack Object # Storage (Swift) CONFIG_SWIFT_INSTALL=y # Set to 'y' if you would like Packstack to install OpenStack # Metering (Ceilometer) CONFIG_CEILOMETER_INSTALL=y # Set to 'y' if you would like Packstack to install OpenStack # Orchestration (Heat) CONFIG_HEAT_INSTALL=y # Set to 'y' if you would like Packstack to install the OpenStack # Client packages. An admin "rc" file will also be installed CONFIG_CLIENT_INSTALL=y # Comma separated list of NTP servers. Leave plain if Packstack # should not install ntpd on instances. CONFIG_NTP_SERVERS= # Set to 'y' if you would like Packstack to install Nagios to monitor # OpenStack hosts CONFIG_NAGIOS_INSTALL=y # Comma separated list of servers to be excluded from installation in # case you are running Packstack the second time with the same answer # file and don't want Packstack to touch these servers. Leave plain if # you don't need to exclude any server. 
EXCLUDE_SERVERS= # Set to 'y' if you want to run OpenStack services in debug mode. # Otherwise set to 'n'. CONFIG_DEBUG_MODE=n # The IP address of the server on which to install OpenStack services # specific to controller role such as API servers, Horizon, etc. CONFIG_CONTROLLER_HOST=10.10.2.2 # The list of IP addresses of the server on which to install the Nova # compute service CONFIG_COMPUTE_HOSTS=10.10.2.2 # The list of IP addresses of the server on which to install the # network service such as Nova network or Neutron CONFIG_NETWORK_HOSTS=10.10.2.2 # Set to 'y' if you want to use VMware vCenter as hypervisor and # storage. Otherwise set to 'n'. CONFIG_VMWARE_BACKEND=y # The IP address of the VMware vCenter server CONFIG_VCENTER_HOST=192.168.1.9 # The username to authenticate to VMware vCenter server CONFIG_VCENTER_USER=root # The password to authenticate to VMware vCenter server CONFIG_VCENTER_PASSWORD=tcl at 123 # The name of the vCenter cluster CONFIG_VCENTER_CLUSTER_NAME=ssoccluster # To subscribe each server to EPEL enter "y" CONFIG_USE_EPEL=y # A comma separated list of URLs to any additional yum repositories # to install CONFIG_REPO= # To subscribe each server with Red Hat subscription manager, include # this with CONFIG_RH_PW CONFIG_RH_USER= # To subscribe each server with Red Hat subscription manager, include # this with CONFIG_RH_USER CONFIG_RH_PW= # To enable RHEL optional repos use value "y" CONFIG_RH_OPTIONAL=y # To subscribe each server with RHN Satellite,fill Satellite's URL # here. 
Note that either satellite's username/password or activation # key has to be provided CONFIG_SATELLITE_URL= # Username to access RHN Satellite CONFIG_SATELLITE_USER= # Password to access RHN Satellite CONFIG_SATELLITE_PW= # Activation key for subscription to RHN Satellite CONFIG_SATELLITE_AKEY= # Specify a path or URL to a SSL CA certificate to use CONFIG_SATELLITE_CACERT= # If required specify the profile name that should be used as an # identifier for the system in RHN Satellite CONFIG_SATELLITE_PROFILE= # Comma separated list of flags passed to rhnreg_ks. Valid flags are: # novirtinfo, norhnsd, nopackages CONFIG_SATELLITE_FLAGS= # Specify a HTTP proxy to use with RHN Satellite CONFIG_SATELLITE_PROXY= # Specify a username to use with an authenticated HTTP proxy CONFIG_SATELLITE_PROXY_USER= # Specify a password to use with an authenticated HTTP proxy. CONFIG_SATELLITE_PROXY_PW= # Set the AMQP service backend. Allowed values are: qpid, rabbitmq CONFIG_AMQP_BACKEND=rabbitmq # The IP address of the server on which to install the AMQP service CONFIG_AMQP_HOST=10.10.2.2 # Enable SSL for the AMQP service CONFIG_AMQP_ENABLE_SSL=n # Enable Authentication for the AMQP service CONFIG_AMQP_ENABLE_AUTH=n # The password for the NSS certificate database of the AMQP service CONFIG_AMQP_NSS_CERTDB_PW=changeme # The port in which the AMQP service listens to SSL connections CONFIG_AMQP_SSL_PORT=5671 # The filename of the certificate that the AMQP service is going to # use CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem # The filename of the private key that the AMQP service is going to # use CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem # Auto Generates self signed SSL certificate and key CONFIG_AMQP_SSL_SELF_SIGNED=y # User for amqp authentication CONFIG_AMQP_AUTH_USER=amqp_user # Password for user authentication CONFIG_AMQP_AUTH_PASSWORD=changeme # The IP address of the server on which to install MySQL or IP # address of DB server to use if MySQL 
installation was not selected CONFIG_MYSQL_HOST=10.10.2.2 # Username for the MySQL admin user CONFIG_MYSQL_USER=root # Password for the MySQL admin user CONFIG_MYSQL_PW=changeme # The password to use for the Keystone to access DB CONFIG_KEYSTONE_DB_PW=changeme # The token to use for the Keystone service api CONFIG_KEYSTONE_ADMIN_TOKEN=changeme # The password to use for the Keystone admin user CONFIG_KEYSTONE_ADMIN_PW=changeme # The password to use for the Keystone demo user CONFIG_KEYSTONE_DEMO_PW=changeme # Kestone token format. Use either UUID or PKI CONFIG_KEYSTONE_TOKEN_FORMAT=UUID # The password to use for the Glance to access DB CONFIG_GLANCE_DB_PW=changeme # The password to use for the Glance to authenticate with Keystone CONFIG_GLANCE_KS_PW=changeme # The password to use for the Cinder to access DB CONFIG_CINDER_DB_PW=changeme # The password to use for the Cinder to authenticate with Keystone CONFIG_CINDER_KS_PW=changeme # The Cinder backend to use, valid options are: lvm, gluster, nfs CONFIG_CINDER_BACKEND=lvm # Create Cinder's volumes group. This should only be done for testing # on a proof-of-concept installation of Cinder. This will create a # file-backed volume group and is not suitable for production usage. CONFIG_CINDER_VOLUMES_CREATE=y # Cinder's volumes group size. Note that actual volume size will be # extended with 3% more space for VG metadata. CONFIG_CINDER_VOLUMES_SIZE=8G # A single or comma separated list of gluster volume shares to mount, # eg: ip-address:/vol-name, domain:/vol-name CONFIG_CINDER_GLUSTER_MOUNTS= # A single or comma seprated list of NFS exports to mount, eg: ip- # address:/export-name CONFIG_CINDER_NFS_MOUNTS= # The password to use for the Nova to access DB CONFIG_NOVA_DB_PW=changeme # The password to use for the Nova to authenticate with Keystone CONFIG_NOVA_KS_PW=changeme # The overcommitment ratio for virtual to physical CPUs. 
Set to 1.0 # to disable CPU overcommitment CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0 # The overcommitment ratio for virtual to physical RAM. Set to 1.0 to # disable RAM overcommitment CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5 # Private interface for Flat DHCP on the Nova compute servers CONFIG_NOVA_COMPUTE_PRIVIF=eth1 # Nova network manager CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager # Public interface on the Nova network server CONFIG_NOVA_NETWORK_PUBIF= # Private interface for network manager on the Nova network server CONFIG_NOVA_NETWORK_PRIVIF= # IP Range for network manager CONFIG_NOVA_NETWORK_FIXEDRANGE= # IP Range for Floating IP's CONFIG_NOVA_NETWORK_FLOATRANGE= # Name of the default floating pool to which the specified floating # ranges are added to CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova # Automatically assign a floating IP to new instances CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n # First VLAN for private networks CONFIG_NOVA_NETWORK_VLAN_START= # Number of networks to support CONFIG_NOVA_NETWORK_NUMBER= # Number of addresses in each private subnet CONFIG_NOVA_NETWORK_SIZE= # The password to use for Neutron to authenticate with Keystone CONFIG_NEUTRON_KS_PW=changeme # The password to use for Neutron to access DB CONFIG_NEUTRON_DB_PW=changeme # The name of the bridge that the Neutron L3 agent will use for # external traffic, or 'provider' if using provider networks CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex # The name of the L2 plugin to be used with Neutron CONFIG_NEUTRON_L2_PLUGIN=ml2 # Neutron metadata agent password CONFIG_NEUTRON_METADATA_PW=changeme # Set to 'y' if you would like Packstack to install Neutron LBaaS CONFIG_LBAAS_INSTALL=y # Set to 'y' if you would like Packstack to install Neutron L3 # Metering agent CONFIG_NEUTRON_METERING_AGENT_INSTALL=n # Whether to configure neutron Firewall as a Service CONFIG_NEUTRON_FWAAS=y # A comma separated list of network type driver entrypoints to be # loaded from the neutron.ml2.type_drivers 
namespace. CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan,vxlan # A comma separated ordered list of network_types to allocate as # tenant networks. The value 'local' is only useful for single-box # testing but provides no connectivity between hosts. CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan # A comma separated ordered list of networking mechanism driver # entrypoints to be loaded from the neutron.ml2.mechanism_drivers # namespace. CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch,cisco_nexus # A comma separated list of physical_network names with which flat # networks can be created. Use * to allow flat networks with arbitrary # physical_network names. CONFIG_NEUTRON_ML2_FLAT_NETWORKS=* # A comma separated list of :: # or specifying physical_network names usable for # VLAN provider and tenant networks, as well as ranges of VLAN tags on # each available for allocation to tenant networks. CONFIG_NEUTRON_ML2_VLAN_RANGES= # A comma separated list of : tuples enumerating # ranges of GRE tunnel IDs that are available for tenant network # allocation. Should be an array with tun_max +1 - tun_min > 1000000 CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES= # Multicast group for VXLAN. If unset, disables VXLAN enable sending # allocate broadcast traffic to this multicast group. When left # unconfigured, will disable multicast VXLAN mode. Should be an # Multicast IP (v4 or v6) address. CONFIG_NEUTRON_ML2_VXLAN_GROUP= # A comma separated list of : tuples enumerating # ranges of VXLAN VNI IDs that are available for tenant network # allocation. Min value is 0 and Max value is 16777215. CONFIG_NEUTRON_ML2_VNI_RANGES= # The name of the L2 agent to be used with Neutron CONFIG_NEUTRON_L2_AGENT=openvswitch # The type of network to allocate for tenant networks (eg. vlan, # local) CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local # A comma separated list of VLAN ranges for the Neutron linuxbridge # plugin (eg. 
physnet1:1:4094,physnet2,physnet3:3000:3999) CONFIG_NEUTRON_LB_VLAN_RANGES= # A comma separated list of interface mappings for the Neutron # linuxbridge plugin (eg. physnet1:br-eth1,physnet2:br-eth2,physnet3 # :br-eth3) CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS= # Type of network to allocate for tenant networks (eg. vlan, local, # gre, vxlan) CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan # A comma separated list of VLAN ranges for the Neutron openvswitch # plugin (eg. physnet1:1:4094,physnet2,physnet3:3000:3999) CONFIG_NEUTRON_OVS_VLAN_RANGES= # A comma separated list of bridge mappings for the Neutron # openvswitch plugin (eg. physnet1:br-eth1,physnet2:br-eth2,physnet3 # :br-eth3) CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS= # A comma separated list of colon-separated OVS bridge:interface # pairs. The interface will be added to the associated bridge. CONFIG_NEUTRON_OVS_BRIDGE_IFACES= # A comma separated list of tunnel ranges for the Neutron openvswitch # plugin (eg. 1:1000) CONFIG_NEUTRON_OVS_TUNNEL_RANGES= # The interface for the OVS tunnel. Packstack will override the IP # address used for tunnels on this hypervisor to the IP found on the # specified interface. (eg. eth1) CONFIG_NEUTRON_OVS_TUNNEL_IF= # VXLAN UDP port CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789 # To set up Horizon communication over https set this to 'y' CONFIG_HORIZON_SSL=n # PEM encoded certificate to be used for ssl on the https server, # leave blank if one should be generated, this certificate should not # require a passphrase CONFIG_SSL_CERT= # SSL keyfile corresponding to the certificate if one was entered CONFIG_SSL_KEY= # PEM encoded CA certificates from which the certificate chain of the # server certificate can be assembled. CONFIG_SSL_CACHAIN= # The password to use for the Swift to authenticate with Keystone CONFIG_SWIFT_KS_PW=changeme # A comma separated list of devices which to use as Swift Storage # device. 
Each entry should take the format /path/to/dev, for example # /dev/vdb will install /dev/vdb as Swift storage device (packstack # does not create the filesystem, you must do this first). If value is # omitted Packstack will create a loopback device for test setup CONFIG_SWIFT_STORAGES= # Number of swift storage zones, this number MUST be no bigger than # the number of storage devices configured CONFIG_SWIFT_STORAGE_ZONES=1 # Number of swift storage replicas, this number MUST be no bigger # than the number of storage zones configured CONFIG_SWIFT_STORAGE_REPLICAS=1 # FileSystem type for storage nodes CONFIG_SWIFT_STORAGE_FSTYPE=ext4 # Shared secret for Swift CONFIG_SWIFT_HASH=changeme # Size of the swift loopback file storage device CONFIG_SWIFT_STORAGE_SIZE=2G # Whether to provision for demo usage and testing. Note that # provisioning is only supported for all-in-one installations. CONFIG_PROVISION_DEMO=y # Whether to configure tempest for testing CONFIG_PROVISION_TEMPEST=n # The name of the Tempest Provisioning user. 
If you don't provide a # user name, Tempest will be configured in a standalone mode CONFIG_PROVISION_TEMPEST_USER= # The password to use for the Tempest Provisioning user CONFIG_PROVISION_TEMPEST_USER_PW=changeme # The CIDR network address for the floating IP subnet CONFIG_PROVISION_DEMO_FLOATRANGE=10.10.4.0/24 # The uri of the tempest git repository to use CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git # The revision of the tempest git repository to use CONFIG_PROVISION_TEMPEST_REPO_REVISION=master # Whether to configure the ovs external bridge in an all-in-one # deployment CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n # The password used by Heat user to authenticate against MySQL CONFIG_HEAT_DB_PW=changeme # The encryption key to use for authentication info in database CONFIG_HEAT_AUTH_ENC_KEY=changeme # The password to use for the Heat to authenticate with Keystone CONFIG_HEAT_KS_PW=changeme # Set to 'y' if you would like Packstack to install Heat CloudWatch # API CONFIG_HEAT_CLOUDWATCH_INSTALL=n # Set to 'y' if you would like Packstack to install Heat # CloudFormation API CONFIG_HEAT_CFN_INSTALL=n # Name of Keystone domain for Heat CONFIG_HEAT_DOMAIN=heat # Name of Keystone domain admin user for Heat CONFIG_HEAT_DOMAIN_ADMIN=heat_admin # Password for Keystone domain admin user for Heat CONFIG_HEAT_DOMAIN_PASSWORD=changeme # Secret key for signing metering messages CONFIG_CEILOMETER_SECRET=changeme # The password to use for Ceilometer to authenticate with Keystone CONFIG_CEILOMETER_KS_PW=changeme # The IP address of the server on which to install MongoDB CONFIG_MONGODB_HOST=10.10.2.2 # The password of the nagiosadmin user on the Nagios server CONFIG_NAGIOS_PW=changeme ============== -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From kgiusti at redhat.com  Tue Sep 16 13:35:18 2014
From: kgiusti at redhat.com (Ken Giusti)
Date: Tue, 16 Sep 2014 09:35:18 -0400 (EDT)
Subject: [Rdo-list] Getting support for AMQP 1.0 messaging in RDO
In-Reply-To: <726893589.31641084.1410871140176.JavaMail.zimbra@redhat.com>
Message-ID: <2132347672.31688161.1410874518252.JavaMail.zimbra@redhat.com>

Greetings!

The next release of oslo.messaging (Juno) will provide support for
version 1.0 of the AMQP messaging protocol.  This new feature is being
released as an experimental rpc_backend option, with the hope that the
community will take this opportunity to 'kick the tires' a bit.

This new feature has dependencies on the following:

1) A version of the qpidd broker >= 0.26
2) The Qpid Proton libraries (qpid-proton-c)
3) Python bindings for Qpid Proton (python-qpid-proton)
4) The pyngus pure python client API

Items 2 and 3 are available for fedora19+ and RHEL/Centos 6+7 via EPEL.
Item 1 is available for fedora19+, RHEL/Centos 7 _only_ via EPEL.
pyngus (item 4) is currently available via PyPI
https://pypi.python.org/pypi/pyngus/1.1.0 - there are no RPMs for it (yet).

My main concerns are RHEL6/Centos6 support for a version of the qpid
broker that supports AMQP 1.0, and easier access to pyngus.  I'd like to
get pyngus into EPEL, which I assume would satisfy that dependency for
RDO.  The bigger issue is broker support in RHEL/Centos6 - base RHEL6
has an old, unsupported version of qpidd (0.14), and the icehouse repo
has an old version that does not support AMQP 1.0.

Would it be possible to update the version of qpidd broker in the RDO
repos for RHEL/Centos6?
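Once the dependencies above are installed, selecting the new driver is a
configuration change in the consuming service. A hedged sketch of a
nova.conf fragment follows; the 'amqp' backend/scheme name, the option
spellings, and the broker host shown are assumptions here, so check the
oslo.messaging Juno release notes for the final values:

```ini
# Hypothetical nova.conf fragment enabling the experimental AMQP 1.0 driver.
[DEFAULT]
# Either form should select the new driver once oslo.messaging (Juno) and
# the Proton libraries are installed; names are assumptions, not gospel.
rpc_backend = amqp
transport_url = amqp://guest:guest@broker.example.com:5672/
```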
Thanks,

-K

From sgordon at redhat.com  Tue Sep 16 13:55:29 2014
From: sgordon at redhat.com (Steve Gordon)
Date: Tue, 16 Sep 2014 09:55:29 -0400 (EDT)
Subject: [Rdo-list] Error: sysctl -p /etc/sysctl.conf returned 255 instead
	of one of [0]
In-Reply-To: 
References: 
Message-ID: <17507635.119025.1410875719486.JavaMail.sgordon@localhost.localdomain>

Hi,

I also encountered this issue (
https://www.redhat.com/archives/rdo-list/2014-September/msg00067.html ).
Gael/Martin, when is this going to be fixed in RDO Icehouse for EL6?

Thanks,

Steve

----- Original Message -----
> From: "foss geek" 
> To: rdo-list at redhat.com
>
> Dear All,
>
> I am trying to install openstack with vCenter on CentOS 6.5.
>
> I am getting the error below:
>
> SequenceError: Error appeared during Puppet run: 10.10.2.2_neutron.pp
> Error: sysctl -p /etc/sysctl.conf returned 255 instead of one of [0]
> You will find full trace in log
> /var/tmp/packstack/20140916-191749-_fyhX5/manifests/10.10.2.2_neutron.pp.log
>
> 2014-09-16 19:21:57::INFO::shell::81::root:: [10.10.2.2] Executing
> script:
> rm -rf /var/tmp/packstack/9d8561bfd26c44b392473e9f0e601bbb
>
> Here is the detailed error message and answer file:
>
> 2014-09-16 19:21:57::INFO::shell::81::root:: [10.10.2.2] Executing
> script:
> rm -rf /var/tmp/packstack/9d8561bfd26c44b392473e9f0e601bbb
> [root at vCenter-OS-Dev ~]# tail -n 30
> /var/tmp/packstack/20140916-191749-_fyhX5/openstack-setup.log
> rpm -q --whatprovides puppet || yum install -y puppet
> rpm -q --whatprovides openssh-clients || yum install -y
> openssh-clients
> rpm -q --whatprovides tar || yum install -y tar
> rpm -q --whatprovides nc || yum install -y nc
> rpm -q --whatprovides rubygem-json || yum install -y rubygem-json
> 2014-09-16 19:18:13::INFO::shell::81::root:: [localhost] Executing
> script:
> cd /usr/lib/python2.6/site-packages/packstack/puppet
> cd /var/tmp/packstack/20140916-191749-_fyhX5/manifests
> tar --dereference -cpzf - ../manifests | ssh -o
> StrictHostKeyChecking=no -o
UserKnownHostsFile=/dev/null > root at 10.10.2.2 tar -C > /var/tmp/packstack/9d8561bfd26c44b392473e9f0e601bbb -xpzf - > cd /usr/share/openstack-puppet/modules > tar --dereference -cpzf - apache ceilometer certmonger cinder concat > firewall glance heat horizon inifile keystone memcached mongodb > mysql neutron nova nssdb openstack packstack qpid rabbitmq remote > rsync ssh stdlib swift sysctl tempest vcsrepo vlan vswitch xinetd | > ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null > root at 10.10.2.2 tar -C > /var/tmp/packstack/9d8561bfd26c44b392473e9f0e601bbb/modules -xpzf - > 2014-09-16 19:21:57::ERROR::run_setup::921::root:: Traceback (most > recent call last): > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 916, in main > _main(confFile) > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 605, in _main > runSequences() > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 584, in runSequences > controller.runAllSequences() > File > "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", > line 68, in runAllSequences > sequence.run(config=self.CONF, messages=self.MESSAGES) > File > "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", > line 98, in run > step.run(config=config, messages=messages) > File > "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", > line 44, in run > raise SequenceError(str(ex)) > SequenceError: Error appeared during Puppet run: 10.10.2.2_neutron.pp > Error: sysctl -p /etc/sysctl.conf returned 255 instead of one of [0] > You will find full trace in log > /var/tmp/packstack/20140916-191749-_fyhX5/manifests/10.10.2.2_neutron.pp.log > > > 2014-09-16 19:21:57::INFO::shell::81::root:: [10.10.2.2] Executing > script: > rm -rf /var/tmp/packstack/9d8561bfd26c44b392473e9f0e601bbb > > # cat all-in-one.conf > > [general] > > > # Path to a Public key to install on servers. 
If a usable key has not > # been installed on the remote servers the user will be prompted for > a > # password and this key will be installed so the password will not be > # required again > CONFIG_SSH_KEY= > > > # Set to 'y' if you would like Packstack to install MySQL > CONFIG_MYSQL_INSTALL=y > > > # Set to 'y' if you would like Packstack to install OpenStack Image > # Service (Glance) > CONFIG_GLANCE_INSTALL=y > > > # Set to 'y' if you would like Packstack to install OpenStack Block > # Storage (Cinder) > CONFIG_CINDER_INSTALL=y > > > # Set to 'y' if you would like Packstack to install OpenStack Compute > # (Nova) > CONFIG_NOVA_INSTALL=y > > > # Set to 'y' if you would like Packstack to install OpenStack > # Networking (Neutron). Otherwise Nova Network will be used. > CONFIG_NEUTRON_INSTALL=y > > > # Set to 'y' if you would like Packstack to install OpenStack > # Dashboard (Horizon) > CONFIG_HORIZON_INSTALL=y > > > # Set to 'y' if you would like Packstack to install OpenStack Object > # Storage (Swift) > CONFIG_SWIFT_INSTALL=y > > > # Set to 'y' if you would like Packstack to install OpenStack > # Metering (Ceilometer) > CONFIG_CEILOMETER_INSTALL=y > > > # Set to 'y' if you would like Packstack to install OpenStack > # Orchestration (Heat) > CONFIG_HEAT_INSTALL=y > > > # Set to 'y' if you would like Packstack to install the OpenStack > # Client packages. An admin "rc" file will also be installed > CONFIG_CLIENT_INSTALL=y > > > # Comma separated list of NTP servers. Leave plain if Packstack > # should not install ntpd on instances. > CONFIG_NTP_SERVERS= > > > # Set to 'y' if you would like Packstack to install Nagios to monitor > # OpenStack hosts > CONFIG_NAGIOS_INSTALL=y > > > # Comma separated list of servers to be excluded from installation in > # case you are running Packstack the second time with the same answer > # file and don't want Packstack to touch these servers. Leave plain > if > # you don't need to exclude any server. 
> EXCLUDE_SERVERS= > > > # Set to 'y' if you want to run OpenStack services in debug mode. > # Otherwise set to 'n'. > CONFIG_DEBUG_MODE=n > > > # The IP address of the server on which to install OpenStack services > # specific to controller role such as API servers, Horizon, etc. > CONFIG_CONTROLLER_HOST=10.10.2.2 > > > # The list of IP addresses of the server on which to install the Nova > # compute service > CONFIG_COMPUTE_HOSTS=10.10.2.2 > > > # The list of IP addresses of the server on which to install the > # network service such as Nova network or Neutron > CONFIG_NETWORK_HOSTS=10.10.2.2 > > > # Set to 'y' if you want to use VMware vCenter as hypervisor and > # storage. Otherwise set to 'n'. > CONFIG_VMWARE_BACKEND=y > > > # The IP address of the VMware vCenter server > CONFIG_VCENTER_HOST=192.168.1.9 > > > # The username to authenticate to VMware vCenter server > CONFIG_VCENTER_USER=root > > > # The password to authenticate to VMware vCenter server > CONFIG_VCENTER_PASSWORD=tcl at 123 > > > # The name of the vCenter cluster > CONFIG_VCENTER_CLUSTER_NAME=ssoccluster > > > # To subscribe each server to EPEL enter "y" > CONFIG_USE_EPEL=y > > > # A comma separated list of URLs to any additional yum repositories > # to install > CONFIG_REPO= > > > # To subscribe each server with Red Hat subscription manager, include > # this with CONFIG_RH_PW > CONFIG_RH_USER= > > > # To subscribe each server with Red Hat subscription manager, include > # this with CONFIG_RH_USER > CONFIG_RH_PW= > > > # To enable RHEL optional repos use value "y" > CONFIG_RH_OPTIONAL=y > > > # To subscribe each server with RHN Satellite,fill Satellite's URL > # here. 
Note that either satellite's username/password or activation > # key has to be provided > CONFIG_SATELLITE_URL= > > > # Username to access RHN Satellite > CONFIG_SATELLITE_USER= > > > # Password to access RHN Satellite > CONFIG_SATELLITE_PW= > > > # Activation key for subscription to RHN Satellite > CONFIG_SATELLITE_AKEY= > > > # Specify a path or URL to a SSL CA certificate to use > CONFIG_SATELLITE_CACERT= > > > # If required specify the profile name that should be used as an > # identifier for the system in RHN Satellite > CONFIG_SATELLITE_PROFILE= > > > # Comma separated list of flags passed to rhnreg_ks. Valid flags are: > # novirtinfo, norhnsd, nopackages > CONFIG_SATELLITE_FLAGS= > > > # Specify a HTTP proxy to use with RHN Satellite > CONFIG_SATELLITE_PROXY= > > > # Specify a username to use with an authenticated HTTP proxy > CONFIG_SATELLITE_PROXY_USER= > > > # Specify a password to use with an authenticated HTTP proxy. > CONFIG_SATELLITE_PROXY_PW= > > > # Set the AMQP service backend. 
Allowed values are: qpid, rabbitmq > CONFIG_AMQP_BACKEND=rabbitmq > > > # The IP address of the server on which to install the AMQP service > CONFIG_AMQP_HOST=10.10.2.2 > > > # Enable SSL for the AMQP service > CONFIG_AMQP_ENABLE_SSL=n > > > # Enable Authentication for the AMQP service > CONFIG_AMQP_ENABLE_AUTH=n > > > # The password for the NSS certificate database of the AMQP service > CONFIG_AMQP_NSS_CERTDB_PW=changeme > > > # The port in which the AMQP service listens to SSL connections > CONFIG_AMQP_SSL_PORT=5671 > > > # The filename of the certificate that the AMQP service is going to > # use > CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem > > > # The filename of the private key that the AMQP service is going to > # use > CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem > > > # Auto Generates self signed SSL certificate and key > CONFIG_AMQP_SSL_SELF_SIGNED=y > > > # User for amqp authentication > CONFIG_AMQP_AUTH_USER=amqp_user > > > # Password for user authentication > CONFIG_AMQP_AUTH_PASSWORD=changeme > > > # The IP address of the server on which to install MySQL or IP > # address of DB server to use if MySQL installation was not selected > CONFIG_MYSQL_HOST=10.10.2.2 > > > # Username for the MySQL admin user > CONFIG_MYSQL_USER=root > > > # Password for the MySQL admin user > CONFIG_MYSQL_PW=changeme > > > # The password to use for the Keystone to access DB > CONFIG_KEYSTONE_DB_PW=changeme > > > # The token to use for the Keystone service api > CONFIG_KEYSTONE_ADMIN_TOKEN=changeme > > > # The password to use for the Keystone admin user > CONFIG_KEYSTONE_ADMIN_PW=changeme > > > # The password to use for the Keystone demo user > CONFIG_KEYSTONE_DEMO_PW=changeme > > > # Kestone token format. 
Use either UUID or PKI > CONFIG_KEYSTONE_TOKEN_FORMAT=UUID > > > # The password to use for the Glance to access DB > CONFIG_GLANCE_DB_PW=changeme > > > # The password to use for the Glance to authenticate with Keystone > CONFIG_GLANCE_KS_PW=changeme > > > # The password to use for the Cinder to access DB > CONFIG_CINDER_DB_PW=changeme > > > # The password to use for the Cinder to authenticate with Keystone > CONFIG_CINDER_KS_PW=changeme > > > # The Cinder backend to use, valid options are: lvm, gluster, nfs > CONFIG_CINDER_BACKEND=lvm > > > # Create Cinder's volumes group. This should only be done for testing > # on a proof-of-concept installation of Cinder. This will create a > # file-backed volume group and is not suitable for production usage. > CONFIG_CINDER_VOLUMES_CREATE=y > > > # Cinder's volumes group size. Note that actual volume size will be > # extended with 3% more space for VG metadata. > CONFIG_CINDER_VOLUMES_SIZE=8G > > > # A single or comma separated list of gluster volume shares to mount, > # eg: ip-address:/vol-name, domain:/vol-name > CONFIG_CINDER_GLUSTER_MOUNTS= > > > # A single or comma seprated list of NFS exports to mount, eg: ip- > # address:/export-name > CONFIG_CINDER_NFS_MOUNTS= > > > # The password to use for the Nova to access DB > CONFIG_NOVA_DB_PW=changeme > > > # The password to use for the Nova to authenticate with Keystone > CONFIG_NOVA_KS_PW=changeme > > > # The overcommitment ratio for virtual to physical CPUs. Set to 1.0 > # to disable CPU overcommitment > CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0 > > > # The overcommitment ratio for virtual to physical RAM. 
Set to 1.0 to > # disable RAM overcommitment > CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5 > > > # Private interface for Flat DHCP on the Nova compute servers > CONFIG_NOVA_COMPUTE_PRIVIF=eth1 > > > # Nova network manager > CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager > > > # Public interface on the Nova network server > CONFIG_NOVA_NETWORK_PUBIF= > > > # Private interface for network manager on the Nova network server > CONFIG_NOVA_NETWORK_PRIVIF= > > > # IP Range for network manager > CONFIG_NOVA_NETWORK_FIXEDRANGE= > > > # IP Range for Floating IP's > CONFIG_NOVA_NETWORK_FLOATRANGE= > > > # Name of the default floating pool to which the specified floating > # ranges are added to > CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova > > > # Automatically assign a floating IP to new instances > CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n > > > # First VLAN for private networks > CONFIG_NOVA_NETWORK_VLAN_START= > > > # Number of networks to support > CONFIG_NOVA_NETWORK_NUMBER= > > > # Number of addresses in each private subnet > CONFIG_NOVA_NETWORK_SIZE= > > > # The password to use for Neutron to authenticate with Keystone > CONFIG_NEUTRON_KS_PW=changeme > > > # The password to use for Neutron to access DB > CONFIG_NEUTRON_DB_PW=changeme > > > # The name of the bridge that the Neutron L3 agent will use for > # external traffic, or 'provider' if using provider networks > CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex > > > # The name of the L2 plugin to be used with Neutron > CONFIG_NEUTRON_L2_PLUGIN=ml2 > > > # Neutron metadata agent password > CONFIG_NEUTRON_METADATA_PW=changeme > > > # Set to 'y' if you would like Packstack to install Neutron LBaaS > CONFIG_LBAAS_INSTALL=y > > > # Set to 'y' if you would like Packstack to install Neutron L3 > # Metering agent > CONFIG_NEUTRON_METERING_AGENT_INSTALL=n > > > # Whether to configure neutron Firewall as a Service > CONFIG_NEUTRON_FWAAS=y > > > # A comma separated list of network type driver entrypoints to be > # loaded 
from the neutron.ml2.type_drivers namespace. > CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan,vxlan > > > # A comma separated ordered list of network_types to allocate as > # tenant networks. The value 'local' is only useful for single-box > # testing but provides no connectivity between hosts. > CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan > > > # A comma separated ordered list of networking mechanism driver > # entrypoints to be loaded from the neutron.ml2.mechanism_drivers > # namespace. > CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch,cisco_nexus > > > # A comma separated list of physical_network names with which flat > # networks can be created. Use * to allow flat networks with > arbitrary > # physical_network names. > CONFIG_NEUTRON_ML2_FLAT_NETWORKS=* > > > # A comma separated list of :: > # or specifying physical_network names usable for > # VLAN provider and tenant networks, as well as ranges of VLAN tags > on > # each available for allocation to tenant networks. > CONFIG_NEUTRON_ML2_VLAN_RANGES= > > > # A comma separated list of : tuples enumerating > # ranges of GRE tunnel IDs that are available for tenant network > # allocation. Should be an array with tun_max +1 - tun_min > 1000000 > CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES= > > > # Multicast group for VXLAN. If unset, disables VXLAN enable sending > # allocate broadcast traffic to this multicast group. When left > # unconfigured, will disable multicast VXLAN mode. Should be an > # Multicast IP (v4 or v6) address. > CONFIG_NEUTRON_ML2_VXLAN_GROUP= > > > # A comma separated list of : tuples enumerating > # ranges of VXLAN VNI IDs that are available for tenant network > # allocation. Min value is 0 and Max value is 16777215. > CONFIG_NEUTRON_ML2_VNI_RANGES= > > > # The name of the L2 agent to be used with Neutron > CONFIG_NEUTRON_L2_AGENT=openvswitch > > > # The type of network to allocate for tenant networks (eg. 
vlan, > # local) > CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local > > > # A comma separated list of VLAN ranges for the Neutron linuxbridge > # plugin (eg. physnet1:1:4094,physnet2,physnet3:3000:3999) > CONFIG_NEUTRON_LB_VLAN_RANGES= > > > # A comma separated list of interface mappings for the Neutron > # linuxbridge plugin (eg. physnet1:br-eth1,physnet2:br-eth2,physnet3 > # :br-eth3) > CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS= > > > # Type of network to allocate for tenant networks (eg. vlan, local, > # gre, vxlan) > CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan > > > # A comma separated list of VLAN ranges for the Neutron openvswitch > # plugin (eg. physnet1:1:4094,physnet2,physnet3:3000:3999) > CONFIG_NEUTRON_OVS_VLAN_RANGES= > > > # A comma separated list of bridge mappings for the Neutron > # openvswitch plugin (eg. physnet1:br-eth1,physnet2:br-eth2,physnet3 > # :br-eth3) > CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS= > > > # A comma separated list of colon-separated OVS bridge:interface > # pairs. The interface will be added to the associated bridge. > CONFIG_NEUTRON_OVS_BRIDGE_IFACES= > > > # A comma separated list of tunnel ranges for the Neutron openvswitch > # plugin (eg. 1:1000) > CONFIG_NEUTRON_OVS_TUNNEL_RANGES= > > > # The interface for the OVS tunnel. Packstack will override the IP > # address used for tunnels on this hypervisor to the IP found on the > # specified interface. (eg. eth1) > CONFIG_NEUTRON_OVS_TUNNEL_IF= > > > # VXLAN UDP port > CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789 > > > # To set up Horizon communication over https set this to 'y' > CONFIG_HORIZON_SSL=n > > > # PEM encoded certificate to be used for ssl on the https server, > # leave blank if one should be generated, this certificate should not > # require a passphrase > CONFIG_SSL_CERT= > > > # SSL keyfile corresponding to the certificate if one was entered > CONFIG_SSL_KEY= > > > # PEM encoded CA certificates from which the certificate chain of the > # server certificate can be assembled. 
> CONFIG_SSL_CACHAIN= > > > # The password to use for the Swift to authenticate with Keystone > CONFIG_SWIFT_KS_PW=changeme > > > # A comma separated list of devices which to use as Swift Storage > # device. Each entry should take the format /path/to/dev, for example > # /dev/vdb will install /dev/vdb as Swift storage device (packstack > # does not create the filesystem, you must do this first). If value > is > # omitted Packstack will create a loopback device for test setup > CONFIG_SWIFT_STORAGES= > > > # Number of swift storage zones, this number MUST be no bigger than > # the number of storage devices configured > CONFIG_SWIFT_STORAGE_ZONES=1 > > > # Number of swift storage replicas, this number MUST be no bigger > # than the number of storage zones configured > CONFIG_SWIFT_STORAGE_REPLICAS=1 > > > # FileSystem type for storage nodes > CONFIG_SWIFT_STORAGE_FSTYPE=ext4 > > > # Shared secret for Swift > CONFIG_SWIFT_HASH=changeme > > > # Size of the swift loopback file storage device > CONFIG_SWIFT_STORAGE_SIZE=2G > > > # Whether to provision for demo usage and testing. Note that > # provisioning is only supported for all-in-one installations. > CONFIG_PROVISION_DEMO=y > > > # Whether to configure tempest for testing > CONFIG_PROVISION_TEMPEST=n > > > # The name of the Tempest Provisioning user. 
If you don't provide a > # user name, Tempest will be configured in a standalone mode > CONFIG_PROVISION_TEMPEST_USER= > > > # The password to use for the Tempest Provisioning user > CONFIG_PROVISION_TEMPEST_USER_PW=changeme > > > # The CIDR network address for the floating IP subnet > CONFIG_PROVISION_DEMO_FLOATRANGE= 10.10.4.0/24 > > > # The uri of the tempest git repository to use > CONFIG_PROVISION_TEMPEST_REPO_URI= > https://github.com/openstack/tempest.git > > > # The revision of the tempest git repository to use > CONFIG_PROVISION_TEMPEST_REPO_REVISION=master > > > # Whether to configure the ovs external bridge in an all-in-one > # deployment > CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n > > > # The password used by Heat user to authenticate against MySQL > CONFIG_HEAT_DB_PW=changeme > > > # The encryption key to use for authentication info in database > CONFIG_HEAT_AUTH_ENC_KEY=changeme > > > # The password to use for the Heat to authenticate with Keystone > CONFIG_HEAT_KS_PW=changeme > > > # Set to 'y' if you would like Packstack to install Heat CloudWatch > # API > CONFIG_HEAT_CLOUDWATCH_INSTALL=n > > > # Set to 'y' if you would like Packstack to install Heat > # CloudFormation API > CONFIG_HEAT_CFN_INSTALL=n > > > # Name of Keystone domain for Heat > CONFIG_HEAT_DOMAIN=heat > > > # Name of Keystone domain admin user for Heat > CONFIG_HEAT_DOMAIN_ADMIN=heat_admin > > > # Password for Keystone domain admin user for Heat > CONFIG_HEAT_DOMAIN_PASSWORD=changeme > > > # Secret key for signing metering messages > CONFIG_CEILOMETER_SECRET=changeme > > > # The password to use for Ceilometer to authenticate with Keystone > CONFIG_CEILOMETER_KS_PW=changeme > > > # The IP address of the server on which to install MongoDB > CONFIG_MONGODB_HOST=10.10.2.2 > > > # The password of the nagiosadmin user on the Nagios server > CONFIG_NAGIOS_PW=changeme > > > ============== > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > 
https://www.redhat.com/mailman/listinfo/rdo-list > -- Steve Gordon, RHCE Sr. Technical Product Manager, Red Hat Enterprise Linux OpenStack Platform From mmagr at redhat.com Tue Sep 16 14:17:13 2014 From: mmagr at redhat.com (=?UTF-8?B?TWFydGluIE3DoWdy?=) Date: Tue, 16 Sep 2014 16:17:13 +0200 Subject: [Rdo-list] Error: sysctl -p /etc/sysctl.conf returned 255 instead of one of [0] In-Reply-To: <17507635.119025.1410875719486.JavaMail.sgordon@localhost.localdomain> References: <17507635.119025.1410875719486.JavaMail.sgordon@localhost.localdomain> Message-ID: <54184669.4080306@redhat.com> Hi Steve, The RDO Icehouse update process is a work in progress; the update will go out as soon as the package passes CI. I can't give you an ETA on that, but if you need the update now, you can download the package straight from koji [1]. Regards, Martin [1] https://koji.fedoraproject.org/koji/taskinfo?taskID=7577794 On 09/16/2014 03:55 PM, Steve Gordon wrote: > Hi, > > I also encountered this issue ( https://www.redhat.com/archives/rdo-list/2014-September/msg00067.html ). Gael/Martin when is this going to be fixed in RDO Icehouse for EL6? > > Thanks, > > Steve > > ----- Original Message ----- >> From: "foss geek" >> To: rdo-list at redhat.com >> >> Dear All, >> >> I trying to install openstack with vCenter on CentOS 6.5. 
>> >> I am getting below error: >> >> SequenceError: Error appeared during Puppet run: 10.10.2.2_neutron.pp >> Error: sysctl -p /etc/sysctl.conf returned 255 instead of one of [0] >> You will find full trace in log >> /var/tmp/packstack/20140916-191749-_fyhX5/manifests/10.10.2.2_neutron.pp.log >> >> 2014-09-16 19:21:57::INFO::shell::81::root:: [10.10.2.2] Executing >> script: >> rm -rf /var/tmp/packstack/9d8561bfd26c44b392473e9f0e601bbb >> >> Here is detailed error message and answer file: >> >> 2014-09-16 19:21:57::INFO::shell::81::root:: [10.10.2.2] Executing >> script: >> rm -rf /var/tmp/packstack/9d8561bfd26c44b392473e9f0e601bbb >> [root at vCenter-OS-Dev ~]# tail -n 30 >> /var/tmp/packstack/20140916-191749-_fyhX5/openstack-setup.log >> rpm -q --whatprovides puppet || yum install -y puppet >> rpm -q --whatprovides openssh-clients || yum install -y >> openssh-clients >> rpm -q --whatprovides tar || yum install -y tar >> rpm -q --whatprovides nc || yum install -y nc >> rpm -q --whatprovides rubygem-json || yum install -y rubygem-json >> 2014-09-16 19:18:13::INFO::shell::81::root:: [localhost] Executing >> script: >> cd /usr/lib/python2.6/site-packages/packstack/puppet >> cd /var/tmp/packstack/20140916-191749-_fyhX5/manifests >> tar --dereference -cpzf - ../manifests | ssh -o >> StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null >> root at 10.10.2.2 tar -C >> /var/tmp/packstack/9d8561bfd26c44b392473e9f0e601bbb -xpzf - >> cd /usr/share/openstack-puppet/modules >> tar --dereference -cpzf - apache ceilometer certmonger cinder concat >> firewall glance heat horizon inifile keystone memcached mongodb >> mysql neutron nova nssdb openstack packstack qpid rabbitmq remote >> rsync ssh stdlib swift sysctl tempest vcsrepo vlan vswitch xinetd | >> ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null >> root at 10.10.2.2 tar -C >> /var/tmp/packstack/9d8561bfd26c44b392473e9f0e601bbb/modules -xpzf - >> 2014-09-16 19:21:57::ERROR::run_setup::921::root:: Traceback 
(most >> recent call last): >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >> line 916, in main >> _main(confFile) >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >> line 605, in _main >> runSequences() >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >> line 584, in runSequences >> controller.runAllSequences() >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", >> line 68, in runAllSequences >> sequence.run(config=self.CONF, messages=self.MESSAGES) >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", >> line 98, in run >> step.run(config=config, messages=messages) >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", >> line 44, in run >> raise SequenceError(str(ex)) >> SequenceError: Error appeared during Puppet run: 10.10.2.2_neutron.pp >> Error: sysctl -p /etc/sysctl.conf returned 255 instead of one of [0] >> You will find full trace in log >> /var/tmp/packstack/20140916-191749-_fyhX5/manifests/10.10.2.2_neutron.pp.log >> >> >> 2014-09-16 19:21:57::INFO::shell::81::root:: [10.10.2.2] Executing >> script: >> rm -rf /var/tmp/packstack/9d8561bfd26c44b392473e9f0e601bbb >> >> # cat all-in-one.conf >> >> [general] >> >> >> # Path to a Public key to install on servers. 
If a usable key has not >> # been installed on the remote servers the user will be prompted for >> a >> # password and this key will be installed so the password will not be >> # required again >> CONFIG_SSH_KEY= >> >> >> # Set to 'y' if you would like Packstack to install MySQL >> CONFIG_MYSQL_INSTALL=y >> >> >> # Set to 'y' if you would like Packstack to install OpenStack Image >> # Service (Glance) >> CONFIG_GLANCE_INSTALL=y >> >> >> # Set to 'y' if you would like Packstack to install OpenStack Block >> # Storage (Cinder) >> CONFIG_CINDER_INSTALL=y >> >> >> # Set to 'y' if you would like Packstack to install OpenStack Compute >> # (Nova) >> CONFIG_NOVA_INSTALL=y >> >> >> # Set to 'y' if you would like Packstack to install OpenStack >> # Networking (Neutron). Otherwise Nova Network will be used. >> CONFIG_NEUTRON_INSTALL=y >> >> >> # Set to 'y' if you would like Packstack to install OpenStack >> # Dashboard (Horizon) >> CONFIG_HORIZON_INSTALL=y >> >> >> # Set to 'y' if you would like Packstack to install OpenStack Object >> # Storage (Swift) >> CONFIG_SWIFT_INSTALL=y >> >> >> # Set to 'y' if you would like Packstack to install OpenStack >> # Metering (Ceilometer) >> CONFIG_CEILOMETER_INSTALL=y >> >> >> # Set to 'y' if you would like Packstack to install OpenStack >> # Orchestration (Heat) >> CONFIG_HEAT_INSTALL=y >> >> >> # Set to 'y' if you would like Packstack to install the OpenStack >> # Client packages. An admin "rc" file will also be installed >> CONFIG_CLIENT_INSTALL=y >> >> >> # Comma separated list of NTP servers. Leave plain if Packstack >> # should not install ntpd on instances. >> CONFIG_NTP_SERVERS= >> >> >> # Set to 'y' if you would like Packstack to install Nagios to monitor >> # OpenStack hosts >> CONFIG_NAGIOS_INSTALL=y >> >> >> # Comma separated list of servers to be excluded from installation in >> # case you are running Packstack the second time with the same answer >> # file and don't want Packstack to touch these servers. 
Leave plain >> if >> # you don't need to exclude any server. >> EXCLUDE_SERVERS= >> >> >> # Set to 'y' if you want to run OpenStack services in debug mode. >> # Otherwise set to 'n'. >> CONFIG_DEBUG_MODE=n >> >> >> # The IP address of the server on which to install OpenStack services >> # specific to controller role such as API servers, Horizon, etc. >> CONFIG_CONTROLLER_HOST=10.10.2.2 >> >> >> # The list of IP addresses of the server on which to install the Nova >> # compute service >> CONFIG_COMPUTE_HOSTS=10.10.2.2 >> >> >> # The list of IP addresses of the server on which to install the >> # network service such as Nova network or Neutron >> CONFIG_NETWORK_HOSTS=10.10.2.2 >> >> >> # Set to 'y' if you want to use VMware vCenter as hypervisor and >> # storage. Otherwise set to 'n'. >> CONFIG_VMWARE_BACKEND=y >> >> >> # The IP address of the VMware vCenter server >> CONFIG_VCENTER_HOST=192.168.1.9 >> >> >> # The username to authenticate to VMware vCenter server >> CONFIG_VCENTER_USER=root >> >> >> # The password to authenticate to VMware vCenter server >> CONFIG_VCENTER_PASSWORD=tcl at 123 >> >> >> # The name of the vCenter cluster >> CONFIG_VCENTER_CLUSTER_NAME=ssoccluster >> >> >> # To subscribe each server to EPEL enter "y" >> CONFIG_USE_EPEL=y >> >> >> # A comma separated list of URLs to any additional yum repositories >> # to install >> CONFIG_REPO= >> >> >> # To subscribe each server with Red Hat subscription manager, include >> # this with CONFIG_RH_PW >> CONFIG_RH_USER= >> >> >> # To subscribe each server with Red Hat subscription manager, include >> # this with CONFIG_RH_USER >> CONFIG_RH_PW= >> >> >> # To enable RHEL optional repos use value "y" >> CONFIG_RH_OPTIONAL=y >> >> >> # To subscribe each server with RHN Satellite,fill Satellite's URL >> # here. 
>> [Remainder of the quoted answer file snipped; it is identical to the copy quoted in full in the preceding message.] 
_______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> -- Martin Mágr Openstack Red Hat Czech IRC nick: mmagr / para Internal channels: #brno, #packstack, #rhos-dev, #rhos-users Freenode channels: #openstack-dev, #packstack-dev, #puppet-openstack, #rdo From Yaniv.Kaul at emc.com Tue Sep 16 14:19:37 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Tue, 16 Sep 2014 10:19:37 -0400 Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) In-Reply-To: <1410869486.2753.6.camel@localhost.localdomain> References: <1410824721.2971.2.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F43CF@MX19A.corp.emc.com> <648473255763364B961A02AC3BE1060D03C59F43DA@MX19A.corp.emc.com> <1410868362.2753.3.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F4460@MX19A.corp.emc.com> <1410869486.2753.6.camel@localhost.localdomain> Message-ID: <648473255763364B961A02AC3BE1060D03C59F44DC@MX19A.corp.emc.com> > -----Original Message----- > From: whayutin [mailto:whayutin at redhat.com] > Sent: Tuesday, September 16, 2014 3:11 PM > To: Kaul, Yaniv > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] CentOS-7.0 and RDO (all clear) > > On Tue, 2014-09-16 at 07:59 -0400, Kaul, Yaniv wrote: > > > -----Original Message----- > > > From: whayutin [mailto:whayutin at redhat.com] > > > Sent: Tuesday, September 16, 2014 2:53 PM > > > To: Kaul, Yaniv > > > Cc: rdo-list at redhat.com > > > Subject: Re: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > > > > On Tue, 2014-09-16 at 04:31 -0400, Kaul, Yaniv wrote: > > > > > -----Original Message----- > > > > > From: rdo-list-bounces at redhat.com > > > > > [mailto:rdo-list-bounces at redhat.com] On Behalf Of Kaul, Yaniv > > > > > Sent: Tuesday, September 16, 2014 11:09 AM > > > > > To: rdo-list at redhat.com > > > > > Subject: Re: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > > > > > > > > > -----Original Message----- > > > > > >
From: rdo-list-bounces at redhat.com > > > > > > [mailto:rdo-list-bounces at redhat.com] > > > > > > On Behalf Of whayutin > > > > > > Sent: Tuesday, September 16, 2014 2:45 AM > > > > > > To: rdo-list at redhat.com > > > > > > Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > > > > > > > > > > Greetings, > > > > > > > > > > > > You should now have success installing RDO Icehouse on > > > > > > CentOS-7.0 using the quickstart instructions. > > > > > > > > > > Failed, with the new RPM (and without it, so perhaps it's something > else): > > > > > ... > > > > > Copying Puppet modules and manifests [ DONE ] > > > > > Applying 10.103.234.139_prescript.pp > > > > > 10.103.234.139_prescript.pp: [ ERROR ] > > > > > Applying Puppet manifests [ ERROR ] > > > > > > > > > > ERROR : Error appeared during Puppet run: > > > > > 10.103.234.139_prescript.pp > > > > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install > > > > > openstack-selinux' returned > > > > > 1: Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch > > > > > (openstack- > > > > > icehouse) You will find full trace in log > > > > > /var/tmp/packstack/20140916-095516- > > > > > ANJtSb/manifests/10.103.234.139_prescript.pp.log > > > > > Please check log file > > > > > /var/tmp/packstack/20140916-095516-ANJtSb/openstack- > > > > > setup.log for more information > > > > > > > > > > Additional information: > > > > > * Time synchronization installation was skipped. Please note > > > > > that unsynchronized time on server instances might be problem > > > > > for some OpenStack components. > > > > > * Did not create a cinder volume group, one already existed > > > > > * File /root/keystonerc_admin has been created on OpenStack > > > > > client host 10.103.234.139. To use the command line tools you > > > > > need to source the > > > file. > > > > > * To access the OpenStack Dashboard browse to > > > > > http://10.103.234.139/dashboard . 
> > > > > Please, find your login credentials stored in the > > > > > keystonerc_admin in your home directory. > > > > > > > > > > > > > > > The error in the log above points to: > > > > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install > > > > > openstack-selinux' returned > > > > > 1: Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch > > > > > (openstack- > > > > > icehouse) You will find full trace in log > > > > > /var/tmp/packstack/20140916-095516- > > > > > ANJtSb/manifests/10.103.234.139_prescript.pp.log > > > > > > > > And manually trying: > > > > Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch > > > > (openstack- > > > icehouse) > > > > Requires: selinux-policy-targeted >= 3.12.1-153.el7_0.10 > > > > Installed: > > > > selinux-policy-targeted-3.12.1-153.el7.noarch (@centos7- > > > x86_64) > > > > selinux-policy-targeted = 3.12.1-153.el7 > > > > Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch > > > > (openstack- > > > icehouse) > > > > Requires: selinux-policy-base >= 3.12.1-153.el7_0.10 > > > > Installed: > > > > selinux-policy-targeted-3.12.1-153.el7.noarch (@centos7- > > > x86_64) > > > > selinux-policy-base = 3.12.1-153.el7 > > > > Available: selinux-policy-minimum-3.12.1-153.el7.noarch > > > > (centos7- > > > x86_64) > > > > selinux-policy-base = 3.12.1-153.el7 > > > > Available: selinux-policy-mls-3.12.1-153.el7.noarch (centos7- > x86_64) > > > > selinux-policy-base = 3.12.1-153.el7 > > > > > > > > > > > > I manually found and installed the required packages. Perhaps it > > > > was not > > > propagated to mirrors yet (as I failed to download it from multiple mirrors). 
> > > > It continued, then failed on: > > > > 10.103.234.139_nova.pp: [ ERROR ] > > > > Applying Puppet manifests [ ERROR ] > > > > > > > > ERROR : Error appeared during Puppet run: 10.103.234.139_nova.pp > > > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install > > > > openstack-nova-compute' returned 1: Error: Package: > > > > gnutls-utils-3.1.18-8.el7.x86_64 (centos7-x86_64) > > > > > > > > I'll check my repos. > > > > Y. > > > > > > Ya.. Please check your repos, my CentOS-7.0 install is laying down > > > openstack- selinux-0.5.15-1.el7ost.noarch > > > > > > Yaniv, are you installing the rdo-release rpm? It looks like your > > > install is pulling the openstack-selinux rpm from the CentOS yum repository > itself. > > > > Indeed, I'm installing the RDO release RPM before that. > > [root at lgdrm403 ~]# yum repolist > > Loaded plugins: fastestmirror, priorities, rhnplugin This system is > > receiving updates from RHN Classic or Red Hat Satellite. > > Loading mirror speeds from cached hostfile > > * epel: fedora-epel.mirror.iweb.com > > 17 packages excluded due to repository priority protections > > repo id repo name > status > > centos7-x86_64 CentOS 7 (x86_64) > 8,461+4 > > epel/x86_64 Extra Packages for > Enterprise Linux 7 - x86_64 5,605+13 > > openstack-icehouse OpenStack Icehouse > Repository 1,131+245 > > !xio_custom7 XtremIO custom7 > repository 40 > > repolist: 15,237 > > > > > > > > Take a look at the quickstart for RDO You can also take a look at > > > our installs on CentOS-7.0 https://prod- rdojenkins.rhcloud.com/ > > > > I liked the comment 'These jobs use known workarounds to complete..' > > What am I supposed to do with the content @ https://github.com/redhat- > openstack/khaleesi/tree/master/workarounds/ ? > > Y. > > Contribute of course! And this one is using Ansible! There must be 50 ways to install your OpenStack! (To paraphrase Simon & Garfunkel) (I'm using a mixture of bash and packstack, with the intent of moving to Foreman). Y. 
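When a dependency such as selinux-policy-targeted >= 3.12.1-153.el7_0.10 cannot be resolved, it helps to see which repository could satisfy it before blaming mirrors. A hedged diagnostic sketch (repoquery ships in yum-utils; exact output depends on your mirrors and repo priorities):

```shell
# Every version of the packages involved in the failure that yum can see
yum --showduplicates list selinux-policy-targeted openstack-selinux

# Which package/repo provides the failing requirement
repoquery --whatprovides 'selinux-policy-base >= 3.12.1-153.el7_0.10'

# Confirm the RDO repository is enabled and not excluded by priority rules
yum repolist enabled | grep -i -e openstack -e rdo
```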
> > All the workarounds have been disabled. > https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-icehouse-production- > centos-70-aio-packstack-neutron-gre-rabbitmq/56/consoleFull > 03:02:18 workaround: > 03:02:18 iptables_install: false > 03:02:18 centos7_release: false > 03:02:18 mysql_centos7: false > 03:02:18 messagebus_centos7: false > > > > > > > > > > Notice the /etc/yum.repos.d/ directory on the controller, available > > > in the log file. > > > > > > Let me know if you still have any issues. > > > Thanks! > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://openstack.redhat.com/Quickstart > > > > > > > > > > > > Thank you :) > > > > > > > > > > > > _______________________________________________ > > > > > > Rdo-list mailing list > > > > > > Rdo-list at redhat.com > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > _______________________________________________ > > > > > Rdo-list mailing list > > > > > Rdo-list at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > _______________________________________________ > > > > Rdo-list mailing list > > > > Rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > From rbowen at redhat.com Tue Sep 16 14:41:40 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 16 Sep 2014 10:41:40 -0400 Subject: [Rdo-list] RDO test cases Message-ID: <54184C24.3020106@redhat.com> Please help me in filling out the test cases for next week's RDO Juno test day. Thanks. 
https://openstack.redhat.com/RDO_test_day_Juno_milestone_3_test_cases -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From whayutin at redhat.com Tue Sep 16 15:21:00 2014 From: whayutin at redhat.com (whayutin) Date: Tue, 16 Sep 2014 11:21:00 -0400 Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) In-Reply-To: <648473255763364B961A02AC3BE1060D03C59F44DC@MX19A.corp.emc.com> References: <1410824721.2971.2.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F43CF@MX19A.corp.emc.com> <648473255763364B961A02AC3BE1060D03C59F43DA@MX19A.corp.emc.com> <1410868362.2753.3.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F4460@MX19A.corp.emc.com> <1410869486.2753.6.camel@localhost.localdomain> <648473255763364B961A02AC3BE1060D03C59F44DC@MX19A.corp.emc.com> Message-ID: <1410880860.2753.15.camel@localhost.localdomain> On Tue, 2014-09-16 at 10:19 -0400, Kaul, Yaniv wrote: > > -----Original Message----- > > From: whayutin [mailto:whayutin at redhat.com] > > Sent: Tuesday, September 16, 2014 3:11 PM > > To: Kaul, Yaniv > > Cc: rdo-list at redhat.com > > Subject: Re: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > > On Tue, 2014-09-16 at 07:59 -0400, Kaul, Yaniv wrote: > > > > -----Original Message----- > > > > From: whayutin [mailto:whayutin at redhat.com] > > > > Sent: Tuesday, September 16, 2014 2:53 PM > > > > To: Kaul, Yaniv > > > > Cc: rdo-list at redhat.com > > > > Subject: Re: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > > > > > > On Tue, 2014-09-16 at 04:31 -0400, Kaul, Yaniv wrote: > > > > > > -----Original Message----- > > > > > > From: rdo-list-bounces at redhat.com > > > > > > [mailto:rdo-list-bounces at redhat.com] On Behalf Of Kaul, Yaniv > > > > > > Sent: Tuesday, September 16, 2014 11:09 AM > > > > > > To: rdo-list at redhat.com > > > > > > Subject: Re: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > > > > > > > > > > > -----Original Message----- > > > > > > > From: rdo-list-bounces 
at redhat.com > > > > > > > [mailto:rdo-list-bounces at redhat.com] > > > > > > > On Behalf Of whayutin > > > > > > > Sent: Tuesday, September 16, 2014 2:45 AM > > > > > > > To: rdo-list at redhat.com > > > > > > > Subject: [Rdo-list] CentOS-7.0 and RDO (all clear) > > > > > > > > > > > > > > Greetings, > > > > > > > > > > > > > > You should now have success installing RDO Icehouse on > > > > > > > CentOS-7.0 using the quickstart instructions. > > > > > > > > > > > > Failed, with the new RPM (and without it, so perhaps it's something > > else): > > > > > > ... > > > > > > Copying Puppet modules and manifests [ DONE ] > > > > > > Applying 10.103.234.139_prescript.pp > > > > > > 10.103.234.139_prescript.pp: [ ERROR ] > > > > > > Applying Puppet manifests [ ERROR ] > > > > > > > > > > > > ERROR : Error appeared during Puppet run: > > > > > > 10.103.234.139_prescript.pp > > > > > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install > > > > > > openstack-selinux' returned > > > > > > 1: Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch > > > > > > (openstack- > > > > > > icehouse) You will find full trace in log > > > > > > /var/tmp/packstack/20140916-095516- > > > > > > ANJtSb/manifests/10.103.234.139_prescript.pp.log > > > > > > Please check log file > > > > > > /var/tmp/packstack/20140916-095516-ANJtSb/openstack- > > > > > > setup.log for more information > > > > > > > > > > > > Additional information: > > > > > > * Time synchronization installation was skipped. Please note > > > > > > that unsynchronized time on server instances might be problem > > > > > > for some OpenStack components. > > > > > > * Did not create a cinder volume group, one already existed > > > > > > * File /root/keystonerc_admin has been created on OpenStack > > > > > > client host 10.103.234.139. To use the command line tools you > > > > > > need to source the > > > > file. > > > > > > * To access the OpenStack Dashboard browse to > > > > > > http://10.103.234.139/dashboard . 
> > > > > > Please, find your login credentials stored in the > > > > > > keystonerc_admin in your home directory. > > > > > > > > > > > > > > > > > > The error in the log above points to: > > > > > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install > > > > > > openstack-selinux' returned > > > > > > 1: Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch > > > > > > (openstack- > > > > > > icehouse) You will find full trace in log > > > > > > /var/tmp/packstack/20140916-095516- > > > > > > ANJtSb/manifests/10.103.234.139_prescript.pp.log > > > > > > > > > > And manually trying: > > > > > Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch > > > > > (openstack- > > > > icehouse) > > > > > Requires: selinux-policy-targeted >= 3.12.1-153.el7_0.10 > > > > > Installed: > > > > > selinux-policy-targeted-3.12.1-153.el7.noarch (@centos7- > > > > x86_64) > > > > > selinux-policy-targeted = 3.12.1-153.el7 > > > > > Error: Package: openstack-selinux-0.5.15-1.el7ost.noarch > > > > > (openstack- > > > > icehouse) > > > > > Requires: selinux-policy-base >= 3.12.1-153.el7_0.10 > > > > > Installed: > > > > > selinux-policy-targeted-3.12.1-153.el7.noarch (@centos7- > > > > x86_64) > > > > > selinux-policy-base = 3.12.1-153.el7 > > > > > Available: selinux-policy-minimum-3.12.1-153.el7.noarch > > > > > (centos7- > > > > x86_64) > > > > > selinux-policy-base = 3.12.1-153.el7 > > > > > Available: selinux-policy-mls-3.12.1-153.el7.noarch (centos7- > > x86_64) > > > > > selinux-policy-base = 3.12.1-153.el7 > > > > > > > > > > > > > > > I manually found and installed the required packages. Perhaps it > > > > > was not > > > > propagated to mirrors yet (as I failed to download it from multiple mirrors). 
> > > > > It continued, then failed on: > > > > > 10.103.234.139_nova.pp: [ ERROR ] > > > > > Applying Puppet manifests [ ERROR ] > > > > > > > > > > ERROR : Error appeared during Puppet run: 10.103.234.139_nova.pp > > > > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install > > > > > openstack-nova-compute' returned 1: Error: Package: > > > > > gnutls-utils-3.1.18-8.el7.x86_64 (centos7-x86_64) > > > > > > > > > > I'll check my repos. > > > > > Y. > > > > > > > > Ya.. Please check your repos, my CentOS-7.0 install is laying down > > > > openstack- selinux-0.5.15-1.el7ost.noarch > > > > > > > > Yaniv, are you installing the rdo-release rpm? It looks like your > > > > install is pulling the openstack-selinux rpm from the CentOS yum repository > > itself. > > > > > > Indeed, I'm installing the RDO release RPM before that. > > > [root at lgdrm403 ~]# yum repolist > > > Loaded plugins: fastestmirror, priorities, rhnplugin This system is > > > receiving updates from RHN Classic or Red Hat Satellite. > > > Loading mirror speeds from cached hostfile > > > * epel: fedora-epel.mirror.iweb.com > > > 17 packages excluded due to repository priority protections > > > repo id repo name > > status > > > centos7-x86_64 CentOS 7 (x86_64) > > 8,461+4 > > > epel/x86_64 Extra Packages for > > Enterprise Linux 7 - x86_64 5,605+13 > > > openstack-icehouse OpenStack Icehouse > > Repository 1,131+245 > > > !xio_custom7 XtremIO custom7 > > repository 40 > > > repolist: 15,237 > > > > > > > > > > > Take a look at the quickstart for RDO You can also take a look at > > > > our installs on CentOS-7.0 https://prod- rdojenkins.rhcloud.com/ > > > > > > I liked the comment 'These jobs use known workarounds to complete..' > > > What am I supposed to do with the content @ https://github.com/redhat- > > openstack/khaleesi/tree/master/workarounds/ ? > > > Y. > > > > Contribute of course! > > And this one is using Ansible! > There must be 50 ways to install your OpenStack! 
(To paraphrase Simon & Garfunkel) > > (I'm using a mixture of bash and packstack, with the intent of moving to Foreman). > Y. > We're only wrapping the supported installers w/ ansible. So ansible kicks off packstack, foreman, staypuft or instack. > > > > > All the workarounds have been disabled. > > https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-icehouse-production- > > centos-70-aio-packstack-neutron-gre-rabbitmq/56/consoleFull > > 03:02:18 workaround: > > 03:02:18 iptables_install: false > > 03:02:18 centos7_release: false > > 03:02:18 mysql_centos7: false > > 03:02:18 messagebus_centos7: false > > > > > > > > > > > > > > > Notice the /etc/yum.repos.d/ directory on the controller, available > > > > in the log file. > > > > > > > > Let me know if you still have any issues. > > > > Thanks! > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://openstack.redhat.com/Quickstart > > > > > > > > > > > > > > Thank you :) > > > > > > > > > > > > > > _______________________________________________ > > > > > > > Rdo-list mailing list > > > > > > > Rdo-list at redhat.com > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > _______________________________________________ > > > > > > Rdo-list mailing list > > > > > > Rdo-list at redhat.com > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > _______________________________________________ > > > > > Rdo-list mailing list > > > > > Rdo-list at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > From kbsingh at karan.org Tue Sep 16 21:25:25 2014 From: kbsingh at karan.org (Karanbir Singh) Date: Tue, 16 Sep 2014 22:25:25 +0100 Subject: [Rdo-list] Updated Cloud Images to test In-Reply-To: <5418A99B.6020609@karan.org> References: <5418A99B.6020609@karan.org> Message-ID: <5418AAC5.40901@karan.org> Hi guys, forwarding here as well, will track feedback here on the rdo list, so you dont 
need to cross post to the centos-devel list -------- Original Message -------- Subject: [CentOS-devel] Updated Cloud Images to test Date: Tue, 16 Sep 2014 22:20:27 +0100 From: Karanbir Singh Reply-To: The CentOS developers mailing list. To: The CentOS developers mailing list. hi folks, 24ff09989ed8e80bbf3ba7b85e1e00b024645b2b5f1bf65b22061bbeccbd2922 CentOS-7-x86_64-GenericCloud-20140916_01.qcow2 c2a30027bafd9042508dfe4ef46c2a9189bbe914e716721268b9812d32174420 CentOS-7-x86_64-GenericCloud-20140916_01.raw 6a31fa7ad8c5b10d4c58e2dec47bf5971bb21ec730b830d9b6201eada1c9200c CentOS-7-x86_64-GenericCloud-GA-7.0.1406_01.qcow2 52a27e9b1fbd55982a6e9221c115c3eac436bda3aa020b7882d03f32a9da04a6 CentOS-7-x86_64-GenericCloud-GA-7.0.1406_01.raw are now at http://cloud.centos.org/centos/7/devel/ Please test and provide feedback, these should now be pretty close to release grade. Main highlight here is that all content is .centos.org hosted, signed and we now include cloud-utils-growpart. Regards, -- Karanbir Singh +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh GnuPG Key : http://www.karan.org/publickey.asc _______________________________________________ CentOS-devel mailing list CentOS-devel at centos.org http://lists.centos.org/mailman/listinfo/centos-devel From kchamart at redhat.com Wed Sep 17 08:22:15 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 17 Sep 2014 13:52:15 +0530 Subject: [Rdo-list] RDO test cases In-Reply-To: <54184C24.3020106@redhat.com> References: <54184C24.3020106@redhat.com> Message-ID: <20140917082215.GD27913@tesla.redhat.com> On Tue, Sep 16, 2014 at 10:41:40AM -0400, Rich Bowen wrote: > Please help me in filling out the test cases for next week's RDO Juno > test day. Thanks. > > https://openstack.redhat.com/RDO_test_day_Juno_milestone_3_test_cases Added my bit (/me might not be around during that week though.) Side question: Rich, isn't wiki a bit too messy to track test results with its quirky syntax? 
I think you found the Fedora test day web application a bit frustrating? If you have specific feedback, it'll be more appreciated by folks at test at lists.fedoraproject.org. It sucks that there's no existing smooth alternative, but if wiki is what people find comfortable for now, then fine. [1] http://fedoraproject.org/wiki/Test_Day:2014-03-19_RDO -- /kashyap From mail at arif-ali.co.uk Wed Sep 17 14:52:06 2014 From: mail at arif-ali.co.uk (Arif Ali) Date: Wed, 17 Sep 2014 15:52:06 +0100 Subject: [Rdo-list] OVS bridge not coming up at boot on centos7 Message-ID: Hi chaps, I have been testing rdo-openstack for the last few months, and finally got the basic stuff working yesterday with all the necessary core bits, :) The one issue I have found with the system at the moment is that when I reboot any of the machines in the cluster, whether it's the controller or the nova node, the OVSBridge ports do not acquire an IP address. I then go onto the machine via IPMI SOL, and restart the networking through "systemctl restart network.service" Some extracts of commands are below. If someone can shed any light, that would be great # cat /etc/os-release NAME="CentOS Linux" VERSION="7 (Core)" ID="centos" ID_LIKE="rhel fedora" VERSION_ID="7" PRETTY_NAME="CentOS Linux 7 (Core)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:centos:centos:7" HOME_URL="https://www.centos.org/" BUG_REPORT_URL="https://bugs.centos.org/" # uname -r 3.10.0-123.6.3.el7.x86_64 [root at stack03 ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp2s1f0 DEVICE=enp2s1f0 ONBOOT=yes TYPE=OVSPort DEVICETYPE=ovs OVS_BRIDGE=br-xcat [root at stack03 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-xcat BOOTPROTO=static DEVICE=br-xcat ONBOOT=yes TYPE=OVSBridge DEVICETYPE=ovs IPADDR=10.0.0.3 NETMASK=255.255.254.0 # ovs-vsctl show 663f3055-d146-4e59-979d-741c8488edb8 Bridge br-int fail_mode: secure Port "qvo9bb27bce-15" tag: 1 Interface "qvo9bb27bce-15" Port "qvo6424e2c9-ec" tag: 1 Interface "qvo6424e2c9-ec" Port "qvo89552e51-eb" tag: 1
Interface "qvo89552e51-eb" Port "qvo8d8059a9-b7" tag: 1 Interface "qvo8d8059a9-b7" Port int-br-xcat Interface int-br-xcat Port br-int Interface br-int type: internal Port "qvo0ca24fd5-3d" tag: 1 Interface "qvo0ca24fd5-3d" Bridge br-xcat Port "enp2s1f0" Interface "enp2s1f0" Port br-xcat Interface br-xcat type: internal ovs_version: "2.0.0" Below are the list of the core RPMs installed on a nova compute node # rpm -qa | grep "openvswitch\|openstack\|libvirt\|qemu" libvirt-daemon-1.1.1-29.el7_0.1.x86_64 libvirt-daemon-driver-nwfilter-1.1.1-29.el7_0.1.x86_64 libvirt-python-1.1.1-29.el7_0.1.x86_64 libvirt-daemon-kvm-1.1.1-29.el7_0.1.x86_64 openstack-neutron-2014.1.2-1.el7.centos.noarch qemu-kvm-1.5.3-60.el7_0.7.x86_64 qemu-img-1.5.3-60.el7_0.7.x86_64 ipxe-roms-qemu-20130517-5.gitc4bce43.el7.noarch libvirt-client-1.1.1-29.el7_0.1.x86_64 libvirt-daemon-driver-nodedev-1.1.1-29.el7_0.1.x86_64 libvirt-daemon-driver-interface-1.1.1-29.el7_0.1.x86_64 libvirt-daemon-driver-secret-1.1.1-29.el7_0.1.x86_64 libvirt-daemon-config-network-1.1.1-29.el7_0.1.x86_64 libvirt-daemon-driver-qemu-1.1.1-29.el7_0.1.x86_64 libvirt-daemon-driver-lxc-1.1.1-29.el7_0.1.x86_64 openstack-nova-compute-2014.1.2-1.el7.centos.noarch openstack-utils-2014.1-3.el7.noarch openvswitch-2.0.0-6.el7.x86_64 openstack-neutron-ml2-2014.1.2-1.el7.centos.noarch libvirt-daemon-driver-storage-1.1.1-29.el7_0.1.x86_64 libvirt-daemon-config-nwfilter-1.1.1-29.el7_0.1.x86_64 libvirt-daemon-driver-network-1.1.1-29.el7_0.1.x86_64 libvirt-1.1.1-29.el7_0.1.x86_64 openstack-nova-common-2014.1.2-1.el7.centos.noarch qemu-kvm-common-1.5.3-60.el7_0.7.x86_64 openstack-neutron-openvswitch-2014.1.2-1.el7.centos.noarch -- Arif Ali IRC: arif-ali at freenode LinkedIn: http://uk.linkedin.com/in/arifali -------------- next part -------------- An HTML attachment was scrubbed... 
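Until the root cause of the bridge not coming up is pinned down, one workaround sketch that may help (assumptions on my part: either service ordering between openvswitch and network, or NetworkManager touching the OVS devices, is what leaves br-xcat unconfigured at boot; the ifcfg keys added below are standard initscripts options):

```shell
# Make sure ovsdb/ovs-vswitchd are enabled so they are up before the
# network service tries to configure the OVS bridge
systemctl enable openvswitch

# Keep NetworkManager and hotplug events away from the OVS bridge
cat >> /etc/sysconfig/network-scripts/ifcfg-br-xcat <<'EOF'
NM_CONTROLLED=no
HOTPLUG=no
EOF

systemctl restart network.service
```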
URL: From rbowen at redhat.com Wed Sep 17 15:18:31 2014 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 17 Sep 2014 11:18:31 -0400 Subject: [Rdo-list] RDO Juno M3 test day rescheduled - October 1st & 2nd Message-ID: <5419A647.2020100@redhat.com> Due to several factors (for one thing, the dates we chose happened to coincide with Rosh Hashanah, a national holiday in the nation where many of our QA engineers live) we've decided to push the RDO Juno M3 test day to the following week, *October 1st & 2nd*. As before, the gathering place for this testing will be the #RDO irc channel on Freenode. Information about the test day may be found at https://openstack.redhat.com/RDO_test_day_Juno_milestone_3 and the test cases are being collected at https://openstack.redhat.com/RDO_test_day_Juno_milestone_3_test_cases We should have Juno packages available soon (Watch this list for announcements!), if you want to start testing on your own before then, but there are still some issues to be worked out, so you may choose to wait, depending on your patience and/or ability to work around problems on your own. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at arif-ali.co.uk Wed Sep 17 16:17:19 2014 From: mail at arif-ali.co.uk (Arif Ali) Date: Wed, 17 Sep 2014 17:17:19 +0100 Subject: [Rdo-list] OVS bridge not coming up at boot on centos7 In-Reply-To: References: Message-ID: <5419B40F.6060702@arif-ali.co.uk> On 17/09/14 15:52, Arif Ali wrote: > Hi chaps, > > I have been testing rdo-openstack for the last few months, and finally > got the basic stuff working yesterday with all the necessary core bits, :) > > The one issue I have found with the system at the moment is that when > I reboot any of the machines in the cluster, whether it's the > controller or the nova node, the OVSBridge ports do not acquire an IP > address. 
> > I then go onto the machine via IPMI SOL, and restart the networking > through "systemctl restart network.service" > Could this be just related to the following bugzilla, and the new openvswitch is not yet available in rdo? https://bugzilla.redhat.com/show_bug.cgi?id=1120326 -- Arif Ali IRC: arif-ali at freenode LinkedIn: http://uk.linkedin.com/in/arifali From elias.moreno.tec at gmail.com Thu Sep 18 00:17:07 2014 From: elias.moreno.tec at gmail.com (=?UTF-8?B?RWzDrWFzIERhdmlk?=) Date: Wed, 17 Sep 2014 19:47:07 -0430 Subject: [Rdo-list] Permission denied on console.log after failed migration In-Reply-To: References: Message-ID: Hello, I've an issue, I started a migration of an instance (not live if relevant), the instance was in a vmserver2 and it was supposed to start on vmserver4, there it failed with the message "libvirtError Cannot access /nova/instances/instance_uuid/console.log Access denied" Both servers were installed with packstack, the permission on the file is root:root 660 as it was on the original server. Selinux is not used, nova user is capable of passwordless ssh between servers Libvirtd runs as root, operating system is CentOS 6.5, openstack version is icehouse, I've checked permissions and logs and all I have is this permission denied message Has anyone experienced something similar? Thanks in advance!
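One way to narrow down an "Access denied" like the one above is to audit ownership under the instances directory. A hedged helper of my own (not nova tooling; the /nova/instances path and the nova user come from the report, so adjust both to your deployment):

```shell
# check_tree PATH [USER] -- print owner:group and mode for PATH and its
# direct children, flagging entries not owned by the expected user
check_tree() {
  local root=$1 expect=${2:-nova} p og
  for p in "$root" "$root"/*; do
    [ -e "$p" ] || continue
    og=$(stat -c '%U:%G %a' "$p")
    case $og in
      "$expect":*) echo "ok  $og $p" ;;
      *)           echo "BAD $og $p" ;;
    esac
  done
}

# Usage on the deployment from the report (run on each compute node):
#   check_tree /nova/instances nova
```

Any "BAD" line points at an entry the nova user may not be able to traverse or read during a migration.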
The message is somehow missleading but there you have it in case anyone has the same issue. On Wed, Sep 17, 2014 at 7:47 PM, El?as David wrote: > Hello, I've an issue, I started a migration of an instance (not live if > relevant), the instance was in a vmserver2 and it was supposed to start on > vmserver4, there it failed with the message "libvirtError Cannot access > /nova/instances/instance_uuid/console.log Access denied" > > Both servers were installed with packstack, the permission on the file is > root:root 660 as it was on the original server. > > Selinux is not used, nova user is capable of passworless ssh between > servers > > Libvirtd runs as root, operating system is CentOS 6.5, openstack version > is icehouse, I've checked permissions and logs and all I have is this > permission denied message > > Dies anyone have experienced something similar? > > Thanks in advance! > -- El?as David. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Yaniv.Kaul at emc.com Thu Sep 18 07:22:07 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Thu, 18 Sep 2014 03:22:07 -0400 Subject: [Rdo-list] RDO Juno M3 test day rescheduled - October 1st & 2nd In-Reply-To: <5419A647.2020100@redhat.com> References: <5419A647.2020100@redhat.com> Message-ID: <648473255763364B961A02AC3BE1060D03C59F4811@MX19A.corp.emc.com> Thanks ? I was wondering how the original date would work out for RH IL ;-) My team will be performing Cinder testing with the XtremIO driver, mainly basic functionality (boot from volume, snapshots, etc.). I?ll be installing using Packstack, testing FC, iSCSI (w/ CHAP) and multi-protocol. We?ll be running Tempest volume tests as well as manual tests via Horizon. I hope to do it on both CentOS 7 and 6.5, w/ KVM. I wouldn?t mind testing Cinder as a backend to Glance, but I don?t think it actually works? (https://ask.openstack.org/en/question/7322/how-to-use-cinder-as-glance-default_store/ ). 
Due to company policy, I won't have IRC access, so I'll be using this channel for communication - I hope it's cool. Y. From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Rich Bowen Sent: Wednesday, September 17, 2014 6:19 PM To: rdo-list at redhat.com Subject: [Rdo-list] RDO Juno M3 test day rescheduled - October 1st & 2nd Due to several factors (for one thing, the dates we chose happened to coincide with Rosh Hashanah, a national holiday in the nation where many of our QA engineers live) we've decided to push the RDO Juno M3 test day to the following week, October 1st & 2nd. As before, the gathering place for this testing will be the #RDO irc channel on Freenode. Information about the test day may be found at https://openstack.redhat.com/RDO_test_day_Juno_milestone_3 and the test cases are being collected at https://openstack.redhat.com/RDO_test_day_Juno_milestone_3_test_cases We should have Juno packages available soon (Watch this list for announcements!), if you want to start testing on your own before then, but there are still some issues to be worked out, so you may choose to wait, depending on your patience and/or ability to work around problems on your own. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasithucsc at gmail.com Thu Sep 18 10:13:03 2014 From: yasithucsc at gmail.com (yasith tharindu) Date: Thu, 18 Sep 2014 15:43:03 +0530 Subject: [Rdo-list] Does GFS is stable to use for openstack Message-ID: We have SAS-based storage, and we are going to share a LUN through GFS among compute nodes to enable live migration. Is GFS stable to use with OpenStack? Are there production deployments done with GFS? -- Thanks.. Regards...
Blog: http://www.yasith.info Twitter : http://twitter.com/yasithnd LinkedIn : http://www.linkedin.com/in/yasithnd GPG Key ID : *57CEE66E* -------------- next part -------------- An HTML attachment was scrubbed... URL: From Yaniv.Kaul at emc.com Thu Sep 18 18:00:59 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Thu, 18 Sep 2014 14:00:59 -0400 Subject: [Rdo-list] selinux preventing Horizon access? Message-ID: <648473255763364B961A02AC3BE1060D03C59F4A14@MX19A.corp.emc.com> IceHouse / CentOS 7- after reboot post install. type=AVC msg=audit(1411063019.099:1848): avc: denied { name_connect } for pid=5684 comm="httpd" dest=8776 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket getenforce Permissive solved it. [root at lgdrm403 httpd(keystone_admin)]# rpm -qa |grep -E "openstack|selinux" openstack-utils-2014.1-3.el7.noarch selinux-policy-targeted-3.12.1-153.el7_0.10.noarch openstack-nova-cert-2014.1.2-1.el7.centos.noarch python-django-openstack-auth-1.1.5-1.el7.noarch libselinux-2.2.2-6.el7.x86_64 openstack-glance-2014.1.2-4.el7.centos.noarch openstack-packstack-puppet-2014.1.1-0.28.dev1238.el7.noarch openstack-nova-novncproxy-2014.1.2-1.el7.centos.noarch openstack-dashboard-2014.1.2-2.el7.centos.noarch openstack-cinder-2014.1-2.el7.noarch libselinux-utils-2.2.2-6.el7.x86_64 openstack-nova-console-2014.1.2-1.el7.centos.noarch openstack-keystone-2014.1.2.1-1.el7.centos.noarch libselinux-python-2.2.2-6.el7.x86_64 openstack-puppet-modules-2014.1-23.el7.noarch libselinux-ruby-2.2.2-6.el7.x86_64 openstack-nova-api-2014.1.2-1.el7.centos.noarch openstack-nova-compute-2014.1.2-1.el7.centos.noarch openstack-nova-conductor-2014.1.2-1.el7.centos.noarch openstack-nova-scheduler-2014.1.2-1.el7.centos.noarch openstack-packstack-2014.1.1-0.28.dev1238.el7.noarch selinux-policy-3.12.1-153.el7_0.10.noarch openstack-selinux-0.5.15-1.el7ost.noarch openstack-nova-common-2014.1.2-1.el7.centos.noarch 
openstack-nova-network-2014.1.2-1.el7.centos.noarch -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Fri Sep 19 07:23:30 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 19 Sep 2014 12:53:30 +0530 Subject: [Rdo-list] [Test request] Fedora 21 Alpha RC cloud images Message-ID: <20140919072330.GA1966@tesla.redhat.com> Heya, Adam Williamson from the Fedora Project recently announced a call for testing of the Fedora-21 Alpha RC[1]. If you have some spare cycles, please test the Fedora Cloud Base images[2] in your test environment. Get the image: $ wget -c \ https://dl.fedoraproject.org/pub/alt/stage/21_Alpha_RC1/Cloud/Images/x86_64/Fedora-Cloud-Base-20140915-21_Alpha.x86_64.qcow2 Import it into OpenStack Glance: $ glance image-create --name f21alpha --is-public true \ --disk-format qcow2 --container-format bare \ < Fedora-Cloud-Base-20140915-21_Alpha.x86_64.qcow2 Test your workflows. Please file bugs for any issues, or notify here (or on cloud at lists.fedoraproject.org). Happy testing!
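Once the image is imported, a quick end-to-end smoke test is simply booting it and reading the cloud-init console output (a sketch; the flavor, keypair, and instance names are placeholders for whatever exists in your tenant):

```shell
# Boot a test instance from the freshly imported image
nova boot --image f21alpha --flavor m1.small --key-name testkey f21-smoke
# Once it reaches ACTIVE, confirm that cloud-init ran in the guest
nova console-log f21-smoke | tail -n 20
```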
[1] https://lists.fedoraproject.org/pipermail/cloud/2014-September/004236.html [2] https://dl.fedoraproject.org/pub/alt/stage/21_Alpha_RC1/Cloud/Images/x86_64/ -- /kashyap From kchamart at redhat.com Fri Sep 19 07:29:41 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 19 Sep 2014 12:59:41 +0530 Subject: [Rdo-list] OVS bridge not coming up at boot on centos7 In-Reply-To: <5419B40F.6060702@arif-ali.co.uk> References: <5419B40F.6060702@arif-ali.co.uk> Message-ID: <20140919072941.GE19367@tesla.redhat.com> On Wed, Sep 17, 2014 at 05:17:19PM +0100, Arif Ali wrote: > On 17/09/14 15:52, Arif Ali wrote: > >Hi chaps, > > > >I have been testing rdo-openstack for the last few months, and finally got > >the basic stuff working yesterday with all the necessary core bits, :) > > > >The one issue I have found with the system at the moment is that when I > >reboot any of the machines in the cluster, whether it's the controller or > >the nova node, the OVSBridge ports do not acquire an IP address. > > > >I then go onto the machine via IPMI SOL, and restart the networking > >through "systemctl restart network.service" > > > Could this be just related to the following bugzilla, and the new > openvswitch is not yet available in rdo? > > https://bugzilla.redhat.com/show_bug.cgi?id=1120326 Yeah, most likely it is. I don't yet see a package for that version of OpenvSwitch (openvswitch-2.1.2-2.el7_0.1) here. https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/ -- /kashyap From moreira.belmiro.email.lists at gmail.com Fri Sep 19 15:09:47 2014 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Fri, 19 Sep 2014 17:09:47 +0200 Subject: [Rdo-list] python-kombu for Icehouse Message-ID: Hi, we are upgrading to RDO Icehouse and we see that "python-kombu" version installed is "1.1.3-2.el6" that we get from epel on our SLC6 nodes. 
However, nova Icehouse requirement for python-kombu is >=2.4.8 We started building your RPM but it would be good to have a more recent version on el6 or even RDO for all infrastructures that are using RabbitMQ. thanks, Belmiro -------------- next part -------------- An HTML attachment was scrubbed... URL: From Yaniv.Kaul at emc.com Fri Sep 19 16:56:45 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Fri, 19 Sep 2014 12:56:45 -0400 Subject: [Rdo-list] selinux preventing Horizon access? Message-ID: <648473255763364B961A02AC3BE1060D03C59F4B27@MX19A.corp.emc.com> Ahoy! Filed https://bugzilla.redhat.com/show_bug.cgi?id=1144539 , ARGH! Cap'n Y. (Ay, it's Talk like a pirate day) From: Kaul, Yaniv Sent: Thursday, September 18, 2014 9:01 PM To: rdo-list at redhat.com Subject: selinux preventing Horizon access? IceHouse / CentOS 7- after reboot post install. type=AVC msg=audit(1411063019.099:1848): avc: denied { name_connect } for pid=5684 comm="httpd" dest=8776 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket getenforce Permissive solved it. 
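For what it's worth, this particular denial (httpd connecting out to the Cinder API port, 8776) can usually be allowed without dropping the whole system to permissive; a sketch using the standard boolean for outbound httpd connections:

```shell
# Let httpd_t initiate outbound TCP connections (covers Horizon/httpd
# talking to the cinder-api endpoint on 8776):
setsebool -P httpd_can_network_connect on

# Narrower alternative: label just that one port as an http port, so
# only connections to 8776 are opened up:
# semanage port -a -t http_port_t -p tcp 8776
```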
[root at lgdrm403 httpd(keystone_admin)]# rpm -qa |grep -E "openstack|selinux" openstack-utils-2014.1-3.el7.noarch selinux-policy-targeted-3.12.1-153.el7_0.10.noarch openstack-nova-cert-2014.1.2-1.el7.centos.noarch python-django-openstack-auth-1.1.5-1.el7.noarch libselinux-2.2.2-6.el7.x86_64 openstack-glance-2014.1.2-4.el7.centos.noarch openstack-packstack-puppet-2014.1.1-0.28.dev1238.el7.noarch openstack-nova-novncproxy-2014.1.2-1.el7.centos.noarch openstack-dashboard-2014.1.2-2.el7.centos.noarch openstack-cinder-2014.1-2.el7.noarch libselinux-utils-2.2.2-6.el7.x86_64 openstack-nova-console-2014.1.2-1.el7.centos.noarch openstack-keystone-2014.1.2.1-1.el7.centos.noarch libselinux-python-2.2.2-6.el7.x86_64 openstack-puppet-modules-2014.1-23.el7.noarch libselinux-ruby-2.2.2-6.el7.x86_64 openstack-nova-api-2014.1.2-1.el7.centos.noarch openstack-nova-compute-2014.1.2-1.el7.centos.noarch openstack-nova-conductor-2014.1.2-1.el7.centos.noarch openstack-nova-scheduler-2014.1.2-1.el7.centos.noarch openstack-packstack-2014.1.1-0.28.dev1238.el7.noarch selinux-policy-3.12.1-153.el7_0.10.noarch openstack-selinux-0.5.15-1.el7ost.noarch openstack-nova-common-2014.1.2-1.el7.centos.noarch openstack-nova-network-2014.1.2-1.el7.centos.noarch -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at redhat.com Fri Sep 19 19:19:28 2014 From: mrunge at redhat.com (Matthias Runge) Date: Fri, 19 Sep 2014 21:19:28 +0200 Subject: [Rdo-list] python-kombu for Icehouse In-Reply-To: References: Message-ID: <541C81C0.7030501@redhat.com> On 19/09/14 17:09, Belmiro Moreira wrote: > Hi, > we are upgrading to RDO Icehouse and we see that "python-kombu" version > installed is "1.1.3-2.el6" that we get from epel on our SLC6 nodes. > > However, nova Icehouse requirement for python-kombu is >=2.4.8 > > We started building your RPM but it would be good to have a more recent > version > on el6 or even RDO for all infrastructures that are using RabbitMQ. 
> > thanks, > Belmiro That's tracked in https://bugzilla.redhat.com/show_bug.cgi?id=1108188 Matthias From xzhao at bnl.gov Fri Sep 19 19:22:44 2014 From: xzhao at bnl.gov (Zhao, Xin) Date: Fri, 19 Sep 2014 15:22:44 -0400 Subject: [Rdo-list] define service endpoints Message-ID: <541C8284.3000701@bnl.gov> Hello, When one defines services endpoints in keystone, eg. for neutron, the publicurl should be using the outfacing NIC IP (or external hostname of the controller), while the internalurl and adminurl should be using the internal management subnet NIC IP (or the internal hostname of the controller). Do I understand this right? My controller has an out-facing IP/hostname and an internal hostname/IP on the management subnet. Thanks, Xin From rdo-info at redhat.com Fri Sep 19 20:01:06 2014 From: rdo-info at redhat.com (RDO Forum) Date: Fri, 19 Sep 2014 20:01:06 +0000 Subject: [Rdo-list] [RDO] Red Hat welcomes Oracle to the RDO community! Message-ID: <000001488f80f76c-3d19e1f3-783e-4017-99f2-6d075b508459-000000@email.amazonses.com> rbowen started a discussion. Red Hat welcomes Oracle to the RDO community! --- Follow the link below to check it out: https://openstack.redhat.com/forum/discussion/985/red-hat-welcomes-oracle-to-the-rdo-community Have a great day! From xzhao at bnl.gov Fri Sep 19 20:58:42 2014 From: xzhao at bnl.gov (Zhao, Xin) Date: Fri, 19 Sep 2014 16:58:42 -0400 Subject: [Rdo-list] define service endpoints In-Reply-To: <541C8284.3000701@bnl.gov> References: <541C8284.3000701@bnl.gov> Message-ID: <541C9902.3000704@bnl.gov> I have to say documentation on this issue is confusing. My understanding is that, by default, individual components talk to each other through the "publicurl" from the keystone service catalog. Not all components are implemented consistently to obey the "public" vs "internal" separation. So at this point, it's probably safe to define ALL urls using the internal IP/hostname. Do I understand this correctly? 
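For reference, the public/internal/admin split discussed above maps onto the three URL arguments of the Icehouse-era keystone CLI. A sketch, with placeholder hostnames (controller-ext on the outward-facing network, controller-mgmt on the management subnet) and neutron's port 9696 as the example:

```shell
keystone endpoint-create \
  --region regionOne \
  --service-id $(keystone service-list | awk '/ network / {print $2}') \
  --publicurl   http://controller-ext:9696 \
  --internalurl http://controller-mgmt:9696 \
  --adminurl    http://controller-mgmt:9696
```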
Thanks, Xin On 9/19/2014 3:22 PM, Zhao, Xin wrote: > Hello, > > When one defines service endpoints in keystone, e.g. for neutron, the > publicurl should be using the outward-facing NIC IP (or external hostname > of the controller), while the internalurl and adminurl should be using > the internal management subnet NIC IP (or the internal hostname of the > controller). Do I understand this right? My controller has an > out-facing IP/hostname and an internal hostname/IP on the management > subnet. > > Thanks, > Xin > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From ichi.sara at gmail.com Sun Sep 21 07:46:13 2014 From: ichi.sara at gmail.com (ICHIBA Sara) Date: Sun, 21 Sep 2014 09:46:13 +0200 Subject: [Rdo-list] networking issues with the nova-docker driver Message-ID: Hello people, The nova-docker driver finally worked for me. Now I can launch containers with nova; the dashboard says the instances are running. But when I try to get into the containers, I fail. It seems that the problem is due to networking issues: Nova fails to create a netns for the containers in question, and thus everything that comes after fails as well. When I check the compute.log I find these messages. Would you please take a look and see if they are familiar to you, and if so suggest something? Any suggestion or hint would be very appreciated.
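For anyone hitting the same trace: `ip link set ... netns` only accepts a PID or a name registered under /var/run/netns, which is why the raw Docker container ID in the failing command is rejected. Done by hand, the attach step amounts to something like the following (a sketch; requires root, and the container ID and interface name are taken from the logs in this thread):

```shell
container=afb0b5b7b02a
pid=$(docker inspect --format '{{.State.Pid}}' $container)
mkdir -p /var/run/netns
# Register the container's network namespace under a usable name
ln -sf /proc/$pid/ns/net /var/run/netns/$container
# Now the name resolves and the veth end can be moved inside
ip link set ns59fc4e34-bc netns $container
```

On older iproute builds (such as the one in CentOS 6.5) only a numeric PID is accepted, in which case `ip link set ns59fc4e34-bc netns $pid` is the form that works.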
Thank you all ^^, Sara ====compute.log 2014-09-18 11:22:37.899 902 AUDIT nova.compute.claims [req-faa4b1ac-869a-4613-9a3b-9f97794b3cfd 6b8a39ff0bb9417eb1a3ce8bdf09cf00 3ca2fa62ac434e8f942e5823969f23db] [instance: f0be5fc3-9a00-4e9b-a2d5-af8ddacd5d94] Claim successful 2014-09-18 11:22:44.029 902 ERROR novadocker.virt.docker.vifs [req-faa4b1ac-869a-4613-9a3b-9f97794b3cfd 6b8a39ff0bb9417eb1a3ce8bdf09cf00 3ca2fa62ac434e8f942e5823969f23db] *Failed to attach vif* 2014-09-18 11:22:44.029 902 TRACE novadocker.virt.docker.vifs Traceback (most recent call last): 2014-09-18 11:22:44.029 902 TRACE novadocker.virt.docker.vifs File "/usr/lib/python2.6/site-packages/novadocker/virt/docker/vifs.py", line 206, in attach 2014-09-18 11:22:44.029 902 TRACE novadocker.virt.docker.vifs container_id, run_as_root=True) 2014-09-18 11:22:44.029 902 TRACE novadocker.virt.docker.vifs File "/usr/lib/python2.6/site-packages/nova/utils.py", line 165, in execute 2014-09-18 11:22:44.029 902 TRACE novadocker.virt.docker.vifs return processutils.execute(*cmd, **kwargs) 2014-09-18 11:22:44.029 902 TRACE novadocker.virt.docker.vifs File "/usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py", line 193, in execute 2014-09-18 11:22:44.029 902 TRACE novadocker.virt.docker.vifs cmd=' '.join(cmd)) 2014-09-18 11:22:44.029 902 TRACE novadocker.virt.docker.vifs ProcessExecutionError: Unexpected error while running command. 
2014-09-18 11:22:44.029 902 TRACE novadocker.virt.docker.vifs Command: sudo nova-rootwrap /etc/nova/rootwrap.conf* ip link set ns59fc4e34-bc netns afb0b5b7b02aef73f07c34b7f456ace080bb9944d21376f7a05e2d08206c4b67* 2014-09-18 11:22:44.029 902 TRACE novadocker.virt.docker.vifs Exit code: 255 2014-09-18 11:22:44.029 902 TRACE novadocker.virt.docker.vifs Stdout: '' *2014-09-18 11:22:44.029 902 TRACE novadocker.virt.docker.vifs Stderr: 'Error: argument "afb0b5b7b02aef73f07c34b7f456ace080bb9944d21376f7a05e2d08206c4b67" is wrong: Invalid "netns" value\n\n'* 2014-09-18 11:22:44.029 902 TRACE novadocker.virt.docker.vifs 2014-09-18 11:23:22.087 902 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources 2014-09-18 11:23:22.459 902 AUDIT nova.compute.resource_tracker [-] Free ram (MB): -683 2014-09-18 11:23:22.459 902 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 9 2014-09-18 11:23:22.459 902 AUDIT nova.compute.resource_tracker [-] Free VCPUS: -3 2014-09-18 11:23:22.627 902 INFO nova.compute.resource_tracker [-] Compute_service record updated for otvmi307s.priv.atos.fr:o tvmi307s.priv.atos.fr ==== the output of the command *ovs-vsctl show* 95915800-961a-45de-ba73-09bc8c9c329b Bridge br-ex Port br-ex Interface br-ex type: internal Port "qg-79be5a9c-97" Interface "qg-79be5a9c-97" type: internal Bridge br-int fail_mode: secure Port "tap9683ca3b-87" tag: 1 Interface "tap9683ca3b-87" type: internal Port br-int Interface br-int type: internal Port "tapc8dd2048-aa" tag: 3 Interface "tapc8dd2048-aa" Port "qr-234b2621-29" tag: 1 Interface "qr-234b2621-29" type: internal Port "tap59fc4e34-bc" tag: 3 Interface "*tap59fc4e34-bc*" Port "tap360101f0-ee" tag: 3 Interface "tap360101f0-ee" Port "tap16ebed2a-3e" tag: 3 Interface "tap16ebed2a-3e" Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Port "tap741f9199-b9" tag: 4095 Interface "tap741f9199-b9" Bridge br-tun Port patch-int Interface patch-int type: patch options: 
{peer=patch-tun} Port br-tun Interface br-tun type: internal ovs_version: "1.11.0" ========the output of the command ip link show 55: *ns59fc4e34-bc*: mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 7e:5f:86:0e:1f:52 brd ff:ff:ff:ff:ff:ff 56: *tap59fc4e34-bc*: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether a6:e5:4b:fa:9d:84 brd ff:ff:ff:ff:ff:ff ==========the output of the command ls /var/run/netns 09ccbe2f093c904972bd886735b718e9e9fec41f1ee5afd5d53b5d1b8c7f0ae2 a96f3ec37b7aa764a86c61e0317b1360c449dec272a7fd6ce3cab4b45ea6c98b 5eda9416e30d2aa9e8bf1f37bbadca1c4688985b5a3272338557cd24ecefaab6 * afb0b5b7b02aef73f07c34b7f456ace080bb9944d21376f7a05e2d08206c4b67* 87992907919ec457bf93161326f472c485fdff0b7ff6ba0b6762dac66fcd2626 qdhcp-fecbcfdd-92bb-41aa-86da-4050b65d360b 9abc5b0dba7faa0d2ca1b34cedb93acceb7ec4c46e3390ed9d33843310548325 qrouter-d8cc218e-b99f-440a-9096-b18b4c447caf ======the output of the command docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES *afb0b5b7b02a * busybox:buildroot-2014.02 /bin/sh 11 minutes ago Up 11 minutes nova-f0be5fc3-9a00-4e9b-a2d5-af8ddacd5d94 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Yaniv.Kaul at emc.com Mon Sep 22 10:55:30 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Mon, 22 Sep 2014 06:55:30 -0400 Subject: [Rdo-list] Juno - cinder API spams /var/log/messages ? Message-ID: <648473255763364B961A02AC3BE1060D03C59F4C9D@MX19A.corp.emc.com> Not sure why, but it logs stuff to /var/log/messages. [root at lgdrm404 cinder]# grep log /etc/cinder/cinder.conf |grep -v catalog |grep -vE "#" log_dir=/var/log/cinder use_syslog=False san_login = admin [root at lgdrm404 cinder]# rpm -qa |grep cinder python-cinder-2014.2-0.2.b3.el7.centos.noarch python-cinderclient-1.0.9-2.el7.centos.noarch openstack-cinder-2014.2-0.2.b3.el7.centos.noarch -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rdo-info at redhat.com Mon Sep 22 16:47:04 2014 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 22 Sep 2014 16:47:04 +0000 Subject: [Rdo-list] [RDO] Blog Roundup, week of September 15, 2014 Message-ID: <000001489e4263ef-b9453501-9c26-4652-b863-2872d9180c91-000000@email.amazonses.com> rbowen started a discussion. Blog Roundup, week of September 15, 2014 --- Follow the link below to check it out: https://openstack.redhat.com/forum/discussion/986/blog-roundup-week-of-september-15-2014 Have a great day! From rbowen at redhat.com Mon Sep 22 20:15:53 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 22 Sep 2014 16:15:53 -0400 Subject: [Rdo-list] RDO-related Meetups in the coming week (September 22, 2014) Message-ID: <54208379.60603@redhat.com> The following are the meetups I'm aware of in the coming week where RDO enthusiasts will be gathering. If you know of others, please do add them to http://openstack.redhat.com/Events If you attend any of these meetups, please take pictures, and send me some. If you blog about the events (and you should), please send me that, too. 
* Tuesday, September 23, Tech Day - Rochester, NY - RHCI & JBoss Middleware, *West Henrietta, NY* - http://www.meetup.com/RedHatTechDay/events/204676922/ * Wednesday, September 24, Learn about Red Hat's cloud technologies, *Tallinn, Estonia* - http://www.meetup.com/Estonian-Cloud-Computing-Meetup/events/206128132/ * Thursday, September 25th, TripleO and Heat for Application Deployment, *Seattle* - http://www.meetup.com/OpenStack-Seattle/events/196042532/ * Thursday, September 25th, OpenStack Swift Hackathon, *Valley Forge* - http://www.meetup.com/ValleyForgeTech/events/206841422/ * Friday, September 26th, Hacking on Nova: Research with OpenStack, *Pittsburgh* - http://www.meetup.com/openstack-pittsburgh/events/207813112/ * October 1-2, RDO Juno Milestone 3 test day, *#RDO on Freenode IRC* - https://openstack.redhat.com/RDO_test_day_Juno_milestone_3 * Wednesday, October 1, 1st *Tokyo* OpenStack Meetup at Midokura, http://www.meetup.com/Tokyo-OpenStack-Meetup/events/204771502/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Yaniv.Kaul at emc.com Mon Sep 22 21:12:35 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Mon, 22 Sep 2014 17:12:35 -0400 Subject: [Rdo-list] Where can I get RH-Tempest for RDO Juno? Message-ID: <648473255763364B961A02AC3BE1060D03C59F4D96@MX19A.corp.emc.com> I managed to install RDO Juno (on CentOS 7), worked quite smoothly. Which Tempest should I use? I've used https://github.com/redhat-openstack/tempest with IceHouse. TIA, Y. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yeylon at redhat.com Tue Sep 23 19:11:08 2014 From: yeylon at redhat.com (Yaniv Eylon) Date: Tue, 23 Sep 2014 15:11:08 -0400 (EDT) Subject: [Rdo-list] Where can I get RH-Tempest for RDO Juno?
In-Reply-To: <648473255763364B961A02AC3BE1060D03C59F4D96@MX19A.corp.emc.com> References: <648473255763364B961A02AC3BE1060D03C59F4D96@MX19A.corp.emc.com> Message-ID: <11623073.9365263.1411499468186.JavaMail.zimbra@redhat.com> basically you should work with upstream if you want the latest version, the redhat-tempest is 'icehouse' only, so might be outdated if you would like to try it for Juno. ----- Original Message ----- > From: "Yaniv Kaul" > To: rdo-list at redhat.com > Sent: Tuesday, September 23, 2014 12:12:35 AM > Subject: [Rdo-list] Where can I get RH-Tempest for RDO Juno? > > I managed to install RDO Juno (on CentOS 7), worked quite smoothly. > Which Tempest should I use? I've used > https://github.com/redhat-openstack/tempest with IceHouse. > > TIA, > Y. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > From Yaniv.Kaul at emc.com Tue Sep 23 19:24:22 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Tue, 23 Sep 2014 15:24:22 -0400 Subject: [Rdo-list] Where can I get RH-Tempest for RDO Juno? In-Reply-To: <11623073.9365263.1411499468186.JavaMail.zimbra@redhat.com> References: <648473255763364B961A02AC3BE1060D03C59F4D96@MX19A.corp.emc.com> <11623073.9365263.1411499468186.JavaMail.zimbra@redhat.com> Message-ID: <648473255763364B961A02AC3BE1060D03C59F4FDA@MX19A.corp.emc.com> > -----Original Message----- > From: Yaniv Eylon [mailto:yeylon at redhat.com] > Sent: Tuesday, September 23, 2014 10:11 PM > To: Kaul, Yaniv > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] Where can I get RH-Tempest for RDO Juno? > > basically you should work with upstream if you want the latest version, the > redhat-tempest is 'icehouse' only, so might be outdated if you would like to try > it for Juno. After I got to enjoy the config_tempest utility? Why isn't it brought upstream, btw? OK, I'll give that one a shot. Y.
> > > > ----- Original Message ----- > > > From: "Yaniv Kaul" > > > To: rdo-list at redhat.com > > > Sent: Tuesday, September 23, 2014 12:12:35 AM > > > Subject: [Rdo-list] Where can I get RH-Tempest for RDO Juno? > > > > > > I managed to install RDO Juno (on CentOS 7), worked quite smoothly. > > > Which Tempest should I use? I've used > > > https://github.com/redhat-openstack/tempest with IceHouse. > > > > > > TIA, > > > Y. > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > From anne.hill005 at gmail.com Tue Sep 23 19:33:07 2014 From: anne.hill005 at gmail.com (Anne Hill) Date: Wed, 24 Sep 2014 01:03:07 +0530 Subject: [Rdo-list] How to configure the Openstack setup with SAS Block storage Message-ID: Dear Members; I'm stuck with the following question. Appreciate your responses. I have SAS block storage and need to configure live migration. All controller and compute nodes are connected to the block storage through SAS interfaces. In order to enable live migration, if I create a single SAS LUN (volume), mount it on every node's "/var/lib/nova/instances/" folder, and set up the same nova UID/GID on all nodes, will that be enough for live migration? Or do I have to run a distributed file system like GlusterFS or NFS? Will libvirt detect the SAS protocol and enable live migration? Anne. Thanks in advance.. -------------- next part -------------- An HTML attachment was scrubbed... URL: From anne.hill005 at gmail.com Tue Sep 23 19:34:27 2014 From: anne.hill005 at gmail.com (Anne Hill) Date: Wed, 24 Sep 2014 01:04:27 +0530 Subject: [Rdo-list] Help on configuring the Openstack setup with SAS Block storage Message-ID: Dear Members; I'm stuck with the following question. Appreciate your responses. I have SAS block storage and need to configure live migration. All controller and compute nodes are connected to the block storage through SAS interfaces.
In order to enable live migration, if I create a single SAS LUN (volume), mount it on every node's "/var/lib/nova/instances/" folder, and set up the same nova UID/GID on all nodes, will that be enough for live migration? Or do I have to run a distributed file system like GlusterFS or NFS? Will libvirt detect the SAS protocol and enable live migration? Anne. Thanks in advance.. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkammer at redhat.com Tue Sep 23 19:54:54 2014 From: tkammer at redhat.com (Tal Kammer) Date: Tue, 23 Sep 2014 15:54:54 -0400 (EDT) Subject: [Rdo-list] Where can I get RH-Tempest for RDO Juno? In-Reply-To: <648473255763364B961A02AC3BE1060D03C59F4FDA@MX19A.corp.emc.com> References: <648473255763364B961A02AC3BE1060D03C59F4D96@MX19A.corp.emc.com> <11623073.9365263.1411499468186.JavaMail.zimbra@redhat.com> <648473255763364B961A02AC3BE1060D03C59F4FDA@MX19A.corp.emc.com> Message-ID: <1007481995.53240162.1411502094510.JavaMail.zimbra@redhat.com> Yaniv (K :) ) We are currently working hard on making redhat-openstack/tempest support all branches, but we believe we should concentrate our efforts first on stabilizing the icehouse branch, as probably most things we find will affect Juno as well. On a side note, you can still use this repo for Juno, we just don't guarantee full support for our config_tempest.py utility. In regards to upstream, we fully intend to push it there (as we have in the past), this is the Red Hat way after all :) Tal Kammer ----- Original Message ----- > > -----Original Message----- > > From: Yaniv Eylon [mailto:yeylon at redhat.com] > > Sent: Tuesday, September 23, 2014 10:11 PM > > To: Kaul, Yaniv > > Cc: rdo-list at redhat.com > > Subject: Re: [Rdo-list] Where can I get RH-Tempest for RDO Juno? > > > > basically you should work with upstream if you want the latest version, > > the > > redhat-tempest is 'icehouse' only, so might be outdated if you would like > > to try > > it for Juno.
> > After I got to enjoy the config_tempest utility? Why isn't it brought upstream, > btw? > OK, I'll give that one a shot. > Y. > > > > > > > > ----- Original Message ----- > > > From: "Yaniv Kaul" > > > To: rdo-list at redhat.com > > > Sent: Tuesday, September 23, 2014 12:12:35 AM > > > Subject: [Rdo-list] Where can I get RH-Tempest for RDO Juno? > > > > > > I managed to install RDO Juno (on CentOS 7), worked quite smoothly. > > > Which Tempest should I use? I've used > > > https://github.com/redhat-openstack/tempest with IceHouse. > > > > > > TIA, > > > Y. > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -- Tal Kammer Automation and infra Team Lead, Openstack platform. Red Hat Israel From kchamart at redhat.com Thu Sep 25 07:00:47 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 25 Sep 2014 12:30:47 +0530 Subject: [Rdo-list] Fedora 21 Virtualization test day in progress today Message-ID: <20140925070047.GP23311@tesla.redhat.com> Heya, [/me forgot to send this note a bit earlier, but as they say, better late than never.] Today is the Fedora 21 Virtualization test day[1], where you get to test/tinker with all the newest improvements in the virtualization space. If you use Fedora for your test platform and have a few spare cycles, now is the time to participate and test your workflows. Here's[2] the wiki page with plenty of details for participating in the test day. And, if you can't make it today, your results are still valuable if you can provide them here[3] whenever it works for you. You can hop onto #fedora-test-day on Freenode for any real-time discussion.
[1] https://lists.fedoraproject.org/pipermail/virt/2014-September/004172.html [2] https://fedoraproject.org/wiki/Test_Day:2014-09-25_Virtualization [3] http://testdays.qa.fedoraproject.org/testdays/show_event?event_id=19 -- /kashyap From limao at cisco.com Thu Sep 25 09:18:09 2014 From: limao at cisco.com (Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco)) Date: Thu, 25 Sep 2014 09:18:09 +0000 Subject: [Rdo-list] Will RDO support Juno on CentOS6? Message-ID: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> Hi, I'm using CentOS6 to install RDO now, and I notice that we do not have epel-6 in the following repo: https://repos.fedorapeople.org/repos/openstack/openstack-juno/ Will RDO support CentOS6 in Juno? Thanks. Regards, Liping Mao -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.oulevey at cern.ch Thu Sep 25 09:37:05 2014 From: thomas.oulevey at cern.ch (Thomas Oulevey) Date: Thu, 25 Sep 2014 11:37:05 +0200 Subject: [Rdo-list] mirroring rdo Message-ID: <5423E241.5060506@cern.ch> Hi Folks, Do you have plans to provide an efficient way/guideline to mirror the RDO repositories? We use mrepo and it is not very efficient. Discussion [1] was started a year ago, and I didn't see any announcement/update yet. thanks, -- Thomas [1] : https://openstack.redhat.com/forum/discussion/590/rsync-server-for-rdo-repository From ihrachys at redhat.com Thu Sep 25 10:21:02 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 25 Sep 2014 12:21:02 +0200 Subject: [Rdo-list] Will RDO support Juno on CentOS6? In-Reply-To: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> Message-ID: <5423EC8E.8080901@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 25/09/14 11:18, Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco) wrote: > Will RDO support CentOS6 in Juno?
No, there are no plans to build Juno for EL6. It will be available for Fedora and EL7 only. /Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUI+yOAAoJEC5aWaUY1u575bgIAKqHyC7PFGovxGiJ7HhnHcGF Cr4hU72FwdmrniPWVfRC1OHY3n6E12xKkw7QUR1vaIeVGHvrswD5xNoGgqVRERl0 cu/3AHsD4u8eV/b1FAZe6UYOHlr/Tk7UZo/z8WA64jq4rZpP21j4YWPtxAc8dVGA n9qfm4XK7RuwZRx6hoF6ju/3gdIO2nEL6HKZX8LPg/lHfUZzNhvUkynkRXMf3eOZ C+HeI9WgsjEGFBwvgN2r0zlMcSx6C41YmMXXcfNJEFOCRgbA6Ui6gae2aQawKdJQ uY0CUfYVDVrYpptUIBpwkBTDkeJM4Qr6hxtmBihzD4t4uWxIB/iyGwcs6nIwn8k= =kwaE -----END PGP SIGNATURE----- From kchamart at redhat.com Thu Sep 25 12:07:57 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 25 Sep 2014 17:37:57 +0530 Subject: [Rdo-list] mirroring rdo In-Reply-To: <5423E241.5060506@cern.ch> References: <5423E241.5060506@cern.ch> Message-ID: <20140925120757.GW23311@tesla.redhat.com> On Thu, Sep 25, 2014 at 11:37:05AM +0200, Thomas Oulevey wrote: > Hi Folks, > > Do you have plans to provide an efficient way/guideline to mirror the RDO > repositories? There's no immediate guide that I know of, but from the kind of mirroring I see Fedora folks use for Fedora mirrors, it's usually a combination of `rsync` for intelligent downloads & some scripting to refresh metadata. FWIW, maybe you can try `reposync` to sync repos and have some kind of `cron` job to refresh RPM metadata. It was ported[1] to the much faster RPM package manager `dnf` just this week and is available in dnf-plugins-core-0.1.4. [1] https://bugzilla.redhat.com/show_bug.cgi?id=1139738 -- /kashyap > We use mrepo and it is not very efficient. > > > Discussion [1] was started a year ago, and I didn't see any > announcement/update yet.
> > thanks, > -- > Thomas > > [1] : > https://openstack.redhat.com/forum/discussion/590/rsync-server-for-rdo-repository From rbowen at redhat.com Thu Sep 25 12:23:37 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 25 Sep 2014 08:23:37 -0400 Subject: [Rdo-list] mirroring rdo In-Reply-To: <20140925120757.GW23311@tesla.redhat.com> References: <5423E241.5060506@cern.ch> <20140925120757.GW23311@tesla.redhat.com> Message-ID: <54240949.5000106@redhat.com> On 09/25/2014 08:07 AM, Kashyap Chamarthy wrote: > FWIW, maybe you can try `reposync` to sync repos and have some kind of > `cron` job to refresh RPM metadata. It was ported[1] to the much faster > RPM package manager `dnf` just this week and is available in > dnf-plugins-core-0.1.4. +1. reposync is what seems to be the best way to handle this. It would be nice to know who all is doing this, since we do some basic download stats from our repos, but we're more interested in getting lots of people using it than having precise stats. More of a nice-to-know than a requirement, but if you could tell me some usage estimates (number of machines you're deploying on, and so on) that would be awesome. --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From limao at cisco.com Thu Sep 25 14:08:49 2014 From: limao at cisco.com (Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco)) Date: Thu, 25 Sep 2014 14:08:49 +0000 Subject: [Rdo-list] Will RDO support Juno on CentOS6? In-Reply-To: <5423EC8E.8080901@redhat.com> References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> Message-ID: <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> Hi Ihar, Thanks very much for your quick response. I'm a little bit curious about why Juno does not support EL6; is this because CentOS6 uses python2.6, and Juno is the release that drops python 2.6 support?
And it seems like if we want to upgrade from Icehouse/Havana to juno in production, we need to upgrade CentOS6 to CentOS7 ... Regards, Liping Mao -----Original Message----- From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Ihar Hrachyshka Sent: 2014?9?25? 18:21 To: rdo-list at redhat.com Subject: Re: [Rdo-list] Will RDO support Juno on CentOS6? -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 25/09/14 11:18, Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco) wrote: > Will RDO support CentOS6 in Juno? No, there are no plans to build Juno for EL6. It will be available for Fedora and EL7 only. /Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUI+yOAAoJEC5aWaUY1u575bgIAKqHyC7PFGovxGiJ7HhnHcGF Cr4hU72FwdmrniPWVfRC1OHY3n6E12xKkw7QUR1vaIeVGHvrswD5xNoGgqVRERl0 cu/3AHsD4u8eV/b1FAZe6UYOHlr/Tk7UZo/z8WA64jq4rZpP21j4YWPtxAc8dVGA n9qfm4XK7RuwZRx6hoF6ju/3gdIO2nEL6HKZX8LPg/lHfUZzNhvUkynkRXMf3eOZ C+HeI9WgsjEGFBwvgN2r0zlMcSx6C41YmMXXcfNJEFOCRgbA6Ui6gae2aQawKdJQ uY0CUfYVDVrYpptUIBpwkBTDkeJM4Qr6hxtmBihzD4t4uWxIB/iyGwcs6nIwn8k= =kwaE -----END PGP SIGNATURE----- _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list From pmyers at redhat.com Thu Sep 25 14:16:08 2014 From: pmyers at redhat.com (Perry Myers) Date: Thu, 25 Sep 2014 10:16:08 -0400 Subject: [Rdo-list] Will RDO support Juno on CentOS6? In-Reply-To: <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> Message-ID: <542423A8.9070201@redhat.com> > I'm a little bit curious about why Juno do not support EL6, is this > because CentOS6 is using python2.6 , and Juno is the release to > support python 2.6? 
And it seems like if we want to upgrade from > Icehouse/Havana to juno in production, we need to upgrade CentOS6 to > CentOS7 ... Partly it is because the upstream OpenStack community is very eager to move off of Python 2.6. We had pressure even in Icehouse to move off of 2.6, but because CentOS7 wasn't released at that point, we had a bit of a buffer and released Icehouse on both CentOS6 and CentOS7 But now that CentOS7 is released, it makes sense to now get off of the old python stack so that the upstream community can focus on python 2.7 and beyond. In addition, we have limited resources for packaging, CI and testing. If we could get help from other community members to take on the burden of packaging for EL6, helping to test it and maintain CI jobs for it, we would not be opposed to that. But with the existing RDO packaging/CI team that we have, continuing to maintain BOTH EL6 and EL7 is just too much. And yes, if you wanted to go from RDO Icehouse on EL6 to RDO Juno, it would involve a transition from CentOS6 to CentOS7. When we considered what to do with RDO Icehouse, initially we thought of only doing RDO Icehouse on CentOS7 or CentOS6 but not both. In the end, we did Icehouse on 6 and 7 specifically to provide a release where there was overlap. But we only ever intended (and had staffing) to provide a single release on both platforms. I hope that answers some of your questions. Thanks for asking! Perry From limao at cisco.com Thu Sep 25 14:25:13 2014 From: limao at cisco.com (Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco)) Date: Thu, 25 Sep 2014 14:25:13 +0000 Subject: [Rdo-list] Will RDO support Juno on CentOS6? 
In-Reply-To: <542423A8.9070201@redhat.com> References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> <542423A8.9070201@redhat.com> Message-ID: <316691786CCAEE44AE90482264E3AB8206BCDF98@xmb-rcd-x08.cisco.com> Thanks, Perry, for your kind help! It's very clear. Thanks. Regards, Liping Mao -----Original Message----- From: Perry Myers [mailto:pmyers at redhat.com] Sent: 2014-09-25 22:16 To: Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco); Ihar Hrachyshka; rdo-list at redhat.com Subject: Re: [Rdo-list] Will RDO support Juno on CentOS6? [...]

From ihrachys at redhat.com Thu Sep 25 14:27:20 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 25 Sep 2014 16:27:20 +0200 Subject: [Rdo-list] Will RDO support Juno on CentOS6? In-Reply-To: <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> Message-ID: <54242648.1030504@redhat.com> On 25/09/14 16:08, Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco) wrote: > Hi Ihar, > > Thanks very much for your quick response. > > I'm a little bit curious about why Juno do not support EL6, is this > because CentOS6 is using python2.6 , and Juno is the release to > support python 2.6? And it seems like if we want to upgrade from > Icehouse/Havana to juno in production, we need to upgrade CentOS6 > to CentOS7 ... The reason is mostly that there is not enough time to properly maintain parallel releases for EL6 and EL7. We've found that supporting two branches in parallel requires a lot of energy that is better spent making the EL7 release more mature. Upstream still supports Python 2.6 in Juno, but support will be dropped in Kilo (the next cycle). As for the upgrade path, yes, you'll need EL7 to be able to use RDO Juno. You will need to migrate to EL7 in the future anyway if you're planning to use pieces from Kilo.
BTW the same approach is true for the next version of Red Hat commercial OpenStack distribution (RHEL-OSP6 based on Juno). I hope that helps, /Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUJCZIAAoJEC5aWaUY1u570q8IAIgLpPSjNbyEwMC4czAeI2xH vMTokJQ89clPQdTyYqkYHcgnX4S6cfcvQxyMCz/qJTpCPPh7tG37Y5cZxDQc904O IcGkoHBQn4tplfEdJnslxgHlBeQmdSYzV0sWmyqRbdPlxvY5UH7ihWz3yHvUvcFa bLdiPUplsntE41rF9cvM4+ZSRDpCZhPiwQR4ntvYFHmt3dR4pLjOVi/XKPxWjEwN U5xcG854MVei/G/XV2cvNgDIzY6ybqTsoEjDnfA22lHgH8DwmIkSMgp/hnyiVzqX dLdBrlVkFfma5QFthfp/abZcTNu7k9+NA6JUKvYvbsXuV9Kapo5N+46AhkrgFFI= =hg50 -----END PGP SIGNATURE----- From mail at arif-ali.co.uk Thu Sep 25 14:33:43 2014 From: mail at arif-ali.co.uk (Arif Ali) Date: Thu, 25 Sep 2014 15:33:43 +0100 Subject: [Rdo-list] mirroring rdo In-Reply-To: <54240949.5000106@redhat.com> References: <5423E241.5060506@cern.ch> <20140925120757.GW23311@tesla.redhat.com> <54240949.5000106@redhat.com> Message-ID: <542427C7.2030809@arif-ali.co.uk> On 25/09/14 13:23, Rich Bowen wrote: > > On 09/25/2014 08:07 AM, Kashyap Chamarthy wrote: >> FWIW, maybe you can try `reposync` to sync repos and have some kind of >> `cron` job to refresh RPM metadata. It is just ported[1] this week to >> the much faster RPM package manager `dnf` and is available in >> dnf-plugins-core-0.1.4). > > +1. reposync is what seems to be the best way to handle this. > > It would be nice to know who all is doing this, since we do some basic > download stats from our repos, but we're more interested in getting > lots of people using it than having precise stats. More of a nice to > know than a required, but if you could tell me some usage estimates > (number of machines you're deploying on, and so on) that would be > awesome. > > --Rich > Rich, So I am using reposync to synchronise the rdo-openstack as well as several other repos for my OpenStack implementation. 
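The reposync workflow discussed in this thread (sync the RPMs, then refresh the repo metadata on a schedule) can be condensed into a single cron entry. This is only a sketch: the repo id, download path, and schedule below are hypothetical placeholders, not values blessed by the RDO project:

```shell
# /etc/cron.d/rdo-mirror -- nightly RDO mirror refresh (sketch only;
# "openstack-juno" and /srv/mirror are placeholder values).
# reposync downloads any new RPMs for the given repo id; createrepo
# --update then rebuilds the repodata so yum clients of the mirror
# see the new packages.
30 2 * * * root reposync --repoid=openstack-juno --download_path=/srv/mirror --newest-only && createrepo --update /srv/mirror/openstack-juno
```

Clients would then point the baseurl of a local .repo file at the mirrored directory instead of the upstream repository.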
My list of files that I use is in https://gitlab.arif-ali.co.uk/arif/openstack-lab/tree/master/reposync. I use the yum.conf within that directory, which reposync points to. I don't have a cron job per se, but when I go back to my environment I tend to synchronise with a script before I start any new work on the system. So I have deployed the same set of OpenStack RPMs in my PoC environment more than 20 times now, whether from the controller side or the nova side. This is an environment of 3 bare-metal machines, but it is not a fixed environment, as the machines can easily be re-provisioned to any type at any time; i.e. I could have an all-in-one CentOS 7 and an all-in-one RHEL 7 on 2 different machines for testing purposes, or a 3-node OpenStack implementation with 1 x controller and 2 x nova+neutron-agents. I hope that this is useful. I am more than happy to provide people with my setup if anyone is interested. regards, -- Arif Ali IRC: arif-ali at freenode LinkedIn: http://uk.linkedin.com/in/arifali

From limao at cisco.com Thu Sep 25 14:34:18 2014 From: limao at cisco.com (Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco)) Date: Thu, 25 Sep 2014 14:34:18 +0000 Subject: [Rdo-list] Will RDO support Juno on CentOS6? In-Reply-To: <54242648.1030504@redhat.com> References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> <54242648.1030504@redhat.com> Message-ID: <316691786CCAEE44AE90482264E3AB8206BCDFE5@xmb-rcd-x08.cisco.com> Thanks, Ihar, for your kind help. It makes sense. Regards, Liping Mao -----Original Message----- From: Ihar Hrachyshka [mailto:ihrachys at redhat.com] Sent: 2014-09-25 22:27 To: Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco); rdo-list at redhat.com Subject: Re: [Rdo-list] Will RDO support Juno on CentOS6? [...]

From rbowen at redhat.com Thu Sep 25 16:43:40 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 25 Sep 2014 12:43:40 -0400 Subject: [Rdo-list] Will RDO support Juno on CentOS6?
In-Reply-To: <542423A8.9070201@redhat.com> References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> <542423A8.9070201@redhat.com> Message-ID: <5424463C.90409@redhat.com> On 09/25/2014 10:16 AM, Perry Myers wrote: > In addition, we have limited resources for packaging, CI and testing. If > we could get help from other community members to take on the burden of > packaging for EL6, helping to test it and maintain CI jobs for it, we > would not be opposed to that. But with the existing RDO packaging/CI > team that we have, continuing to maintain BOTH EL6 and EL7 is just too much. I want to draw attention to this statement. If you're that person - if you're the person in the community who has the packaging know-how and the desire to see this happen, we want to help you get up to speed to take on that task. We'd love to have people outside of the Red Hat engineering team doing some of the packaging work that we lack either the time or the priority to do. --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From pmyers at redhat.com Thu Sep 25 17:03:57 2014 From: pmyers at redhat.com (Perry Myers) Date: Thu, 25 Sep 2014 13:03:57 -0400 Subject: [Rdo-list] Will RDO support Juno on CentOS6? In-Reply-To: <5424463C.90409@redhat.com> References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> <542423A8.9070201@redhat.com> <5424463C.90409@redhat.com> Message-ID: <54244AFD.2050405@redhat.com> On 09/25/2014 12:43 PM, Rich Bowen wrote: > > On 09/25/2014 10:16 AM, Perry Myers wrote: >> In addition, we have limited resources for packaging, CI and testing. 
If >> we could get help from other community members to take on the burden of >> packaging for EL6, helping to test it and maintain CI jobs for it, we >> would not be opposed to that. But with the existing RDO packaging/CI >> team that we have, continuing to maintain BOTH EL6 and EL7 is just too >> much. > > I want to draw attention to this statement. If you're that person - if > you're the person in the community who has the packaging know-how and > the desire to see this happen, we want to help you get up to speed to > take on that task. We'd love to have people outside of the Red Hat > engineering team doing some of the packaging work that we lack either > the time or the priority to do. +1 We don't want RDO to be a Red Hat only effort. We'd love contributors on the packaging, CI, testing, docs side to come from a wider community effort. We'll invest time to help bring people up to speed and work with them so they can be productive and help out in any area that they want to contribute in. The more the merrier :) From rbowen at redhat.com Thu Sep 25 18:54:26 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 25 Sep 2014 14:54:26 -0400 Subject: [Rdo-list] Reminder: RDO test day, next week, Oct 1, 2 Message-ID: <542464E2.9090609@redhat.com> Just a reminder that next week we're having an RDO test day for Juno Milestone 3. We'll have packages for Fedora 20, RHEL 6.5 and 7, and CentOS7. You can find more details on the test day website at https://openstack.redhat.com/RDO_test_day_Juno_milestone_3 Please indicate if you're planning to participate by putting your name there. And come to #RDO on the Freenode IRC network for questions and discussion during the event. 
-- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/

From limao at cisco.com Fri Sep 26 02:46:05 2014 From: limao at cisco.com (Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco)) Date: Fri, 26 Sep 2014 02:46:05 +0000 Subject: [Rdo-list] Will RDO support Juno on CentOS6? In-Reply-To: <54244AFD.2050405@redhat.com> References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> <542423A8.9070201@redhat.com> <5424463C.90409@redhat.com> <54244AFD.2050405@redhat.com> Message-ID: <316691786CCAEE44AE90482264E3AB8206BCE223@xmb-rcd-x08.cisco.com> Thanks, Perry and Rich, for the info. It seems we will need to move to EL7 sooner or later, because the community will drop Python 2.6 in Kilo. So for now I will focus on testing the Juno RPMs on EL7. Thanks again for your kind help and quick responses. Regards, Liping Mao -----Original Message----- From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Perry Myers Sent: 2014-09-26 1:04 To: Rich Bowen; rdo-list at redhat.com Subject: Re: [Rdo-list] Will RDO support Juno on CentOS6? [...]

From limao at cisco.com Fri Sep 26 02:55:25 2014 From: limao at cisco.com (Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco)) Date: Fri, 26 Sep 2014 02:55:25 +0000 Subject: [Rdo-list] The version of Openvswitch in Juno Message-ID: <316691786CCAEE44AE90482264E3AB8206BCE23F@xmb-rcd-x08.cisco.com> Hi, I notice that openvswitch in Juno EL7 is 2.0.0, but some new features in Juno, such as DVR, need 2.1.0. https://wiki.openstack.org/wiki/Neutron/DVR_L2_Agent says: "f. For rules like these in the integration bridge, where a long list of ports might appear in the 'output port' action, this document proposes the use of 'Group Tables' facility available from OpenVswitch(OVS) version 2.1." I have not tested this myself; does anyone know whether 2.0.0 works when DVR is enabled? Thanks. Regards, Liping Mao

From limao at cisco.com Fri Sep 26 09:10:39 2014 From: limao at cisco.com (Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco)) Date: Fri, 26 Sep 2014 09:10:39 +0000 Subject: [Rdo-list] AIO juno on CentOS7 issues Message-ID: <316691786CCAEE44AE90482264E3AB8206BCE313@xmb-rcd-x08.cisco.com> Hi, When I install Juno AIO on CentOS7 today, I get two issues.
#Issue 1, glance-api can't start up: Here is the error message in glance-api.log: 2014-09-26 07:50:39.994 22950 INFO glance.wsgi.server [-] (22950) wsgi starting up on http://0.0.0.0:9292/ 2014-09-26 07:50:39.995 22943 INFO glance.wsgi.server [-] Started child 22951 2014-09-26 07:50:39.996 22951 INFO glance.wsgi.server [-] (22951) wsgi starting up on http://0.0.0.0:9292/ 2014-09-26 07:50:39.997 22943 INFO glance.wsgi.server [-] Started child 22952 2014-09-26 07:50:39.998 22952 INFO glance.wsgi.server [-] (22952) wsgi starting up on http://0.0.0.0:9292/ 2014-09-26 07:50:40.000 22943 INFO glance.wsgi.server [-] Started child 22953 2014-09-26 07:50:40.000 22953 INFO glance.wsgi.server [-] (22953) wsgi starting up on http://0.0.0.0:9292/ 2014-09-26 07:50:40.034 22943 CRITICAL glance [-] error: [Errno 13] Permission denied 2014-09-26 07:50:40.034 22943 TRACE glance Traceback (most recent call last): 2014-09-26 07:50:40.034 22943 TRACE glance File "/usr/bin/glance-api", line 10, in 2014-09-26 07:50:40.034 22943 TRACE glance sys.exit(main()) 2014-09-26 07:50:40.034 22943 TRACE glance File "/usr/lib/python2.7/site-packages/glance/cmd/api.py", line 84, in main 2014-09-26 07:50:40.034 22943 TRACE glance systemd.notify_once() 2014-09-26 07:50:40.034 22943 TRACE glance File "/usr/lib/python2.7/site-packages/glance/openstack/common/systemd.py", line 66, in notify_once 2014-09-26 07:50:40.034 22943 TRACE glance _sd_notify(True, 'READY=1') 2014-09-26 07:50:40.034 22943 TRACE glance File "/usr/lib/python2.7/site-packages/glance/openstack/common/systemd.py", line 39, in _sd_notify 2014-09-26 07:50:40.034 22943 TRACE glance sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) 2014-09-26 07:50:40.034 22943 TRACE glance File "/usr/lib/python2.7/site-packages/eventlet/greenio.py", line 125, in __init__ 2014-09-26 07:50:40.034 22943 TRACE glance fd = _original_socket(family_or_realsock, *args, **kwargs) 2014-09-26 07:50:40.034 22943 TRACE glance File "/usr/lib64/python2.7/socket.py", 
line 187, in __init__ 2014-09-26 07:50:40.034 22943 TRACE glance _sock = _realsocket(family, type, proto) 2014-09-26 07:50:40.034 22943 TRACE glance error: [Errno 13] Permission denied 2014-09-26 07:50:40.034 22943 TRACE glance #Issue 2, nova-api can't start up: The error message in nova-api.log: 2014-09-26 07:59:42.206 28353 TRACE nova Traceback (most recent call last): 2014-09-26 07:59:42.206 28353 TRACE nova File "/usr/bin/nova-api", line 10, in 2014-09-26 07:59:42.206 28353 TRACE nova sys.exit(main()) 2014-09-26 07:59:42.206 28353 TRACE nova File "/usr/lib/python2.7/site-packages/nova/cmd/api.py", line 55, in main 2014-09-26 07:59:42.206 28353 TRACE nova server = service.WSGIService(api, use_ssl=should_use_ssl) 2014-09-26 07:59:42.206 28353 TRACE nova File "/usr/lib/python2.7/site-packages/nova/service.py", line 331, in __init__ 2014-09-26 07:59:42.206 28353 TRACE nova self.manager = self._get_manager() 2014-09-26 07:59:42.206 28353 TRACE nova File "/usr/lib/python2.7/site-packages/nova/service.py", line 383, in _get_manager 2014-09-26 07:59:42.206 28353 TRACE nova return manager_class() 2014-09-26 07:59:42.206 28353 TRACE nova File "/usr/lib/python2.7/site-packages/nova/api/manager.py", line 30, in __init__ 2014-09-26 07:59:42.206 28353 TRACE nova self.network_driver.metadata_accept() 2014-09-26 07:59:42.206 28353 TRACE nova File "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line 666, in metadata_accept 2014-09-26 07:59:42.206 28353 TRACE nova iptables_manager.apply() 2014-09-26 07:59:42.206 28353 TRACE nova File "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line 434, in apply 2014-09-26 07:59:42.206 28353 TRACE nova self._apply() 2014-09-26 07:59:42.206 28353 TRACE nova File "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 322, in inner 2014-09-26 07:59:42.206 28353 TRACE nova with lock(name, lock_file_prefix, external, lock_path): 2014-09-26 07:59:42.206 28353 TRACE nova File 
"/usr/lib64/python2.7/contextlib.py", line 17, in __enter__ 2014-09-26 07:59:42.206 28353 TRACE nova return self.gen.next() 2014-09-26 07:59:42.206 28353 TRACE nova File "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 287, in lock 2014-09-26 07:59:42.206 28353 TRACE nova with ext_lock: 2014-09-26 07:59:42.206 28353 TRACE nova File "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 171, in __enter__ 2014-09-26 07:59:42.206 28353 TRACE nova self.acquire() 2014-09-26 07:59:42.206 28353 TRACE nova File "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 166, in acquire 2014-09-26 07:59:42.206 28353 TRACE nova initial_value=1) 2014-09-26 07:59:42.206 28353 TRACE nova OSError: [Errno 38] Function not implemented 2014-09-26 07:59:42.206 28353 TRACE nova I temporarily worked around the two issues by running glance-api and nova-api as root... After this, everything else works well for me. Has anyone seen this kind of error before? Regards, Liping Mao

From ihrachys at redhat.com Fri Sep 26 09:34:10 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Fri, 26 Sep 2014 11:34:10 +0200 Subject: [Rdo-list] AIO juno on CentOS7 issues In-Reply-To: <316691786CCAEE44AE90482264E3AB8206BCE313@xmb-rcd-x08.cisco.com> References: <316691786CCAEE44AE90482264E3AB8206BCE313@xmb-rcd-x08.cisco.com> Message-ID: <54253312.2000609@redhat.com> Do you have SELinux enabled? Any AVCs in audit.log? /Ihar On 26/09/14 11:10, Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco) wrote: > When I install Juno AIO on CentOS7 today, I get two issues. [...]

From limao at cisco.com Fri Sep 26 09:52:07 2014 From: limao at cisco.com (Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco)) Date: Fri, 26 Sep 2014 09:52:07 +0000 Subject: [Rdo-list] AIO juno on CentOS7 issues In-Reply-To: <54253312.2000609@redhat.com> References: <316691786CCAEE44AE90482264E3AB8206BCE313@xmb-rcd-x08.cisco.com> <54253312.2000609@redhat.com> Message-ID: <316691786CCAEE44AE90482264E3AB8206BCE341@xmb-rcd-x08.cisco.com> Thanks Ihar, My SELinux is enabled, and I have AVCs in audit.log: type=AVC msg=audit(1411721759.040:33286): avc: denied { dac_override } for pid=15974 comm="nova-api" capability=1 scontext=system_u:system_r:nova_api_t:s0 tcontext=system_u:system_r:nova_api_t:s0 tclass=capability type=AVC msg=audit(1411721759.040:33286): avc: denied { dac_read_search } for pid=15974 comm="nova-api" capability=2 scontext=system_u:system_r:nova_api_t:s0 tcontext=system_u:system_r:nova_api_t:s0 tclass=capability type=SYSCALL msg=audit(1411721759.040:33286): arch=c000003e syscall=2 success=no exit=-13 a0=e183d0 a1=0 a2=1b6 a3=0 items=0 ppid=1 pid=15974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="nova-api" exe="/usr/bin/python2.7" subj=system_u:system_r:nova_api_t:s0 key=(null) After I disable
SELinux, nova-api and glance-api can work well without error. Thanks. Regards, Liping Mao -----Original Message----- From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Ihar Hrachyshka Sent: 2014-9-26 17:34 To: rdo-list at redhat.com Subject: Re: [Rdo-list] AIO juno on CentOS7 issues -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 Do you have SELinux enabled? Any AVCs in audit.log? /Ihar On 26/09/14 11:10, Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco) wrote: > Hi , > > > > When I install Juno AIO on CentOS7 today, I get two issues. > > > > #Issue 1, glance-api can't start up: > > Here is the error message in glance-api.log: > > 2014-09-26 07:50:39.994 22950 INFO glance.wsgi.server [-] (22950) wsgi > starting up on http://0.0.0.0:9292/ > > 2014-09-26 07:50:39.995 22943 INFO glance.wsgi.server [-] Started > child 22951 > > 2014-09-26 07:50:39.996 22951 INFO glance.wsgi.server [-] (22951) wsgi > starting up on http://0.0.0.0:9292/ > > 2014-09-26 07:50:39.997 22943 INFO glance.wsgi.server [-] Started > child 22952 > > 2014-09-26 07:50:39.998 22952 INFO glance.wsgi.server [-] (22952) wsgi > starting up on http://0.0.0.0:9292/ > > 2014-09-26 07:50:40.000 22943 INFO glance.wsgi.server [-] Started > child 22953 > > 2014-09-26 07:50:40.000 22953 INFO glance.wsgi.server [-] (22953) wsgi > starting up on http://0.0.0.0:9292/ > > 2014-09-26 07:50:40.034 22943 CRITICAL glance [-] error: [Errno 13] > Permission denied > > 2014-09-26 07:50:40.034 22943 TRACE glance Traceback (most recent call > last): > > 2014-09-26 07:50:40.034 22943 TRACE glance File > "/usr/bin/glance-api", line 10, in > > 2014-09-26 07:50:40.034 22943 TRACE glance sys.exit(main()) > > 2014-09-26 07:50:40.034 22943 TRACE glance File > "/usr/lib/python2.7/site-packages/glance/cmd/api.py", line 84, in main > > 2014-09-26 07:50:40.034 22943 TRACE glance > systemd.notify_once() > > 2014-09-26 07:50:40.034 22943 TRACE glance File >
"/usr/lib/python2.7/site-packages/glance/openstack/common/systemd.py", > > line 66, in notify_once > > 2014-09-26 07:50:40.034 22943 TRACE glance _sd_notify(True, > 'READY=1') > > 2014-09-26 07:50:40.034 22943 TRACE glance File > "/usr/lib/python2.7/site-packages/glance/openstack/common/systemd.py", > > line 39, in _sd_notify > > 2014-09-26 07:50:40.034 22943 TRACE glance sock = > socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) > > 2014-09-26 07:50:40.034 22943 TRACE glance File > "/usr/lib/python2.7/site-packages/eventlet/greenio.py", line 125, in > __init__ > > 2014-09-26 07:50:40.034 22943 TRACE glance fd = > _original_socket(family_or_realsock, *args, **kwargs) > > 2014-09-26 07:50:40.034 22943 TRACE glance File > "/usr/lib64/python2.7/socket.py", line 187, in __init__ > > 2014-09-26 07:50:40.034 22943 TRACE glance _sock = > _realsocket(family, type, proto) > > 2014-09-26 07:50:40.034 22943 TRACE glance error: [Errno 13] > Permission denied > > 2014-09-26 07:50:40.034 22943 TRACE glance > > > > > > #Issue 2, nova-api can't start up: > > The error message in nova-api.log: > > 2014-09-26 07:59:42.206 28353 TRACE nova Traceback (most recent call > last): > > 2014-09-26 07:59:42.206 28353 TRACE nova File > "/usr/bin/nova-api", line 10, in > > 2014-09-26 07:59:42.206 28353 TRACE nova sys.exit(main()) > > 2014-09-26 07:59:42.206 28353 TRACE nova File > "/usr/lib/python2.7/site-packages/nova/cmd/api.py", line 55, in main > > 2014-09-26 07:59:42.206 28353 TRACE nova server = > service.WSGIService(api, use_ssl=should_use_ssl) > > 2014-09-26 07:59:42.206 28353 TRACE nova File > "/usr/lib/python2.7/site-packages/nova/service.py", line 331, in > __init__ > > 2014-09-26 07:59:42.206 28353 TRACE nova self.manager = > self._get_manager() > > 2014-09-26 07:59:42.206 28353 TRACE nova File > "/usr/lib/python2.7/site-packages/nova/service.py", line 383, in > _get_manager > > 2014-09-26 07:59:42.206 28353 TRACE nova return > manager_class() > > 2014-09-26 07:59:42.206 28353 TRACE
nova File > "/usr/lib/python2.7/site-packages/nova/api/manager.py", line 30, in > __init__ > > 2014-09-26 07:59:42.206 28353 TRACE nova > self.network_driver.metadata_accept() > > 2014-09-26 07:59:42.206 28353 TRACE nova File > "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line > 666, in metadata_accept > > 2014-09-26 07:59:42.206 28353 TRACE nova > iptables_manager.apply() > > 2014-09-26 07:59:42.206 28353 TRACE nova File > "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line > 434, in apply > > 2014-09-26 07:59:42.206 28353 TRACE nova self._apply() > > 2014-09-26 07:59:42.206 28353 TRACE nova File > "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", > > line 322, in inner > > 2014-09-26 07:59:42.206 28353 TRACE nova with lock(name, > lock_file_prefix, external, lock_path): > > 2014-09-26 07:59:42.206 28353 TRACE nova File > "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__ > > 2014-09-26 07:59:42.206 28353 TRACE nova return > self.gen.next() > > 2014-09-26 07:59:42.206 28353 TRACE nova File > "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", > > line 287, in lock > > 2014-09-26 07:59:42.206 28353 TRACE nova with ext_lock: > > 2014-09-26 07:59:42.206 28353 TRACE nova File > "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", > > line 171, in __enter__ > > 2014-09-26 07:59:42.206 28353 TRACE nova self.acquire() > > 2014-09-26 07:59:42.206 28353 TRACE nova File > "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", > > line 166, in acquire > > 2014-09-26 07:59:42.206 28353 TRACE nova initial_value=1) > > 2014-09-26 07:59:42.206 28353 TRACE nova OSError: [Errno 38] Function > not implemented > > 2014-09-26 07:59:42.206 28353 TRACE nova > > > > > > > > I temporarily skipped the two issues by running glance-api and nova-api with the > root user. After this, everything else works well for me. > > Anyone get this kind of error before?
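A note for anyone hitting the same AVCs: rather than disabling SELinux outright, denials like these can usually be collected into a local policy module with audit2allow (from policycoreutils-python) while the rest of the system stays enforcing. As a sketch only — the module name is arbitrary, and the proper long-term fix is an updated distribution policy (report the AVCs as a bug) — a module covering the two nova-api denials quoted above would look roughly like:

```
# local_nova_api.te -- sketch; in practice generated and loaded with:
#   grep nova-api /var/log/audit/audit.log | audit2allow -M local_nova_api
#   semodule -i local_nova_api.pp
module local_nova_api 1.0;

require {
        type nova_api_t;
        class capability { dac_override dac_read_search };
}

# permit nova-api to bypass DAC ownership/permission checks,
# matching the denied { dac_override dac_read_search } AVCs above
allow nova_api_t self:capability { dac_override dac_read_search };
```

This is a stopgap that keeps SELinux enforcing for everything else; the denials should still be reported so the policy package can be fixed.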
> > > > > > Regards, > > Liping Mao > > > > > > _______________________________________________ Rdo-list mailing list > Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUJTMSAAoJEC5aWaUY1u57h0YIAIR3C4YwRfCX7iBMWZXzRXYZ owFxyGHhnP8B+8xtKc5+ewfhXe8plU6I+RvGFVgGWCk/ZdN1eSyUcmSKUynrz5Sk Qp6WNT9JCOQ3nkWqK3lHYHEpa6koixQRm2f27Kw1/dYhjej+MX0bPa3e0Z+w0rZ4 eDILUlURj9NyMegSGEwCf0IBTB/ElMPmq5DMSpXQxgcRQ6qcCvqvcTn6FI/3XeL2 VjuTxSOXmrtUYjbHziAUbEh/KpWokIYvVCZTS2pDNHm8z6rZjj4wfvTBrYyfJyaA 8j02i+f7sMYYYiWlDBWpwok+TxMFWvUpykjEi2O/kamyeDo4/L10sFpV56FzxQU= =0WqW -----END PGP SIGNATURE----- _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list From Tim.Bell at cern.ch Fri Sep 26 10:15:38 2014 From: Tim.Bell at cern.ch (Tim Bell) Date: Fri, 26 Sep 2014 10:15:38 +0000 Subject: [Rdo-list] Will RDO support Juno on CentOS6? In-Reply-To: <316691786CCAEE44AE90482264E3AB8206BCE223@xmb-rcd-x08.cisco.com> References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> <542423A8.9070201@redhat.com> <5424463C.90409@redhat.com> <54244AFD.2050405@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCE223@xmb-rcd-x08.cisco.com> Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E50102385569@CERNXCHG42.cern.ch> > > We don't want RDO to be a Red Hat only effort. We'd love contributors on the > packaging, CI, testing, docs side to come from a wider community effort. > > We'll invest time to help bring people up to speed and work with them so they > can be productive and help out in any area that they want to contribute in. > > The more the merrier :) > Thanks for the clarification on the plans. 
In view of installed base at CERN (now over 3,000 servers), we would be interested in el6 support for Juno and would like to discuss the details of how this could be done in Paris. The components of particular interest for us are Nova, Ceilometer and the client packages. Could you describe the work involved so we can review skills and availability on our side ? How about an RDO BOF at the summit ? Tim, Jan, et al. From pmyers at redhat.com Fri Sep 26 13:49:21 2014 From: pmyers at redhat.com (Perry Myers) Date: Fri, 26 Sep 2014 09:49:21 -0400 Subject: [Rdo-list] Will RDO support Juno on CentOS6? In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E50102385569@CERNXCHG42.cern.ch> References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> <542423A8.9070201@redhat.com> <5424463C.90409@redhat.com> <54244AFD.2050405@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCE223@xmb-rcd-x08.cisco.com> <5D7F9996EA547448BC6C54C8C5AAF4E50102385569@CERNXCHG42.cern.ch> Message-ID: <54256EE1.8020608@redhat.com> On 09/26/2014 06:15 AM, Tim Bell wrote: >> >> We don't want RDO to be a Red Hat only effort. We'd love contributors on the >> packaging, CI, testing, docs side to come from a wider community effort. >> >> We'll invest time to help bring people up to speed and work with them so they >> can be productive and help out in any area that they want to contribute in. >> >> The more the merrier :) >> > > Thanks for the clarification on the plans. In view of installed base > at CERN (now over 3,000 servers), we would be interested in el6 > support for Juno and would like to discuss the details of how this > could be done in Paris. The components of particular interest for us > are Nova, Ceilometer and the client packages. > > Could you describe the work involved so we can review skills and > availability on our side ? 
There are a few aspects of supporting a distribution: * Packaging of all of the core components of OpenStack, which means keeping dependencies up to date, dealing with rebases from the stable branch releases, backporting critical fixes from the stable branch * Maintaining, monitoring and troubleshooting CI jobs * Test day work, making sure that the distro works smoothly * Bug triage for things specific to the platform (i.e. a bug on CentOS6 that is not present on CentOS7) I'm sure others can weigh in with more details, but that's at least a start. Folks like Alan Pevec, Wes Hayutin, Jakub Ruzicka can probably assist with more details. One thing is that if we are going to do an EL6 distro, it can't just contain a subset of the components that you identified above (Nova, Ceilometer, clients). > How about an RDO BOF at the summit ? A BOF sounds like a good idea. I'm not sure how one goes about organizing a BOF though... Anyone else know how this might work? Perry From rbryant at redhat.com Fri Sep 26 13:51:31 2014 From: rbryant at redhat.com (Russell Bryant) Date: Fri, 26 Sep 2014 09:51:31 -0400 Subject: [Rdo-list] Will RDO support Juno on CentOS6? In-Reply-To: <54256EE1.8020608@redhat.com> References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> <542423A8.9070201@redhat.com> <5424463C.90409@redhat.com> <54244AFD.2050405@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCE223@xmb-rcd-x08.cisco.com> <5D7F9996EA547448BC6C54C8C5AAF4E50102385569@CERNXCHG42.cern.ch> <54256EE1.8020608@redhat.com> Message-ID: <54256F63.8060607@redhat.com> On 09/26/2014 09:49 AM, Perry Myers wrote: >> How about an RDO BOF at the summit ? > > A BOF sounds like a good idea. > > I'm not sure how one goes about organizing a BOF though... Anyone else > know how this might work? The design summit includes a "pods" area that can be used for self-organized development meetups. 
Some of the pods are dedicated to projects. I believe there is general space as well, though. Once the details of the design summit are more clear, it's just a matter of picking a time and agreeing to meet in this area. -- Russell Bryant From apevec at gmail.com Fri Sep 26 14:53:51 2014 From: apevec at gmail.com (Alan Pevec) Date: Fri, 26 Sep 2014 16:53:51 +0200 Subject: [Rdo-list] Will RDO support Juno on CentOS6? In-Reply-To: <54256EE1.8020608@redhat.com> References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> <542423A8.9070201@redhat.com> <5424463C.90409@redhat.com> <54244AFD.2050405@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCE223@xmb-rcd-x08.cisco.com> <5D7F9996EA547448BC6C54C8C5AAF4E50102385569@CERNXCHG42.cern.ch> <54256EE1.8020608@redhat.com> Message-ID: > There are a few aspects of supporting a distribution: > > * Packaging of all of the core components of OpenStack, which means > keeping dependencies up to date, dealing with rebases from the stable > branch releases, backporting critical fixes from the stable branch > > * Maintaining, monitoring and troubleshooting CI jobs > > * Test day work, making sure that the distro works smoothly > > * Bug triage for things specific to the platform (i.e. a bug on CentOS6 > that is not present on CentOS7) * Deal with differences in base components like kernel, libvirt, qemu-kvm With RHEL6 it is becoming increasingly impossible to get required features into the above components to support OpenStack; we already jumped through hoops up to Icehouse. A solution could be to replace all those with upstream rebuilds (kernel-ml, virt-preview for EL), but what's then left of the base OS, and how supportable would that combination be?
Cheers, Alan From pmyers at redhat.com Fri Sep 26 14:56:57 2014 From: pmyers at redhat.com (Perry Myers) Date: Fri, 26 Sep 2014 10:56:57 -0400 Subject: [Rdo-list] Will RDO support Juno on CentOS6? In-Reply-To: References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> <542423A8.9070201@redhat.com> <5424463C.90409@redhat.com> <54244AFD.2050405@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCE223@xmb-rcd-x08.cisco.com> <5D7F9996EA547448BC6C54C8C5AAF4E50102385569@CERNXCHG42.cern.ch> <54256EE1.8020608@redhat.com> Message-ID: <54257EB9.6030606@redhat.com> On 09/26/2014 10:53 AM, Alan Pevec wrote: >> There are a few aspects of supporting a distribution: >> >> * Packaging of all of the core components of OpenStack, which means >> keeping dependencies up to date, dealing with rebases from the stable >> branch releases, backporting critical fixes from the stable branch >> >> * Maintaining, monitoring and troubleshooting CI jobs >> >> * Test day work, making sure that the distro works smoothly >> >> * Bug triage for things specific to the platform (i.e. a bug on CentOS6 >> that is not present on CentOS7) > > * Deal with differences in base components like kernel, libvirt, qemu-kvm > > With RHEL6 it is becoming increasingly impossible to get required > features into above components to support OpenStack, we already went > through to hoops until Icehouse. > Solution could be to replace all those with upstream rebuilds > (kernel-ml, virt-preview for EL) but what's then left from the base OS > and how supportable would be that combination? I would say instead that we just accept that EL6 OpenStack going forward will have more limited features and certain things will not work. From apevec at gmail.com Fri Sep 26 15:28:03 2014 From: apevec at gmail.com (Alan Pevec) Date: Fri, 26 Sep 2014 17:28:03 +0200 Subject: [Rdo-list] Will RDO support Juno on CentOS6?
In-Reply-To: <54257EB9.6030606@redhat.com> References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> <542423A8.9070201@redhat.com> <5424463C.90409@redhat.com> <54244AFD.2050405@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCE223@xmb-rcd-x08.cisco.com> <5D7F9996EA547448BC6C54C8C5AAF4E50102385569@CERNXCHG42.cern.ch> <54256EE1.8020608@redhat.com> <54257EB9.6030606@redhat.com> Message-ID: > I would say instead that we instead just accept that EL6 OpenStack going > forward will have more limited features and certain things will not work. What's the motivation to upgrade to Juno then? Upstream stable/icehouse will live 15 months i.e. until July 2015 so RDO Icehouse EL6 will be available until then. For longer support see https://access.redhat.com/site/support/policy/updates/openstack/platform Cheers, Alan From Tim.Bell at cern.ch Fri Sep 26 15:31:17 2014 From: Tim.Bell at cern.ch (Tim Bell) Date: Fri, 26 Sep 2014 15:31:17 +0000 Subject: [Rdo-list] Will RDO support Juno on CentOS6? In-Reply-To: References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> <542423A8.9070201@redhat.com> <5424463C.90409@redhat.com> <54244AFD.2050405@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCE223@xmb-rcd-x08.cisco.com> <5D7F9996EA547448BC6C54C8C5AAF4E50102385569@CERNXCHG42.cern.ch> <54256EE1.8020608@redhat.com> <54257EB9.6030606@redhat.com> Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E501023869E3@CERNXCHG42.cern.ch> > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > Behalf Of Alan Pevec > Sent: 26 September 2014 17:28 > To: Perry Myers > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] Will RDO support Juno on CentOS6? 
> > > I would say instead that we instead just accept that EL6 OpenStack > > going forward will have more limited features and certain things will not work. > > What's the motivation to upgrade to Juno then? Upstream stable/icehouse will > live 15 months i.e. until July 2015 so RDO Icehouse EL6 will be available until > then. For longer support see > https://access.redhat.com/site/support/policy/updates/openstack/platform > For us, we want to move to Juno+CentOS 7 for new installs. It is not clear to me that we can run for an extended period with some of the compute nodes on Icehouse and others on Juno. Theory indicates that this may be possible but it remains to be seen. Tim > Cheers, > Alan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From rbowen at redhat.com Fri Sep 26 17:47:56 2014 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 26 Sep 2014 13:47:56 -0400 Subject: [Rdo-list] Reminder: RDO Juno M3 test day next week Message-ID: <5425A6CC.80504@redhat.com> A reminder that we'll be doing a test day for Juno Milestone 3 next week, October 1 and 2. Details are at https://openstack.redhat.com/RDO_test_day_Juno_milestone_3 The test case page - https://openstack.redhat.com/RDO_test_day_Juno_milestone_3_test_cases - is still looking kind of sparse, and this would be a great place for folks to jump in with suggestions of the kinds of scenarios that we want to test. Thanks. 
-- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From limao at cisco.com Mon Sep 29 03:04:47 2014 From: limao at cisco.com (Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco)) Date: Mon, 29 Sep 2014 03:04:47 +0000 Subject: [Rdo-list] python-sqlalchemy is removed in epel Message-ID: <316691786CCAEE44AE90482264E3AB8206BCF768@xmb-rcd-x08.cisco.com> Hi, I find that python-sqlalchemy is removed from EL7, and in the Juno repo, we have this rpm. https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/ But in icehouse for EL7 we do not have this. https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-7/ I think it will make packstack not work for icehouse EL7. Icehouse EL7 worked for me a few days ago; python-sqlalchemy was installed from EL7 at that time: [root at rcehouse-rdo-centos7 ~]# yum list installed python-sqlalchemy Loaded plugins: fastestmirror, priorities Loading mirror speeds from cached hostfile * base: centos.mirror.facebook.net * epel: mirror.nus.edu.sg * extras: holmes.umflint.edu * updates: mirror.acsnet.com 18 packages excluded due to repository priority protections Installed Packages python-sqlalchemy.x86_64 0.8.4-1.el7 @epe Should python-sqlalchemy also be added to the icehouse repo? Thanks. Regards, Liping Mao -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Mon Sep 29 10:57:32 2014 From: apevec at gmail.com (Alan Pevec) Date: Mon, 29 Sep 2014 12:57:32 +0200 Subject: [Rdo-list] python-sqlalchemy is removed in epel In-Reply-To: <316691786CCAEE44AE90482264E3AB8206BCF768@xmb-rcd-x08.cisco.com> References: <316691786CCAEE44AE90482264E3AB8206BCF768@xmb-rcd-x08.cisco.com> Message-ID: > I find that python-sqlalchemy is removed from EL7 I have no idea why python-sqlalchemy was retired from epel7, I can't find any releng ticket or bz about it.
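The breakage Liping describes is a repository-closure gap: packages that require python-sqlalchemy are still shipped while the package itself is gone. The real check against enabled repos is done with `repoclosure` from yum-utils; as a toy illustration only (package names taken from this thread, versions ignored):

```python
# Toy repo-closure check: find packages whose Requires cannot be satisfied
# from within the same package set. (Illustration only -- use `repoclosure`
# from yum-utils for real repositories.)

def unresolved(repo):
    """For a {package: [requires]} map, return {package: [missing requires]}."""
    missing = {}
    for pkg, requires in sorted(repo.items()):
        gaps = [req for req in requires if req not in repo]
        if gaps:
            missing[pkg] = gaps
    return missing

# EPEL7 as described in the thread: python-sqlalchemy retired,
# its dependents left behind
epel7 = {
    "python-alembic": ["python-sqlalchemy"],
    "python-migrate": ["python-sqlalchemy"],
}

print(unresolved(epel7))
# -> {'python-alembic': ['python-sqlalchemy'], 'python-migrate': ['python-sqlalchemy']}

# Carrying python-sqlalchemy in the RDO repo itself closes the set again:
epel7["python-sqlalchemy"] = []
print(unresolved(epel7))  # -> {}
```

The same closure idea explains why python-migrate and python-alembic "still being present" in EPEL7 is itself a problem: they are uninstallable without their dependency.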
pjp, I see you owned EPEL branches in pkgdb, any clues why it was retired in EPEL7 ? In EPEL6 it is clear since it was in RHEL6 base but python-sqlalchemy is not in RHEL7 base or extras afaik. Cheers, Alan From dneary at redhat.com Mon Sep 29 14:56:49 2014 From: dneary at redhat.com (Dave Neary) Date: Mon, 29 Sep 2014 10:56:49 -0400 Subject: [Rdo-list] Will RDO support Juno on CentOS6? In-Reply-To: References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> <542423A8.9070201@redhat.com> <5424463C.90409@redhat.com> <54244AFD.2050405@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCE223@xmb-rcd-x08.cisco.com> <5D7F9996EA547448BC6C54C8C5AAF4E50102385569@CERNXCHG42.cern.ch> <54256EE1.8020608@redhat.com> <54257EB9.6030606@redhat.com> Message-ID: <54297331.4000703@redhat.com> Hi, On 09/26/2014 11:28 AM, Alan Pevec wrote: >> I would say instead that we instead just accept that EL6 OpenStack going >> forward will have more limited features and certain things will not work. > > What's the motivation to upgrade to Juno then? Upstream > stable/icehouse will live 15 months i.e. until July 2015 so RDO > Icehouse EL6 will be available until then. For longer support see > https://access.redhat.com/site/support/policy/updates/openstack/platform If you're happy with a smaller release set, then you get the Juno version of Nova, Neutron, Cinder, Keystone, etc, but you don't get any newer projects bundled/packaged. You will quickly run into issues related to networking because of the newer features in Open vSwitch which are needed, and continuing to make it work with an older Python (or getting it working with a newer Python + dependencies pulled in via a SCL) will be challenging, but aside from those issues, you should be able to maintain the subset of modules which were previously packaged on EL6 pretty easily. Cheers, Dave. 
-- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +1-978-399-2182 / Cell: +1-978-799-3338 From ihrachys at redhat.com Mon Sep 29 15:05:28 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 29 Sep 2014 17:05:28 +0200 Subject: [Rdo-list] Will RDO support Juno on CentOS6? In-Reply-To: <54297331.4000703@redhat.com> References: <316691786CCAEE44AE90482264E3AB8206BCDE75@xmb-rcd-x08.cisco.com> <5423EC8E.8080901@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCDF43@xmb-rcd-x08.cisco.com> <542423A8.9070201@redhat.com> <5424463C.90409@redhat.com> <54244AFD.2050405@redhat.com> <316691786CCAEE44AE90482264E3AB8206BCE223@xmb-rcd-x08.cisco.com> <5D7F9996EA547448BC6C54C8C5AAF4E50102385569@CERNXCHG42.cern.ch> <54256EE1.8020608@redhat.com> <54257EB9.6030606@redhat.com> <54297331.4000703@redhat.com> Message-ID: <54297538.5030305@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 29/09/14 16:56, Dave Neary wrote: > which are needed, and continuing to make it work with an older > Python This specific issue should not be of concern since upstream supports Python 2.6 for Juno. 
/Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUKXU4AAoJEC5aWaUY1u57DrUH/1H4qKhVjDbB0uagtCbAndma jWBlFMo+J6w8J5QRQZxTncKgsSCGPqM6c8qCHDiwmS/XW136p013PzbproaCxdu6 0CN3EBpGvBckciz/UqzmrHdhAJDu1M7xryMHRFUB3fNVlC40SekFyjKiiMaHR4nf vAItMADaieltowL5z6iVf5Retr1H+rS+OuI9YLlSiKA7JvER4USDBCJDmLb0XAPz Z4Rweo4DAuT+lrv9Suhn41rU9OMhVqTCSL/Ydiv+KaHW6fp5R9DE76YQWMBqTqxM fB1LM/fVTAW40RxYOAw/ZeuJEsJ21y4cqOaB16upH8qCAR0J1Q0SMdbHrIewTIU= =sq27 -----END PGP SIGNATURE----- From mbayer at redhat.com Mon Sep 29 15:11:59 2014 From: mbayer at redhat.com (Mike Bayer) Date: Mon, 29 Sep 2014 11:11:59 -0400 Subject: [Rdo-list] python-sqlalchemy is removed in epel In-Reply-To: References: <316691786CCAEE44AE90482264E3AB8206BCF768@xmb-rcd-x08.cisco.com> Message-ID: On Sep 29, 2014, at 6:57 AM, Alan Pevec wrote: >> I find that python-sqlalchemy is removed from EL7 > > I have no idea why was python-sqlalchemy retired from epel7, I can't > find any releng ticket or bz about it. > > pjp, I see you owned EPEL branches in pkgdb, any clues why it was > retired in EPEL7 ? > In EPEL6 it is clear since it was in RHEL6 base but python-sqlalchemy > is not in RHEL7 base or extras afaik. It seems to me adding it to icehouse would be the consistent thing. However we might want to check on python-migrate and python-alembic also? These two packages still appear to be present in EPEL 7 - I'm not sure how that even works if their dependency python-sqlalchemy isn't there? > > Cheers, > Alan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From rdo-info at redhat.com Mon Sep 29 16:04:24 2014 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 29 Sep 2014 16:04:24 +0000 Subject: [Rdo-list] [RDO] Blog Roundup, week of September 22, 2014 Message-ID: <00000148c227d753-ceaf5966-31ea-4ff9-bf11-957730434c8d-000000@email.amazonses.com> rbowen started a discussion.
Blog Roundup, week of September 22, 2014 --- Follow the link below to check it out: https://openstack.redhat.com/forum/discussion/987/blog-roundup-week-of-september-22-2014 Have a great day! From pj.pandit at yahoo.co.in Mon Sep 29 17:36:14 2014 From: pj.pandit at yahoo.co.in (P J P) Date: Tue, 30 Sep 2014 01:36:14 +0800 Subject: [Rdo-list] python-sqlalchemy is removed in epel In-Reply-To: References: <316691786CCAEE44AE90482264E3AB8206BCF768@xmb-rcd-x08.cisco.com> Message-ID: <1412012174.39151.YahooMailNeo@web192405.mail.sg3.yahoo.com> Hello Alan, > On Monday, 29 September 2014 4:27 PM, Alan Pevec wrote: > pjp, I see you owned EPEL branches in pkgdb, any clues why it was > retired in EPEL7 ? In EPEL6 it is clear since it was in RHEL6 base but python-sqlalchemy > is not in RHEL7 base or extras afaik. Not sure why it was retired from epel7, I'll try to find out and update here asap. --- Regards -Prasad http://feedmug.com From rbowen at redhat.com Mon Sep 29 18:56:10 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 29 Sep 2014 14:56:10 -0400 Subject: [Rdo-list] Fwd: [Openstack] User Survey 2014 In-Reply-To: <5425105E.5020904@openstack.org> References: <5425105E.5020904@openstack.org> Message-ID: <5429AB4A.7070307@redhat.com> As you know, each year right before the OpenStack Summit there's a user survey to see who's using OpenStack, for what, where, and so on. Please take 10 minutes out of your day, some time this week, to complete the survey. Even if you've done the survey before, please go through and update it. Your responses expire if you don't update them each time. Also, we want to be sure that we have the latest information in the survey, so that we have a clear idea of what's going on in the OpenStack world. 
--Rich -------- Forwarded Message -------- Subject: [Openstack] User Survey 2014 Date: Fri, 26 Sep 2014 15:06:06 +0800 From: Tom Fifield To: openstack at lists.openstack.org, OpenStack Operators Hi all, As you know, before previous summits we have been running a survey of OpenStack users. We're doing it again, and we need your help! If your organization is running an OpenStack cloud, please participate in the survey [http://www.openstack.org/user-survey]. If you already completed the survey before, you can simply log back in to update your deployment details and answer a few new questions. Please note that if your survey response has not been updated in 12 months, it will expire, so we encourage you to take this time to update your existing profile so your deployment can be included in the upcoming analysis. As a member of our community, please help us spread the word. We're trying to gather as much real-world deployment data as possible to share back with you. The information provided is confidential and will only be presented in aggregate unless the user consents to making it public. The deadline to complete the survey and be part of the next report is October 7 at 23:00 UTC. Questions? Check out the FAQ [https://www.openstack.org/user-survey/faq] or contact me ;) Thanks for your help! Regards, Tom _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lars at redhat.com Mon Sep 29 19:15:54 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Mon, 29 Sep 2014 15:15:54 -0400 Subject: [Rdo-list] Fwd: [Openstack] User Survey 2014 In-Reply-To: <5429AB4A.7070307@redhat.com> References: <5425105E.5020904@openstack.org> <5429AB4A.7070307@redhat.com> Message-ID: <20140929191554.GB13852@redhat.com> On Mon, Sep 29, 2014 at 02:56:10PM -0400, Rich Bowen wrote: > Even if you've done the survey before, please go through and update it. Your > responses expire if you don't update them each time. Also, we want to be > sure that we have the latest information in the survey, so that we have a > clear idea of what's going on in the OpenStack world. Wish it would just authenticate against launchpad rather than making one enter credentials. I don't know them off the top of my head, so I always punt on filling this out... I know, nothing but whine, whine, whine. Yes, I will go look up my password... -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From rbowen at redhat.com Mon Sep 29 19:20:38 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 29 Sep 2014 15:20:38 -0400 Subject: [Rdo-list] RDO-related Meetups in the coming week (September 29, 2014) Message-ID: <5429B106.1080304@redhat.com> The following are the meetups I'm aware of in the coming week where RDO enthusiasts will be gathering. If you know of others, please let me know, and/or add them to http://openstack.redhat.com/Events If you attend any of these meetups, please take pictures, and send me some. If you blog about the events (and you should), please send me that, too. 
* Monday, September 29, Learn about OpenStack Heat and Docker, Centennial, Colorado - http://www.meetup.com/OpenStack-Denver/events/205445542/ * October 1-2, RDO Juno Milestone 3 test day, #RDO on Freenode IRC - https://openstack.redhat.com/RDO_test_day_Juno_milestone_3 * Wednesday, October 1, 1st Tokyo OpenStack Meetup at Midokura - http://www.meetup.com/Tokyo-OpenStack-Meetup/events/204771502/ * Thursday, October 2, OpenStack Neutron: Networking as a Service, Auckland - http://www.meetup.com/New-Zealand-OpenStack-User-Group/events/203803432/ * Thursday, October 2, South Bay OpenStack Meetup, Beginner track, SFBay OpenStack - http://www.meetup.com/openstack/events/150932712/ * Thursday, October 2, OpenStack Barbican, San Antonio, TX - http://www.meetup.com/Alamo-City-Python-Group/events/210157372/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Mon Sep 29 19:22:06 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 30 Sep 2014 00:52:06 +0530 Subject: [Rdo-list] Fwd: [Openstack] User Survey 2014 In-Reply-To: <20140929191554.GB13852@redhat.com> References: <5425105E.5020904@openstack.org> <5429AB4A.7070307@redhat.com> <20140929191554.GB13852@redhat.com> Message-ID: <20140929192206.GA28733@tesla.redhat.com> On Mon, Sep 29, 2014 at 03:15:54PM -0400, Lars Kellogg-Stedman wrote: > On Mon, Sep 29, 2014 at 02:56:10PM -0400, Rich Bowen wrote: > > Even if you've done the survey before, please go through and update it. Your > > responses expire if you don't update them each time. Also, we want to be > > sure that we have the latest information in the survey, so that we have a > > clear idea of what's going on in the OpenStack world. > > Wish it would just authenticate against launchpad rather than making > one enter credentials. I don't know them off the top of my head, so I > always punt on filling this out... 
Heh, I was just wondering the same. Resentfully hit the "Reset password" link; at least it's instantaneous to get a new password :-)

--
/kashyap

From apevec at gmail.com Mon Sep 29 19:55:12 2014
From: apevec at gmail.com (Alan Pevec)
Date: Mon, 29 Sep 2014 21:55:12 +0200
Subject: [Rdo-list] python-sqlalchemy is removed in epel
In-Reply-To:
References: <316691786CCAEE44AE90482264E3AB8206BCF768@xmb-rcd-x08.cisco.com>
Message-ID:

2014-09-29 17:11 GMT+02:00 Mike Bayer :
> It seems to me adding it to icehouse would be the consistent thing.
> However we might want to check on python-migrate and python-alembic
> also? These two packages still appear to be present in EPEL 7 - I'm
> not sure how that even works if their dependency python-sqlalchemy
> isn't there ...

I've pushed the whole family alchemy/alembic/migrate to RDO Icehouse EL7 repo until we figure out what happened.

Cheers,
Alan

From berendt at b1-systems.de Tue Sep 30 05:11:31 2014
From: berendt at b1-systems.de (Christian Berendt)
Date: Tue, 30 Sep 2014 07:11:31 +0200
Subject: [Rdo-list] Create a link rdo-release-juno.rpm rdo.fedorapeople.org
Message-ID: <542A3B83.8010607@b1-systems.de>

At https://review.openstack.org/#/c/123837/ we have the problem that we want to document the usage of https://rdo.fedorapeople.org/rdo-release.rpm for a specific release. At the moment this is not possible because https://rdo.fedorapeople.org/rdo-release.rpm always points to the latest stable release.

Is it possible to create an additional link named rdo-release-juno.rpm pointing to the latest available repository package?

rdo-release.rpm --> rdo-release-juno-1.noarch.rpm
rdo-release-juno.rpm --> rdo-release-juno-1.noarch.rpm
rdo-release-icehouse.rpm --> rdo-release-icehouse-4.noarch.rpm

This way we do not have to update the documentation after the release of a new repository package, and we have working links in older documents of stable releases after a new release.
For example, the links in http://docs.openstack.org/icehouse/install-guide/install/yum/content/ will not work after the release of Juno.

Christian.

--
Christian Berendt
Cloud Solution Architect
Mail: berendt at b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

From pbrady at redhat.com Tue Sep 30 09:59:26 2014
From: pbrady at redhat.com (Pádraig Brady)
Date: Tue, 30 Sep 2014 10:59:26 +0100
Subject: [Rdo-list] Create a link rdo-release-juno.rpm rdo.fedorapeople.org
In-Reply-To: <542A3B83.8010607@b1-systems.de>
References: <542A3B83.8010607@b1-systems.de>
Message-ID: <542A7EFE.5010408@redhat.com>

On 09/30/2014 06:11 AM, Christian Berendt wrote:
> At https://review.openstack.org/#/c/123837/ we have the problem that we
> want to document the usage of
> https://rdo.fedorapeople.org/rdo-release.rpm for a specific release. At
> the moment this is not possible because
> https://rdo.fedorapeople.org/rdo-release.rpm always points to the latest
> stable release.
>
> Is it possible to create an additional link named rdo-release-juno.rpm
> pointing to the latest available repository package?
>
> rdo-release.rpm --> rdo-release-juno-1.noarch.rpm
> rdo-release-juno.rpm --> rdo-release-juno-1.noarch.rpm
> rdo-release-icehouse.rpm --> rdo-release-icehouse-4.noarch.rpm
>
> This way we do not have to update the documentation after the release of
> a new repository package and we have working links in older documents of
> stable releases after a new release.
>
> For example the links in
> http://docs.openstack.org/icehouse/install-guide/install/yum/content/
> will not work after the release of Juno.

The redirects are already in place:

https://rdo.fedorapeople.org/rdo-release.rpm # currently icehouse
https://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm
https://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm

thanks,
Pádraig.
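[Editor's note: since the per-release redirect URLs follow a fixed pattern, documentation or scripts can pin a specific release without hard-coding package versions. A minimal sketch; the helper name is ours, and the actual yum step is left commented because it needs root and network access:]

```shell
# Hypothetical helper: build the release-RPM URL for a given RDO
# release name ("icehouse", "juno", ...), matching the redirect
# layout on rdo.fedorapeople.org described above.
rdo_release_url() {
    echo "https://rdo.fedorapeople.org/openstack-$1/rdo-release-$1.rpm"
}

rdo_release_url juno
# -> https://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm

# To actually install the repository package (requires root):
#   yum install -y "$(rdo_release_url juno)"
```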
From berendt at b1-systems.de Tue Sep 30 10:32:35 2014
From: berendt at b1-systems.de (Christian Berendt)
Date: Tue, 30 Sep 2014 12:32:35 +0200
Subject: [Rdo-list] Create a link rdo-release-juno.rpm rdo.fedorapeople.org
In-Reply-To: <542A7EFE.5010408@redhat.com>
References: <542A3B83.8010607@b1-systems.de> <542A7EFE.5010408@redhat.com>
Message-ID: <542A86C3.3070801@b1-systems.de>

On 09/30/2014 11:59 AM, Pádraig Brady wrote:
> The redirects are already in place:
>
> https://rdo.fedorapeople.org/rdo-release.rpm # currently icehouse
> https://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm
> https://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm

Thanks. I updated the review request accordingly.

Christian.

--
Christian Berendt
Cloud Solution Architect
Mail: berendt at b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

From pj.pandit at yahoo.co.in Tue Sep 30 11:21:10 2014
From: pj.pandit at yahoo.co.in (P J P)
Date: Tue, 30 Sep 2014 19:21:10 +0800
Subject: [Rdo-list] python-sqlalchemy is removed in epel
In-Reply-To: <1412012174.39151.YahooMailNeo@web192405.mail.sg3.yahoo.com>
References: <316691786CCAEE44AE90482264E3AB8206BCF768@xmb-rcd-x08.cisco.com> <1412012174.39151.YahooMailNeo@web192405.mail.sg3.yahoo.com>
Message-ID: <1412076070.72722.YahooMailNeo@web192403.mail.sg3.yahoo.com>

Hi,

> On Monday, 29 September 2014 11:06 PM, P J P wrote:
> Not sure why it was retired from epel7, I'll try to find out and update
> here asap.

Please see -> https://fedorahosted.org/rel-eng/ticket/6009

I've built the package, but cannot push it yet because the repository is still [Blocked].
---
Regards
-Prasad
http://feedmug.com

From whayutin at redhat.com Tue Sep 30 14:13:38 2014
From: whayutin at redhat.com (whayutin)
Date: Tue, 30 Sep 2014 14:13:38 +0000
Subject: [Rdo-list] Using RHEL-7 with Juno
Message-ID: <1412086418.2885.12.camel@localhost.localdomain>

FYI.. In addition to the standard RHEL base repositories, please ensure the following repositories are also enabled.

[rhel-7-server-extras-rpms]
name = Red Hat Enterprise Linux 7 Server - Extras (RPMs)

[rhel-7-server-optional-rpms]
name = Red Hat Enterprise Linux 7 Server - Optional (RPMs)

Thanks!

From apevec at gmail.com Tue Sep 30 16:41:14 2014
From: apevec at gmail.com (Alan Pevec)
Date: Tue, 30 Sep 2014 18:41:14 +0200
Subject: [Rdo-list] Using RHEL-7 with Juno
In-Reply-To: <1412086418.2885.12.camel@localhost.localdomain>
References: <1412086418.2885.12.camel@localhost.localdomain>
Message-ID:

> In addition to the standard RHEL base repositories please ensure the
> following repositories are also enabled.
>
> [rhel-7-server-extras-rpms]
> name = Red Hat Enterprise Linux 7 Server - Extras (RPMs)
>
> [rhel-7-server-optional-rpms]
> name = Red Hat Enterprise Linux 7 Server - Optional (RPMs)

Those are actually EPEL7 deps[*] and RDO depends on EPEL. I've updated the "NOTE for RHN users" in https://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F to mention required RHEL7 repos.
Cheers,
Alan

[*] https://fedoraproject.org/wiki/EPEL/epel7

From pj.pandit at yahoo.co.in Tue Sep 30 17:48:36 2014
From: pj.pandit at yahoo.co.in (P J P)
Date: Wed, 1 Oct 2014 01:48:36 +0800
Subject: [Rdo-list] python-sqlalchemy is removed in epel
In-Reply-To: <1412076070.72722.YahooMailNeo@web192403.mail.sg3.yahoo.com>
References: <316691786CCAEE44AE90482264E3AB8206BCF768@xmb-rcd-x08.cisco.com> <1412012174.39151.YahooMailNeo@web192405.mail.sg3.yahoo.com> <1412076070.72722.YahooMailNeo@web192403.mail.sg3.yahoo.com>
Message-ID: <1412099316.50059.YahooMailNeo@web192406.mail.sg3.yahoo.com>

Hello Alan,

> On Tuesday, 30 September 2014 4:51 PM, P J P wrote:
> Please see -> https://fedorahosted.org/rel-eng/ticket/6009
> I've built the package, but cannot push it yet because the repository is
> still [Blocked].

Please see -> https://bugzilla.redhat.com/show_bug.cgi?id=1148045

The latest python-sqlalchemy-0.9.7 has been pushed to the epel7 testing repositories, and shall soon be available via -stable too.

Hope it helps.

---
Regards
-Prasad
http://feedmug.com
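[Editor's note: while a package sits in epel7 testing, it can be pulled in explicitly by enabling the epel-testing repo for a single transaction. A sketch under the assumption that the epel-release package is installed (which provides the epel-testing repo definition); the command is composed as a string here rather than executed, since running it needs root:]

```shell
# Compose the yum invocation that temporarily enables epel-testing to
# install the SQLAlchemy family discussed in this thread.
pkgs="python-sqlalchemy python-alembic python-migrate"
cmd="yum --enablerepo=epel-testing install -y $pkgs"
echo "$cmd"
```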